
Beneficial Intelligence

A weekly podcast with stories and pragmatic advice for CIOs and other IT leaders.


Other People's Failures
In this episode of Beneficial Intelligence, I discuss other people's failures. They can affect you, as the recent Amazon Web Services outage showed. Cat owners who had trusted the feeding of their felines to internet-connected devices came home to find their homes shredded by hungry cats. People who had automated their lighting sat in darkness, yelling in vain at their Alexa devices for more light. More serious problems also occurred: students couldn't submit assignments, Ticketmaster couldn't sell Adele tickets, and helpless investors watched their stocks tank while being unable to sell. On a personal level, this dependency is an occasional inconvenience. But for companies, it is a problem. When you buy cloud services directly from Amazon, Microsoft, or Google, at least you know what you depend on and can take your own precautions. But your SaaS vendors also depend on one of the big three cloud providers. You will find that most of them consider using two different data centers with the same cloud vendor to be plenty of redundancy. It isn't. Another problem is your "smart" devices that all communicate via the internet with a server controlled by the device vendor. The vendor is running that server in one of the three big clouds. That means an Amazon outage can lock you out of your own building. Some of your systems are business crucial. For these, you need to find out what your vendors depend on. Otherwise, you will be blindsided by other people's failures.------Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at 
07:35 12/10/21
People Shortage
In this episode of Beneficial Intelligence, I discuss the people shortage. It isn't real. Complaining about a lack of people is what is known as a "half argument." You say what you want, but not what you are willing to give up. That's like a politician promising to build a new public hospital but not saying where the money will come from. The full argument for missing people is "we cannot get the people we want at the conditions we are willing to offer." If you had a crucial project that would make the business millions of dollars, you would be able to find the resources you need. You could simply offer three times the market rate, full benefits, and a 40-hour workweek with no overtime. Allocating resources is a basic leadership task. You rank your tasks and projects in order of descending business value and allocate available resources to the most valuable. It doesn't make sense for a CIO to say that the organization is "missing" a hundred programmers. A full argument would be that if we had a hundred extra programmers, we could build a specific IT system that is less valuable than all the current projects. There might be a real shortage of money or copper or clean water. But there is no shortage of people. ---Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at 
05:43 11/26/21
Data Hoarding
In this episode of Beneficial Intelligence, I discuss data hoarding. Gathering too much data costs money and doesn't add value. We think we need all this data to train our AI, but hoarding data is the wrong place to start. Using a counterproductive metaphor, some say that "data is the new oil." That is a dangerous metaphor with no fewer than four problems: First, data is not fungible like oil is. One barrel of oil is just as valuable as the next barrel. But one data record does not have the same value as another data record. Second, data hoarding shows diminishing returns. The value of 100 million barrels of oil is 100 times the value of 1 million barrels. But the value of 100 million transaction records is not 100 times the value of 1 million transaction records. Third, the process of refining data into valuable business insight is not repeatable. Anybody can build an oil refinery. That's just a question of money. But extracting value from data is more art than science, and even with the best data scientists, you might still not be able to extract any value from your data. Fourth, the value density in data is very low. Everything in a barrel of oil becomes a useful product. But most data records do not provide any business insight. Gathering data in the hope of extracting value is putting the cart before the horse. The right way to work with data is to start with a business goal and a hypothesis about which data might provide insight. Gather the data, run the experiment, and evaluate. Don't just hoard data. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:29 10/29/21
Monoculture
In this episode of Beneficial Intelligence, I discuss monoculture. Just like in farming, monoculture is efficient and dangerous. Modern farmers will plant hundreds or thousands of acres with the same crop. That gives efficiency because the entire crop will respond identically to fertilizer and pesticides. It also means that the entire harvest will be lost if some new pest or disease suddenly appears. Monoculture cost more than a million lives in Ireland in the Great Famine of the 1840s. There is also monoculture in your IT landscape. If all your systems have the same hardware and run the same software, they will all be vulnerable to the same bugs and malware. Your servers are probably of many different types because they have been added over the years. But if you run the same virtualization software on most of them, your entire infrastructure is vulnerable to a bug in your virtualization. Your workstations are a monoculture, and if something takes out Microsoft Windows, you are dead in the water. But the really dangerous monoculture is found in your network equipment. You probably buy all your gear from one vendor so your network people only need one skill stack. But that means that a single vulnerability will expose your entire network. You don't want to put all your eggs in one basket. If you are concerned with robustness and business continuity, beware of monoculture. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
09:04 10/15/21
Trust, but Verify
In this episode of Beneficial Intelligence, I discuss trusting your vendors. You trust them to make their best effort at producing bug-free code. You probably trust that their software will perform at least 50% of what they promise. You might trust them to eventually build at least some of the features on their roadmap. But can you trust them not to build secret backdoors into the software they give you? Snowden showed we cannot trust any large American tech company because they send our data straight into the databases of the National Security Agency. Apparently, you cannot trust Chinese smartphone vendor Xiaomi either. The Lithuanian National Cyber Security Centre just published the results of their investigation, and they recommend that people with such phones replace them with non-Xiaomi phones "as fast as reasonably possible." It turns out these phones send some kind of encrypted data to a server in Singapore, and they have censorship built in. Phrases such as "Free Tibet" simply cannot be rendered by the browser or any other app. Right now, that feature is not active in Europe, but it might be enabled at any time. During the nuclear disarmament discussions between the United States and the Soviet Union in the 1980s, Ronald Reagan was fond of quoting a Russian proverb: Doveryay, no proveryay - trust, but verify. The ability for both parties to verify what the other was doing became a defining feature of the eventual agreement. In software, we can verify Open Source. If you cannot find open source software that does what you need, many enterprise software vendors will make their source code available to you under reasonable non-disclosure provisions. In your organization, there should be both trust and verification. Don't simply trust your software vendors. Trust, but verify. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at 
09:34 10/1/21
Time to Recover
In this episode of Beneficial Intelligence, I discuss time to recover. The entire network of the justice ministry of South Africa has been disabled by ransomware, and they don't know when they'll be back. Do you know how long it would take you to recover each system your organization is running? When you have an IT outage, what the business wants most is a realistic timeline for when services will be back. If IT can confidently tell them that it will take 72 hours to restore services, the business knows what they are dealing with. They can inform their stakeholders and make informed decisions about the areas in which manual procedures or alternative workflows should be implemented. The worst thing IT can do in such a case is to keep promising "a few hours" for days in a row. In the 1980s, I was working for Hewlett-Packard. They had a large LED scrolling display mounted over their open-plan office. The only time it was ever used was when their main email and calendar system was unexpectedly down, telling everyone when it would be back up. In the 1990s, I was doing military service in the Royal Danish Air Force as a Damage Control Officer. After an attack, I had to tell the base commander how much runway we had available. I had planned our reconnaissance and could confidently say that I would know in less than 28 minutes after the all-clear. In the early 2000s, I was working with database professionals. These people spent much of their time preparing to recover their databases. They had practiced recovery many times and knew exactly how long recovery would take. As the CIO, take a look at the list of your systems. It needs to list the expected time to recover for every system. The technical person responsible for each system should verify that this time has been tested recently, and the business owner should verify that this time is acceptable. If you don't have a documented time to recover per system, you need to put your people to work to create it. 
Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
08:28 9/17/21
Goal Fixation
In this episode of Beneficial Intelligence, I discuss goal fixation. Richard Branson almost didn't make it back from space. His pilots had a problem and flew very close to the limit. They should have aborted. But the future of commercial spaceflight was resting on their shoulders. They were fixated on the goal, and that causes problems. We are only finding out now because authorities noticed the flight was outside its designated airspace: stronger winds than expected gave the flight a different profile. The pilots got a red "ENTRY GLIDE CONE WARN" light. That means the spacecraft is so far from the planned course that it might not reach the place it has to be to glide to the landing site. The correct checklist approach is to abort the mission. But this was a much-hyped first commercial flight with the founder on board. The pilots pressed on. They managed to go to space and return safely. But they were dangerously close to the edge. People die because they get fixated on the goal and push on. Mountaineers continue towards the summit after the safe turnaround time, and pilots fly into bad weather. Some of the well-known people we've lost to pilot goal fixation include basketball legend Kobe Bryant and Polish President Lech Kaczyński. We see the same thing in failed IT projects. Multi-year, million-dollar projects keep collapsing ignominiously without anything to show for all the effort. This happens due to goal fixation. Tragically, the problem is completely invisible to project sponsors, who feel part of their reputation is on the line. It is also invisible to the program management and project leaders inside the project. There are two solutions. One is to listen to the people on the ground. The programmers and testers know when a project will fail. The other is to get an independent outside opinion. That's what I provide to my customers. 
If you don't have a process for gathering in-the-trenches information, or an outside advisor, or preferably both, you are likely to fall prey to goal fixation. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
09:10 9/3/21
Narrow Focus
In this episode of Beneficial Intelligence, I discuss the narrow focus of IT professionals. This is an unavoidable consequence of the complexity of the technology we use. We've had to learn to give our computers very exact instructions, and that informs our thinking. The app from my local supermarket is obviously built by people with a narrow focus. If I search for "sugar," the first hit is "pickled cucumber (sugar-free)." The Amazon app, on the other hand, is built by people with a wider focus. Whatever you search for, Amazon will always give you a suggestion. When IT organizations try to hire, they will come up with a long list of technologies and programming languages. Unfortunately, nobody matches the entire list, and no one is hired. Successful organizations instead use recommendations and interviews to find people with energy and a willingness to learn. In the IT industry, we call something Artificial Intelligence if it can succeed at some very narrow task like recognizing cats in videos. Unfortunately, the word "intelligence" means something much wider to everyone else. When Tesla talks about "autopilot," they mean something that can stay on the road at a constant speed. In a narrow sense, a Tesla has an autopilot. In the wider sense, drivers expect that word to mean a car that drives itself. A narrow focus is a quality in an IT professional. There is no need to change these people, and they do not get a wider focus by being sent on a User Experience (UX) boot camp or a three-day Product Owner course. Your teams need people with a wider focus. That's something that UX professionals and real product owners from the business can give you. That's why the best IT organizations employ anthropologists to study users. It is your job as an IT leader to ensure you have people with both narrow and broad focus. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at 
08:28 8/20/21
Back to the Office
In this episode of Beneficial Intelligence, I discuss whether you should force people back to the office. This will be your most important leadership decision this year. Apple told everyone to report back to the office. Apple CEO Tim Cook says that "in-person collaboration is essential to our culture." Google is expecting 20% of employees to work from home in the long term, while Facebook is expecting 50% remote work. The big Wall Street banks, on the other hand, require everyone back in their New York offices five days a week. Remote working presents two problems: culture and promotion. Management guru Peter Drucker said, "culture eats strategy for breakfast." That means your culture and its implicit knowledge are much stronger than anything you write down. Your culture is how you actually do things, which is sometimes very different from the written rules. New hires can only assimilate the culture in person, through small talk and watching others. We are already seeing lower retention among new hires who joined during the pandemic. The second problem is promotion. You are an enlightened leader who will not let your promotion decisions be influenced by whether someone is in the office every day. But other leaders in your organization will be less able to overcome the bias towards people they see in person. The data is clear: the more you are in the office, the quicker you will be promoted. Unfortunately, the people who say they want to work more at home are women and minorities. Exactly the people you need more of in your leadership team. Deciding on a remote working policy is your most important leadership task right now. Not making a decision and letting people work it out for themselves is the worst option. It will mean that experienced employees stay at home while new hires wander the halls of empty offices, quickly quitting again. And your leadership team will become less diverse. Your job as a leader is not to be popular. 
Your job is to make decisions that ensure your organization meets its goals. This year, that is likely to involve forcing some people back to the office. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
08:38 8/6/21
Humans and Computers
In this episode of Beneficial Intelligence, I discuss humans and computers. Jeff Bezos went to space in a fully autonomous computer-controlled rocket. Richard Branson went to space last week, and he had humans flying his spacecraft. The Silicon Valley mindset is that you can program or train computers to do anything. However, as the continuing struggle to build truly self-driving cars has shown, some things are still very, very hard for computers. Even Elon Musk, who claims his Teslas are self-driving, has manual controls on his spacecraft, the SpaceX Crew Dragon. Jeff Bezos remains fully committed to the power of computers, and computers will fire Amazon workers automatically if they don't perform as the algorithm expects. Richard Branson, on the other hand, is an entrepreneur. He has founded dozens of companies and made them successful by believing in humans. He hires good people, gives them resources and direction, and lets them do their thing. The first human spaceflight program of the United States was Project Mercury. NASA initially subscribed to the computer-centric school of thought. But the highly trained astronauts rebelled and demanded a window so they could fly the spacecraft if needed. Fortunately, they got their way. On the last Mercury mission, astronaut Gordon Cooper saved his life and the U.S. space program by hand-flying his craft back to Earth after multiple equipment failures. You can implement IT systems in two ways. Either the computer is in charge, and the human can intervene. Or the human is in charge, and the computer assists. What's your approach? Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
06:42 7/23/21
Competition
In this episode of Beneficial Intelligence, I discuss competition. Billionaires Jeff Bezos and Richard Branson are competing over who gets to space first, with both likely to blast off within the next two weeks. Competition is one of the great forces propelling the world forward. Richard Branson's Virgin Galactic spacecraft is based on SpaceShipOne, which won the Ansari X Prize back in 2004. That prize was for a private spacecraft that could go to the edge of space twice in two weeks. It seemed impossible, but aerospace genius Burt Rutan, with funding from Microsoft billionaire Paul Allen, claimed the prize. In the early part of the 20th century, the Schneider Trophy similarly spurred innovation in aviation. The 1931 winner became the basis of the Spitfire fighter aircraft that won the Battle of Britain in 1940. Self-driving cars come from the DARPA Grand Challenge. In 2004, no car could autonomously drive more than 7 miles. The next year, competition, especially between Stanford University and Carnegie Mellon University, resulted in their two cars completing the course within minutes of each other. If you have clear competitors in your space, identify them. Have someone examine your competitors' products, and share that knowledge with the entire team. Making sure that everyone knows what the bar is can release energy and creativity that will allow you to leapfrog the competition. If you don't have a good external competitor to benchmark yourself against, commission two competing products inside your organization. That costs more money, but it releases energy and gives you speed and creativity. Once a winner has been declared, incorporate the best ideas from the losing project into the winning one. Competition has been a great force for progress all through human history. Use it in your organization for increased creativity, energy, and speed. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. 
To get in touch, please contact me at
10:18 7/9/21
Pseudo-Security
In this episode of Beneficial Intelligence, I discuss pseudo-security. The lock on your front door is not secure. It takes an experienced locksmith an average of 7.1 seconds to manually pick an average door lock, and it's even faster with a "pick gun." If locks are so bad, why don't we have even more burglaries? Because your total security does not depend only on the lock. A would-be burglar has to contend with the risk of somebody being home, neighbors noticing them, a camera on someone else's house recording them, and the police grabbing them and putting them in jail. Like locks, passwords also do not protect you. At least one of your thousands of users has re-used the company password somewhere else. That means it will end up in one of the large hacker databases where identities can be bought for pennies. Then a hacker can sit comfortably in a basement in Moscow and run software to try thousands of username/password combinations with zero chance of being caught. In the military, I learned that barbed wire that was not constantly observed was dangerous pseudo-security. You think you are protected, but when the enemy attacks, you will discover that your wire has long since been cut. Barbed wire cannot stand alone. Your door lock cannot stand alone. Your passwords cannot stand alone. You need to complement password security with two-factor authentication, IP address verification, time restrictions, network segmentation, anomaly detection, and all the other tools in the IT security toolbox. Passwords alone are pseudo-security. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:53 6/25/21
Good Enough
In this episode of Beneficial Intelligence, I discuss how to choose what is good enough. How do you know when something is good enough? That requires good judgment, which is unfortunately in short supply. IT used in aviation, pharma, and a few other life-and-death industries is subject to strict standards. We can lean on standards like the GxP requirements that anyone in the pharma industry loves to hate. However, in the general IT industry, we have lots of standards, but none of them are mandatory. That's why each week seems to bring a new horror story of an organization that believed their IT was good enough and found out it wasn't. Southwest Airlines learned that first-hand this week. On Monday, they couldn't fly because the connection to their weather data provider was down. On Tuesday, they couldn't fly because the connection from airports to the central reservation system was down. If you don't know who is supposed to be on the plane, you can't fly. They ended up canceling more than 800 flights over two days. Obviously, the CIO of Southwest Airlines decided that a single network was good enough. That can be a valid business decision. But you need to make a full comparison. On one side is the cost of redundant network connections and data sources. On the other side is the loss resulting from canceling 800 flights and delaying thousands more. This outage probably cost them around $20 million. If you believe the risk of a $20 million network outage is 0.1%, standard risk calculation says you can only spend $20,000 to avoid it. But if the risk of an outage is 5%, it is worth spending up to $1 million on redundant connections or other alternatives. Everybody in your IT organization who makes major architectural decisions has to know what constitutes "good enough." There might be hard regulatory requirements about data security, privacy, and access control. But there are also judgment calls based on estimates of risk probability and impact.  
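The risk arithmetic here is a plain expected-loss calculation. A minimal sketch in Python; the function name is my own, and the model deliberately ignores secondary effects like reputation damage:

```python
import math

def max_mitigation_spend(outage_cost: float, outage_probability: float) -> float:
    """Expected loss of the outage: probability times impact.
    Spending more than this on prevention costs more than the risk it removes."""
    return outage_cost * outage_probability

# The $20 million outage from the episode, at two risk estimates:
assert math.isclose(max_mitigation_spend(20_000_000, 0.001), 20_000)    # 0.1% risk
assert math.isclose(max_mitigation_spend(20_000_000, 0.05), 1_000_000)  # 5% risk
```

In practice, you would compare this ceiling against an actual quote for redundant connections, which makes the risk estimate itself the number worth arguing about.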
As CIO or CTO, it is your job to teach your organization how to determine what is good enough.  Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:55 6/18/21
Unnecessary Roadblocks
In this episode of Beneficial Intelligence, I discuss unnecessary roadblocks. Amazon has a problem finding enough workers, and they have decided to get rid of an unnecessary roadblock: They will no longer test people for marijuana use. As marijuana becomes legal in more and more states, Amazon decided they only need to test truck drivers and forklift operators, not everyone. IT organizations are also always complaining that they can't find the people they need. There are three reasons for this: Bad business cases, unrealistic requirements, and unnecessary roadblocks.  If you don't have a good business case, you can't pay what talent costs. In this case, it's better for the world that IT professionals go somewhere where they can create more value. If you are requiring a laundry list of database architectures, programming languages, and architecture patterns, you are indicating to prospective applicants that you don't really know what you want. That's a turnoff for most professionals.  Finally, you might have set up roadblocks that keep people from applying. Mandatory drug testing is one, requiring security clearance for everyone is another, and requiring a certain education is a third. Requiring a college degree for an IT position is simply an outdated practice. Many good IT professionals are self-taught, and spending two years working for a scrappy startup teaches you much more than four years of college does. The problem with talent roadblocks is that they are glaringly obvious to the potential applicant, but invisible inside the organization. If you have a hard time finding the talent you need, you need to have someone external identify your unnecessary roadblocks.  Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
09:08 6/4/21
Expectation Management
In this episode of Beneficial Intelligence, I discuss expectation management. I was doing a small renovation project in our summer cottage, and I needed a special type of hinge. I found it on the website of our local building supplies store, but when I got to the store, it wasn't there. It turned out that this store was part of a co-branded chain. They had an aspirational website showing all the items a shop could potentially carry, but each shop would actually sell only its own idiosyncratic collection of items. The store did not meet my expectations, and I will not go back there. You also want to meet or exceed the expectations of the users of your IT systems, whether they are internal users, external partners, or customers. The problem is that IT professionals are notoriously bad at putting themselves in the users' place. The secret to meeting user expectations is to ask real users. You don't need a fancy usability lab to do that. Usability guru Jakob Nielsen has popularized the term "discount usability engineering," where you grab five random people in the hallway (outside the IT department) and show them your system. His research backs his claim that these five people will find almost as many of the issues as a much larger and more professional study. As CIO or CTO, you have the ultimate responsibility for the success of all projects. That means you have to remind each project to communicate continually to the entire organization what the project will achieve. In that way, you can manage expectations and make your projects successful. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:50 5/28/21
Gaming the Metrics
In this episode of Beneficial Intelligence, I discuss gaming the metrics. We measure things to be able to manage them. But when we start using metrics to reward individual employees and teams, people will start gaming them. Newton's third law for business says that for every system the organization implements, the employees will implement an equal and opposite workaround that negates the system. Amazon is managing a huge workforce of delivery drivers. To ensure they drive safely, Amazon requires drivers to be logged in to a mobile phone app. The app uses the accelerometer to measure acceleration, braking, and other parameters and gives each driver a score. But because Amazon is also ruthlessly pushing their small subcontractors to deliver a lot of packages very quickly, the delivery companies have started instructing their drivers to game the metrics. Drivers say they are instructed to drive very carefully for the first two hours each day to achieve a good score. After that, they are instructed to put their phones into airplane mode and drive like the devil for the rest of their 10-hour shift to achieve the number of deliveries required. Andy Grove, who used to be the CEO of Intel back when they were successful, was known for understanding productivity. He formulated the rule that for every metric, there should be another "paired" metric that addresses the adverse consequences of the first. As an IT leader, getting your measurements right is one of the most important parts of managing your IT organization. If your metrics are used in any way to praise or blame individuals and groups, you can be sure people will try to optimize for them. If you are not carefully establishing paired metrics, you can be sure your metrics are being gamed. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
10:31 5/7/21
Accidental Publication
In this episode of Beneficial Intelligence, I discuss accidental publication. There are two ways organizations lose data: through break-ins and through carelessness. It is hard to protect your systems against determined hackers, but it should not be hard to protect yourself against carelessness. Strangely, this is just as big a source of data leaks as determined hacker attacks. Some accidental losses are the result of individual failures to follow procedures. The British MI6 is famous for losing classified laptops in taxis and having them stolen from unattended cars. In Denmark, the health authorities produced two unencrypted CD-ROMs with data on every Danish citizen and their illnesses. They were accidentally sent to the Chinese embassy instead of the national statistics authority. Other losses happen because organizations accidentally publish data to the entire world. By now, every tech journalist who sees a ?id=48375 in a web address will try to change the number to something else. That's how the State of California accidentally published information about all donations Californians made to NGOs and political organizations. Another way is through badly secured APIs. A 19-year-old college student shopping for student loans found he could check whether he qualified for a loan by simply entering his name, address, and date of birth. Looking at the web page source, he quickly discovered that the website was calling an unsecured API at credit scoring company Experian. As a CIO or CTO, you can no longer allow the security strategy of your IT organization to depend on a lack of IT skills in the general public. Are you sure every system your organization rolls out has been subject to a security review? If not, you might be the next organization to find that you have accidentally published confidential data. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:55 4/30/21
Irrational Optimism
In this episode of Beneficial Intelligence, I discuss irrational optimism. IT people are too optimistic. It is a natural consequence of our ability to build something from nothing. Our creations are not subject to gravity or other laws of physics. A builder cannot decide halfway through a construction project that he wants to swap out the foundation, but IT regularly changes the framework in mid-project. Similar optimism informs our project plans. For some reason, we assume that everything will go the way we plan it. Fred Brooks first wrote about programmer optimism in his classic "The Mythical Man-Month" back in 1975. He points out that there is indeed a certain probability that each task will be completed on schedule. But because modern IT projects consist of hundreds of tasks, the probability of every one going right is low. Even with an unrealistically high 99% chance of success per task, a project of just 100 tasks has only a 37% probability of every task finishing on schedule. Sadly, our irrational optimism also extends to the business cases we present to management for our projects. I am regularly presented with drafts of investor presentations that hopeful startups want to pitch. The optimism is palpable, but there is never any realistic consideration of all the things that can go wrong. As a CIO or CTO, you need to make sure you have some pessimists on your team. Not the kind of pessimists you find in Legal and Compliance, who fight tooth and nail to ensure no new project ever gets off the ground, but pragmatic pessimists who can look at your projects and business plans and tell you what might go wrong. These people are rather rare in IT organizations, which is why this is one of the things I help my customers with. Unless you add a counterweight to your IT organization, your projects will continue to fail due to irrational optimism.
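Brooks' schedule arithmetic is worth seeing in two lines. Assuming the tasks are independent, the probability that all of them finish on time is the per-task probability raised to the number of tasks:

```python
def on_schedule_probability(per_task_p, n_tasks):
    """Probability that every one of n independent tasks finishes on time."""
    return per_task_p ** n_tasks

# Even a 99% per-task success rate collapses over 100 tasks:
p = on_schedule_probability(0.99, 100)
print(f"{p:.0%}")  # prints 37%
```

Real tasks are not independent, and real per-task estimates are rarely as good as 99%, so this simple model is if anything optimistic.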
------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
08:05 4/23/21
Risk Aversion
In this episode of Beneficial Intelligence, I discuss risk aversion. The U.S. has stopped distributing the Johnson & Johnson vaccine. It has been given to more than 7 million people, and there have been six reported cases of blood clotting. Here in Denmark, we have stopped giving the AstraZeneca vaccine because of one similar case. That is not risk management, that is risk aversion. There is a classic short story from 1911 by Stephen Leacock called "The Man in Asbestos." It is from a time when fire-resistant asbestos was considered one of the miracle materials of the future. The narrator travels to the future to find a drab and risk-averse society where aging has been eliminated together with all disease. People can only die from accidents, which is why everybody wears fire-resistant asbestos clothes, railroads and cars are outlawed, and society has become completely stagnant. We are moving in that direction. Large organizations have departments of innovation prevention, often called compliance, risk management, or QA. They point out all the risks, and it takes courageous leadership to look at the larger benefit and overrule the objections of the naysayers. Smaller organizations can out-innovate larger ones because they spend their leadership time on innovation and growth instead of on fighting organizational units dedicated to preserving the status quo. As an IT leader, it is your job to make sure your organization doesn't get paralyzed by risk aversion. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
05:23 4/16/21
Biased Data
In this episode of Beneficial Intelligence, I discuss biased data. Machine learning depends on large data sets, and unless you take care, ML algorithms will perpetuate any bias in the data they learn from. The famous ImageNet database contains 14 million labeled images. However, 6% of these have the wrong label. The labels are provided by humans paid very little per image, so they work very fast. Unfortunately, as Nobel Prize winner Daniel Kahneman has shown, when humans work fast, they depend on their fast System 1 thinking, which is very prone to bias. Thus, a woman in hospital scrubs is likely to be labeled "nurse," while a man in the same clothes is likely to be labeled "doctor." Google Translate was showing its bias when translating from Hungarian. Hungarian has only a gender-neutral pronoun, but the English translation requires a gendered one. The originally gender-neutral phrases became "she does the dishes" and "he reads" in English. As CIO or CTO, you need to make sure somebody ensures the quality of the data you use to train your machine learning algorithms. If you don't have a Chief Data Officer, maybe you have a Data Protection Officer who could reasonably be given this purview. But you cannot foist this responsibility on individual development teams under deadline pressure. It is your responsibility to ensure that any machine learning system is learning from clean, unbiased data. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
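One cheap first step toward that data-quality responsibility is a label audit: for the same kind of image, compare how labels split along an attribute that should be irrelevant. This is a toy sketch with made-up data, not the ImageNet pipeline:

```python
from collections import Counter

# Illustrative annotation records; in practice these come from your
# labeling platform's export.
samples = [
    {"scene": "hospital scrubs", "gender": "f", "label": "nurse"},
    {"scene": "hospital scrubs", "gender": "f", "label": "nurse"},
    {"scene": "hospital scrubs", "gender": "m", "label": "doctor"},
    {"scene": "hospital scrubs", "gender": "m", "label": "doctor"},
]

def label_distribution(samples, scene):
    """Count (attribute, label) pairs for one scene type."""
    return Counter(
        (s["gender"], s["label"]) for s in samples if s["scene"] == scene
    )
```

If the same scene gets labels that split cleanly along gender, as in this toy data, that is a red flag that annotators' System 1 bias has leaked into the training set.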
07:29 4/9/21
Price transparency
In this episode of Beneficial Intelligence, I discuss price transparency. In the U.S., a coronavirus test can cost $56 if you pay yourself, but $450 if your health insurance pays. This lack of price transparency makes the U.S. healthcare system the most expensive in the world, costing the US 17% of GDP. Every other industrialized country is below 12%. There are now laws requiring hospitals to publish their prices, but they deliberately hide them from the search engines. You see the same kind of price obfuscation with cloud vendors, who carefully charge separately for CPU, RAM, storage, and network traffic, and who make sure that one vendor's CPU is not equivalent to another vendor's CPU. When you buy proprietary software, you have better price transparency, unless you sign up for one of the "unlimited" licenses the vendors are pushing. Open source has an initial cost of zero, but unless you go with a well-known and popular tool, the true cost can be hard to gauge. When you are looking for IT products, avoid vendors who willfully obfuscate their prices. If they are not willing to be honest about the cost, what else will they not be honest about? As a CIO or CTO, make sure your evaluation criteria for software also include price transparency. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:00 3/26/21
Blaming the Humans
In this episode of Beneficial Intelligence, I discuss blaming the humans. It often happens that a system failure is attributed to fallible humans. That way, you don't have to admit embarrassing shortcomings in your system. A recently declassified report showed that a weapons officer blamed for accidentally firing a missile back in the 1980s was actually the victim of a system error. Boeing initially tried to pin the blame for the 737 MAX-8 crashes on pilot error. Last year, Citibank accidentally paid out $900 million instead of just the few million they intended. They blamed a bank employee, not the archaic banking system that allowed the error. If we look only at the last link of an accident chain, we find a human. But behind the human error is a system that created the situation where the human could err. The Harpoon missile system was eventually fixed. The Boeing 737 flight control software was fixed. And Citibank is looking at a long-overdue replacement of its arcane backend systems. As a CIO or CTO, you need to make sure your organization extracts maximum learning when something goes wrong. Check some of the post-mortem reports from unfortunate incidents. If the error is blamed on a human who should simply have acted differently, the analysis has not reached the root cause. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:35 3/19/21
Wasting Money
In this episode of Beneficial Intelligence, I discuss wasting money. The business always complains that IT costs too much. That is because we are wasting so much money. We're on track for worldwide IT spending of about $4 trillion, and surveys show that at least 25% of that is wasted. That's one trillion dollars we waste. IT organizations waste money in two ways: with what we build, and with what we run. It's a time-honored tradition in IT never to retire an old system. We keep everything running forever, even though the business benefit has long since evaporated. We might suspect that a specific system is not providing business benefit. But we don't know if anybody might be using the system for something. So we keep it running. To address this kind of waste, have someone examine the system logs to find out who uses the system. If you have an old system without proper instrumentation, you might need to have your network people look at the network traffic to identify unused or barely used systems. But old systems are only one way we waste money on the systems we are running. The new and much more efficient way to lose money running systems is to move something to the cloud. In the history of cloud computing, there has never once been a case where the cloud resource cost was less than anticipated. More often it is 50%, 100%, or 200% larger than expected. Cloud pricing will charge you for CPU cycles, storage, data traffic, messages passed, API calls, and a million other things. Nobody is able to calculate all the costs, and there are always surprises. Your business case for moving something to the cloud needs to show a dramatic cost saving in order to justify the move. It's bad enough that we lose a lot of money running things. But the money we waste running things pales in comparison with the money we waste building things. If you are still running waterfall projects, you know that they will never build what the customer wants.
Many agile IT organizations don't fare much better, because the business can't be bothered to pay attention to the biweekly demonstrations of new features. Only the few organizations where both IT and the business are agile don't waste money building something nobody needs. I've saved the largest money-waster for last: unnecessarily building your own. IT organizations want to build systems. They don't want to buy systems. The reason is that the business always wants something changed, and if you have all of the code in your own repository, you can change anything you like. If you have a standard system, you will sometimes have to tell the business that something cannot be changed. Because of this, and because we don't calculate the business benefit of IT systems, we default to always building our own. Building your own is correct for systems of differentiation and innovation, where you can do things differently from your competition. But building your own accounting system because you want to accommodate a very specific kind of invoicing in your business will never pay off. You can save some money by finding and retiring under-utilized systems. You can save some more by being very careful about doing lift-and-shift of existing systems to the cloud. You can save more still by getting the business on board with agile. And you can really save money by not building your own. ------ Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
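The log-examination idea above can be sketched in a few lines: count distinct users per system over a recent window, and flag the systems below a threshold as retirement candidates. The log schema and field names here are assumptions for illustration, not any real system's format:

```python
from collections import defaultdict
from datetime import datetime

def underused_systems(log_entries, since, min_users=5):
    """Return systems with fewer than min_users distinct users since 'since'.

    Note: a system with zero recent log entries won't appear here at all,
    so also compare the result against your full systems inventory.
    """
    users_per_system = defaultdict(set)
    for entry in log_entries:
        if entry["timestamp"] >= since:
            users_per_system[entry["system"]].add(entry["user"])
    return sorted(
        system
        for system, users in users_per_system.items()
        if len(users) < min_users
    )
```

A quick usage example: with ten distinct users hitting "erp" and one user hitting "legacy_fax" since January, only "legacy_fax" comes back as a retirement candidate.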
08:51 3/12/21
Moving Fast
In this episode of Beneficial Intelligence, I discuss moving fast. Mark Zuckerberg is famous for saying "Move fast and break things." That was his way of communicating a preference for high speed, accepting high risk. It has become an unofficial motto of Silicon Valley, but Facebook now has billions of users and today has a different risk profile. Elon Musk, on the other hand, still moves fast and breaks things. He is launching SpaceX Starships at a furious pace, and the landings often end in spectacular fireballs. He had one rocket blow up on landing in December, and another in February. This month, he managed to get one to land, only to have it blow up shortly after landing. But he is in a hurry, and he can afford to lose dozens of rockets. As CIO or CTO, you also need to move fast. Speed is what the business most wants from IT, and what we are least able to deliver. If we don't deliver speed, the business will run crucial business processes in faulty spreadsheets, or swipe a credit card in an impulsive purchase of some cloud service. You want to know what the worst thing that could happen is. If nobody will get hurt, and you can handle the financial and reputational effects of failure, that risk is below your speed limit. And you need to move as close to your speed limit as you can. ------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
05:55 3/5/21
User Experience Disasters
In this episode of Beneficial Intelligence, I discuss user experience disasters. Danes consistently rank among the happiest people in the world, but I can tell you for sure that it is not the public sector IT we use that makes us happy. We have a very expensive welfare state financed with very high taxes, but all that money does not buy us good user experience. Good user experience (UX) is not expensive, but it does require that you can put yourself in the user's place and that you talk to users. That is a separate IT specialty, and many teams try to do without it. It doesn't end well. Systems with bad UX do not deliver the expected business value, and sometimes are not used at all. A system that is functionally OK but that the users can't or won't use is known as a user experience disaster. We have a web application for booking coronavirus tests here in Denmark. First you choose a site, then you choose a date, and then you are told there are no times available at that site on that date. If a UX professional had been involved, the site would simply show the first available time at all the testing centers near you. We now also have a coronavirus vaccination booking site. It is just as bad. As CIO or CTO, some of the systems you are responsible for offer the users a bad experience. To find these, look at usage statistics. If you are not gathering usage statistics, you need to start doing so. If systems are under-utilized, the cause is most often a UX issue. Sometimes it is easy to fix. Sometimes it is hard to fix. But IT systems that are not used provide zero business value. ------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at
07:20 2/26/21
Contingency Plans
In this episode of Beneficial Intelligence, I discuss contingency plans. Texas was not prepared for the cold, and millions lost power. Amid furious finger-pointing, it turns out that none of the recommendations from the report after the last power outage had been implemented, and suggestions from the report after the outage in 1989 were not implemented either. As millions of Texans turned up the heat in their uninsulated homes, demand surged. At the same time, wind turbines froze. Then the natural gas wells and pipelines froze. Then the rivers from which the nuclear power plants take cooling water froze. And finally the generators at the coal-fired plants froze. They could burn coal, but not generate electricity. You can build wind turbines that will run in the cold, and you can winterize other equipment with insulation and special winter-capable lubricants. But that is more expensive, and Texas decided to save that money. The problem could have been solved if Texas could get energy from its neighbors, but it can't. The US power grid is divided into three parts: Eastern, Western, and Texas. Texas decided to go it alone, but apparently also decided to ignore the risk. In all systems, including your IT systems, you can handle risks in two ways: you can reduce the probability of the event occurring, or you can reduce the impact when it occurs. For IT systems, we reduce the probability with redundancy. We have multiple power supplies, multiple internet connections, multiple servers, replicated databases, and mirrored disk drives. But we run into Texas-style problems when we believe the claims of vendors that their ingenious solutions have completely eliminated the risk. That leads to complacency, where we do not create contingency plans for what to do if the event does happen. Texas did not reduce the probability, and was not prepared for the impact. Don't be like Texas.
------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs and other IT leaders. To get in touch, please contact me at 
09:22 2/19/21
Risk and Reward
In this episode of Beneficial Intelligence, I discuss risks and rewards. Humans are a successful species because we are good at calculating risks and rewards. Similarly, organizations are successful if they are good at calculating the risks they face and the rewards they can gain. Different people have different risk profiles, and companies also have different appetites for risk. Industries like aerospace and pharmaceuticals face large consequences if something goes wrong and have a low risk tolerance. Hedge funds, on the other hand, take big risks to reap large rewards. It is easy to create incentives for building things fast and cheap, but it is harder to create incentives that reward quality. Most organizations don't bother with quality incentives and try to ensure quality through QA processes instead. As Boeing found out, even a strong safety culture does not protect against misaligned incentives. As an IT leader at any level, it is your job to consider the impact of your incentive structure. If you can figure out a way to incentivize user friendliness, robustness, and other quality metrics, you can create a successful IT organization. If you depend on QA processes to counterbalance powerful incentives to ship software, corners will be cut. ------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs and other IT leaders. To get in touch, please contact me at
09:06 2/12/21
Do the Right Thing
In this episode of Beneficial Intelligence, I discuss doing the right thing. Google started out with a motto of "Don't be evil," but that has fallen by the wayside. Occasionally, employees remind Google of the old motto, as when they forced Google to stop working on AI for the Pentagon. But they don't seem terribly committed, and their highly touted Ethical AI team is falling apart after they fired its head researcher. Amazon never promised not to be evil, and they are forcing their delivery drivers to do 10-hour graveyard shifts starting before sunrise and going until midday. They are trying to avoid tired drivers causing accidents by installing cameras and AI in the vans, so the computer can detect when a driver is falling asleep behind the wheel and wake him up. Consulting giant McKinsey doesn't consider itself evil either. They are just good at increasing profits for companies. While they claim no wrongdoing, they just settled a lawsuit, paying $600 million for the advice they gave Purdue Pharma about aggressively encouraging doctors to over-prescribe opioids. As a CIO, you're engaged in a war for talent. But you also need to meet your budget, implement hot new technologies like AI, and maintain IT security. There is always an opportunity to cut a corner, roll out inadequately tested technology, or squeeze employees so you can hit your goals this quarter. But if you want to be able to attract and keep top IT talent, you need to do the right thing. ------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs and other IT leaders. To get in touch, please contact me at
07:34 2/5/21
Amateurs and Professionals
In this episode of Beneficial Intelligence, I discuss amateurs and professionals. Recently, GameStop shares have gone through the roof. That's because professionals were betting that the stock would fall, and amateur investors meeting on the internet decided to buy up all the stock they could. The amateurs seem to have won this battle, inflicting billions of dollars of losses on the professionals. Amateurs also build IT systems, but in IT, the amateurs always lose. At JP Morgan, a trader built a model in Excel. He made a small error in his formula, made bad trades, and the bank lost $6 billion. In the UK health service, a coronavirus tracking system was built on an ancient version of Microsoft Excel, and people got ill and died because of bad contact tracing. The amateurs build systems because they can do it faster than IT. They are faster because they do less testing, less error handling, and fewer of the other things professionals implement. As the CIO, it is your job to establish a good collaboration between the IT amateurs in the business and the IT professionals in the IT department. Some systems you can safely let business users build themselves. Other systems need a bit of IT supervision to make sure they are tested and robust. And some systems are critical and should be left to the professionals. ------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs and other IT leaders. To get in touch, please contact me at
07:09 1/29/21
Robustness
In this episode of Beneficial Intelligence, I discuss robustness. Robustness is a system's ability to keep running when parts of it are knocked out. This week we saw Parler, a social media platform used by Trump supporters, being taken out by their cloud provider, Amazon. They did not have robustness. The Pirate Bay, on the other hand, has robustness. Governments and big media companies have tried to put them out of business for two decades, and they are still up and running. On-premises systems had a certain robustness built in: even if you got into an argument with your database vendor over licensing, your application would continue to run. But if you get into a dispute with a cloud vendor, your system can be terminated at any time. As the CIO, you need to take a look at your systems list and determine which systems are the priority systems essential for your business to run. Then ask your architects to verify that these systems are robust, and ask what it would take to move them to another platform. If you cannot easily take your application and your data elsewhere, you don't have robustness. ------Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs and other IT leaders. To get in touch, please contact me at
07:29 1/15/21