Strategic Thinking & Strategic Action
Fostering strategic thinking and strategic action by organizational leaders

Despite our growing ignorance, we must decide

Note: This post on our inability to access the information we need to make big decisions is part of a chapter, "The madness of not knowing," in my upcoming book, Big Decisions: Why we make decisions that matter so poorly.

Lee Crumbaugh

My mother encouraged me to read and learn. But she cautioned me, “The more you learn the less you will know.” That was her way of saying that learning opens whole domains about which we were previously ignorant and shows us how much more there is to learn.

That thought leads to the necessary understanding that our decision making is dependent on knowledge we do not have and can never completely know.

An estimate by Google illustrates the impossible challenge of knowing everything: In 2010 Google estimated that 129,864,880 different books had been published since Gutenberg invented the printing press in 1440. An updated estimate suggests the number is even higher: 134,021,533 unique book titles as of 2015.[i]

356 millennia to read

One wag estimates that it would take a person 356,164.39 years to read every book ever written, not accounting for new books published after he or she started reading. The estimate is based on reading 12 hours a day and the average book taking 12 hours to read.[ii]
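The arithmetic behind that figure is easy to reproduce. A quick sketch in Python, using the post’s round numbers (about 130 million books, 12 hours per book, 12 reading hours a day), so treat it as an order-of-magnitude check:

```python
# Back-of-the-envelope check of the "356 millennia" estimate.
# Inputs are the post's round figures: ~130 million books,
# 12 hours per book, reading 12 hours a day.
BOOKS = 130_000_000
HOURS_PER_BOOK = 12
READING_HOURS_PER_DAY = 12

books_per_day = READING_HOURS_PER_DAY / HOURS_PER_BOOK  # one book a day
years_to_read_all = BOOKS / books_per_day / 365

print(f"{years_to_read_all:,.0f} years")  # -> 356,164 years
```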

The share of what we don’t know is relentlessly increasing, as evidenced by a UNESCO estimate that 2,200,000 new book titles are now being published annually worldwide.[iii]

How can we be expected to keep up with this torrent when, on average, Americans only read 12 books a year and when more than a quarter of Americans don’t read any books at all?[iv]

Internet salvation?

But what about the internet? Is its expanding usage a positive in accessing information?

Surely it is. But this access method is not used by everyone. Worldwide, only about half of the population has internet access.[v][vi]

Chart: Internet users per 100 inhabitants. Source: International Telecommunication Union.

Those who do use the Internet are busily searching for information. Daily web searches on Google alone total 3.5 billion, which equates to 1.2 trillion Google searches per year worldwide.[vii]

And we are spending more and more time on our devices, around six hours a day in the U.S. and even more in some countries, much of it seeking and consuming information.

Chart source: World Bank.

Too deep to know

Yet, our web searches are barely scratching the surface of what’s posted on websites. The indexed World Wide Web contains at least 4.52 billion pages.[viii] This estimate does not include non-indexed pages and pages on the deep web, which search engines will not pick up. There could be as many as 180 quadrillion web pages on the internet if both indexed and non-indexed, that is, surface and deep web pages, are counted.[ix]

If you think reading every book would be impossible (remember the estimate that it would take 356,164 years reading 12 hours a day?), the task of reading every web page would be nearly 70 times more daunting. It would take you 23.8 million years, without any time for rest![x] (Of course, as already noted, we individually can’t even find or access the great bulk of the web.)

The impossibility of knowing all that we need to know for our big decisions is not just a matter of our ignorance of the existing body of human knowledge. What humanity collectively knows is also expanding exponentially.

The explosive growth in scientific knowledge demonstrates the unbounded nature of knowledge and our ignorance, how little we really know. Columbia University biologist Stuart Firestein observed, “We should remember that when a sphere becomes bigger, the surface area grows. Thus, as the sphere of scientific knowledge increases, so does the surface area of the unknown.”[xi]

Explosive, accelerating growth

One indicator of the growth of knowledge is offered by the explosive growth in scientific papers and citations in these papers. A study of the number of papers cited between 1908 and 2012 and their age suggests that global scientific output is increasing 8% to 9% annually and, thus, scientific knowledge is doubling every nine years.[xii] And this growth has accelerated: The study looked at the rate at which science has grown in terms of number of publications and cited references since the mid-1600s. The researchers “identified three growth phases in the development of science, which each led to growth rates tripling in comparison with the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars and 8 to 9% to 2012.”[xiii]
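The nine-year doubling follows directly from the 8 to 9% growth rate via the standard compound-growth rule, doubling time = ln 2 / ln(1 + r). A quick check:

```python
import math

# Doubling time implied by a steady annual growth rate r:
#   T = ln(2) / ln(1 + r)
# At the study's 8-9% growth, knowledge doubles roughly every 8-9 years.
for r in (0.08, 0.09):
    t_double = math.log(2) / math.log(1 + r)
    print(f"{r:.0%} growth -> doubling every {t_double:.1f} years")
```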

That means someone who graduated from high school 18 years ago, two doubling periods later, now faces four times the scientific knowledge that his or her high school classes could potentially have considered.

The increase in scientific knowledge is mirrored by – or better said, enabled by – the relentless increase in data processing and storage capacity.

Gordon Moore, Intel’s co-founder, observed in 1965 that the power of computers – specifically, the number of transistors that could be packed onto a computer chip – roughly doubled on a regular cadence, a pace he later pegged at every two years. This observation has stood the test of time, as the 40-year chart of transistor count on chips shows.[xiv]

For example, Intel has been able to roughly double chip density with each new process node and can now pack more than 100 million transistors in each square millimeter of chip (10 nm chip density).[xv] One observer notes, “from the introduction of the 22 nm node in late 2011 to the ramp-up of Intel's 10 nm in 2018 we have observed close to 7x density increase over the span of 7 years.”[xvi]
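As a sanity check, the 7x-in-7-years figure in the quote above works out to an annual density growth factor of about 1.32, i.e., a doubling roughly every two and a half years rather than every year:

```python
import math

# Annual growth factor implied by "7x density over 7 years":
annual = 7 ** (1 / 7)                              # ~1.32x per year
doubling_years = math.log(2) / math.log(annual)    # ~2.5 years
print(f"annual factor ~{annual:.2f}, doubling every ~{doubling_years:.1f} years")
```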

This mushrooming computing capability is enabling us to create and store data at an exponential rate.

10 times more data

By 2025 the world will be creating ten times as much data as it does now: 163 zettabytes of data a year, according to research firm IDC, up from the current rate of 16.3ZB a year. (One zettabyte is one trillion gigabytes!) But even with increased storage capacity, IDC predicts that only a small fraction of the total amount of data generated between now and 2025, about 19ZB, will be stored.[xvii]
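The quoted IDC figures are easy to sanity-check (numbers as given in the post):

```python
# One zettabyte = one trillion gigabytes.
ZB_IN_GB = 1_000_000_000_000
current_rate = 16.3   # ZB created per year now (IDC, as quoted)
rate_2025 = 163       # ZB per year projected for 2025

print(round(rate_2025 / current_rate))  # -> 10 (a tenfold increase)
print(rate_2025 * ZB_IN_GB)             # gigabytes per year in 2025
```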

This tidal wave of data and knowledge that we can never fully know or access means that we as decision makers must make our big decisions without possessing all the existing knowledge that could make those decisions better. “Unknowability” is an unavoidable characteristic of real-world decision-making.

[v] "Individuals using the Internet 2005 to 2014", Key ICT indicators for developed and developing countries and the world (totals and penetration rates), International Telecommunication Union (ITU). https://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2017.pdf

Can “Big Data” deliver “the right decision”?

Note: This post on the limitations of using algorithms to make big decisions is part of a chapter, "The madness of not knowing," in my upcoming book, Big Decisions: Why we make decisions that matter so poorly.

Lee Crumbaugh

One idea for maximizing the gain we get from decisions is to use machines to help us make them or even have machines make the big decisions for us.

But can an algorithm be perfected to always yield “the right decision”?

An algorithm is a process or set of rules used in calculations or problem-solving.[i] “Artificial intelligence” (AI) algorithms which process “Big Data” use logic rules and mathematics to solve problems and produce answers.

These algorithms engage in “machine learning” or “deep learning.” Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example or training data and a desired output.[ii]
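A minimal sketch of that idea, with an invented toy dataset (the feature, counts of exclamation marks in a message, and the labels are hypothetical): instead of a programmer hard-coding a spam cutoff, the “training” step searches for the cutoff that best fits the labeled examples.

```python
# Toy illustration of learning a rule from training data instead of
# hand-coding it. Data and feature are invented for illustration:
# (exclamation marks in message, label).
examples = [(0, "ham"), (1, "ham"), (2, "ham"),
            (5, "spam"), (7, "spam"), (9, "spam")]

def accuracy(cutoff):
    """Fraction of examples the rule 'spam if count >= cutoff' gets right."""
    return sum((x >= cutoff) == (label == "spam")
               for x, label in examples) / len(examples)

# "Training": generate the rule (the cutoff) from the examples.
best_cutoff = max(range(10), key=accuracy)
print(best_cutoff, accuracy(best_cutoff))  # -> 3 1.0
```

Real deep-learning systems adjust millions of weights rather than one cutoff, but the principle is the same: the rule comes from the data, not from the programmer.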

Algorithmic limitations

In 2016, Pew Research Center and Elon University surveyed 1,302 technology experts, scholars, corporate practitioners and government leaders for their views on the impact of algorithms over the next decade. On net, the respondents were about equally divided on whether the effect of algorithms, big data and artificial intelligence would be positive or negative.

What’s most relevant for the potential of using algorithms to make, or to help us make, our big decisions is that, at least at the level of current applications, the output of machine-learning algorithms is shaped by fallible human design choices and by the data with which they are trained.

Such “deep learning” AI algorithms only work well where the problem domain is well-understood and training data is available. They require a stable environment where future patterns are similar to past ones. Their decisions are only as unbiased as the data with which they were trained, and, because of the grounding in past data, “this supposedly disruptive technology cannot cope well with disruptive change.”[iv]

To illustrate, Justin Reich, executive director at the MIT Teaching Systems Lab, responded in the Pew survey that algorithms will inevitably benefit the people who design them — namely, educated white and Asian men.

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied to the survey, “The goal of algorithms is to fit some of our preferences, but not necessarily all of them: They essentially present a caricature of our tastes and preferences. My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.”

“Facebook’s struggle with fake news demonstrates that algorithms don’t always have the discernment a human would,” one observer notes.

By their very nature, machine-learning algorithms are effectively programming themselves. They learn through approximation and through what they get wrong, which means that using these algorithms requires us to accept the possibility of errors.[v]

And, at this juncture, algorithms have made some whopper errors.[vi][vii]

YouTube’s algorithm placed advertisements from some of the biggest global brands on videos with hate speech.

Facebook’s algorithm posted violent videos in its users’ feeds.

Google’s algorithm directed people looking for information about the Holocaust to neo-Nazi websites.

Microsoft’s Tay algorithm was designed to learn to speak like millennials by interacting with people on Twitter and messaging apps — in less than a day it sent out such abhorrent misogynistic and racist messages that it had to be taken down.

Amazon’s algorithm for determining where it would roll out same-day delivery excluded poor urban ZIP codes.[viii]

COMPAS, a proprietary risk-assessment algorithm used to decide on the freedom or incarceration of defendants in the US criminal justice system, was alleged by ProPublica to be systematically biased against African Americans as compared to whites.[ix]

For a United Airlines flight from Chicago that was not overbooked, a cascade of algorithms produced a fiasco: a corporate scheduling algorithm gave a deadheading flight crew priority over passengers; a corporate financial algorithm authorized gate employees to offer passengers up to $800 to take a later flight, which drew no volunteers; and a customer value algorithm calculated the value of each passenger to United and flagged the lowest-value customers for removal from the flight. One of the passengers asked to leave the plane had to be forcibly removed, creating a nationally covered incident and bad press for United.[x]

But let’s not just dismiss the current level of effectiveness of algorithms. Behavioral scientist Jason Collins observes that algorithms are showing their value beyond what humans can quickly and fairly achieve in “domains that involve regular decisions in a largely constant environment about which we are able to gather data.” These domains include – or will soon include – routine medical diagnosis, predictive policing, games (e.g. chess and Go), risk score analysis and even self-driving cars. It’s “in complex, dynamic, and uncertain domains” where algorithms may not be trustworthy.[xi]

Yet, this is the state of AI at the present time. Given more data, more computing power and more machine learning, can we expect that at some point we can safely outsource our high-level decision-making to algorithms? Can the future Siri or Alexa or Watson decide for us?

Two great 20th-century mathematicians show us that “Big Data” and AI will never provide the full answer to our need for the perfect decision.

Complete, consistent and decidable?

In 1900, German mathematician David Hilbert set out a series of problems for mathematicians to solve. Among his 23 problems, he asked whether there was a set of “basic truths” (axioms) from which all the statements in mathematics could be proven, without giving any contradictory answers (such as 2+2=5). He also asked if there was an algorithm that could determine if a statement was true or false, even if no proof or disproof was known. In other words, Hilbert was asking whether mathematics was “complete,” “consistent” and “decidable.”[xii]

In 1931, Austrian mathematician and logician Kurt Gödel proved that within a formal system there exist questions that are neither provable nor disprovable on the basis of the axioms that define the system. That is, there are true statements that are unprovable within the system: more is true in mathematics than can be proven.[xiii][xiv] This is known as Gödel’s First Incompleteness Theorem.[xv]

He also showed that a sufficiently rich formal system in which all questions are required to be decidable will contain contradictory statements; in particular, such a system’s consistency cannot be proven within the system itself.[xvi] Paradoxically, the only way to rid the system of incompleteness appeared to be to select rules that contradict one another.[xvii] This is known as Gödel’s Second Incompleteness Theorem.[xviii] In essence, “the theorem proved, using mathematics, that mathematics could not prove all of mathematics.”[xix]

Cognitive scientist Douglas Hofstadter gives a non-mathematical example to help us see what Gödel’s incompleteness theorems show. He asks us to ponder how we can figure out whether we are sane. “Once you begin to question your own sanity, you get trapped in an ever-tighter vortex of self-fulfilling prophecies, though the process is by no means inevitable. Everyone knows that the insane interpret the world via their own peculiarly consistent logic; how can you tell if your own logic is ‘peculiar’ or not, given that you have only your own logic to judge itself? I don’t see any answer.”[xx]

Are algorithms the answer?

The essence of Hilbert’s “decidable” question (the Entscheidungsproblem in German) was whether an algorithm could be created to decide in a finite number of steps if any given mathematical statement was true or not.

Brilliant young British mathematician Alan Turing took on the “decidable” question.

You may know about Turing from the movie “The Imitation Game.” He left a stunning list of achievements in mathematics, computing, cryptology and even biology. During World War II, he was instrumental in cracking German messages encrypted by the Enigma machine, enabling the British to anticipate Germany's actions, ultimately helping the Allies win the war.[xxi] He also developed ideas in non-linear biological theory, which paved the way for chaos and complexity theories.[xxii]

In 1936, Turing developed the idea of an idealized computer, a hypothetical machine now called a “Turing machine.” He imagined it reading an endless tape imprinted with symbols, one at a time, then either rewriting or erasing the symbol and shifting the tape to the left or right, based on a pre-determined set of rules.[xxiii]

A Turing machine is essentially an algorithm. If it solves the problem, it stops and gives the answer. If it doesn’t solve the problem, it keeps trying forever.[xxiv]
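To make this concrete, here is a toy Turing-machine simulator (a sketch, not Turing’s original formalism): the rules map a (state, symbol) pair to what to write, which way to move, and the next state. The example machine flips every bit on the tape and halts at the first blank.

```python
# Toy Turing machine: rules map (state, symbol) -> (write, move, next state).
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", " "): (" ", 0, "halt"),   # blank cell: stop
}

def run(tape, state="flip", head=0):
    tape = list(tape) + [" "]          # pad with one blank cell
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip()

print(run("1011"))  # -> 0100
```

This particular machine always halts; the hard question, as the halting problem shows, is whether one can decide in advance if an arbitrary machine will.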

Will it halt?

Turing used his hypothetical computing machine to attack the so-called “halting problem”: is there a way to determine whether a given algorithm will eventually halt or run forever?[xxv] He proved the impossibility of devising a Turing Machine program that can determine infallibly (and within a finite time) whether or not a given Turing Machine will eventually halt given some arbitrary input.[xxvi] He proved that a general algorithm to solve this problem, for all possible program and input pairs, cannot exist, thereby answering the “decidable” question in the negative.[xxvii]
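Turing’s argument can be sketched in code. The functions below are hypothetical stand-ins, not a real halting checker (none can exist, which is the point): given any candidate checker, we can construct a program that does the opposite of whatever the checker predicts about it, so every candidate is wrong about something.

```python
# Sketch of Turing's diagonal argument. `claimed_halts` stands in for
# any candidate halting-checker; no correct one can exist.
def make_counterexample(claimed_halts):
    """Build a program the candidate checker must misjudge."""
    def g():
        if claimed_halts(g):   # if the checker says g halts...
            while True:        # ...g defies it by looping forever;
                pass
        # ...otherwise g defies it by halting immediately.
    return g

def checker_is_wrong_about(claimed_halts):
    g = make_counterexample(claimed_halts)
    verdict = claimed_halts(g)      # what the checker predicts
    actually_halts = not verdict    # what g does, by construction
    return verdict != actually_halts

# Any fixed verdict is defeated (note we never need to run g itself):
print(checker_is_wrong_about(lambda program: True))   # -> True
print(checker_is_wrong_about(lambda program: False))  # -> True
```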

Turing showed that once you stray beyond the most elementary areas of mathematics, it’s simply not possible to design a finite computing machine capable of deciding whether formulae are provable. It is definitely not the case that all well-defined mathematical tasks can be done by computer, not even in principle: “there exist problems that no decision process could answer.” Some tasks just can’t be performed by computing machines, no matter how good the programmers or how powerful the hardware.[xxviii][xxix]

Turing proved “that there were questions that were beyond the power of algorithms to answer.” His triumph was spectacular and devastating to those who believed (as Hilbert did) that all problems could be solved.[xxx]

Intelligence, maybe, but never certainty

So, to answer the question we started with, no, we cannot reliably leave or ever expect to leave our big decisions to “Big Data” and artificial intelligence. Certainly, we can use this technology to aid our decision making, but, summarizing:

The results from “deep learning” algorithms can be wrong or biased because they are produced using fallible human designs, rules and data choices.

The effectiveness of AI algorithms depends on understanding the problem, data availability and unbiased data.

AI algorithms give more reliable answers in an environment where future patterns are similar to past ones. They don’t cope well with disruptive change.

Algorithms fit some of our preferences, but not necessarily all of them.

AI algorithms don’t necessarily have the discernment that a human would have.

Because “deep learning” AI algorithms learn through their mistakes, we need to accept the possibility of errors.

Gödel’s incompleteness theorems show us that questions exist that an AI decision-making system cannot answer.

[xxi] Turing and colleagues, in essence, guessed the meaning of a stretch of letters in an Enigma message, used Bayesian inference to measure their belief in the validity of their guess, and then updated the probabilities that their guesses were correct as clues in more messages emerged. The importance of Bayesian methods as a tool in decision making will emerge in later chapters of this book.

The worst accident: “We’re going!”

Lee Crumbaugh

A tragic story about the consequences of bad decisions

The Captain of KLM Flight 4805 had to be irritated. His plane, carrying 235 passengers and 14 crew members, had taken off from Amsterdam’s Schiphol Airport at 0900 hours (local time) en route to Gando Airport in Las Palmas, Canary Islands (part of Spain).[1] But six hours later, as the plane neared Las Palmas, the pilot, Jacob Van Zanten, was given bad news by the Gando Airport air traffic controller.[2]

Most of the passengers on the KLM Boeing 747, named “Rhine” by the airline and flying as a charter for Holland International Travel Group, were on their way to meet cruise ships.[3] But then a bomb placed by separatists exploded in the flower shop in the passenger terminal building.[4] Due to the threat of a second explosion, the terminal building was evacuated and the airport closed. Planes scheduled to land at Las Palmas Airport were diverted to Los Rodeos Airport on Tenerife Island, half an hour away.[5] It was the closest alternative that could handle a 747.[6]

Let’s observe here that the conjunction of a terrorist bombing and the pending landing of the KLM flight was the first unforeseen occurrence in the story we are unfolding.

The problems of diversion

Captain Van Zanten’s irritation was likely fed by the knowledge that this diversion could prove to be a significant problem. He and his KLM crew were scheduled to return to Amsterdam later that day; a long delay at Tenerife could put them past their duty time limit, the time at which they are required to get 12 hours of rest before they can go back on duty.[7] If that were the case, the passengers would not get to their destination that day nor would the crew get home. All would have to be accommodated on Tenerife, if overnight accommodations could even be found, at sizeable expense to KLM. And the 747 would not be available to fly other routes.[8]

One might think that a pilot would not be so sensitive to the expense of a diversion, but Captain Van Zanten was no ordinary pilot. He was the top pilot in KLM’s management. He was the head of safety and KLM’s chief flight instructor, with 11,700 flight hours, of which 1,545 hours were on the 747.[9] Van Zanten was an individual whom everyone at that airline looked up to. He was the public symbol for KLM pilots: his face was on KLM’s advertising around the world. Indeed, KLM’s inflight magazine that month featured him in an ad headlined, “KLM. From the people who make punctuality possible.”[10] Van Zanten spent most of his time training other pilots, including the co-pilot who was in the next seat, Klaas Meurs. In fact, Van Zanten issued Meurs his 747 flight certification.[11]

KLM magazine ad featuring Captain Jacob Van Zanten.

Our second observation about this story is that it involves one of the world’s best pilots and a delay, the potential negative consequences of which he may not be able to avoid despite his flight experience and senior stature at the airline.

In the meantime, Pan American Flight 1736, flying with a fresh crew of 18 and 378 passengers from New York’s JFK Airport to Las Palmas, also got the bad news.[12]

FIRST OFFICER ROBERT BRAGG: “Gando, Pan Am one seven three six. Good afternoon.”

GANDO AIRPORT AIR TRAFFIC CONTROLLER: “Pan Am one seven three six, Gando. The airport is closed.”[13]

Pan Am Boeing 747 “Clipper Victor.”

Pan Am called its airplanes “Clippers” and named this plane “Clipper Victor.” It also was a Boeing 747, but no ordinary one: It was the first 747 in service. It made the first commercial 747 passenger flight, New York to London, in January 1970. In another quirk of history, later that year the plane was hijacked to Cuba.[14]

Now, seven years later, on March 27, 1977, Captain Victor Grubbs and First Officer Bragg had to unexpectedly divert their plane to Tenerife. Not surprisingly, the already weary crew and passengers, who had been traveling for 12 hours since take-off in New York, were not happy about the change.[15]

Mt. Teide, Tenerife, Canary Islands.

The new destination airport for both 747s was situated next to Spain’s highest mountain, Mt. Teide, and was prone to fog and mist that often rolled in at short notice.[16][17] On this day fog hung around and about the airport’s one runway, the major taxiway parallel to it and the four short taxiways connecting the two.[18]

“A traffic jam”

Normally sleepy, Los Rodeos on this day was packed with diverted flights.[19] After the KLM 747 landed, its passengers were deplaned to wait out the delay. The Pan Am 747 landed 30 minutes later and was parked next to the KLM 747. First Officer Bragg later reported, “The ground situation was a traffic jam, because, when we landed there, the ramp was so crowded with other airplanes, and we were directed to taxi down to the end of the ramp area and park behind three other airplanes,” including the KLM 747.[20] The Pan Am passengers could not be deplaned because the terminal building was overflowing with passengers from the other waiting planes.

In the meantime, the KLM crew worried that the limit on their hours on duty in the cockpit could make it impossible to fly to Las Palmas and then complete the return flight to Amsterdam that same day. There would be no possibility of flying after 8 pm: The Dutch aviation authority had introduced stricter flight time limits, a violation of which could result in a fine, loss of license, or even prison.[21]

While the KLM 747 had enough fuel on board to reach Las Palmas airport, Captain Van Zanten decided to take on 55 more tons of fuel while waiting at Los Rodeos, to enable an immediate return from Las Palmas after unloading passengers rather than waiting to refuel there.[22] The crew and KLM operations had calculated that the fully fueled KLM 4805 could make the return trip to Amsterdam just within their flight operations limit.[23] But the tradeoff was that the extra fuel weight increased both the speed needed to lift the 747 off the runway and the takeoff distance.[24]

Old control tower at Los Rodeos Airport on Tenerife. Photo by Aisano, licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The two controllers on duty had been listening to a soccer game on the radio while 11 aircraft waited on the ground.[25] At around four o’clock, Las Palmas reopened and began accepting traffic. The controllers discussed how to maneuver the planes for takeoff.[26] Given the airport congestion and because the part of the main taxiway near the terminal was too narrow for a 747, they decided that the first 747 would taxi onto the runway, taxi to the opposite end, turn around and take off, a common maneuver called “back taxiing.”[27] The second 747 would taxi on the runway until about midfield, then take a left turn on a short taxiway and then a right turn onto the main parallel taxiway, and take that to the end of the runway. The upshot was that their plan had both aircraft begin taxiing on the runway.

A 747 parked on the apron at Los Rodeos airport.

The Rhine and Clipper Victor sat adjacent to one another, parked at the southeast corner of the apron, their wingtips almost touching.[28] The loaded Pan Am 747 was ready to depart, but its access to the runway was blocked by the KLM 747 and a refueling vehicle. First Officer Bragg remembered, “The engineer and I went out underneath the right wing and, basically, stepped off the distance between our wingtip and the KLM 747's wingtip, and we were 12 feet short of being able to taxi easily around the airplane.”[29]

Schematic showing the KLM 747 blocking the Pan Am 747 on the apron leading to the runway. Diagram by Jussi Pajau, licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The refueling took about 35 minutes. After that, the KLM passengers were brought back to the plane. A search for a missing Dutch family of four delayed the flight even longer. One passenger who lived on Tenerife chose not to reboard for Las Palmas. The Los Rodeos runway coordinator later recalled, “Boarding was very complicated, because, when the passengers were called for their flights, they were scattered throughout the airport, in the cafeteria or buying souvenirs. We had to go round them up.”[30]

So now added to the factors in this saga are an overwhelmed airport, a plane that’s ready to go but blocked from moving, further delay and a 747 that will take off weighted down by full fuel tanks.

Then the weather got worse.

The fog rolled in

During the delay for refueling the KLM 747, low-lying clouds descended and enveloped the airport. Runway visibility that had been 10 kilometers quickly diminished to 3 kilometers.[31] Captain Van Zanten’s impatience over the possibility of the airport closing because of weather was evident: He urged his crew members, “Hurry, or else it will close again completely.”[32]

After the KLM pilots started their plane’s engines, First Officer Klaas Meurs had a confusing conversation with the ground controller, Fernando Azcunaga, whose English was poor.

MEURS: “We require back track on one two for take-off runway three zero.”

CONTROLLER: “OK, four eight zero five…taxi…to the holding position runway three zero taxi into the runway and – ah – leave runway third to your left.”

MEURS: “Roger sir, entering the runway at this time and the first taxiway we, we get off the runway again for the beginning of runway three zero.”

Meanwhile, the KLM First Officer inquired about the runway centerline lights not being lit and was told by the controller that they were out of service.[35] The controller then passed this information on to the Pan Am crew. Captain Grubbs noted in the Pan Am cockpit, “We need 800 meters if you don't have that centerline.” He meant that greater visibility (which other evidence suggests was actually 700 meters) was required for takeoff with the centerline lights off. This situation would require extra attention from both crews to assure that their aircraft were aligned properly on the runway.[36]

In a matter of minutes, the KLM 747 had reached the end of the runway and was ready to execute the tricky 180-degree turn that would line it up for takeoff going back down the runway.[37]

Exit confusion

At this time the Pan Am 747 was out on the runway, taxiing slowly, following the KLM 747. First Officer Bragg recollects, “We saw the fog bank come off of the right hill, and proceed down and stop right on the runway. So our visibility went from unlimited to 500 meters. We lost sight of the KLM airplane.”[38] The pressure on the crew to get their 747 off the runway increased. Like the confusion earlier involving KLM 4805’s instructions, similar confusion arose regarding Pan Am 1736’s instructions for exiting the runway.

BRAGG: “Ah – we were instructed to contact you and also to taxi down the runway, is that correct?”

CONTROLLER: “Affirmative, taxi into the runway and – uh – leave the runway third, third to your left.”

Then after more cockpit conversation about the proper taxiway, the controller instructed the crew to report when they were leaving the runway. The plane continued to taxi. Because of the fog the crew was confused about which taxiway was the third one. Some of the confusion arose because the third exit, Charlie 3, required making a nearly impossible sharp turn that would point the plane in the wrong direction down the taxiway.[41] Using the next exit, Charlie 4, with a 45-degree angle, seemed to make more sense. Bragg later explained, “We couldn't see any taxiways. We couldn't see...barely, the centerline of the runway we were taxiing on, but we knew that the 45-degree angle to the left was the taxiway to take.”[42]

Continuing to the next taxiway would not normally be a problem, but this kept Clipper Victor on the runway for several more seconds.

Back in the control tower, the controllers were understandably tense because they could not see either 747 on the runway and needed to be sure of their precise locations.

Could this situation get even dicier? We have added to the story more confusion in plane-controller communications, no centerline lights, even worse visibility and a crew who could not find its runway exit.

At this time, the KLM 747 made its 180-degree turn. It was just after 5 p.m. The two 747s were now face-to-face, half a mile apart, unable to see each other in the fog.

Cleared for takeoff…or not?

With the plunging visibility, KLM Captain Van Zanten had to have been more worried than ever that the airport would close and that the crew and passengers would have to spend the night on Tenerife. His irritability had been noted by the controllers and other pilots. But suddenly the fog lessened. The captain remarked that visibility had improved to the 700 meters required for take-off. First Officer Meurs interjected that they did not yet have air traffic control (ATC) clearance for take-off. Van Zanten asked Meurs to get clearance.[44]

MEURS: “Uh, the KLM…four eight zero five is now ready for take off…uh and we’re waiting for our ATC clearance.”

The controller responded not with take-off clearance, but with routing clearance that had not yet been relayed to the plane because of the complex situation that the controllers and the crews were dealing with.

CONTROLLER: “KLM eight seven zero five uh you are cleared to the Papa Beacon climb to and maintain flight level nine zero right turn after take-off proceed with heading zero four zero until intercepting the three two five radial from Las Palmas VOR.”[45]

Meurs repeated the routing clearance instructions back to the controller, and then somewhat hesitantly added over the noise of accelerating engines a phrase not normally used in aircraft operations: “…and we are now, uh, at takeoff.” The KLM 747 crew appeared to have thought the route clearance also included takeoff clearance.[46]

Except that all that was heard in the KLM cockpit was “….K” followed by a shrill 3.74-second noise: radio interference, because at the very same time the Pan Am first officer was radioing the tower, saying “And we are still taxiing down the runway, the Clipper one seven three six…” The Pan Am transmission was also blocked by the interference, and thus the KLM crew did not hear that Pan Am 1736 was still taxiing directly ahead of them.[48]

The tower was concerned about the location of the Pan Am 747.

CONTROLLER: “Roger alpha one seven three six report the runway clear.”

We now see our story coming to a head with the KLM captain determined to get into the air, engines revving up, confusion about take-off clearance and the other 747 still on the runway.

Tragically, Rhine had already begun its take-off run down the runway, heading straight for Clipper Victor.

The Pan Am 747 was set to turn off the runway at taxiway 4 as the KLM rolled toward it. Diagram by Xerkes2K, licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

“We’re going!”

Having interpreted the route clearance as take-off clearance, Captain Van Zanten said to his crew, “We’re going!” First Officer Meurs was concentrating on the takeoff. The "Okay" from the tower had come over the radio with seeming clarity, and with that it seemed reasonable to conclude that they were cleared for take-off.[50]

Nonetheless, KLM Flight Engineer Willem Schreuder sounded concerned.[51] The 747 was gaining speed and entering another of the clouds that had been blowing across the runway. He could see nothing ahead. What did the Pan Am crew mean by their message: "We'll report when clear"? They were already clear, weren't they? How could Captain Van Zanten start the takeoff if not? Schreuder asked the captain, “Is he not clear then?” Captain Van Zanten responded in a clipped manner, “What do you say?” The flight engineer asked again, “Is he not clear, that Pan American?” to which the captain responded emphatically, “Oh, yes.”[52]

The KLM 747 continued to pick up speed. Emerging from the band of fog, it became visible to the Pan Am crew just as they were beginning their turn onto the fourth taxiway.[54]

Diagram of how the aircraft wound up on a collision course.

In the Pan Am cockpit, Captain Grubbs exclaimed, “Damn, that son of a bitch is coming right at us!”

In the KLM cockpit, Captain Van Zanten realized that he was on a collision course. He tried to take off early. There was no hope of stopping Rhine in time to avoid the collision or of swerving off the runway.[55] The only possibility was to fly over the top of the Pan Am 747. The KLM captain yanked the control column as far back as it would go. The plane reared up in the air. “Damn! Come on! Come on! Come on! Come on!” he yelled. In the other plane, First Officer Bragg likewise yelled, “Get off! Get off!”[56]

Graphic of the collision of the two 747s at Los Rodeos Airport. Graphic by SafetyCard, licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

With the weight of its full fuel load, the KLM 747’s tail struck the runway and scraped along 65 feet of concrete until it finally became airborne.[57] But it was still too low. Its main landing gear was ripped away as it sheared off the top of Pan Am 1736’s fuselage.

The KLM 747 returned to the runway at less than 100 knots, skidded and burst into flames. The inferno consumed all 234 passengers and 14 crew members.[58]

Remains of the burnt out KLM 747 Rhine.

Some passengers and crew members were able to jump from the collapsing Pan Am 747’s burning airframe. But 335 were killed in the collision or burned to death.[59]

Of the 644 people aboard the two 747s, 583 were killed and only 61 survived.[60] This was the worst accident in aviation history.

Monument by the Dutch artist Rudi van de Wint erected on Tenerife in memory of the 583 victims of the Los Rodeos Airport disaster. The spiral at the top suggests the spiral stairway up to a 747’s upper deck. Photo by Jesús Manuel Pérez Triana, licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

Why did Tenerife happen?

The massive investigation of and subsequent reports on the Tenerife airport disaster offer deep insight into what happened and what went wrong. As a result, aviation authorities and airlines worldwide changed procedures.

The “what” of the disaster is, in retrospect, obvious: fog, no centerline lights, language and radio transmission problems, airport overcrowding, delays, a fuel-heavy aircraft and more. But these factors alone are insufficient to explain why the KLM 747 collided with the Pan Am 747.

Explaining the “why” of the disaster leads us to our central concern: bad decisions, and how to avoid them when a good decision is imperative for a mission-critical outcome.

The “why” questions

Asking “why” with an understanding that people are not perfectly rational beings raises questions about the human factors that led to the disaster.

At this juncture, just listing the traps and biases that likely enmeshed the actors in the Tenerife disaster gives us a startling sense of the extent to which we are subject to mental traps and biases, most often without any conscious awareness. The many pitfalls that can betray us underscore how important it is to dig deeper to understand why we make bad decisions and to learn what we can do to make better decisions when it really counts. Without such understanding, the risk of disasters, lesser than Tenerife or perhaps even greater, will continue to loom in our personal and organizational lives.

Six categories of mental traps and biases

In research for Big Decisions, I have unearthed hundreds of mental traps and biases that skew decision making and, for better understanding, have catalogued them in six major categories:

Psychological. “Processing problems” – Errors occurring as a result of our cognitive biases and mental shortcuts that can lead to systematic deviations from logic, probability or rational choice.

Perception. “Input problems” – Effects and errors in the organization, identification, and interpretation of sensory information that we use to represent and understand the environment around us.

Memory. “Storage and recall problems” – Errors from the process in which information is encoded, stored and retrieved from our brain.

Logical. “Reasoning problems” – Errors arising from making fallacious arguments that are deductively invalid or inductively weak or that contain an unjustified premise or ignore relevant evidence.

Physiological. “Limbic system problems” – Mental processing and judgment shortfalls caused by physical factors that affect the function of our brain, such as arousal, depression and fatigue.

Social. “Interpersonal problems” – Biases and errors stemming from how we view and interact with the people around us, with causes including social categorization, in-group favoritism, prejudice, discrimination and stereotyping.

How the Tenerife actors were swayed

In the Tenerife disaster, initial analysis suggests that the individuals with a role in the horrible outcome were enmeshed by as many as 68 mental traps and biases, in all categories.

The “why” factors that were likely at work that led to the Tenerife airport disaster included at least 36 psychological biases and traps:

Action-oriented bias

Availability heuristic

Backfire effect

Bad news avoidance

Bald man fallacy

Bayesian conservatism

Cognitive dissonance avoidance

Commitment heuristic

Confirmation bias

Emotion

Epistemic arrogance

Escalation of commitment

Familiarity heuristic

Hyperbolic discounting

Illusion of control

Impulsivity

Isolated problem trap

Loss avoidance

Narrow framing

Normalcy bias

Observer effects

Optimism bias

Overconfidence effect

Power

Primacy effects

Priming effects

Probability neglect

Regret aversion

Restraint bias

Risk blindness

Risk compensation

Selective perception

Semmelweis reflex

Serial position effects

Single-effect trap

Subjective validation

The “why” factors also likely included at least six perception traps and biases:

Change blindness

Contrast effect

Fundamental cognitive error

Inattention blindness

Platonicity error

Salience biases

Three memory traps may also help explain the “why” of the Tenerife disaster:

Conservatism or regressive bias

Illusion of truth effect

Suggestibility

The “why” factors also likely involved at least eight logic traps and biases:

Black Swan blindness

Certainty bias

Conjunction fallacy

Hasty generalization

Irrational escalation

Jumping to conclusions

Narrative fallacy

Rule-based decisions

In addition, very likely at work to explain the “why” of the disaster were four physiological effects:

Decision fatigue

High stress

Sleep deprivation

Stimulated limbic system

Lastly, the “why” of the Tenerife airport disaster likely can also be attributed to at least 11 social traps and biases:

Availability cascade

Bandwagon effect

False consensus effect

Groupthink

Halo effect

Illusion of explanatory depth

Projection bias

Reactance

Reciprocation

Shared information bias

Sunflower management

How we make decisions matters

This telling of the Tenerife disaster story will anchor the introduction of Big Decisions. It serves as an attention-getting account showing why we ought to give great thought to our decision-making process and how to make the big decisions better.

The chapters of Big Decisions that follow will explore the origins of our mental traps, biases and shortcuts (“Smartphones on the Savannah”), our inability to make rational, optimal decisions amid undecidability and unknowability (“The madness of not knowing”), what leads us to irrational decisions (“Like a lost shepherd, we lead ourselves astray”), and how so many specific traps, biases, errors and shortcuts plague our decision making.

Then Big Decisions will reveal decision-making best practices discovered through research, examine what constitutes a big decision and how to recognize when one is needed, and give examples of some really good decisions. Finally, I will offer an evidence-based process built on best practices that, hopefully, we can consciously use when the necessity for a big decision confronts us or our organization.

Throughout Big Decisions, I will tell many more stories about individuals and organizations whose decision making went off the rails and led to detrimental and even ruinous results. These real-world cases teem with examples of the pernicious effects of mental traps, biases, errors and shortcuts on decision making. Subjects for the stories I intend to tell and analyze include:

Bank of America – Countrywide

Bernie Madoff

Brian Cullinan

Brutus

Cuban Missile Crisis

Custer

Donald Trump

FedEx

Firestone

Flight 370

The French Panama Canal

Fukushima

Galileo

Gettysburg

Google

The Great Powers before World War I

Henry Ford

Hillary Clinton

Kodak

Lehman Brothers

Marissa Mayer

NASA

Rob Hall

Samsung

Uber

The U.S. Panama Canal

Volkswagen

Wells Fargo

Yahoo

Throughout Big Decisions, as the exploration of bad decision making and what to do about it proceeds, I will unfold yet another story. Even compared with the Tenerife aircraft disaster, this story scales up the actors and the consequences of bad decisions. You will have to wait for the book to read about and learn from this saga of incredibly bad decision making and ensuing personal and societal damage.

Note: All images are believed to be in the public domain unless otherwise indicated.

Trump trap: Epistemic arrogance

The second in our series on mental traps and biases that lead Donald Trump – and us – astray

From the lens of social science and economics we can easily see that Donald Trump is a poster child for mental traps and biases that get in the way of strategic thinking and rational decision making. While Trump has made millions of dollars, attained celebrity status and won the U.S. presidency either in spite of or because of his decisions and behavior, we can make the case that the mental traps and biases he continues to exhibit are leading to bad outcomes for him and for us. By examining the traps and biases that Trump seems to display, we can see where we, as fellow humans, are likewise affected, albeit most often less dramatically, by these very same mental traps and biases.

THE EVIDENCE FROM TRUMP

Donald Trump displays continuing ignorance in so many domains despite abundant evidence of the truth, as these reports show:

The environment. "He said the Paris climate change accord would deliver the United States to a grim future of shuttered factories, squeezed taxpayers, blackouts and brownouts, and 'vastly diminished economic production.' This extreme dystopian vision doesn’t correlate with the minimal regulations laid out by the agreement."1

Golf and murder. “Trump not only doesn’t know the unknowns but appears to have no interest in even knowing the knowns. Fact-checkers can’t keep up. How often does Obama play golf? Who cares—let’s inflate the number by 50 percent. What’s the murder rate in a major American city? What the hell—let’s multiply it by 10.”2

Legislative success. This past summer Trump stated that "for the time in office, five months and couple of weeks, I think I’ve done more than anyone else," clarifying that he meant "not just executive orders" but bills passed by Congress. By this time in his presidency, Bill Clinton had signed the Family and Medical Leave Act and the motor voter bill. George W. Bush had signed his first big round of tax cuts. Barack Obama did a major economic stimulus bill, the Lilly Ledbetter Act, and a significant SCHIP expansion. LBJ signed a big tax cut.3

The Middle East. "Trump’s geopolitical obliviousness spans the globe. He irritated Israel...when he said during a news conference with the prime minister of Lebanon that Lebanon was 'on the front lines in the fight' against Hezbollah, which is about as wrong as wrong can be. Hezbollah—which Trump’s own State Department brands a terror group—has been part of the Lebanese government for decades, and controls that country’s most powerful military force. The Jerusalem Post sneered: 'Clearly, Trump has a less than satisfactory grasp of geopolitics in Lebanon. And if he does not understand who is against whom in Lebanon, he is probably not too well briefed on what is going on in Syria either.' There were also polite snickers when Trump arrived in Israel from Saudi Arabia and said he had just come from the Middle East. Where on earth does he think Israel is, anyway?"4

Health care. "Health-care policy, Donald Trump has admitted, is more complex than he once assumed. 'Nobody knew that health care could be so complicated,' he said in February as he struggled to cobble together a plan to repeal and replace Obamacare... In an interview in May, shortly after the House passed a bill that would cause an estimated 23 million people to drop or lose their insurance coverage, Trump boasted that he had become an expert on the subject. 'It was just something that wasn’t high on my list,' he told Time magazine. 'But in a short period of time I understood everything there was to know about health care.'”5

Intelligence. "Even before he took office, news broke that Trump was refusing the intelligence briefings meant to get him up to speed. When challenged, he explained he didn’t need them: 'I’m, like, a really smart person.'”6

Auto imports. "President Trump called on Japan to build more cars in the U.S. during his stop in the country as part of his first official tour of Asia as president. 'Try building your cars in the United States instead of shipping them over,' Trump said at an event with Japanese business executives, according to a pool report... In reality, the percentage of Japanese cars sold in the United States that are built in this country has, in the last three decades, gone from 12% to 75%. The average of all American-sold cars is that only 65% of them were assembled here. In fact, Japan exports over 400,000 cars built here to other countries. And the top five most American cars — they must contain 75% American-made parts and be assembled in America — are all made by either Toyota or Honda. So Japan doesn’t have to 'try' to build their cars here that they sell to Americans; they already do that. But Trump doesn’t know that because Trump knows almost nothing about virtually any subject. His claims are almost never informed by the facts, which are completely irrelevant to him."7

And so much more... Trump also makes random factual errors: In one interview "he says Japanese Prime Minister Shinzo Abe’s wife doesn’t speak any English, but she seems to speak English fine. He says Deputy Attorney General Rod Rosenstein is from Baltimore, when he grew up in the Philadelphia suburbs and has lived for years in Bethesda, just outside of DC. He says the FBI director 'reports directly to the president of the United States,' which also isn’t true. He says the Russia investigation is 'not an investigation' (whatever that means) and also that 'it’s not on me' (it is). He says James Comey wrote a letter to him, when he actually wrote a letter to his former colleagues at the FBI. He says 51 Republican senators came to his health care meeting at the White House, when in fact Susan Collins and Rand Paul didn’t attend and John McCain was sick, so the number was 49."8

THE TRAP

Epistemic arrogance (from the Greek “episteme,” knowledge)

“It takes extraordinary wisdom and self-control to accept that many things have a logic we do not understand that is smarter than our own.” – Nassim Nicholas Taleb, The Bed of Procrustes: Philosophical and Practical Aphorisms9

Definition: We think we know more than we really do know. As our learning increases, we grow even more confident in our knowledge and tend to ignore our ignorance and what we still do not know. Nassim Nicholas Taleb labels epistemic arrogance as "our hubris concerning the limits of our knowledge." He explains, "We overestimate what we know and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown)."10

Other researchers label the trap "delusional self-assurance" and “excessive certainty, the tendency we have to believe that our knowledge is more certain than it really is.”11,12

How it works: Humans want to be in control. Uncertainty fuels anxiety, fear and a sense of being out of control. We fool ourselves by wanting to believe that our knowledge is all the knowledge worth having. Through our epistemic arrogance, "to support an illusion of control, we diminish the possibilities and consequences of 'randomness.'" We gain greater confidence that we are in control because we think we can better explain past events and better predict future events. But research shows that while what we know may comfort us, it does not increase to the same degree our capability to control future uncertainties.13

Epistemic arrogance manifests itself in overconfidence, resulting in "overprecision—excessive confidence that one knows the truth." Research, conducted by Albert Mannes of The Wharton School of the University of Pennsylvania and Don Moore of the Haas School of Business at the University of California, Berkeley, revealed that the more confident participants were about their estimates of an uncertain quantity, the less they adjusted their estimates in response to feedback about their accuracy and to the costs of being wrong. "The findings suggest that people are too confident in what they know and underestimate what they don't know," says Mannes.14,15

Why it’s a problem: The prevalence of overconfidence appears to pose a serious obstacle to effective learning, problem solving and decision making.

"Overprecision -- excessive confidence in the accuracy of our beliefs -- can have profound consequences, inflating investors' valuation of their investments, leading physicians to gravitate too quickly to a diagnosis, even making people intolerant of dissenting views," write researcher Mannes and his co-author. "People also cling too fervently to beliefs that are poorly supported by evidence, adjusting their beliefs too little in light of the evidence or the consequences of being wrong."17

Michael Patrick Lynch, Professor of Philosophy at the University of Connecticut, sees epistemic arrogance as "the defining trait of the age," calling it "the arrogance of thinking that you know it all and that you don’t need to improve because you are just so great already." He adds, "We blur the line between what’s inside our heads and what’s not. Some philosophers have argued that this blurring is actually justified because knowing itself is often an extended process, distributed in space. If that’s right, then living in a knowledge economy literally increases my knowledge because knowing is not just an individual phenomenon." However, he laments that "the personalized internet, with its carefully curated social-media feeds and individualized search results" leads to many different knowledge economies, "each bounded by different assumptions of which sources you can trust and what counts as evidence and what doesn’t." This creates "not only an explosion of overconfidence in what you individually understand but an active encouragement of epistemic arrogance. The Internet of Us becomes one big reinforcement mechanism, getting us all the information we are already biased to believe, and encouraging us to regard those in other bubbles as misinformed miscreants."18

"This freedom to doubt is an important matter.... It was born of a struggle. It was a struggle to be permitted to doubt, to be unsure. And I do not want us to forget the importance of the struggle and, by default, to let the thing fall away. I feel a responsibility as a scientist who knows the great value of a satisfactory philosophy of ignorance, and the progress made possible by such a philosophy, progress which is the fruit of freedom of thought. I feel a responsibility to proclaim the value of this freedom and to teach that doubt is not to be feared, but that it is, to be welcomed as the possibility of a new potential for human beings. If you know that you are not sure, you have a chance to improve the situation." - Richard P. Feynman, The Meaning of It All: Thoughts of a Citizen Scientist19

Examples:

Finding validation that was not there. Analysts now generally agree that the U.S. invasion of Iraq was predicated on faulty intelligence, mistruths or lies, ideology and jingoism. The epistemic arrogance of U.S. decision makers led them not to see, or to discount, that both the seeming evidence and the construct in which it was considered were faulty.

Max Fisher has written, "The US primarily invaded Iraq...because of an ideology. A movement of high-minded ideologues had, throughout the 1990s, become obsessed with deposing Saddam Hussein. When they assumed positions of power under Bush in 2001, they did not seek to trick America into that war, but rather tricked themselves. In 9/11, and in fragments of intelligence that more objective minds would have rejected, they could see only validation for their abstract and untested theories about the world — theories whose inevitable and obvious conclusion was an American invasion of Iraq."20

Matt Taibbi has offered a similar analysis revealing epistemic arrogance: "The Iraq invasion was always an insane exercise in brainless jingoism that could only be intellectually justified after accepting a series of ludicrous suppositions. First you had to accept a fictional implied connection between Saddam Hussein and 9/11. Then you had to buy that this heavily-sanctioned secular dictator (and confirmed enemy of Islamic radicals) would be a likely sponsor of radical Islamic terror. Then after that you had to accept that Saddam even had the capability of supplying terrorists with weapons that could hurt us... And then, after all that, you still had to buy that all of these factors together added up to a threat so imminent that it justified the immediate mass sacrifice of American and Iraqi lives. It was absurd, a whole bunch of maybes piled on top of a perhaps and a theoretically possible or two."21

Nothing left to know. Some physicists around the turn of the 20th century predicted that nothing of great value was left to be found in physics. It seems that those who knew the most thought the least was unknown.

In 1900, eminent British physicist Lord Kelvin purportedly declared: “There is nothing new to discover in physics. All that remains is to more accurately measure its quantities.” Lord Kelvin (William Thomson) was among the leading physicists in the world at that time: He had devised the absolute temperature scale (the Kelvin scale), formulated the second law of thermodynamics and been instrumental in installing telegraph cables under the Atlantic Ocean. He made his declaration "in the same year quantum physics was born and three decades later it, and Einstein’s theory of relativity, had completely revolutionized and transformed physics."22,23

I'm smart about my money. Research by global asset manager Schroders shows that Millennials are overly confident about their investment knowledge. Although 83% of American investors between 18 and 35 surveyed said they knew more about investments than the average investor, only 28% could correctly identify what an investment management company does. Globally, the figures for millennials were 61% and 32%, respectively.

In fact, overconfidence in investment expertise seems prevalent across many age groups: 51% of all investors surveyed said their investment knowledge was above average, but only 37% could accurately identify what an asset management firm does.24,25

The internet makes us...dumber? That's the implication of a recent set of experiments conducted by Yale University researchers who studied the effect of finding information by searching the Internet. They concluded that searching the internet for knowledge "creates an illusion whereby people mistake access to information for their own personal understanding of the information." They explained, "After using Google to retrieve answers to questions, people seem to believe they came up with these answers on their own; they show an increase in 'cognitive self-esteem,' a measure of confidence in one’s own ability to think about and remember information, and predict higher performance on a subsequent trivia quiz to be taken without access to the Internet."26

The experiments showed that people who acquire information through internet search tend to become irrationally more confident in knowledge unrelated to what they found through search. The researchers found that "searching the Internet may cause a systematic failure to recognize the extent to which we rely on outsourced knowledge. Searching for explanations on the Internet inflates self-assessed knowledge in unrelated domains." Justin Barrett observed in his article about the research in Slate that "after using the Internet to check how glass is made or why there are leap years, a national sample of American adults suddenly regarded themselves as having a better understanding of completely unrelated topics such as the causes of the U.S. Civil War and how tornadoes form."27

Even more bothersome is the finding that "the illusion of knowledge from Internet use appears to be driven by the act of searching. The effect does not depend on previous success on a specific search engine [or] when the queries posed to the search engine are not answered and remains even...where the search query fails to provide relevant answers or even any results at all."28 Barrett wrote, "Just the impression of having access to so much (mis)information seems to swell one’s intellectual self-assessment. The worrisome implication of this study is that those of us who regularly use the Internet to get information may be giving ourselves a steady dose of arrogance enhancement at the same time."29

So when we rely on the internet but believe "we already knew that," we seem to believe that we are smarter than we really are. "Erroneously situating external knowledge within their own heads, people may unwittingly exaggerate how much intellectual work they can do in situations where they are truly on their own."30

I can diagnose the easy cases, so why not the difficult ones? Given their success in diagnosing easy cases, physicians appear to be overly confident that they can come up with the correct diagnosis in more difficult cases, according to multiple studies. The result is a higher likelihood of diagnostic error.

One study concludes that because of their epistemic arrogance "physicians might not request the additional resources to facilitate diagnosis when they most need it.”31 Another study finds, "Other effects of overconfidence that may lead to diagnostic and other errors include widespread non-compliance with clinical guidelines and 'the general tendency on the part of physicians to disregard, or fail to use, decision-support resources.'"32

330,000 lives lost. Starting in the 1980s, AIDS killed millions of people in the prime of their lives (and at present AIDS-related illnesses have killed 35 million people worldwide since the start of the epidemic).33 However, the epidemic was slowed when "virtually the entire medical establishment" agreed with research findings that the human immunodeficiency virus (HIV) caused AIDS. By 1996, a combination of drugs was discovered that could control the virus, allowing infected people to live long and productive lives. Also, "doctors and public health officials saved countless lives through measures aimed at preventing its transmission." Today, thanks to antiretroviral treatment for HIV and AIDS, the disease is manageable and 37 million people are living with HIV.34

But at the same time that a consensus developed on the cause, treatment and prevention of AIDS, a few "AIDS denialists" continued to challenge the understanding that HIV causes AIDS. One of these denialists is University of California Professor Peter Duesberg, who asserts that HIV is harmless and AIDS results from long-term use of recreational or antiretroviral drugs and malnutrition.35

The story of fringe dissent would be just a footnote in the fight against AIDS, except that it led to deadly epistemic ignorance on the part of the leader of a major nation. A press release on a study by researchers from the Harvard School of Public Health states what then happened: "In 2000, Duesberg sat on a panel which advised then-President of South Africa Thabo Mbeki about the cause of the AIDS virus. Mbeki later denied that AIDS was caused by a virus and limited the treatments in the country."36 Despite the concerted efforts of scientists and world leaders to disabuse the overconfident Mbeki of his false belief, according to the study report "more than 330,000 lives...were lost because a feasible and timely ARV treatment program was not implemented in South Africa [and] thirty-five thousand babies were born with HIV."37

The overconfident get "phished." Researchers from the University of Texas at San Antonio and Columbia College had 600 subjects decide which of 50 emails they thought were legitimate: 25 were actual business emails sent by U.S. banks or financial institutions, and 25 were "phishing" emails targeting those institutions' customers to harvest personal information such as bank account numbers, credit card numbers and online banking login credentials. The subjects then explained why they made their choices.38

What the researchers found was that overconfidence affected the subjects' decision-making processes, leading many to misidentify phishing emails as legitimate. Eighty percent of the participants displayed overprecision: they overestimated the probability that a given email was not a phishing email. The results showed that constrained decision time, diverted attention (variability in attention allocation), optimism and familiarity with the business entities all increased the participants' overconfidence; only greater time to reach a judgment (cognitive effort) reduced it.
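The overprecision the researchers describe can be sketched as a simple calibration gap: mean stated confidence minus actual accuracy. This is an illustrative sketch only; the `overprecision` helper and the sample numbers below are invented, not the study's actual instrument or data.

```python
# Illustrative calibration-gap measure of "overprecision" (invented, not
# the study's method): compare a judge's stated confidence with accuracy.

def overprecision(judgments):
    """Mean stated confidence minus actual accuracy, in percentage points.

    judgments: list of (confidence_pct, was_correct) tuples.
    A positive result means the judge is more confident than accurate.
    """
    mean_conf = sum(c for c, _ in judgments) / len(judgments)
    accuracy = 100 * sum(1 for _, ok in judgments if ok) / len(judgments)
    return mean_conf - accuracy

# A participant who feels 90% sure on average but is right only 60% of the time:
sample = [(95, True), (90, False), (85, True), (90, False), (90, True)]
print(round(overprecision(sample), 1))  # 30.0 points of overprecision
```

A perfectly calibrated judge would score zero; the phishing subjects, on this kind of measure, would mostly land well above it.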

Researcher Raghav Rao explained in USA Today that people tend to believe they're smarter than the "phishers," which is why so many fall for their schemes. "A big advantage for phishers is self efficacy," Rao said. "Many times, people think they know more than they actually do, and are smarter than someone trying to pull off a scam via an e-mail."39

Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable, 2nd ed. (Random House Trade Paperbacks, 2010). https://www.amazon.com/Black-Swan-Second-Improbable-Incerto-ebook/dp/B00139XTG4/

Matt Taibbi, "Forget What We Know Now: We Knew Then the Iraq War Was a Joke," Rolling Stone, May 18, 2015. http://www.rollingstone.com/politics/news/forget-what-we-know-now-we-knew-then-the-iraq-war-was-a-joke-20150518

"Researchers estimate lives lost due to delay in antiretroviral drug use for HIV/AIDS in South Africa," Harvard School of Public Health, October 20, 2008. https://www.hsph.harvard.edu/news/press-releases/researchers-estimate-lives-lost-delay-arv-drug-use-hivaids-south-africa/

Trump trap: Action-oriented bias (Lee Crumbaugh, November 17, 2017)

The first in a series on mental traps and biases that lead Donald Trump – and us – astray

A debate has raged among psychiatrists about whether it is ethically proper to diagnose Donald Trump without directly engaging with him as a patient, whether he indeed displays symptoms of mental illness, and, if so, whether these symptoms should be cause for advocating his removal from office.

Sidestepping this critical debate, through the lens of social science and economics we can easily see that Donald Trump is a poster child for mental traps and biases that get in the way of strategic thinking and rational decision making. While Trump has made millions of dollars, attained celebrity status and won the U.S. presidency either in spite of or because of his decisions and behavior, we can make the case that the mental traps and biases he continues to exhibit are leading to bad outcomes for him and for us.

And, by examining the traps and biases that Trump seems to display, we can see where we, as fellow humans, are likewise affected, albeit most often less dramatically, by these very same mental traps and biases.

THE EVIDENCE FROM TRUMP

Let’s begin our series of posts on mental traps and biases that Donald Trump appears to exhibit by observing that Trump and Republicans in Congress continue to have a blind desire to repeal Obamacare without a comprehensive, workable replacement. Trump continues to pursue building a wall on the Mexican border despite clear analysis that shows a wall will not substantially affect illegal immigration.1 His impulsive action to fire FBI Director James Comey has led to Special Counsel Robert Mueller’s investigation that is haunting the Trump White House.

Trump’s biographer Tony Schwartz writes, “Trump can devolve into survival mode on a moment’s notice. Look no further than the thousands of tweets he has written attacking his perceived enemies over the past year. In neurochemical terms, when he feels threatened or thwarted, Trump moves into a fight-or-flight state. His amygdala is triggered, his hypothalamic-pituitary-adrenal axis activates, and his prefrontal cortex — the part of the brain that makes us capable of rationality and reflection — shuts down. He reacts rather than reflects, and damn the consequences. This is what makes his access to the nuclear codes so dangerous and frightening."2

THE TRAP

Action-oriented bias (also known as action bias or intervention bias)

"Whatever failures I have known, whatever errors I have committed, whatever follies I have witnessed in public and private life, have been the consequences of action without thought." — Bernard Baruch

Definition: Action-oriented bias inclines us to act without considering all the potential ramifications of our action. We tend to overestimate the odds of positive outcomes and our ability to produce desired outcomes. We underestimate the odds of negative outcomes and take too much credit for past successes. This bias arises when we feel pressured to act.

How it works: Researchers Anthony Patt and Richard Zeckhauser write, “One potential source of action bias is that nature may have equipped us with a desire to do something, a desire that is usually beneficial but sometimes clouds decision making. Recognition of adrenaline charges and fight or flight responses, perhaps even the fact that humans get readily bored and hence seek stimulation through new activities, suggest such possibilities. A second potential source is that individuals develop general tendencies toward action as a decision heuristic, but also carry the penchant over to inappropriate circumstances.”3

Why it’s a problem: Action-oriented bias results in non-rational decisions that “are at variance with traditional, economically based normative policy analyses that rely on such techniques as cost-benefit analysis."4 Beyond the realm of economics, this bias leads to non-rational decisions and actions, as well. For example, it is pervasive in medicine.

Andrew J. Foy, M.D., explains, "Intervention bias is an unrecognized problem that affects medical decision making. I use the term to describe the bias on the part of physicians and the medical community to intervene whether it is with drugs, diagnostic tests, procedural interventions, or surgeries when not intervening would be an acceptable choice. I have established a series of conditional arguments to support the existence of intervention bias in medical decision making. If intervention bias exists, then:

1. Physicians, when presented with the option to intervene or not, when not intervening is an acceptable choice, more often choose intervention.

2. Physicians will adopt interventions without rigorous experimentation.

3. Interventions will persist after their benefit has been disproven.

4. Physicians and medical scientists acting as investigators, manuscript reviewers, and journal editors will be more likely to submit or accept manuscripts for publication that have positive findings related to intervention and to ignore or reject negative studies.

Proof exists for all of these conditions; therefore, intervention bias exists in medical decision making."5

Examples:

Goalkeepers jump when they shouldn't. Soccer goalkeepers need to choose their response to a penalty kick before they clearly know the direction of the kick. Two analyses, one of 286 penalty kicks in top leagues and championships worldwide and the second of 32 professional goalkeepers, show that while the optimal strategy is to stay in the goal's center, goalkeepers almost always jump right or left.

The research team offers insight on goalkeepers' penalty kick behavior: "Because the norm is to jump, norm theory implies that a goal scored yields worse feelings for the goalkeeper following inaction (staying in the center) than following action (jumping), leading to a bias for action....The seemingly biased decision making is particularly striking since the goalkeepers have huge incentives to make correct decisions, and it is a decision they encounter frequently."6,7
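A quick expected-value check shows how staying in the center can dominate jumping. The probabilities below are illustrative assumptions for the sketch, not the cited study's figures: kicks go left, center and right equally often; a keeper who guesses the correct side saves 25% of those kicks; one who stays put saves 60% of central kicks; and a wrong guess almost never saves (2%).

```python
# Back-of-the-envelope expected save probability for a penalty kick.
# All probabilities are illustrative assumptions, not the study's data.

KICK_DIRS = {"left": 1/3, "center": 1/3, "right": 1/3}

def save_prob(keeper_action, p_match_side=0.25, p_match_center=0.60, p_miss=0.02):
    """Expected probability of a save given the keeper's chosen action."""
    total = 0.0
    for direction, p_kick in KICK_DIRS.items():
        if keeper_action == direction:
            p_save = p_match_center if direction == "center" else p_match_side
        else:
            p_save = p_miss  # guessed wrong: almost never saved
        total += p_kick * p_save
    return total

print(round(save_prob("center"), 3))  # 0.213, staying put
print(round(save_prob("left"), 3))   # 0.097, jumping
```

Under these assumed numbers, staying central roughly doubles the save probability, yet the norm, as the researchers note, is to jump anyway.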

Active management doesn't beat chance. A famous study by Nobel Prize winner Eugene Fama of the University of Chicago and Ken French of Dartmouth College shows that the returns from actively trading shares can easily be outweighed by trading costs. Their paper "Luck versus Skill in the Cross-Section of Mutual Fund Returns" finds that when fees are included, active managers' performance is no better than chance, and that "we expect that a portfolio of low cost index funds will perform about as well as a portfolio of the top three percentiles of past active winners, and better than the rest of the active fund universe."9

Finance professional Jim Cahn assessed the active-versus-passive investing results thusly: "There are some active managers that beat the market, but no more than would be predicted if you had a bunch of random stock picking machines. Putting it another way, if you had 50,000 people flip a coin 10 times, odds are a few would end up with heads 10 times in a row, but that doesn’t mean they are skilled, or that heads is more likely than tails on their next flip."10
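Cahn's coin-flip analogy is easy to verify. The expected number of all-heads flippers is 50,000 / 2^10, about 48.8, and a quick simulation (assuming fair coins) agrees:

```python
# Cahn's coin-flip analogy, checked directly: with 50,000 people each
# flipping a fair coin 10 times, dozens get 10 heads by chance alone.
import random

expected = 50_000 / 2**10
print(round(expected, 1))  # 48.8

random.seed(0)  # fixed seed for a reproducible simulation
streaks = sum(
    all(random.random() < 0.5 for _ in range(10))  # one person's 10 flips
    for _ in range(50_000)
)
print(streaks)  # close to the expected ~49
```

Roughly fifty "perfect" flippers emerge from pure luck, which is exactly the point about past active-fund winners.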

Insert a pulmonary artery catheter! (But is it effective? Is it harmful?) The pulmonary artery catheter (PAC, also known as the Swan-Ganz or right heart catheter) was introduced in 1970. According to Andrew J. Foy, M.D., "No significant testing was performed to establish its clinical effectiveness prior to its widespread implementation."11 While PAC usage has declined in recent years, it continues in cardiac ICUs.12 Foy asserts, "No evidence exists to support this practice, and it may be harmful." Complications can be life-threatening, including arrhythmias, aneurysm formation, rupture of the pulmonary artery, thrombosis, infection, pneumothorax and bleeding.

Foy cites large studies of the use of PACs with myocardial infarctions and concludes, "After adjusting for risk factors, all three found that PAC use was associated with increased mortality rates except in those cases complicated by cardiogenic shock, where it was associated with no difference." Looking at more studies of PAC use, he observes, "There is no evidence from the published literature that PAC monitoring improves outcomes, and it may cause harm."

We have no clue, but that won't stop us. A research study involving primary care physicians also shows the bias for action.13 The physicians were given clinical vignettes describing patients who had unusual complaints, no clear diagnosis, and no apparent need for urgent care. They were asked to provide the most likely diagnosis for each case, rate their level of confidence in that diagnosis, provide a management strategy for each case and give their level of confidence in the chosen approach.

"Physicians proposed, on average, 22 diagnoses for each case. Most indicated that they would choose action (testing, consulting, sending the patient to the emergency department, or prescribing) rather than follow-up only (87% vs 13%)." The study authors conclude, "Uncertain diagnosis is a regular challenge for primary care physicians. In such cases, we found that physicians prefer a workup to follow-up, an inclination consistent with 'action bias.'"

Abrupt change of plans. Just three days after the first explosion at Japan’s Fukushima nuclear plant in 2011, German Chancellor Angela Merkel announced she "was planning to shut down the country’s oldest nuclear plants and bid farewell to a technology she had vowed was critical just six months earlier."14 Roughly half of the nuclear plants were switched off immediately and the German government pledged to close the rest by the end of 2022.

Not surprisingly, the three German utilities with nuclear power plants saw this abrupt action as "a form of expropriation" and subsequently claimed "the accelerated nuclear phase-out resulted in total damages to them of about 19 billion euros."15 In December 2016, Germany's Constitutional Court ruled that the companies were entitled to "'appropriate compensation' for the government's decision to rush the shutdown of their nuclear reactors."

So what if the odds are tiny? Terrorism threats drive action, even when risks are very low. An example cited by Cass R. Sunstein of Harvard Law School and Richard Zeckhauser of Harvard's John F. Kennedy School of Government is the anthrax scare of October 2001.

They wrote that it "grew out of exceedingly few incidents... Only four people died of the infection; only about a dozen others fell ill. The probability of being infected was exceedingly low. Nonetheless, anxiety proliferated; people focused their attention on the outcome rather than the extremely low probability of the harm. The government responded accordingly, investing significant resources in ensuring against anthrax infections. Private institutions reacted the same way, asking people to take extraordinary care in opening the mail even though the statistical risks were tiny."16

Evidence is hard to come by, so let's just act. Summing up their research on environmental interventions, Paul J. Ferraro of Johns Hopkins University and Subhrendu K. Pattanayak of Duke University write that for "most environmental projects, knowledge of the effectiveness of interventions that will be taken on the ground is rather weak."17 They explain, "Our understanding of the way in which policies can prevent species loss and ecosystem degradation rests primarily on case-study narratives from field initiatives that are not designed to answer the question 'Does the intervention work better than no intervention at all?' As a result, taking action may be evaluated more positively than collecting additional information, partly because of a lack of evidence that actions would be ineffective."

Further evidence of action bias in the environmental realm comes from researchers in Australia who have studied why environmental managers are prone to act: They surmise that those responsible "may feel that they will earn credit from their superiors, the general public, and the media if they take action even when it is not justified or should be of relatively low priority."18

Cass R. Sunstein and Richard Zeckhauser, "Overreaction to Fearsome Risks," Faculty Research Working Papers Series, John F. Kennedy School of Government - Harvard University, December 2008, RWP08-079. https://research.hks.harvard.edu/publications/getFile.aspx?Id=330

Why not plan to be great, like Elon Musk? (Lee Crumbaugh, October 31, 2017)

Elon Musk has just announced plans for SpaceX to build a huge reusable rocket (BFR) that he and his firm believe will enable a permanent self-sustaining human presence on Mars in five years. (Watch the stirring YouTube video of SpaceX's CEO and Lead Designer announcing the company's audacious plan at the meeting at the International Astronautical Congress on September 29.)

This is not a "pie in the sky" prediction. Musk and his team have seized the vision of inexpensive space travel and settlements on other planetary bodies, and they have created an incredibly detailed and seemingly realistic plan for achieving it, building on the firm's accomplishments and knowledge.

Musk and SpaceX, as well as his other ventures, including Tesla, stand out as spectacular examples of dreaming big, making a challenging but potentially achievable plan and going for it.

But just how prevalent are leaders and organizations who formulate a big vision of future success and then formulate strategies and execute a solid plan to achieve the vision? Are Musk, SpaceX and Tesla really outliers?

Even accounting for a less audacious vision, to what extent do leaders and organizations use the vision-strategies-plan-execution paradigm that seems to be carrying SpaceX and Tesla to such monumental achievements?

Let's celebrate those who get it

"It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something." - Franklin D. Roosevelt

We can give half of all organizations points for at least trying to achieve a vision of greater future success by developing and implementing winning strategies and a plan to go after the vision.

It's good news that strategic planning is the second most used business tool globally. In the U.S., 50% of organizations use it, according to Bain & Company’s latest survey of executives. Strategic planning has scored at or near the top of the most used management tools list for years. (See Bain & Company Management Tools & Trends 2015.)

Here's some more good news. 58% of leaders say strategic planning is extremely or very important in their organization’s success. That’s according to our 2013 Strategic Leader Survey involving organizational leaders in decision-making roles, which tracked closely with the results of an earlier survey we conducted and with a McKinsey & Company survey. (All of our Strategic Leader survey reports, including our latest on strategic decision making, are available for free download at www.forrestconsult.com).

Reality

But, unfortunately, the reality is not nearly as good as it might seem. Unlike SpaceX and Tesla, organizations of all types are not living up to their potential.

Our finding that 58% of organizational leaders say strategic planning is extremely or very important in their organization’s success means that more than four of every ten organizational leaders say strategic planning has little or no importance in the success of their organization.

If, according to Bain, 50% of organizations don't use strategic planning, it's no surprise that those organizations can't attribute whatever success they are having to intentional strategy development and execution.

That's a shame, because the evidence is clear that developing and implementing a strategic plan is the path to greater success.

It’s a failure of leadership and a critical lack of understanding, the kind King George III demonstrated when England lost the American colonies under his reign.

Our finding that 42% of leaders say strategic planning is unimportant for success is only part of the problem. The woeful reality is that in a majority of organizations, like Britain during the American Revolution, the top leader or just a small group are making the strategic decisions.

Our 2016 Strategic Leader Survey revealed that organizations most frequently make strategic decisions without the leader obtaining consensus: In 66% of organizations the leader typically makes the strategic decisions.

Here's the problem with leader-centered decision making, as opposed to using the power of the team and insight from the wider group of stakeholders in the decision-making and planning process: Leaders tend to have biases:

Inability to self-assess, overstating their abilities, competencies and characteristics.

Illusory superiority, overestimating their desirable qualities and underestimating their undesirable qualities, thinking they can make better choices than others.

Illusion of control, overestimating their influence on events and mistakenly thinking their actions will be effective.

Power, a sense of control that produces overconfidence in their ability to make good decisions and to overestimate what they think they know, potentially leading to excessive risk-taking.

Self-serving bias, evaluating ambiguous information in a way more beneficial to their interests than to the organization.

Evidence suggests that strategy developed through consensus decision making produces better organizational outcomes than strategy developed through leader-centered decision making. Our research shows that organizations rated excellent or good are a third more likely to commonly use consensus decision making for strategic decisions than leader-centered decision making.

Even when organizations create a strategic plan, there is failure. According to the Balanced Scorecard Collaborative, more than 70% of organizations with a plan fail to implement it. For these organizations, planning is an exercise, not a core process. (Please note that even Bain & Company calls strategic planning a tool, not a process. I think that’s a disservice.)

The Balanced Scorecard Collaborative also reports that 60% of organizations don’t link strategy and budgeting. Hand in hand with implementation issues, a disconnect between strategy and resource allocation often exists. Resources are not focused on implementation.

The answer

“By failing to prepare, you are preparing to fail.” - Benjamin Franklin

The bleak picture of the use of strategic planning painted by the data presented above calls for an answer. The response could be, "Who cares?"

If strategic planning coupled with effective implementation were not such a powerful process for driving greater success, one might say "So what?" to the news that many organizations don't plan or implement.

Yet, way beyond the sample of one offered by SpaceX, extensive, repeated, solid research shows that organizations that plan and implement well do better.

Here's the proof

A while back, I uncovered a scholarly analysis I had not seen referenced elsewhere, conducted by a Danish researcher and published in 2010. It's a "meta-analysis" of 45 years of research, covering 88 studies of the impact of strategic planning involving 32,472 organizations. The study concludes definitively that strategic planning improves performance, in both quantitative and qualitative terms.

The research was conducted by Anders McIlquham-Schmidt, now at Copenhagen Business School in Denmark. His published work has the imposing title, "A meta-analytical review of the relationship between strategic planning and corporate performance." You can read it yourself at http://www.hha.dk/man/cmsdocs/WP/2010/2010_01.pdf.

For the data geeks in the crowd, here are the correlation coefficients. For the normal people, what you need to know is that the greater the positive number (with 1.00 being the maximum possible), the greater the favorable relationship that shows that conducting strategic planning produces the resulting performance measure.

In McIlquham-Schmidt's analysis, quantitative performance measures (non-data geeks, that means hard-number stuff) had high correlation coefficients, r. To be specific, the analysis shows high positive correlations between planning and key quantitative measures of performance, from earnings per share to return on net worth, as follows:

Earnings per common share +0.79

Return on invested capital +0.64

Return on owner’s investment +0.58

Change in return on invested capital +0.56

Return on net worth +0.42
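For readers who want the mechanics behind coefficients like the ones above, Pearson's r can be computed by hand. The data below are invented purely for illustration, not drawn from the meta-analysis:

```python
# What a correlation coefficient r measures, shown on invented toy data:
# a "planning intensity" score for five hypothetical firms and their
# earnings growth. Both columns are illustrative only.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

planning = [1, 2, 3, 4, 5]   # hypothetical planning-intensity scores
earnings = [2, 3, 5, 4, 8]   # hypothetical earnings growth, %

print(round(pearson_r(planning, earnings), 2))  # 0.89
```

An r of 0.89 on this toy data means the two series rise together strongly but not perfectly, which is how to read the +0.42 to +0.79 figures above: the more planning, the better the measure tends to be, with exceptions.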

Impact is more than financial

“It is interesting to note that those businesses that perform at the highest levels usually have some sort of formalized strategic plan in place and have implemented it well. On the other hand, those businesses that struggle usually have no plan in place and seem to flounder in their attempts to be successful." - Chris Arringdale

Even better, from my perspective as a person educated in quantitative analysis but as much a "soft measures" person, the analysis also shows positive correlations between planning and performance for less tangible but very important qualitative measures, including attainment of the organization’s objectives. (Non-data geeks, here's an example of using a qualitative measure. When a supervisor is asked to evaluate an employee's attitude on a 1-5 scale, that's using a qualitative performance measure.)

Attainment of profit objectives +0.51

Community acceptance +0.48

Service efficiency +0.47

Attainment of corporate objectives +0.44

Even better outcomes from a quality process?

Let's note that this conclusive evidence about the efficacy of strategic planning does not even look at the quality of the planning and implementation process that the organizations studied used.

I think we can surmise that the "better" planners and implementers were more likely to drive desired, positive results than those who indeed planned but planned and implemented poorly. Certainly some of the underlying studies considered in McIlquham-Schmidt's meta-analysis show this to be the case.

Debate settled?

"Planning is bringing the future into the present so that you can do something about it now." - Alan Lakein

Can we now agree that any past debate about the positive impact of strategic planning has been put to rest? Can we get on with making creation of an engaging vision of greater future success and engaging in smart and effective planning and implementation a core process in the organizations we care about?

I would hope so!

Why not envision your own version of SpaceX's "Mars settlement," your shared view of what great success can mean for your organization?

Why not focus your organization on creating and implementing strategies to get to that inspiring future vision?

Why not start now?

Only rarely have I promoted my firm, Forrest Consulting, on this blog. The focus of Strategic Thinking & Strategic Action is thought leadership, not marketing. However, in this case I am compelled to offer up Forrest Consulting as a solution, because we are uniquely equipped to help organizations find a stirring vision, "plan to be great" and execute their plan. Turn to us for:

Better strategy. Have us facilitate creation of your strategic plan for greater success. We will tailor our agile planning system to fit your organization's needs. We can obtain stakeholder input; develop an environmental scan and a SWOT analysis; facilitate the planning sessions to create the vision, strategies, action steps and more; and draft the plan report. Our experience and tools will help you drive successful implementation.

Better decisions. Use our counsel and facilitation skills to help your Board, team or group be more effective and reach goals. Our expertise and research uniquely equip us to help leaders make and execute great strategic decisions.

Better results. Ask us to help you craft and execute "best practice" approaches to tackling strategic issues and opportunities. Our value is enhanced by our broad strategy, management, marketing and communications experience.

Let's talk. Give me a shout any time you want to have a 15-minute conversation to explore how we can help your organization craft and execute great strategy for greater success.

Thank you for our connection! (Lee Crumbaugh, October 25, 2017)

Twenty five years ago, before social media, British anthropologist Robin Dunbar found a correlation between primate brain size and average social group size. He theorized that humans can only maintain about 150 current stable relationships with other individuals. (Dunbar, 1992)

Today, in the age of social media, our close relationships are still limited, but most of us have what sociologist Mark Granovetter calls "weak ties" with many, many more people. (Granovetter, 1973)

STRONG OR WEAK, OUR CONNECTIONS ARE IMPORTANT

Here's the thing: The people with whom we have weak ties can be just as important to us as those with whom we have strong ties.

Granovetter writes, "More novel information flows to individuals through weak than through strong ties. Because our close friends tend to move in the same circles that we do, the information they receive overlaps considerably with what we already know. Acquaintances, by contrast, know people that we do not and, thus, receive more novel information. Moving in different circles from ours, they connect us to a wider world. They may therefore be better sources when we need to go beyond what our own group knows, as in finding a new job or obtaining a scarce service.” (Granovetter, 2005)

So, whether you are a close friend or an acquaintance, I value our connection. Thank you for being connected with me.

THANKING PEOPLE IS RIGHT...AND POWERFUL

A sincere "thank you," being grateful for the others in our life and what they mean to us, can go a long way, both for the person we thank and for us.

Do you know that when people are thanked, they tend to take more actions to help others because they feel socially valued? (Grant & Gino, 2010)

Does it surprise you that people who are more grateful have access to a wider social network, more friends, and better relationships, on average? (Amin, 2014)

Research also shows that whether gratitude is a trait or is triggered by another person's kindness, it is linked to increased "prosociality," behavior intended to benefit others. (Ma, Tunney & Ferguson, 2017)

BEING GRATEFUL IS INFECTIOUS

Here's good news: Your gratitude can infect the group. Gratitude can spread, through group emotional contagion, the transfer of moods among people in a group. In groups experiencing positive emotional contagion, members experience improved cooperation, decreased conflict and increased perceived task performance. (Barsade, 2002)

Cooperative behavior cascades in human social networks. Individuals are influenced by fellow group members’ contribution behavior in future interactions with other individuals who were not a party to the initial interaction. Further, this influence persists for multiple periods and spreads up to three degrees of separation - from person to person to person to person. And clusters of happiness can result from the spread of happiness in groups. (Fowler & Christakis, 2008)

The value of being connected grows when one's connections spread across different groups. People who connect across groups have information, timing and arbitrage advantages that make them more likely to detect and develop rewarding opportunities. These "network brokers" tend to be better compensated, more positively evaluated, more likely candidates for senior positions, and more recognized as leaders in their organization and industry. On average, they are 50% more successful than those who do not engage in brokering but just interact with individuals like themselves. (Burt, 2004)

IMPORTANT FOR PLANNING AND IMPLEMENTATION

The research on the value of connections and impact of gratitude not only has personal and professional implications, it has organizational implications.

Think about it: When we bring loosely connected stakeholders and even outsiders into the planning process for voices and thoughts beyond our own, we are enabling the group to provide new information and access to their networks.

Likewise, making sure those involved in the planning know they have our sincere gratitude for their involvement and contributions seeds improved cooperation, decreased conflict and potent performance gains, that is, greater commitment to and involvement in implementing the plan.

I am grateful for our connection and your interest in strategic thinking and strategic action!

Fowler, J.H. & Christakis, N.A. (2008). "Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study." BMJ. http://www.bmj.com/content/337/bmj.a2338.

Case 12: Not missing the opportunity to lose billions

This post is the 12th in my series looking at cases where it seems that "believing we are right" has led to bad outcomes, sometimes even spectacularly bad results, for leaders, teams and organizations.

For my upcoming book, Big Decisions: Why we make decisions that matter so poorly. How we can make them better, I have identified and categorized nearly 350 mental traps and errors that lead us into making bad decisions. The many high-profile situations that I have examined demonstrate the bad outcomes that can be produced by mental traps and errors. My premise is that, at the least, if we recognize and admit that we don't know the answer, we will put more effort into looking for better decision options and limiting the risks stemming from failure when making important decisions.

Our next true story is a classic case of seeing what one wants to see and going with the herd, where everyone ignored mounting evidence right at hand that could have averted what for so many people was financial devastation.

FEELING THE BERN

Bernie Madoff's investors believed that his fund was rock-solid. He promised consistent annual returns of 10% to 12% and seemingly delivered on this promise for years.

His Ponzi scheme cost more than 40,000 investors who believed in him $6 billion in principal, even after the recovery of $11.5 billion in investors' funds.

Decades of high returns from Madoff's funds conditioned investors to expect these exceptional returns, despite the easily discovered evidence that such a long string of strong earnings reports was extraordinarily unlikely. Madoff's investors were subject to Optimism Bias (when making predictions, we tend to overestimate the likelihood of positive events and underestimate the likelihood of negative events, and to give more credence to news that reinforces our belief than news that undermines it).

Investors flocked to Madoff because of the Social Proof Heuristic (the tendency for people in ambiguous situations to believe that a behavior is correct to the extent that other people with whom they associate are engaged in it). Madoff had a long history of involvement in Jewish philanthropic activities and invested money for important Jewish institutions. These connections and his roster of wealthy Jewish clients led others within the Jewish community to invest with him because they assumed that Madoff must be trustworthy. Also, research by Li Huang and J. Keith Murnighan of Northwestern University suggests people were unconsciously inclined to trust Madoff when they should not have trusted him because his closest relatives and associates invested with him.

Madoff used exclusivity and elusiveness to play to people's Regret Aversion (behavior being governed by a desire to avoid the pain associated with making poor decisions, in this case to avoid making errors of omission - not wanting feelings of remorse from missing out on an opportunity that others are enjoying). Madoff's investors chose to invest with him because they thought they would regret not taking the transient opportunity when it was open to them. He presented his fund as an exclusive opportunity to which the investors had been invited. He seldom recruited investors and presented the facade that he did not need their funds. He himself was elusive and rarely spoke with his investors. When he did, he told them they were exceptionally smart for wanting to invest with him. Madoff knew that an opportunity appearing to be scarce, special or restricted takes on greater value. He understood that the pressure to "act now or this opportunity will disappear" caused investors to process the opportunity using shortcuts, simplification, and superficial analysis.

Ponzi schemes like Madoff's succeed for longer or shorter periods because they play on greed. "Wanting in" and our desire to "get the payoff" can lead us to suspend disbelief and place trust where, indeed, it is misplaced.

Case 11: Getting bitten by not seeing it
<div xmlns="http://www.w3.org/1999/xhtml"><p><em> <a class="asset-img-link" href="http://leepublish.typepad.com/.a/6a00d8341c594453ef01bb09bdffce970d-pi" style="display: inline;"><img alt="Isthmian_Canal_Commission_Map_of_the_Panama_Canal_Zone" class="asset asset-image at-xid-6a00d8341c594453ef01bb09bdffce970d img-responsive" src="https://leepublish.typepad.com/.a/6a00d8341c594453ef01bb09bdffce970d-450wi" style="width: 450px;" title="Isthmian_Canal_Commission_Map_of_the_Panama_Canal_Zone" /></a><br /><br />This post is the 11th in my series looking at cases where it seems that "believing we are right" has led to bad outcomes, sometimes even spectacularly bad results, for leaders, teams and organizations. </em></p>
<p><em>For my upcoming book, </em><a href="http://leepublish.typepad.com/strategicthinking/make-big-decisions-better.html" rel="noopener noreferrer" target="_blank"><strong>Big Decisions: Why we make decisions that matter so poorly. How we can make them better</strong></a><em>, I have identified and categorized nearly 350 mental traps and errors that lead us into making bad decisions. The many high-profile situations that&nbsp;I have examined demonstrate the bad outcomes that can be produced by mental traps and errors. My premise is that, at the least</em>,<em> if we recognize and admit that we don't know the answer, we will put&nbsp;more effort into looking for better decision options and limiting the risks stemming from failure when making important decisions.</em></p>
<p><em>This case shows what can happen when mental traps and logic errors set up a leader to ignore mounting evidence countering common, long-held - and wrong - belief. The result was death and near derailment of a world-changing undertaking.</em></p>
<p><strong>"BALDERDASH!"</strong></p>
<p><strong>Isthmian Canal Commission Chief Engineer John Walker believed</strong>&nbsp;when he took over the U.S. project to build the Panama Canal in 1904 that the theory that mosquitoes transmitted deadly Yellow Fever was "balderdash" and that eliminating the&nbsp;mosquitoes was a waste of time, money, and manpower.&nbsp;Walker's belief rejected 50 years of evidence from medical research that&nbsp;the Aedes aegypti mosquito spread Yellow Fever. It ignored the "proof of concept" offered in 1901, when a yellow fever epidemic erupted in Havana: Dr. William Gorgas and his medical team eradicated the mosquito from Havana in eight months, halting the epidemic. &nbsp;</p>
<p><em>In November of 1904,&nbsp;cases of yellow fever began to appear in the Panama Canal Zone. Panic spread: 200 staff members resigned over a two-week span and three-quarters of all Americans left. Ultimately, 246 people came down with Yellow Fever and 84 people died. President Theodore Roosevelt realized that Walker and other Canal Commission members were a major obstacle to&nbsp;Dr. Gorgas's planned attack on mosquitoes in the Canal Zone, so he forced their resignations. The&nbsp;new leaders he appointed gave medical officer&nbsp;Dr. Gorgas the resources he needed to end the epidemic and ultimately rid the Canal Zone of Aedes aegypti mosquitoes and the disease.</em></p>
<p>Why did Walker, a highly successful civil engineer who began his career as Assistant Civil Engineer for the U.S. Engineering Corps, focusing on river and harbor work, and then became Chief Engineer and finally General Manager of the Illinois Central Railroad, reject the mounting scientific evidence that mosquitoes and not "bad air" spread the deadly disease?</p>
<ul>
<li>He had been conditioned by long-made claims to believe Yellow Fever appeared in areas where "miasma" (bad air) and&nbsp;fomites (airborne particles) were prevalent. This was an<a href="http://www.strategicbusinessleader.com/executewell/decisiontraps.html#illusorycorrelation" rel="noopener noreferrer" target="_blank"><strong> Illusory Correlation</strong></a> (perceiving a&nbsp;relationship between variables when no such relationship exists).</li>
<li>His controlling personality led him into denying the evidence: He was a victim of the&nbsp;behavioral trap of<a href="http://www.strategicbusinessleader.com/executewell/decisiontraps.html#selfdeception" rel="noopener noreferrer" target="_blank"><strong> Self Deception</strong></a>&nbsp;(when people deny or rationalize away the relevance, significance, or importance of opposing evidence and logical argument to maintain&nbsp;a positive self image and the illusion of control&nbsp;over their lives).</li>
<li><span style="color: #111111; background-color: #ffffff;">When Walker dismissed the evidence of the mosquito theory of transmission he made the logic&nbsp;error of an<a href="http://www.strategicbusinessleader.com/executewell/decisiontraps.html#appealtoignorance" rel="noopener noreferrer" target="_blank"><strong> Appeal to Ignorance</strong></a>&nbsp;(assuming&nbsp;a conclusion or fact based primarily on lack of evidence to the contrary, the error of which is described by&nbsp;the statement,&nbsp;“absence of evidence is not evidence of absence"). &nbsp;</span></li>
</ul>
<p>Walker's hard driving approach served him well when he was building railroads and ports. Yet, his charge to "get the canal built" encouraged his narrow "get 'er done" focus and amplified his desire to wave away all but the most apparent obstacles to success. What's important is often subtle and murky. When we are led to not look for what's less apparent, we can be led seriously astray.</p>
<hr />
<p><em>I expect to publish my new book, </em><a href="http://leepublish.typepad.com/strategicthinking/make-big-decisions-better.html" rel="noopener noreferrer" target="_blank"><strong>Big Decisions: Why we make decisions that matter so poorly. How we can make them better</strong></a><em>, later this year. It will be an antidote for bad individual and organizational decision making. You can help me get it published and into the hands of decision makers whose decisions affect not only their lives but all of ours. </em></p>
<p><em>Learn more about<strong>&nbsp;</strong></em><a href="http://leepublish.typepad.com/strategicthinking/make-big-decisions-better.html" rel="noopener noreferrer" target="_blank"><strong>Big Decisions: Why we make decisions that matter so poorly. How we can make them better</strong></a><em> and my special half-price pre-publication offer. Thank you!</em></p></div>
Case 10: When what worked didn't work

This post is the 10th in my series looking at cases where it seems that "believing we are right" has led to bad outcomes, sometimes even spectacularly bad results, for leaders, teams and organizations.

For my upcoming book, Big Decisions: Why we make decisions that matter so poorly. How we can make them better, I have identified and categorized nearly 350 mental traps and errors that lead us into making bad decisions. The many high-profile situations that I have examined demonstrate the bad outcomes that can be produced by mental traps and errors. My premise is that, at the least, if we recognize and admit that we don't know the answer, we will put more effort into looking for better decision options and limiting the risks stemming from failure when making important decisions.

In the case at hand, past success was not a guarantee of future results: Mental traps seemingly blinded our protagonist to differing circumstances.

"DID IT BEFORE, CAN DO IT AGAIN"

Yahoo CEO and former Google executive Marissa Mayer believed that she could turn around Yahoo when many others before her could not.

Starting in 2012, as Yahoo's seventh CEO in a little over five years, "Mayer put her resources in some of the wrong places, spent a lot of money, and didn't have a lot to show for it," according to analyst Jan Dawson as quoted by CNN Money. In a fire sale, in 2016 Verizon agreed to acquire Yahoo for $4.8 billion. This price was cut to $4.48 billion this year after Yahoo was hit by two massive security breaches involving more than a billion users. When the transaction closed this past June, Mayer left the board and management, albeit with a $23 million golden parachute. The Yahoo brand will continue to exist alongside AOL and the Huffington Post as it is folded into Verizon's digital media division, now named Oath.

Whether turning around Yahoo was possible cannot be known. But what does seem evident is that mental traps made the task even more difficult for Mayer:

Not surprisingly, Mayer applied the mentality and approach she had learned and used at Google to Yahoo, which was a very different organization in terms of mindset, focus and experience. Staying consistent with what worked at Google was a likely outcome of the Commitment Heuristic (the tendency to make decisions that are consistent with our prior beliefs, attitudes and actions, thus affirming their correctness).

One of the first 25 employees at Google, Mayer ran the company's flagship search division for many years. Then, according to the New York Times, "she lost a turf battle to a powerful engineer within Google and was quietly reassigned to oversee Google Maps and other so-called local products. Mayer tried to spin the move positively, but that became harder after Larry Page, one of the company’s founders, regained the role of C.E.O. and removed Mayer from the group of executives reporting to him." Did Mayer see Google's successes as more tied to her actions than they actually were and therefore take the Yahoo job to prove that Google erred in demoting her? Whether or not this was consciously her intent, the evidence suggests that Mayer was afflicted by Memory Bias (when recalled memories are altered so they differ from what actually happened and therefore bias predictions of future experiences based on these altered memories).

According to the Los Angeles Times, in May last year, Yahoo valued its investment stake in China's Alibaba, a giant Google competitor, at $29.4 billion and its 35.5% stake in Yahoo Japan at $8.7 billion. Yahoo’s market capitalization that day was $34.7 billion. "In other words, the stock market valued everything other than those two holdings — that is, everything subject to Mayer’s management — at negative $3.4 billion." Yet, later in the year when Mayer wrote to employees about Yahoo's sale to Verizon, she "put lipstick on the pig" by using Reframing (changing how information is presented to elicit a different point of view). Rather than recognizing that the strategies that had been pursued under Mayer's leadership had not turned around the company, her letter cited the company's acquisition by Verizon as evidence of "the immense value we’ve created" and stated, "We set out to transform this company – and we’ve made incredible progress."

Mayer's story shows us just how extraordinarily difficult it is for us to "unlearn" what has made us successful when new circumstances demand a different approach. We are wired to stay on the path we know and to adjust our perspective to keep us on that road, even when it leads us to a cliff edge. Yet Mayer was in one sense exempted from having to jump off the cliff: Based on her personal financial gain, she succeeded wildly, leaving the now captive company with a fortune in hand.