The View

This post presumes a familiarity with the Prospecting and Fractal Lenses described under the Lenses tabs.

The intersection between the work of Kahneman & Tversky and Mandelbrot is obvious in some respects and not so obvious in others. The obvious connection concerns the systematic failures of System 1 to evaluate risks correctly. The less obvious connection is how collections of such individuals produce the natural distributions described by Mandelbrot's mathematics and power laws.

In Thinking, Fast and Slow, Kahneman devotes Part Three of the book to the subject of Overconfidence, noting at the beginning of Chapter 19:

The trader-philosopher-statistician Nassim Taleb could also be considered a psychologist. In The Black Swan, Taleb introduced the notion of a narrative fallacy to describe how flawed stories of the past shape our views of the world and our expectations for the future. Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative. Taleb suggests that we humans constantly fool ourselves by constructing flimsy accounts of the past and believing they are true.

Taleb's work is based largely on Mandelbrot's. The Mandelbrot-based models are full of complex, so-called "fat-tailed" descriptions of human-created phenomena and failures to assess risk, from financial markets to Hurricane Katrina. Kahneman describes the individual human component of this as a product of the heuristics of System 1 in his model of the mind. These include:

The Narrative Fallacy and Hindsight Illusions. System 1 thinking often involves attaching a story, or narrative, to a series of events in hindsight that gives the illusion that the outcome was predictable in advance. As the popular phrase goes, "Hindsight is 20/20." People thus attach certainty or inevitability to the fact that an outcome actually occurred -- such as Google becoming a successful company or Michael Jordan winning all those NBA championships. In addition to creating narratives that discount the uncertainty, System 1 will often misremember what the predictor thought before the event occurred, resulting in post-hoc statements like "I knew it would happen that way" or "I had an intuition." Adding to this problem of misperception is the "Halo Effect" -- misattributing some good traits or decisions of a subject to the entire subject, effectively whitewashing the bad decisions or traits out of existence. Histories of successful companies are replete with this kind of certainty-and-goodness narrative, yet such companies generally underperform after their success narrative is written.

Kahneman notes that, because of uncertainty, positive or negative outcomes often do not follow from correspondingly good or bad decision-making, but System 1 cannot process an uncertain reality. He offers these short-hand notes for thinking about this problem with System 2:

“The mistake appears obvious, but it is just hindsight. You could not have known in advance.”

“He’s learning too much from this success story, which is too tidy. He has fallen for a narrative fallacy.”

“She has no evidence for saying that the firm is badly managed. All she knows is that its stock has gone down. This is an outcome bias, part hindsight and part halo effect.”

“Let’s not fall for the outcome bias. This was a stupid decision even though it worked out well.”

The Illusion of Validity. System 1 also fools people into thinking that the fact that they analyzed something in advance and made a prediction validates the prediction or the method. Yet often -- perhaps more often than not -- the predictions are worthless when analyzed as to whether they were any better than random chance or simple algorithms. Kahneman illustrates this problem with two examples: his early work trying to assess the leadership skills of soldiers, and the failure of the vast majority of stock-picking analysts to beat the market. In both cases, even when confronted with the evidence that the analyses were failures, the predictors continued to believe that they were actually doing something meaningful. Political pundits and predictors of historical trends do even worse, by and large. Moreover, these experts will generally not even admit that they were wrong, but will rationalize their failures with hedging or excuses about timing or being "partially correct." Taleb, at his most charitable, identifies such people as "charlatans."

Kahneman relies on Philip Tetlock's work and a description of hedgehogs and foxes as proxies for System 1 and System 2 thinking in this area:

[Tetlock] uses the terminology from Isaiah Berlin’s essay on Tolstoy, “The Hedgehog and the Fox.” Hedgehogs “know one big thing” and have a theory about the world; they account for particular events within a coherent framework, bristle with impatience toward those who don’t see things their way, and are confident in their forecasts. They are also especially reluctant to admit error. For hedgehogs, a failed prediction is almost always “off only on timing” or “very nearly right.” They are opinionated and clear, which is exactly what television producers love to see on programs. Two hedgehogs on different sides of an issue, each attacking the idiotic ideas of the adversary, make for a good show.

Foxes, by contrast, are complex thinkers. They don’t believe that one big thing drives the march of history (for example, they are unlikely to accept the view that Ronald Reagan single-handedly ended the cold war by standing tall against the Soviet Union). Instead the foxes recognize that reality emerges from the interactions of many different agents and forces, including blind luck, often producing large and unpredictable outcomes. It was the foxes who scored best in Tetlock’s study, although their performance was still very poor. But they are less likely than hedgehogs to be invited to participate in television debate.

Kahneman offers these short-hand notes for recognizing this System 1 problem:

“He knows that the record indicates that the development of this illness is mostly unpredictable. How can he be so confident in this case? Sounds like an illusion of validity.”

“She has a coherent story that explains all she knows, and the coherence makes her feel good.”

"What makes him believe that he is smarter than the market? Is this an illusion of skill?”

“She is a hedgehog. She has a theory that explains everything, and it gives her the illusion that she understands the world.”

“The question is not whether these experts are well trained. It is whether their world is predictable.”

The larger point behind all of these is that in a fractal world, uncertainty is always with us. Consequently, we should be wary of anyone who is too confident about his or her ability to predict the future.

Experts are Often Inferior to Algorithms. Relying largely on the work of Paul Meehl, Kahneman notes that simple statistical formulas often yield better predictions than expert judgment, for two reasons: First, experts try to be too clever and overthink problems, looking for hidden insights in the data where there are none. Second, even when presented with exactly the same data, humans are often inconsistent in interpreting it from day to day and week to week. Kahneman sums up:

The research suggests a surprising conclusion: to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments. In admission decisions for medical schools, for example, the final determination is often made by the faculty members who interview the candidate. The evidence is fragmentary, but there are solid grounds for a conjecture: conducting an interview is likely to diminish the accuracy of a selection procedure, if the interviewers also make the final admission decisions. Because interviewers are overconfident in their intuitions, they will assign too much weight to their personal impressions and too little weight to other sources of information, lowering validity.

Examples of how this has been successfully implemented include the Apgar test for evaluating newborn infants and others described in The Checklist Manifesto by Atul Gawande. To recognize this System 1 error, Kahneman offers:

“Whenever we can replace human judgment by a formula, we should at least consider it.”

“He thinks his judgments are complex and subtle, but a simple combination of scores could probably do better.”

“Let’s decide in advance what weight to give to the data we have on the candidates’ past performance. Otherwise we will give too much weight to our impression from the interviews.”
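To make the point about "a simple combination of scores" concrete, here is a minimal sketch in Python of the kind of mechanical, Meehl-style formula Kahneman describes: standardize a few relevant measures, give them equal weights, and add them up. The candidate data, predictor names, and equal weights are my own illustrative assumptions, not anything prescribed by Kahneman or Meehl.

```python
# A minimal sketch of Meehl-style "mechanical" prediction:
# standardize a few relevant scores, weight them equally, and sum.
# The candidate data below are invented for illustration.

from statistics import mean, pstdev

# Hypothetical candidates with scores on three predictors
# (e.g., grades, test score, structured-interview rating).
candidates = {
    "A": {"grades": 3.6, "test": 152, "structured_interview": 4.0},
    "B": {"grades": 3.9, "test": 148, "structured_interview": 3.2},
    "C": {"grades": 3.2, "test": 160, "structured_interview": 4.5},
}

def zscores(values):
    """Standardize a list of raw scores to mean 0, sd 1."""
    mu, sd = mean(values), pstdev(values)
    return [(v - mu) / sd if sd else 0.0 for v in values]

predictors = ["grades", "test", "structured_interview"]

# Standardize each predictor across candidates, then give each
# predictor equal weight -- no clever "clinical" adjustments.
columns = {p: zscores([c[p] for c in candidates.values()]) for p in predictors}
names = list(candidates)
formula_score = {
    name: sum(columns[p][i] for p in predictors) / len(predictors)
    for i, name in enumerate(names)
}

for name, score in sorted(formula_score.items(), key=lambda kv: -kv[1]):
    print(f"Candidate {name}: formula score {score:+.2f}")
```

The design choice is the whole point: by fixing the weights in advance, the formula cannot be swayed by a charming interview or a vivid impression, which is exactly the inconsistency that undermines expert judgment.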

On the fractal side of things, these ideas also reflect the inherent uncertainty of the world we live in and our stubborn inability to comprehend it most of the time. We prefer smooth Euclidean forms and certainty through conviction and intuition to rough realities where information is only partially available and our judgment is corrupted by innate biases.

There are exceptions, however, to this general rule. Kahneman notes that expert judgment should be preferred when both of the following are true:

there is an environment that is sufficiently regular to be predictable; and

the expert has an opportunity to learn these regularities through prolonged practice.

This kind of situation exists in controlled environments like chess matches and other activities involving pure skills that can be improved upon with practice. Physicians, nurses, athletes and firefighters, for example, also often perform in these types of environments. But the real question is feedback. Thus, for example, an anesthesiologist's judgment is generally more trustworthy than that of a therapist, because the former almost always receives immediate feedback while the latter has a more difficult time assessing the long-term effects of a particular therapy.

To recognize when the exception might apply, we might consider:

“How much expertise does she have in this particular task? How much practice has she had?”

“Does he really believe that the environment of start-ups is sufficiently regular to justify an intuition that goes against the base rates?”

“She is very confident in her decision, but subjective confidence is a poor index of the accuracy of a judgment.”

“Did he really have an opportunity to learn? How quick and how clear was the feedback he received on his judgments?”

On the fractal side, this illustrates an important dividing line between the fractal and non-fractal worlds -- what Taleb calls Extremistan and Mediocristan. There are many areas of day-to-day life where environments are controlled enough that we can rely on experts and our own basic intuitions. In fact, one might argue that one of the purposes of modern society is to create as many controlled or "safe" environments as possible.

Yet there is no way to wring complexity and uncertainty out of life such that we can rely exclusively on System 1-influenced judgments. Many of our most important decisions involve situations where we face both "known unknowns" and "unknown unknowns," including commonly encountered phenomena like financial markets and uncommon phenomena like wars and floods.

This is one reason why attempts at utopian societies inevitably fail and often result in horrific dystopias -- uncertainty cannot be eliminated, and even the best-intentioned actions may have serious unintended negative consequences. It also tells us, on the fractal side, that where the downside is potentially severe, precautions must be built in. This is why in developed countries almost all buildings are required to be over-engineered.

Discerning what kind of environment one is in is a critical decision point, and one that is the subject of endless policy debates that are beyond the scope of this particular post but will be taken up in future ones.

The Planning Fallacy. This particular System 1 problem arises because most people are too optimistic about their own abilities. When planning a project -- how long it will take, what it will cost, and so on -- we do not assume average performance or performance similar to that of others; we assume superior performance and therefore superior results.

The impact of this problem can be severe and is magnified by the relative power and authority of the participants -- i.e., how much their decision-making affects others:

When forecasting the outcomes of risky projects, executives too easily fall victim to the planning fallacy. In its grip, they make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns— or even to be completed.

In this view, people often (but not always) take on risky projects because they are overly optimistic about the odds they face. [This System 1 phenomenon] contributes to an explanation of why people litigate, why they start wars, and why they open small businesses.

Some guideposts for recognizing this problem:

“He’s taking an inside view. He should forget about his own case and look for what happened in other cases.”

“She is the victim of a planning fallacy. She’s assuming a best-case scenario, but there are too many different ways for the plan to fail, and she cannot foresee them all.”

“Suppose you did not know a thing about this particular legal case, only that it involves a malpractice claim by an individual against a surgeon. What would be your baseline prediction? How many of these cases succeed in court? How many settle? What are the amounts? Is the case we are discussing stronger or weaker than similar claims?”

“We are making an additional investment because we do not want to admit failure. This is an instance of the sunk-cost fallacy.”
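The third guidepost above is essentially a recipe for what Kahneman calls the "outside view": start from the distribution of outcomes in a reference class of similar cases, and only then adjust for the specifics of the case at hand. A rough sketch of that baseline step might look like the following; the field names and figures are invented purely for illustration.

```python
# A rough sketch of an "outside view" baseline: before arguing the
# merits of this particular case, look at how similar cases turned out.
# All figures below are invented for illustration.

past_cases = [
    {"won_at_trial": False, "settled": True,  "payout": 250_000},
    {"won_at_trial": True,  "settled": False, "payout": 900_000},
    {"won_at_trial": False, "settled": False, "payout": 0},
    {"won_at_trial": False, "settled": True,  "payout": 400_000},
    {"won_at_trial": False, "settled": False, "payout": 0},
]

n = len(past_cases)
win_rate = sum(c["won_at_trial"] for c in past_cases) / n
settle_rate = sum(c["settled"] for c in past_cases) / n
avg_payout = sum(c["payout"] for c in past_cases) / n

# The base rate is the starting point; the "inside view" details of
# the case at hand should only nudge it up or down, not replace it.
print(f"Baseline: {win_rate:.0%} won at trial, {settle_rate:.0%} settled, "
      f"average payout ${avg_payout:,.0f}")
```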

Kahneman notes that this optimism is frequently what drives entrepreneurs and inventors, as they delude themselves into believing that their performance and success rate will greatly exceed the averages.

He goes on with this reference to Taleb:

As Nassim Taleb has argued, inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid. However, optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers. One of the lessons of the financial crisis that led to the Great Recession is that there are periods in which competition, among experts and among organizations, creates powerful forces that favor a collective blindness to risk and uncertainty.

And this becomes the crux of the matter as to how this System 1 trait affects the individual, his or her organization, and the world at large. When the decision-maker or his organization takes all the risk associated with overconfidence, society is likely to benefit: although most of the entrepreneurs or inventors will fail, a few will succeed, and the benefits may be shared by many.

On the other hand, sometimes the risks entailed in overconfident decision-making are not borne by the decision-maker, but by others. Many an improvident war or battle campaign has resulted in unmitigated disaster for an army or a society. Other famous examples, too numerous to catalogue, include the failure of the dam that led to the Johnstown Flood that killed over 2,200 people in 1889 and the more recent failure of the levees in New Orleans in the face of Hurricane Katrina.

Taleb famously argues that a fair and just society would ensure that such risk-takers always have their own skin in the game.

Overconfidence and the other System 1 heuristics described above also result in a less obvious aspect of reality that is described by the power laws of the fractal lens. In particular, the unequal distribution of wealth is most certainly influenced by the fact that there are too many overconfident risk-takers participating and that most of them will fail, while a few will have fabulous success.

This effect is more pronounced in so-called "winner-take-all" markets, such as music, books, movies, and professional sports, where most of the popularity and revenues are concentrated in only a few titles, songs, actors, and athletes. As the Prospecting Lens would predict, it is much less prevalent in professions that involve extensive training and learned or acquired skills, like physicians or firemen, or most white-collar workers in large organizations like big corporations and governments. Such people tend to be far more similar in outcomes, financially and otherwise, than participants in winner-take-all markets. Thus, one could say that some of the population works in Mediocristan and some in Extremistan.
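To see how a crowd of individually overconfident risk-takers can add up to the lopsided, fat-tailed distributions of the fractal lens, here is a small, purely illustrative simulation in Python. The parameters are assumptions, not estimates of any real market: each venture's payoff is modeled as the product of many small lucky or unlucky breaks, a standard way to generate a heavy right tail, and a normally distributed "Mediocristan" salary pool is included for contrast.

```python
# A purely illustrative simulation: many ventures whose payoffs are the
# product of many small random breaks produce a heavy-tailed,
# winner-take-most distribution; salaries drawn from a normal
# distribution ("Mediocristan") do not. Parameters are arbitrary.

import random

random.seed(0)
N = 100_000

def venture_payoff(rounds=30):
    """Multiplicative luck: each round multiplies the stake by a random factor."""
    payoff = 1.0
    for _ in range(rounds):
        payoff *= random.lognormvariate(0.0, 0.5)
    return payoff

ventures = sorted((venture_payoff() for _ in range(N)), reverse=True)
salaries = sorted((max(0.0, random.gauss(60_000, 15_000)) for _ in range(N)), reverse=True)

def top_share(values, fraction=0.01):
    """Share of the total captured by the top `fraction` of observations."""
    k = max(1, int(len(values) * fraction))
    return sum(values[:k]) / sum(values)

print(f"Extremistan (ventures): top 1% take {top_share(ventures):.0%} of all returns")
print(f"Mediocristan (salaries): top 1% take {top_share(salaries):.0%} of all pay")
```

Under these assumptions the venture pool ends up with a small number of runaway winners capturing most of the total, while the salary pool stays comparatively flat, which is the basic shape of the winner-take-all markets described above.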

The fractal lens also tells us that while some of these successes may be skill-based, there is also a great deal of uncertainty as to which inventions, songs, restaurants, and other businesses will succeed while others fail. An interesting discussion of this phenomenon as applied to the movie industry can be found in this paper by Arthur De Vany, Uncertainty in the Movie Industry: Does Star Power Reduce the Terror of the Box Office?

As Kahneman relates, the founders of Google were once willing to sell their business for $1 million but could not find a buyer at that price. While one could dwell on the apparent stupidity of the potential buyers in hindsight, the valuation might have been fair at the time given the uncertainty surrounding the business. But it also shows that the current wealth of Google's founders was subject to the uncertainty and apparent good fortune -- in hindsight, of course -- of NOT being able to find a buyer for their business when they thought they should sell it.

One might surmise that while skill certainly makes success in a business venture more likely, the magnitude or amplitude of that success is subject to an enormous amount of uncertainty relating to both "known unknowns" and "unknown unknowns" -- events in the future that are completely unpredictable.

I am quite fond of the original Star Trek series, which presented classic parables and political commentary in the guise of science fiction. In one episode, the Enterprise narrowly escapes being cut to pieces by a "Doomsday device" created by a long-dead civilization, a machine that continues destroying solar systems long after it destroyed its creators and their enemies. There is a colloquy between Science Officer Spock and the ship's doctor:

Barry Ritholtz is currently running a long-form interview series called "Masters in Business" in which he talks to all kinds of extremely successful, and usually extremely wealthy, people. When asked how they became not just successful but fabulously successful, almost all cite "luck" as a major component.