One of my concerns when reading history of any sort is that the facts sometimes seem to fit together a little too easily. I’ve read too much about our untrustworthy memories and logical fallacies to believe myself most of the time. I certainly don’t trust anyone else.

In The Undoing Project, Michael Lewis tells the human-interest story behind the working partnership of Daniel Kahneman and Amos Tversky, which resulted in a Nobel Prize in Economics and in Kahneman’s popular book Thinking, Fast and Slow. Recently, I read about some talks Tversky gave at the State University of New York at Buffalo. One that stood out to Irv Biederman, the professor who invited him, was on what Tversky called “Historical Interpretation: Judgment Under Uncertainty.” Given to a roomful of historians, it wasn’t necessarily very comforting.

In the course of our personal and professional lives, we often run into situations that appear puzzling at first blush. We cannot see for the life of us why Mr. X acted in a particular way, we cannot understand how the experimental results came out the way they did, etc. Typically, however, within a very short time we come up with an explanation, a hypothesis, or an interpretation of the facts that renders them understandable, coherent, or natural. The same phenomenon is observed in perception. People are very good at detecting patterns and trends even in random data. In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly “exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way.”1

“Historical judgment,” he said, was “part of a broader class of processes involving intuitive interpretation of data.”2 I think we mostly labor under the impression that the history we read is strictly factual, but the judgment part of “historical judgment” can’t be ignored. Take for example the research done by one of Amos’s graduate students at Hebrew University, Baruch Fischhoff.

When Richard Nixon announced his surprising intention to visit China and Russia, Fischhoff asked people to assign odds to a list of possible outcomes—say, that Nixon would meet Chairman Mao at least once, that the United States and the Soviet Union would create a joint space program, that a group of Soviet Jews would be arrested for attempting to speak with Nixon, and so on. After the trip, Fischhoff went back and asked the same people to recall the odds they had assigned to each outcome. Their memories of the odds they had assigned to various outcomes were badly distorted. They all believed that they had assigned higher probabilities to what happened than they actually had. That is, once they knew the outcome, they thought it had been far more predictable than they had found it to be before, when they had tried to predict it. A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon “hindsight bias.”3

“Hindsight bias” is a very troubling tendency in a conscientious historian, but how many are careless? Even if they’re not, how much of the source material itself is subtly infected by unreliable witnesses to the facts? We all seek coherent narratives to fit the facts in evidence.

“All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This ‘ability’ to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is, and that we are less bright than we actually might be. For if we can explain tomorrow what we cannot predict today, without any added information except the knowledge of the actual outcome, then this outcome must have been determined in advance and we should have been able to predict it. The fact that we couldn’t is taken as an indication of our limited intelligence rather than of the uncertainty that is in the world. All too often, we feel like kicking ourselves for failing to foresee that which later appears inevitable. For all we know, the handwriting might have been on the wall all along. The question is: was the ink visible?”4

Much of the trouble with the various cognitive biases is that knowing they exist does not banish them. For instance, it is natural to stereotype, and it is impossible to avoid subconscious reactions. You can prime your best self by seeking disconfirming, positive examples from within a prejudged group, but this is a minor hedge at best. The most effective protection remains vigilance against our worst selves and careful thought before acting or speaking.

The historians in his audience of course prided themselves on their “ability” to construct, out of fragments of some past reality, explanatory narratives of events which made them seem, in retrospect, almost predictable. The only question that remained, once the historian had explained how and why some event had occurred, was why the people in his narrative had not seen what the historian could now see. “All the historians attended Amos’s talk,” recalled Biederman, “and they left ashen-faced.”5