We are all dilettantes in many, if not most, areas of life and learning. When we ponder possible futures and appropriate courses of action and we encounter the limits of our own understanding, what can we do but turn to the experts on matters ranging from the weather and the stock market to the health of our bodies and our nations and so much in between? We realize (at least sometimes) that we don’t know what the future holds, but at least the experts have a pretty good idea. Don’t they? For anyone who gains solace or inspiration from the conviction displayed by Sunday morning political pundits or the “I told you so” Monday morning quarterbacks who populate every field, Philip Tetlock’s Expert Political Judgment will be sobering.

The results of his painstaking research are complex, nuanced, and contingent, but the bottom line is clear enough. Tetlock’s data “plunk human forecasters into an unflattering spot along the performance continuum, distressingly closer to the chimp than to the formal statistical models.” In fact, “it is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, still less sophisticated statistical ones” (emphasis in original). Worst of all, those experts with the poorest track records are the most likely to show up on TV screens and blog sites everywhere.

Tetlock is a social psychologist by training, a political scientist by choice, and now a business school professor (at the University of California, Berkeley) by avocation. For over 20 years, he has been a pioneer in the relatively young interdisciplinary field of political psychology. His wide-ranging, partially overlapping interests in lay theories of epistemology and philosophy of science, cognitive styles, motivated reasoning, political ideology, domestic and foreign policy decision-making, counterfactual thinking, and accountability are all brought together in this, his most ambitious, profound, and integrative book to date. In many ways, it is a tour de force, providing as it does a vivid, sophisticated illustration of our limitations in forecasting and, at the same time, the analytical power of our psychological tools when applied in retrospect.

Tetlock asked 284 experts with advanced educational and professional training in international relations, political science, law, economics, business, public policy, and journalism to make thousands of predictions between 1988 and 2003. Participants rendered both short-term and long-term subjective probability estimates for possible events both inside and outside their domains of expertise, including the Persian Gulf War, the transition from Communism in Eastern bloc countries, the fall of apartheid in South Africa, the outcomes of U.S. presidential elections, the existence of weapons of mass destruction, and the bursting of the Internet bubble. These topics are so intriguing that one wants to see the experts’ predictions in detail, on a case-by-case basis. Unfortunately, Tetlock keeps the reader fairly removed from the raw, unprocessed data and offers instead more abstract generalizations concerning the characteristics of better and worse judges.

To cope with the mind-boggling complexity involved in processing over 80,000 expert predictions and distilling the concomitants of accuracy, Tetlock boils things down to a single dimension of cognitive style that captures most of the good judgment he could find. Drawing on an essay by Isaiah Berlin, Tetlock distinguishes between “foxes,” who “‘know many little things,’ draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life” and “hedgehogs,” who “‘know one big thing,’ toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems.”

How does Tetlock measure the location of each of his experts on the fox-hedgehog continuum? In the book’s Methodological Appendix, we learn that he used a factor analysis of responses to a “styles of reasoning” questionnaire comprising 13 items. Eight items were drawn from the “need for cognitive closure” scale [See, e.g., D. M. Webster, A. W. Kruglanski, J. Pers. Soc. Psychol. 67, 1049 (1994)]. One item provides respondents with Berlin’s definition and asks them to classify themselves as either foxes or hedgehogs. The remaining four items focus on relative preferences for simplicity, parsimony, predictability, and decisiveness (all of which are more appealing to hedgehogs than to foxes). Tetlock’s hedgehog-fox score is based on the seven items that had the highest loadings (above 0.25) on the first factor.
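The scoring procedure described above can be sketched in a few lines of code. This is a toy illustration, not Tetlock’s actual analysis: the responses are simulated, the first principal component stands in for the first factor, and the item names and cutoffs other than the 0.25 loading threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated questionnaire: 200 experts answering 13 items on an agreement
# scale. The latent trait and loadings are invented for illustration only.
n_experts, n_items = 200, 13
latent = rng.normal(size=n_experts)                       # underlying fox-hedgehog disposition
true_loadings = rng.uniform(-0.2, 0.9, size=n_items)
responses = np.outer(latent, true_loadings) + rng.normal(size=(n_experts, n_items))

# Standardize items, then approximate "first factor" loadings as each item's
# correlation with scores on the first principal component.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
first_pc = z @ eigvecs[:, -1]                             # largest-eigenvalue component
loadings = np.array([np.corrcoef(z[:, j], first_pc)[0, 1] for j in range(n_items)])

# Retain items whose loadings exceed 0.25 in magnitude (Tetlock kept seven
# such items) and score each expert by the mean of the retained items.
keep = np.abs(loadings) > 0.25
fox_hedgehog_score = z[:, keep].mean(axis=1)
print(keep.sum(), "items retained")
```

In practice one would also reverse-score negatively worded items before averaging; the sketch sidesteps that by taking absolute loadings.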

Much of the book details the ways in which foxes outperform hedgehogs as prognosticators and Bayesian updaters. Foxes scored higher than others on measures of calibration; their subjective probability estimates were better correlated with the objective frequencies of the events they were predicting, especially in the short term. The worst judges were hedgehog extremists who made long-term predictions in their own areas of expertise. They correctly anticipated war in the former Yugoslavia, but they also predicted several wars that did not happen. Even more than others, they frequently overestimated the likelihood of drastic changes from the status quo.
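Calibration of the kind measured here can be made concrete with a short sketch. The data below are simulated (and well calibrated by construction), not Tetlock’s; the binning scheme and the Murphy-style decomposition of the calibration term are standard forecasting conventions, not details taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative forecasts: each is a stated subjective probability plus
# whether the predicted event actually occurred (1) or not (0).
probs = rng.uniform(size=1000)
outcomes = (rng.uniform(size=1000) < probs).astype(int)   # calibrated by construction

# Calibration component: weighted mean squared gap between the average
# stated probability and the observed frequency within each probability bin.
bins = np.clip((probs * 10).astype(int), 0, 9)
calibration = 0.0
for b in range(10):
    mask = bins == b
    if mask.any():
        gap = probs[mask].mean() - outcomes[mask].mean()
        calibration += mask.mean() * gap ** 2
print(round(calibration, 4))
```

A perfectly calibrated forecaster scores 0; the hedgehog pattern Tetlock describes, confident probabilities that outrun observed frequencies, inflates this number.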

When unexpected outcomes occurred, hedgehogs were less likely than foxes to revise their beliefs in light of new realities. They were also more likely to display hindsight bias, believing that they “knew it all along,” even when they did not, and they were less charitable toward their competition, exaggerating the extent to which rivals were mistaken. The only advantage hedgehogs enjoyed, other than greater media exposure, was a tendency to swing for the home-run fences. They were almost twice as likely as foxes to declare certain events as either inevitable or impossible, and when they did so they were usually correct.