INTRODUCTION
On the weekend of July 30th, Edge convened one of its "Master Classes."[3] In the past, these classes have featured short courses taught by people such as psychologist and Nobel Laureate Daniel Kahneman[4] ("A Short Course in Thinking About Thinking"); behavioral economists Richard Thaler[5] and Sendhil Mullainathan[6], again with Kahneman ("A Short Course in Behavioral Economics"); and genomic researchers George Church[7] and J. Craig Venter[8] ("A Short Course on Synthetic Genomics").

This year, the psychologist and social scientist Philip E. Tetlock[9] presented the findings based on his work on forecasting as part of the Good Judgment Project. In 1984, Tetlock began holding "forecasting tournaments" in which selected candidates were asked questions about the course of events: In the wake of a natural disaster, what policies will be changed in the United States? When will North Korea test nuclear weapons? Candidates examine the questions in teams. They are not necessarily experts, but attentive, shrewd citizens.

Steven Pinker[10], who has written about Tetlock's work on superforecasting, noted that "Tetlock is one of the very, very best minds in the social sciences today. He has come up with one brilliant idea after another, and superforecasting is no exception. Everyone agrees that the way to know if an idea is right is to see whether it accurately predicts the future. But which ideas, which methods, which people have an actual, provable track record of non-obvious predictions vindicated by the course of events? The answers will surprise you, and have radical implications for politics, policy, journalism, education, and even epistemology—how we can best gain knowledge about the world we live in."

Over the weekend in Napa, Tetlock held five classes, which are being presented by Edge in their entirety (8.5 hours of video and audio) along with accompanying transcripts (61,000 words). Commenting on the event, one of the participants wrote:

The interesting thing is that this is not about a latest trend that might scale in one or two years, but about real change that might take a decade or two. These master classes are also far more profound than any of the conferences popularizing contemporary intellectualism, and the chance to spend that much time with the clairvoyants in a setting like this gives you a sense of community much greater than anything those advertised events offer.

In the circle of clairvoyants: At a vineyard north of San Francisco, Philip Tetlock of the University of Pennsylvania (left) presented his findings. Nobel Laureate Daniel Kahneman (third from left) was initially skeptical. Photo: John Brockman / edge.org

It is as though high-status pundits have learned a valuable survival skill, and that survival skill is that they've mastered the art of appearing to go out on a limb without actually going out on a limb. They say dramatic things, but there are vague verbal quantifiers attached to the dramatic things. It sounds as though they're saying something very compelling and riveting. A scenario has been conjured up in your mind of something either very good or very bad. It's vivid, easily imaginable.

On close inspection, it turns out they're not really saying that's going to happen. They're not specifying the conditions, a time frame, or a likelihood, so there's no way of assessing accuracy. You could say these pundits are just doing what a rational pundit would do, because they know they live in a somewhat stochastic world. They know it's a world that is frequently going to throw surprises at them, so to maintain their credibility with their community of co-believers they need to be vague. It's an essential survival skill. There is considerable truth to that, and forecasting tournaments are a very different way of proceeding. Forecasting tournaments require people to attach explicit probabilities to well-defined outcomes in well-defined time frames, so you can keep score.
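The "keeping score" in such tournaments is typically done with a proper scoring rule; the Brier score is the one most closely associated with the Good Judgment Project, though the tournament's exact scoring details may differ. A minimal sketch in Python (the function name and the toy forecasts are illustrative, not drawn from the tournament):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0..1)
    and realized binary outcomes (0 or 1). Lower is better:
    0.0 is perfect, 0.25 is the score of always saying 50/50."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, decisive forecaster beats a hedger who always says 0.5:
confident = brier_score([0.9, 0.8, 0.1], [1, 1, 0])  # → 0.02
hedger = brier_score([0.5, 0.5, 0.5], [1, 1, 0])     # → 0.25
```

Because the score penalizes vagueness (a permanent 0.5 can never beat 0.25), it removes exactly the escape hatch the vague pundit relies on.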

Tournaments have scientific value: they help us test psychological hypotheses about the drivers of accuracy, and they help us test statistical ideas. They also have value inside organizations and businesses: a more accurate probability helps to price options better on Wall Street.

I wanted to focus more on what I see as the wider societal value of tournaments and the potential value of tournaments in depolarizing unnecessarily polarizing policy debates. In short, making us more civilized. ...

There is a well-developed research literature on how to measure accuracy. There is no comparably well-developed literature on how to measure the quality of questions. Question quality is going to be absolutely crucial if we want tournaments to play a role in tipping the scales of plausibility in important debates, and in incentivizing people to behave more reasonably in those debates.

There's a picture of two people on slide seventy-two, one of whom is one of the most famous historians of the 20th century, E.H. Carr, and the other a famous economic historian at the University of Chicago, Robert Fogel. They could not have had more different attitudes toward the importance of counterfactuals in history. For E.H. Carr, counterfactuals were a pestilence, a frivolous parlor game, a methodological rattle, a sore loser's history. It was a waste of cognitive effort to think about counterfactuals. You should think about history the way it did unfold and figure out why it had to unfold the way it did—almost a prescription for hindsight bias.

Robert Fogel, on the other hand, approached it more like a scientist. He quite correctly recognized that if you want to draw causal inferences from any historical sequence, you have to make assumptions about what would have happened if the hypothesized cause had taken on a different value. That's a counterfactual. So you had this interesting tension. Many historians do still agree, in some form, with E.H. Carr. Virtually all economic historians would agree with Robert Fogel, who is one of the pivotal figures in economic history; he won a Nobel Prize. But there's this very interesting tension between people who are more open or less open to thinking about counterfactuals, and why that is, is worth exploring.

A famous economist, Albert Hirschman, had a wonderful phrase, "self-subversion." Some people, he thought, were capable of thinking in self-subverting ways. What would a self-subverting liberal or conservative say about the Cold War? A self-subverting liberal might say, "I don’t like Reagan. I don’t think he was right, but yes, there may be some truth to the counterfactual that if he hadn’t been in power and doing what he did, the Soviet Union might still be around." A self-subverting conservative might say, "I like Reagan a lot, but it’s quite possible that the Soviet Union would have disintegrated anyway because there were lots of other forces in play."

Self-subversion is an integral part of what makes superforecasting cognition work. It’s the willingness to tolerate dissonance. It’s hard to be an extremist when you engage in self-subverting counterfactual cognition. That’s the first example. The second example deals with how regular people think about fate and how superforecasters think about it, which is, they don’t. Regular people often invoke fate, "it was meant to be," as an explanation for things.

The beauty of forecasting tournaments is that they’re pure accuracy games that impose an unusual monastic discipline on how people go about making probability estimates of the possible consequences of policy options. It’s a way of reducing escape clauses for the debaters, as well as reducing motivated reasoning room for the audience.

Tournaments, if they’re given a real shot, have a potential to raise the quality of debates by incentivizing competition to be more accurate and reducing functionalist blurring that makes it so difficult to figure out who is closer to the truth.