Popper dubbed “clocklike” because of their deterministic regularities; on the other side, it is bounded
by problems he dubbed “cloudlike” because of their
uncertainty.4

Clocklike problems are tractable and stable, and
they can be defined by past experience (as in actuarial tables or credit reports). Statistical prediction
models can shine here. Human judgment operates
on the sidelines, although it still plays a role under
unusual conditions (such as assessing the impact of
new medical advances on life expectancies). Cloudlike problems (for example, assigning probabilities
to global warming causing mega-floods in Miami in
2025 or ascertaining whether intelligent life exists
on other planets) are far murkier. However, what’s
most critical in such cases is the knowledge base of
experts and, more importantly, their nuanced
appreciation of what they do and don’t know. The
sweet spot for managers lies in combining the
strengths of computers and algorithms with seasoned human judgment and judicious questioning.
(See “Finding the Sweet Spot.”) By avoiding the judgmental biases that often distort human information processing and by recognizing the precarious assumptions on which statistical models sometimes rest, managers can occasionally make the analytical whole more than the sum of its parts.
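As a concrete illustration of this combination (our own sketch, not a method prescribed in the article), one simple way to blend the two sources is a weighted average of a statistical model’s probability and an expert’s probability, with the weight reflecting how clocklike or cloudlike the problem is. The function name and numbers below are invented for the example:

```python
def blend_forecast(model_prob: float, expert_prob: float,
                   model_weight: float = 0.5) -> float:
    """Blend a statistical model's probability with an expert's judgment.

    A model_weight near 1.0 suits clocklike problems (stable, data-rich);
    a weight near 0.0 suits cloudlike problems, where an expert's sense of
    what is and isn't known matters more. The 0.5 default is illustrative.
    """
    if not (0.0 <= model_prob <= 1.0 and 0.0 <= expert_prob <= 1.0):
        raise ValueError("probabilities must lie in [0, 1]")
    if not (0.0 <= model_weight <= 1.0):
        raise ValueError("model_weight must lie in [0, 1]")
    return model_weight * model_prob + (1.0 - model_weight) * expert_prob

# Hypothetical case: an actuarial model says 12% default risk; a seasoned
# analyst, aware of a regulatory change the model has never seen, says 25%.
blended = blend_forecast(0.12, 0.25, model_weight=0.7)
print(f"blended probability: {blended:.2f}")  # 0.16
```

In practice, the right weight is itself a judgment call; tracking blended versus standalone accuracy over time is one way to tune it.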

Creating a truly intelligent enterprise is neither
quick nor simple. Some of what we recommend
will seem counterintuitive and will require training.

Breakthroughs in cognitive psychology over the past few decades have attuned many sophisticated leaders to the biases and traps of undisciplined thinking.5 However, few companies have been able to transform these insights into game-changing practices that make their business much smarter. Companies that perform data mining remain blissfully unaware of the quirks and foibles that shape their analysts’ hunches. At the same time, executive teams advancing opinions are seldom asked to defend their views in depth. In most cases, outcomes of judgments or decisions are rarely reviewed against the starting assumptions. There is a clear opportunity to raise a company’s IQ by both improving corporate decision-making processes and leveraging data and technology tools.

2. Run Prediction Tournaments

One promising method for creating better corporate forecasts involves using what are known as
prediction tournaments to surface the people and
approaches that generate the best judgments in a
given domain. The idea of a prediction tournament
is to incentivize participants to predict what they
think will happen, translate their assessments into
probabilities, and then track which predictions
prove most accurate. In a prediction tournament,
there is no benefit in being overly positive or overly
negative, or in engaging in strategic gaming against
rivals. The job of tournament organizers is to develop a set of relevant questions and then attract
participants to provide answers.
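A minimal sketch of how an organizer might score such a tournament, assuming binary yes/no questions and using the Brier score described in “About the Research” (the participant names and numbers here are invented for illustration):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and what happened.

    outcome is 1 if the event occurred, 0 if not. Lower is better:
    0.0 is a perfect call; an uninformative 50/50 forecast scores 0.25.
    """
    return (forecast - outcome) ** 2

def mean_brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Average Brier score across a participant's resolved questions."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical tournament: three resolved questions, two participants.
outcomes = [1, 0, 0]
participants = {
    "overconfident": [0.95, 0.90, 0.05],  # extreme calls on every question
    "calibrated":    [0.70, 0.30, 0.20],  # moderate, well-judged probabilities
}
for name, forecasts in sorted(participants.items(),
                              key=lambda kv: mean_brier(kv[1], outcomes)):
    print(f"{name}: {mean_brier(forecasts, outcomes):.3f}")
```

Note how the scoring rule enforces the point above: the overconfident participant’s extreme but wrong call on the second question is punished heavily, so being overly positive or negative carries no advantage.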

One organization that has used prediction tournaments effectively is the Intelligence Advanced
Research Projects Activity (IARPA). It operates
within the U.S. Office of the Director of National
Intelligence and is responsible for running high-risk, high-return research on how to improve
intelligence analysis. In 2011, IARPA invited five
research teams to compete to develop the best
methods of boosting the accuracy of human probability judgments of geopolitical events. The topics
ran the gamut, from possible Eurozone exits to
the direction of the North Korean nuclear program. One of the authors (Phil Tetlock) co-led a
team known as the Good Judgment Project,6 which
won this tournament by ignoring folklore and conducting field experiments to discover what really
drives forecasting accuracy. Four key factors
emerged as critical to successful predictions:7

ABOUT THE RESEARCH

This article combines insights from strategy, organization theory, human judgment,
predictive analytics, and management science. The ideas described in several of the
five methods are based on what we learned in working with companies, as well as from
our involvement in a geopolitical and economic forecasting tournament that ran from
2011 through 2015, funded by the Intelligence Advanced Research Projects Activity
(IARPA). This tournament required the entrants to develop probabilistic forecasts,
which were then scored based on actual outcomes. Five academic research teams
recruited a total of 20,000 forecasters to participate in four yearly rounds of the IARPA
tournament. The official performance metric for each team was its cumulative Brier
score, a measure that assesses probabilistic accuracy. The scores were compared
across questions, teams, and experimental conditions. Phil Tetlock and Barbara Mellers,
the I. George Heyman University Professor at the University of Pennsylvania, led the
Good Judgment Project team, with Paul Schoemaker serving as one of several advisers.
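For reference, the Brier score for a set of two-outcome questions takes the standard form (the notation below is ours):

$$\mathrm{BS} = \frac{1}{N}\sum_{t=1}^{N}\left(f_t - o_t\right)^2$$

where $f_t$ is the probability a forecaster assigned to question $t$, $o_t \in \{0,1\}$ records whether the event occurred, and $N$ is the number of resolved questions. Lower is better: a perfect forecaster scores 0, while an uninformative 0.5-for-everything strategy scores 0.25. A team’s cumulative score aggregates these values across all of its resolved questions.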