

Inside Track: Being Right Matters

Thanks to news cycles and short attention spans, pundits get away with murder. Columnists and talking heads can issue endless prognostications about what Iraq will look like in another six months, and because nobody is going to remember to follow up six months later, it doesn't matter whether they were right.

Last week, pro-war liberals Michael O’Hanlon and Kenneth Pollack wrote a New York Times op-ed arguing that the Iraq troop surge was working and should be extended into 2008. They may be right, but based on their track records, it’s doubtful. And track records ought to matter. Rather than hiding behind arguments that are couched in conditionals and mushy language, pundits should put specific predictions on the record, in clear, falsifiable language, so that the public can better determine who among us actually knows what he’s talking about.

Foreign-policy analysts have an incredibly difficult task: to make predictions about the future based on particular policy choices in Washington. These difficulties extend into the world of intelligence, as well. The CIA issues reports with impossibly ambitious titles like “Mapping the Global Future”, as if anyone could actually do that. The father of American strategic analysis, Sherman Kent, grappled with these difficulties in his days at OSS and CIA. When Kent finally grew tired of the vapid language used for making predictions, such as “good chance of”, “real likelihood that” and the like, he ordered his analysts to start putting odds on their assessments. When a colleague complained that Kent was “turning us into the biggest bookie shop in town”, Kent replied that he’d “rather be a bookie than a [expletive] poet.”

Kent’s instinct was right. More bookies and fewer poets are what the United States needs, both in intelligence analysis and in foreign-policy punditry. University of California, Berkeley, professor Philip Tetlock examined large data sets in which experts on various topics made predictions about the future. He was troubled to discover “an inverse relationship between how well experts do on scientific indicators of good judgment and how attractive these experts are to the media and other consumers of expertise.” He proposed one way to reform the situation: conditioning experts’ appearance in high-profile media venues on “proven track records in drawing correct inferences from relevant real-world events unfolding in real time.”
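Tetlock's "scientific indicators of good judgment" can be made concrete. One standard tool for scoring probability forecasts against outcomes (my choice of illustration; the column does not name a specific metric) is the Brier score, the mean squared error between stated probabilities and what actually happened:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (each in 0..1)
    and binary outcomes (0 or 1). Lower is better; 0 is a perfect record."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A pundit who hedges at 50% on everything scores 0.25 no matter what happens...
hedger = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1])   # 0.25
# ...while a forecaster with genuine discrimination scores lower.
sharp = brier_score([0.9, 0.1, 0.8, 0.7], [1, 0, 1, 1])    # 0.0375
```

The point of such a rule is exactly the one Kent made: it punishes the vague "real likelihood that" hedge and rewards analysts willing to state odds that events can then confirm or refute.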

Which brings us back to the authors of the New York Times piece. Michael O’Hanlon, for example, argued in February 2004 that the “dead-enders are few in number and have little ability to inspire a broader following among the Iraqi people.” Kenneth Pollack gained notoriety for his publication of The Threatening Storm, a book that argued Saddam Hussein was close to obtaining nuclear weapons and was not a deterrable actor.

So, the argument goes, why should they be revered as authorities, given that they’ve been so wrong in the past?

It’s a fair question. The best way to correct the situation is to develop a predictions database, where experts can weigh in on specific, falsifiable claims about the future, putting their reputations on the line. Something like this was envisioned in a DARPA program developed under Admiral John Poindexter in 2003. The so-called “policy analysis market” was designed to let analysts buy futures contracts tied to various scenarios. As the value of those contracts rose or fell, other analysts could observe and investigate why, determining how and why others were “putting their money where their mouths were”, and whether they should do the same.
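Mechanically, a market like the one described is usually run by an automated market maker. A common design for this (my assumption for illustration; the column does not describe the DARPA program's actual mechanism) is a logarithmic market scoring rule, in which the instantaneous price of each scenario's contract doubles as the market's probability estimate for that scenario:

```python
import math

class LMSRMarket:
    """Minimal logarithmic-market-scoring-rule maker over mutually
    exclusive scenarios. Prices always sum to 1 and read as probabilities."""

    def __init__(self, outcomes, b=100.0):
        self.b = b                              # liquidity parameter
        self.shares = {o: 0.0 for o in outcomes}

    def _cost(self):
        # C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in self.shares.values()))

    def price(self, outcome):
        # Instantaneous price: softmax of outstanding shares.
        z = sum(math.exp(q / self.b) for q in self.shares.values())
        return math.exp(self.shares[outcome] / self.b) / z

    def buy(self, outcome, n):
        """Buy n shares of `outcome`; returns the cost charged to the trader."""
        before = self._cost()
        self.shares[outcome] += n
        return self._cost() - before

# An analyst who believes "plot underway" is underpriced buys it up,
# and the quoted probability moves for everyone else to see.
m = LMSRMarket(["plot underway", "no plot"])
p0 = m.price("plot underway")     # 0.5 to start
cost = m.buy("plot underway", 50)
p1 = m.price("plot underway")     # now above 0.5
```

This is what "putting their money where their mouths are" buys analytically: a confident trade is costly if wrong, and the resulting price move is a public, quantified signal that others can investigate.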

But the “policy analysis market” sank beneath a wave of demagoguery from congressmen who showed an astonishing lack of understanding of how prediction markets are used to great effect in investment banking, insurance, and other industries.

To cite one historical example: had such a market existed before 9/11, Coleen Rowley, the FBI agent whose Minneapolis field office arrested Zacarias Moussaoui and whose attempts to further investigate the conspiracy were stymied, could have taken her suspicions to the futures market. As her trades moved the market, other observers would have had an incentive to investigate why she was so certain that a dangerous plot was afoot.

A number of similar enterprises have sprung up since 9/11. Foreign Policy magazine publishes a “terrorism index” in which foreign-policy experts predict the likelihood of various events. The results are not encouraging: in the 2006 version, 57 percent of experts said that an attack on the United States “on the scale of those that took place in London and Madrid” was either “likely or certain” before the end of 2006.

Predicting the future is hard, and if nothing else, pundits are experts at explaining why their failed predictions are somebody else’s fault. It may be the case that even the best experts rarely make accurate predictions of important events. But the only way to better our predictions in the future is to learn not just who gets things right, but why. Putting our reputations where our mouths are would teach us a great deal.