Commentary on Economics, Information and Human Action

Reason book review, Silver and Weatherall

Well, this winter’s been more hectic than I anticipated! Teaching three classes with two new course preps, a paper at the Public Choice Society meetings, helping coordinate our search for two new members of our department’s teaching/lecturer faculty … and I haven’t been here in a while!

A feature of both books that I really liked was the engaging narrative. Silver is very effective at communicating arcane statistical ideas clearly, and his book is structured as a series of topical vignettes in applied statistics, from weather to poker to gambling on basketball to, of course, political prediction. Weatherall tells stories about several mathematicians and statisticians whose work has informed the financial instrument innovations of the past two decades, and they are well-told and illuminating stories.

In Weatherall’s case I wasn’t as persuaded by his original analysis or his conclusions, which I think are a bit glib and narrow-minded. He brings a physicist’s sensibilities and biases to evaluating the financial crisis, and I think he misses the boat there.

Weatherall’s conclusion is accurate as far as it goes, but the financial crisis was largely a failure of the institutions and incentives that made financial markets more brittle, not solely a failure of mathematical modeling per se. While the Warren Zevon fan in me appreciates his epilogue’s invocation to “send physics, math and money!” to enable better outcomes in financial markets, it’s a prescription that overlooks the distorted incentives that existed, and persist, in those markets.

Judging from other reviews of Weatherall posted in the Reason comment thread, I’m not the only one skeptical of his conclusions; here’s a negative review that also includes links to other reviews, both positive and negative.

Since the review was for a non-technical outlet (and given the limited word count!) I didn’t delve too deeply into the statistical aspects of the Silver book, but this New Yorker blog post does so in a way that lines up pretty closely with my thinking. Bayesian statistics isn’t new, and many of the problems on which Silver focuses are ones with a clear, observable outcome: the winner or loser of a political contest, sporting event, or poker game. He also focuses on instances in which the subjective prior probability of an event’s occurrence is going to fall in a narrow range, or be commonly agreed upon based on prior data and experience.
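To make that concrete, here is a minimal sketch of the kind of updating Silver describes: a single application of Bayes’ rule to a binary outcome. The numbers are hypothetical, chosen only to illustrate the mechanics when forecasters broadly agree on the prior.

```python
# A minimal, hypothetical sketch of Bayesian updating for a binary outcome.
# All numbers are illustrative, not drawn from Silver's book.

def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Posterior probability of hypothesis H after observing the data."""
    numerator = prior * p_data_given_h
    denominator = numerator + (1 - prior) * p_data_given_not_h
    return numerator / denominator

# Suppose forecasters broadly agree, from past races, on a 30% prior that
# the challenger wins.  A new poll showing the challenger ahead is twice as
# likely if the challenger really is winning (0.6) as if not (0.3).
posterior = bayes_update(prior=0.30, p_data_given_h=0.6, p_data_given_not_h=0.3)
print(round(posterior, 3))  # 0.462
```

Because the prior is shared and the outcome is ultimately observable, everyone updates toward the same answer and the forecast can be scored against reality.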

Here’s the problem with that: what about the many instances in which those subjective priors vary widely, with no common core of data or experience to suggest one? As Marcus and Davis note in their New Yorker blog post:

But the Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be. For example, in a notorious series of experiments, Stanley Milgram showed that many people would torture a victim if they were told that it was for the good of science. Before these experiments were carried out, should these results have been assigned a low prior (because no one would suppose that they themselves would do this) or a high prior (because we know that people accept authority)? In actual practice, the method of evaluation most scientists use most of the time is a variant of a technique proposed by the statistician Ronald Fisher in the early 1900s. Roughly speaking, in this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
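The Marcus-and-Davis objection can be sketched in a few lines: given the same evidence, two analysts who start from different but defensible priors end up with very different posteriors, while a Fisher-style significance test needs no prior at all. The likelihoods and counts below are hypothetical, not the actual Milgram data.

```python
# Illustrative numbers only -- not the actual Milgram results.
from math import comb

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior probability of H given the observed data."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

# Same evidence, two defensible priors on "most people will obey":
likelihoods = dict(likelihood_h=0.8, likelihood_not_h=0.1)
skeptic = bayes_update(prior=0.05, **likelihoods)  # "no one would do this"
cynic = bayes_update(prior=0.50, **likelihoods)    # "people accept authority"
print(skeptic, cynic)  # roughly 0.30 vs. 0.89 -- the prior dominates

# Fisher's approach sidesteps the prior: the p-value is the chance of
# seeing 26 or more "obedient" subjects out of 40 if each subject were
# equally likely to obey or refuse (a 50/50 null hypothesis).
p_value = sum(comb(40, k) for k in range(26, 41)) / 2**40
print(p_value < 0.05)  # True: the null is rejected without any prior
```

The point is not that either method is wrong, but that the Bayesian answer here is driven as much by the choice of prior as by the data, which is exactly the situation Marcus and Davis flag.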

Unfortunately, Silver’s discussion of alternatives to the Bayesian approach is dismissive, incomplete, and misleading. In some cases, Silver tends to attribute successful reasoning to the use of Bayesian methods without any evidence that those particular analyses were actually performed in Bayesian fashion.