A post recently came up in my Facebook feed that is notable for the confluence of three things: (1) it makes a spectacular claim, (2) the claim is wrong, and (3) it’s not a journalist’s fault. The combination of (1) and (2) is quite common, but usually it turns out that the actual science is much less spectacular than the headline suggests, because a journalist or editor has misunderstood the science or amplified the claim unjustifiably in order to garner readers. In this case, though, the paper itself is at fault.


The claim in question is that “it is highly likely (99.999 percent) that the 304 consecutive months of anomalously warm global temperatures to June 2010 is directly attributable to the accumulation of global greenhouse gases in the atmosphere.”1 The Facebook post linked to an article from The Conversation, but that quote is directly from their paper, published this April in Climate Risk Management.

The authors estimated various contributing factors to global warming, with human CO2 emissions as the dominant factor.

So what’s wrong with their claim? Basically, they’re making the common statistical mistake of inverting probabilities through subtraction. What they did find is that, according to their model,

The chance of observing 304 consecutive months or more with temperature exceeding the 20th century average for the corresponding month is approximately 24.9 percent when eCO2 forcing is included in Model B and 52.9 percent in Model E (Fig. 7a). When eCO2 forcing is excluded from the simulations the probability of this occurring is less than 0.001 percent for both Models B and E.2

How did they arrive at 99.999% certainty that humans are driving global warming? They subtracted 0.001% from 100%. Unfortunately, that’s not how statistics works. What the authors found was that, in their model, the probability of observing 304 anomalously hot months in a row without human intervention is 0.001%. Let’s denote that as p(A|~H) = 0.001%, where A stands for “Anomaly” and ~H stands for “not humanity’s fault”. So: “The probability that we’d observe this anomaly if it wasn’t humanity’s fault is 0.001%.” What we’d like to know is p(H|A): “The probability that the anomaly we observe is humanity’s fault.” The way we get from p(A|~H) to p(H|A) is not through subtraction, but through something called Bayes’ Formula or Bayes’ Theorem:

p(B|A) = p(A|B) × p(B) / p(A)

Using our notation, we’d have something like this:

p(H|A) = p(A|H) × p(H) / [p(A|H) × p(H) + p(A|~H) × p(~H)]

To get what we want, p(H|A) (the probability that the temperature anomalies we’ve observed are due to human causes), we’d first need to know p(A|~H) (the probability that we’d observe these anomalies without human interference). That’s what this study gives us, if we trust their model. But we’d also need to know p(H), the probability that humans have been interfering with the atmosphere, and p(A|H), the probability that we’d see this anomaly given human interference.3 For instance, if we observed the same pattern of temperature changes on a planet identical to Earth but with no human presence, we wouldn’t conclude that there was a 99.999% chance that humans were at fault. Rather, we would conclude that there is a 0% chance, because p(H) is 0. We don’t really know whether human activity is interfering with the atmosphere in a relevant way—that’s the question we want to answer—so let’s take a skeptical position and estimate it as 0.1. That is, let’s say that, if we didn’t know that there had been 304 anomalously hot months in a row, we’d think there was only a 10% chance that human-caused global warming was occurring. The authors do estimate p(A|H), as we can see in the above quotation: in one model it is 24.9% and in the other it is 52.9%. Splitting the difference, let’s say p(A|H) is 38.9%. Plugging in the numbers we get:

p(H|A) = (0.389 × 0.1) / (0.389 × 0.1 + 0.00001 × 0.9) ≈ 99.98%
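The arithmetic is easy to check with a few lines of Python. Note that the 0.1 prior and the averaged 38.9% likelihood are the illustrative assumptions made here, not values from the paper:

```python
# Check the Bayes' Theorem arithmetic using the figures quoted above.
# p_H = 0.1 is our illustrative "skeptic's prior", not a value from the paper.
p_H = 0.1                  # prior probability of human interference (assumed)
p_A_given_H = 0.389        # p(A|H): average of the paper's 24.9% and 52.9%
p_A_given_not_H = 0.00001  # p(A|~H): the paper's 0.001%

# Total probability of the anomaly, then the posterior via Bayes' Theorem
p_A = p_A_given_H * p_H + p_A_given_not_H * (1 - p_H)
p_H_given_A = p_A_given_H * p_H / p_A

print(f"{p_H_given_A:.4%}")  # roughly 99.98%
```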

99.98% is pretty good, but it’s not 99.999%.4 In fact, we could generate any answer we wished by manipulating p(H) (and that’s accepting the authors’ models). For a skeptic, p(H) might be even lower than 0.1, and they might not accept the modelling assumptions the authors made. That is one of the central conclusions of Bayesian probability: all probabilities of this sort are ultimately subjective. What is the probability that humans are driving global warming? I don’t know, and neither does anybody else. Personally I believe that we are, and I think you need to have some pretty wacky ideas about physics and climate measurement to think that we are not, but nobody can attach an objective probability in the way that these scientists claim to have done. If you could, you’d have solved Hume’s problem of induction. And that would be truly impressive.
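To see just how much the answer depends on the prior, here is a small sketch that holds the paper’s likelihoods fixed and varies p(H); the particular priors chosen are arbitrary illustrations:

```python
# Illustrative sketch: the posterior p(H|A) as a function of the prior p(H),
# with the likelihoods fixed at the values quoted from the paper. Choosing
# the prior lets you produce almost any posterior you like.
p_A_given_H = 0.389        # average of the paper's two model estimates
p_A_given_not_H = 0.00001  # the paper's 0.001%

def posterior(prior):
    """Posterior p(H|A) via Bayes' Theorem for a given prior p(H)."""
    evidence = p_A_given_H * prior + p_A_given_not_H * (1 - prior)
    return p_A_given_H * prior / evidence

for prior in (0.0, 0.001, 0.01, 0.1, 0.5):
    print(f"p(H) = {prior:<5}  ->  p(H|A) = {posterior(prior):.4%}")
```

Even a prior of one in a thousand yields a posterior above 97%, which is why the subjectivity of the prior, not the strength of the evidence, is doing the work in any claim of exact certainty.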

p(H) is known as the “prior probability” that humans are causing global warming. It’s the probability we’d assign to human caused global warming if we had access to no data about global temperatures but did know about atmospheric chemistry, physics, and the level of human emissions. For example, we know that CO2 is a greenhouse gas, so if we knew that humans were pumping lots of CO2 into the atmosphere, we’d expect that there would be a chance that global temperatures would rise, even without taking any measurements. ↩

An earlier version of this post made a decimal place error, and so the numbers have been adjusted for illustrative purposes. ↩

Mike is a Ph.D. candidate at the University of Toronto's IHPST. His research concentrates on social epistemology, the use of economics in philosophy of science, and philosophy of economics. Mike's personal website is www.mikethicke.com.

3 Comments

“The chance of observing 304 consecutive months or more with temperature exceeding the 20th century average for the corresponding month is approximately 24.9 percent when eCO2 forcing is included in Model B and 52.9 percent in Model E (Fig. 7a). ”

Doesn’t this mean that the 304 observed months of increase are highly improbable even with CO2? At least in Model B and somewhat in Model E.

Yes, it means that if Model B is a good representation of the atmosphere, the 304 month streak is still unlikely. But it is much less unlikely under Model B than in models that don’t include CO2 forcing. And it’s the comparison that matters.
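One way to make that comparison precise is a likelihood ratio (a Bayes factor). A quick sketch using the quoted figures, treating the paper’s “less than 0.001 percent” bound as exactly 0.001 percent:

```python
# Likelihood ratio (Bayes factor): how much more probable the 304-month
# streak is with eCO2 forcing than without. The "without" figure is an
# upper bound ("less than 0.001 percent"), so the ratio is a lower bound.
p_streak_with_co2 = 0.249       # Model B, eCO2 forcing included
p_streak_without_co2 = 0.00001  # forcing excluded (upper bound)

bayes_factor = p_streak_with_co2 / p_streak_without_co2
print(bayes_factor)  # the streak is at least ~25,000 times more likely with forcing
```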


The Bubble Chamber is a blog written by historians and philosophers of science for discussing contemporary issues of science and society through the lens of historical context and critical analysis.

Founded by the University of Toronto's Science Policy Working Group, The Bubble Chamber is a forum for those interested in a critical assessment of science in society and the development, regulation, and trajectory of science.