Wednesday, March 26, 2014

Have Past IPCC Temperature Projections/Predictions Been Accurate?

The question routinely shows up in climate arguments, with claims in both directions. Evaluating them is hard. The IPCC has given multiple reports with multiple projections depending on different assumptions, so one can almost certainly select reports to show either a good or bad job of predicting. The reports sometimes describe what they produce as predictions, sometimes as projections. For simplicity I will use the former term.

The past reports are webbed. To get a reasonably fair judgement, the obvious approach is to look at each, see what one would expect from reading it and how that compares with what happened. I have now done so. Skeptical readers are invited to check my summary for themselves, starting with the page that links to all of the reports.

The executive summary of the first report, from 1990, contains:

Under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, the average rate of increase of global mean temperature during the next century is estimated to be about 0.3°C per decade (with an uncertainty range of 0.2°C to 0.5°C).

The graph shown for the increase is close to a straight line at least from 2000 on, so it seems reasonable to ask whether the average increase from 1990 to the present is within that range.

Figure 18 from the Second Assessment Report (1995) shows the future temperature through 2020. Through that date, it rises steadily at about .13°C/decade.

The Third Assessment Report (2001) gives: "For the periods 1990 to 2025 and 1990 to 2050, the projected increases are 0.4 to 1.1°C and 0.8 to 2.6°C, respectively."

So for the former period, the average increase is supposed to be from .11 to .31 °C/decade.
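The conversion from a projected total increase to an average per-decade rate is simple division; a quick sketch of the arithmetic, using the 0.4 and 1.1°C figures quoted above:

```python
# Converting a report's projected total increase into an average
# per-decade rate. The 0.4-1.1 degree C range for 1990-2025 is the
# one quoted above; the rest is arithmetic.
def per_decade(total_change_c, start_year, end_year):
    """Average warming rate in degrees C per decade."""
    decades = (end_year - start_year) / 10
    return total_change_c / decades

low = per_decade(0.4, 1990, 2025)   # ~0.11 C/decade
high = per_decade(1.1, 1990, 2025)  # ~0.31 C/decade
print(round(low, 2), round(high, 2))
```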

The Fourth Assessment Report (2007) has "For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios."

Checking a graph on a NASA page, the increase from 1990 to 2013 was about .22°C, for an average rate of increase of about .1°C/decade.

So if we judge IPCC reports by comparing what they said the average increase would be over the period from 1990 to the present with what it actually was, we find that the first report predicted a rate about three times what actually happened. The second report got it a little high. The third report got it substantially high. For both the first and third, the actual value was below the bottom of the predicted range of values.

The fourth report was written in 2007 and predicted temperature change thereafter. Looking at the graph from the NASA page, temperature from then to now has been essentially flat, with the slope positive or negative depending on your choice of end points. It's too short a time period to evaluate the prediction with much confidence, but so far as one can judge it was high.

So it looks as though the IPCC has predicted high four times out of four, two of the four times by a lot. It would take more work than I am willing to put into the project to dig out probability distributions for their predictions and see just how unlikely it is that they would miss by that much that many times, but I would be surprised if the overall pattern was not well outside the .95 confidence interval.

All four reports show a roughly constant rate of increase from 1990 to the present. The actual pattern was an increase to about 2000 and roughly flat temperatures thereafter. That does not prove the IPCC wrong in any strong sense, since their projections are averaging out sources of temperature change that could not be readily predicted when the projections were made. But it does mean that the IPCC failed to be right. Insofar as the pattern is evidence on either side, it is evidence against the accuracy of their predictions (aka projections).

One way of judging how good a job the IPCC has done of modelling global climate is to compare its predictions with a much simpler model, a linear fit of past data. Looking at a webbed graph of the data and fitting by eye, the slope of the line from 1910, when current warming seems to have started, to 1990, when the first IPCC report came out, is about .12 °C/decade. That gives a better prediction of what happened after 1990 than any of the IPCC reports.
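For readers who want to try the simpler model themselves, here is a minimal sketch of such a fit. The anomaly series below is an illustrative placeholder constructed to rise at roughly .12 °C/decade, not the actual NASA record; substitute the real data to check the figure in the text.

```python
# Least-squares linear fit of pre-1990 anomalies, extrapolated forward.
# The anomaly series here is a toy stand-in rising at ~0.12 C/decade,
# not the actual NASA record; swap in real data to reproduce the
# comparison in the text.
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

years = list(range(1910, 2000, 10))                  # 1910..1990
anoms = [-0.35 + 0.012 * (y - 1910) for y in years]  # toy series
slope, intercept = linear_fit(years, anoms)
print(f"{slope * 10:.2f} C/decade")                  # 0.12 for this toy data
print(f"extrapolated 2013 anomaly: {slope * 2013 + intercept:.2f}")
```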

27 Comments:

Of course, it's possible that you're comparing weather to climate. I wonder at what point weather becomes climate, and the IPCC's predictions become uniformly invalidated? It seems to me that if there is no criterion for "Yes, we now agree that we have failed," then what the IPCC is doing is not science.

I don't see the need for any complicated statistics here. I would just apply the sign test. When we have a sequence of pairs, prediction vs observation (prediction being the mid-point of the model spread in this case), we can just go through the sequence asking "what sign is the difference between observation and prediction?" If we have n plusses and m minuses it is a simple calculation to generate a p-value. Of course, if there are 10 plusses and no minuses it is just 1/1024.

The beauty of the sign test is its simplicity, its obviousness, and the lack of required assumptions about the underlying distribution.

The weakness of the sign test is that, with only four samples, it can't give a stronger result than "one chance in sixteen of doing this badly in this direction." In this case, since I'm reluctant to count the fourth report due to how recent it is, one in eight. One in four if you would count consistent errors in either direction.
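The sign test the comments describe is a one-line binomial calculation; a sketch, assuming only the fair-coin null hypothesis mentioned above:

```python
from math import comb

# The sign test from the comments: under the null that each prediction
# is equally likely to miss high or low, the chance of at least n_high
# misses high out of n total is a binomial tail probability.
def sign_test_p(n_high, n_low):
    """One-sided p-value for getting at least n_high misses high."""
    n = n_high + n_low
    return sum(comb(n, k) for k in range(n_high, n + 1)) / 2 ** n

print(sign_test_p(3, 0))   # 0.125 -- the "one in eight" above
print(sign_test_p(10, 0))  # 1/1024, as in the earlier comment
```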

I am beginning to think that Chaos theory- as much as I enjoyed reading about it when I was younger- was actually a sophisticated way of hiding the abject futility of multivariant computer models. The Lorenz attractor happens in computer models; in the real world there are many more variables and feedback loops, so the effect of a butterfly flapping its wings in China probably has very little to do with a storm in New York.

I think computer modelling appeals to the bureaucrat's mind in the same way that the five year plan appealed to the socialists. The same fatal conceit is there too.

When I look at the graph of temperature anomaly data, superimposed on the graph of simulated temperature anomaly prediction models, it's striking that most of the lines for the simulation models don't pass through the scatter diagram of observed/measured temperature anomalies. (For example, http://www.drroyspencer.com/wp-content/uploads/CMIP5-90-models-global-Tsfc-vs-obs-thru-2013.png) Similarly, when 95% confidence bands are placed around a graph of the average of the simulation models, observed/measured temperature anomalies fall mostly outside these bands. (For example, http://www.cato.org/blog/climate-insensitivity-what-ipcc-knew-didnt-tell-us-part-ii?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+Cato-at-liberty+%28Cato+at+Liberty%29) By standard methods of statistics, these results point to poor fits and failed modeling efforts.

Yet, recently I ran across this assertion on a discussion board: "Model accuracy is better than 99.8%." When asked for clarification, the writer offered, "The models produce predictions based on physical phenomena. In the absence of energy input, the models will predict a 0 K temperature anomaly. The average global surface temperature is something like 288 K, but I round to 300 K for an in-head calculation. Eye-balling the difference between anomaly measurements and long-term climate model temperature anomaly predictions, I see about a 0.3, perhaps 0.5, K difference. 0.3/300 is 0.1% and 0.5/288 is about 0.2% (give or take). So I say less than 0.2% error, or better than 99.8% accuracy."

Here's my problem: The proposed method sets a pretty low bar for achieving a high "accuracy" score. Relative to a reference point of 300, there has been very little observed warming; likewise, relative to a reference point of 300, there has been relatively little predicted warming via simulation models. Even if anomaly prediction errors were around 10 degrees, that would result in only a 10/300 or 3.3% prediction error, or 96.7% "accuracy" level. Similarly, a thermometer used to take a person's temperature would achieve a "level of accuracy" of about 98% if it mis-measured temperature by 2 degrees F.
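A sketch of the two ways of scoring the same error may make the point concrete; the 300 K baseline and the 0.3-0.5 K eyeballed error are the figures from the quoted comment:

```python
# Two ways of scoring the same model error. The 300 K baseline and the
# 0.3-0.5 K eyeballed error are the figures from the quoted comment.
def kelvin_accuracy(error_k, baseline_k=300):
    """The commenter's metric: error relative to absolute temperature."""
    return 1 - error_k / baseline_k

def anomaly_relative_error(error_k, observed_change_k=0.5):
    """Error relative to the warming actually being predicted."""
    return error_k / observed_change_k

print(kelvin_accuracy(0.5))         # ~0.998: the quoted "99.8% accuracy"
print(kelvin_accuracy(10))          # ~0.967: even a 10 K miss scores high
print(anomaly_relative_error(0.5))  # 1.0: a 100% error on the anomaly itself
```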

As you point out, we must ignore the most recent prediction, because the time scale is too short. What we might ask instead is how much likelihood we should give to that theory being true for the future, given the IPCC's performance in the past.

We have three theories about what global warming would do over the last couple decades. To calculate our final probability of the IPCC being reliable, we must estimate our initial belief in their reliability. Let us say that we start out biased in their favor, assigning 75% probability divided equally over their 3 models, which I take to be Gaussian distributions about the center of their confidence intervals, and 25% probability that they were wrong--let's say the default model has a Gaussian distribution centered at 0 with a sigma of 0.20.

Looking at their predictions, I could find only one that quantified the size of the error bars: the 2001 report seems to imply that the confidence level of its range is between 66 and 90%, meaning roughly one sigma. I will assume that the 1990 range is also one sigma, and that the 1995 theory has a sigma of 0.1 degrees, like their 2001 theory (anything larger, and their theory allows temperature decreases).

To update our probabilities, we use Bayes' theorem: P(A|B) = P(B|A)*P(A)/P(B). We can call A "the theory is correct", and B "temperature increase is less than or equal to 0.10 degrees".

If the prior probabilities of the theories are [1990,1995,2001,null] = [.25,.25,.25,.25], the final probabilities are [.04,.30,.11,.55]. So it is more likely than not that the IPCC is wrong, and most of their remaining probability mass is clustered in the small-warming (0.13 deg/decade) theory.
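A sketch of the update, for anyone who wants to vary the assumptions. The means and sigmas below (in °C/decade) are my guesses at what the comments intend, since they are not stated exactly, so the posteriors land near, but not exactly on, the numbers quoted:

```python
from math import erf, sqrt

# Bayes update on B = "observed increase <= 0.10 C/decade". The means
# and sigmas (C/decade) are my reading of the comments and are guesses,
# so the posteriors come out close to, not identical with, the numbers
# quoted in the thread.
def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

params = {"1990": (0.30, 0.10), "1995": (0.13, 0.10),
          "2001": (0.21, 0.10), "null": (0.00, 0.20)}
prior = 0.25  # flat over the four theories

likelihood = {k: normal_cdf(0.10, mu, s) for k, (mu, s) in params.items()}
evidence = sum(prior * v for v in likelihood.values())
posterior = {k: prior * v / evidence for k, v in likelihood.items()}
print({k: round(p, 2) for k, p in posterior.items()})
```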

I don't think you can draw strong conclusions based on a decade or two: the fluctuation on that time scale is too great. Imagine a prescient climatologist who predicted in 1900 that mean temperatures would be about half a degree higher by the end of the century. He'd be right, but the predicted average increase per decade would be very wrong for the next 25 years, and further from the actual results than a linear extrapolation from past data.

Thinking back on my post, there's another way to look at it: instead of saying the measurement is "increase less than or equal to 0.1 deg/decade" we could say the measurement is "increase is exactly 0.1 deg/decade". Likewise, we should choose a null hypothesis that warming will be the same as it was the last century. Total warming was 0.74 +/- 0.18 degrees. This suggests a per-decade warming of 0.074 deg/decade with error bars of +/- 0.06 [0.18*sqrt(10)/10].

Plugging in for the same priors, we get a final probability distribution of [1990,1995,2001,null] = [.05,.29,.17,.49]. Our confidence in the IPCC has dropped from 75% to 50%. We now are 78% confident that warming will be either the same as historical [0.074 deg/decade] or just slightly faster [0.13 deg/decade].
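The same machinery with the point measurement and the historical null can be sketched as follows; the sigmas are again my guesses, so the posteriors land near, not exactly on, the quoted [.05, .29, .17, .49]:

```python
from math import exp, pi, sqrt

# Variant of the update using a point measurement ("exactly 0.1
# C/decade"), so the likelihoods are Gaussian densities, and the null
# is the historical 0.074 +/- 0.06 C/decade. With flat priors the 0.25
# factors cancel out of the posterior.
def normal_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

params = {"1990": (0.30, 0.10), "1995": (0.13, 0.10),
          "2001": (0.21, 0.10), "null": (0.074, 0.06)}
like = {k: normal_pdf(0.10, mu, s) for k, (mu, s) in params.items()}
total = sum(like.values())
posterior = {k: v / total for k, v in like.items()}
print({k: round(p, 2) for k, p in posterior.items()})
```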

We previously expected warming of around 0.19 deg/decade; now that expectation has dropped to 0.12. The IPCC's hottest prediction, from 1990, has been basically disproved: p=0.05. Their 2001 prediction has taken a hit, cutting its probability nearly in half, but it's still barely hanging on. The IPCC's 2007 prediction of 0.2 deg/decade seems unlikely, given our decreased confidence in the IPCC and our lower expected warming.

One thing that seems important to me is to recognize the dramatic difference between the IPCC's short-term projections versus their long-term projections:

From your summary (correct me if I'm wrong):

First report: BAU (Scenario A) about 0.3 degrees per decade for the period of a century.

Second report: 0.13C/decade to 2020.

Third report: 0.11 to 0.31C/decade for 1990-2025. (So the average of those two numbers is 0.21C/decade.)

Fourth report: ~0.2C/decade "for the next two decades."

So basically, the century-long projection in the first report has the highest rate of increase.

The IPCC basically knows that they can't predict (er, "project"!) a high rate of warming in the near future, because they will be seen to be wrong. So they always push their high rates of warming 3+ decades into the future. That way, if they scare people into acting, and the high rates of warming don't occur 3+ decades in the future, they say, "Thanks to our warnings, we prevented the high rates of warming that would have occurred in the second half of this century."

The first report had a figure for the average over the rest of the century and a graph. The graph showed slightly lower warming for the first decade, more or less a straight line thereafter.

The third report had a graph showing the rate implied by various different assumptions. My memory is that it increased over time, but I'm out of town at the moment, the copy of the graph I saved is on my machine at home, and finding it in the report again would take some effort.

I have my doubts about your theory. For one thing, it's quite unlikely that the IPCC will "scare people into acting" to a level sufficient to have much effect on the level of CO2. For another, being shown up thirty years from now isn't much of a cost. What negative consequences have there been to people who, in the 1960's, made scare warnings about overpopulation that turned out to be false?

David, I think you've made some errors in your presentation of the data. Since the IPCC First Assessment Report shows greater temperature rises later in the next hundred years, the average temperature rise in the business as usual "best estimate" scenario was .25 degrees per decade for the first two decades, not .3. Second, it was only one scenario, and predicted greater greenhouse gas emissions than actually occurred. If you use the same model with the emissions we experienced, the per decade temperature rise drops to .2 degrees.

If you measure the NASA data from the trend line for 1990-2013, you get a .3 degree increase, or .13 per decade, not .11. That would put it within the uncertainty range for the observed data of plus or minus .08 degrees.

An accelerated rate of temperature increase over time pretty much follows from the mainstream climate models and current trends in greenhouse gas production. It's also been what we've experienced so far: global mean temperature rose much more in the second half of the 20th century than the first.

"An accelerated rate of temperature increase over time pretty much follows from the mainstream climate models and current trends in greenhouse gas production. It's also been what we've experienced so far: global mean temperature rose much more in the second half of the 20th century than the first."

I don't think the pattern supports that.

The increase starts about 1910. From then until 1940, temperature increases about .6°. It is then pretty nearly constant, if anything falling a little, until about 1975. From 1975 to 2000 it increases by about .5°.

So what you really have is two episodes of warming, each at about .2°/decade, separated by a period of roughly stable temperature. I don't think that supports the claim that the data fit an increasing rate over time. Especially if you note that the second period of warming is again followed by a period of roughly stable temperature.

I advise against trusting skepticalscience.com unless you have checked the evidence for their claims reasonably carefully. For reasons, see my earlier posts on John Cook, who runs it.

Will writes: "Since the IPCC First Assessment Report shows greater temperature rises later in the next hundred years, the average temperature rise in the business as usual "best estimate" scenario was .25 degrees per decade for the first two decades, not .3. "

Looking at the graph, it showed a somewhat slower rate for the first decade, pretty much a straight line thereafter.

"If you use the same model with the emissions we experienced, the per decade temperature rise drops to .2 degrees."

Possibly. The more one complicates the test, the easier it is to select an interpretation that supports what one wants to believe. I took the simplest version--what they actually predicted if nothing active was done to slow emissions.

How do you know exactly what happens if you rerun their model with what actually happened to CO2? Doesn't that depend on a description of the evidence that you cannot check yourself, provided by someone who has an axe to grind? I know that John Cook is willing to lie about his work, and I expect even relatively honest people to make choices in how they do things that bias the result in the direction they want.

"If you measure the NASA data from the trend line for 1990-2013"

Again, I took the simplest test, which was using the actual data, not someone's fitted line.

I think you are inferring a pattern where none actually exists. My hypothesis is that there is a lot of random and unpredictable variation above and below the long-term trend line. 1910 wasn't the start of a radical change in climate, it was just a cold year. If you try to draw a pattern from short-term highs and lows you will be led astray.

So picking a reasonably long period to avoid the problem, even if I start in 1910, I find the next 50 years saw a temperature increase of about .3 degrees and the following 50 about .6.

The simplest test, which you say you prefer, is simple but noisy. If you measure from 1990, you get the result you prefer. If you start in 1992 and measure the next two decades you get .175 per decade. Such divergent results suggest that your simple test is not ideal.

As a physicist following the literature, I've understood for decades that Global Warmism was nonsense, but the questions it raised for me were: (1) How can they be so confused about the science? and (2) What might I be confused about?

Recently I figured out a second science about which I was confused, vaccines. Then I realized that these examples falsify our normal expectation that climate scientists and pediatricians are logical and scientific. Instead, the observed facts are explained much better by the model espoused by Gustave Le Bon in his 1895 book The Crowd, the first work on group psychology, and arguably the most insightful. Although largely forgotten today, this work has had extraordinary influence. By their own accounts it was on Theodore Roosevelt’s bedside table, and dogeared by Mussolini. Lenin and Stalin took from it, and “Hitler’s indebtedness to Le Bon bordered on plagiarism” in the words of historian and Hitler-biographer Robert G. L. Waite. Sigmund Freud wrote a book discussing Le Bon, and Edward Bernays, the father of modern public relations, acknowledged his deep debt, as Goebbels did of Bernays’ reflected insights.

I wrote a post describing this at the URL below (scroll down one screen to the second post, although the top is also relevant): http://whyarethingsthisway.com/

“Based on the IPCC Business as Usual scenarios, the energy-balance upwelling diffusion model with best judgement parameters yields estimates of global warming from pre-industrial times (taken to be 1765) to the year 2030 between 1.3°C and 2.8°C, with a best estimate of 2.0°C. This corresponds to a predicted rise from 1990 of 0.7-1.5°C with a best estimate of 1.1°C.”

https://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_06.pdf

This comes to .275 degrees per decade 1990-2030, and .325/decade 2031-2070.

You write (3/29, 1:17 PM), "An accelerated rate of temperature increase over time pretty much follows from the mainstream climate models and current trends in greenhouse gas production."

If by "current trends in greenhouse gas production," you mean the trends from 2000 to 2014, those trends are simply not sustainable to the end of this century, from the standpoint of geology and resource economics (the cost to extract fossil fuels).

Here's a graph of coal consumption in China and the rest of the world combined:

From 2001 to 2011 China's coal consumption went from about 1.5 billion tons per year to 3.8 billion tons per year. That's an increase of a factor of 2.5 in 10 years.(!)

If that continued to even 2021, China would be consuming almost 10 billion tons of coal per year. And if that continued to 2031, China would be consuming almost 24 billion tons per year.
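The extrapolation is just compound growth; a sketch using the figures from the comment:

```python
# Compound growth at a factor of 2.5 per decade, starting from the 3.8
# billion tons per year figure in the comment.
def project(start_billion_tons, factor_per_decade, decades):
    return start_billion_tons * factor_per_decade ** decades

print(project(3.8, 2.5, 1))  # ~9.5 billion tons by 2021
print(project(3.8, 2.5, 2))  # ~23.75 billion tons by 2031
```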

I would be willing to bet you up to $50, and give you 20-to-1 odds, that China will not be consuming 20 billion tons of coal in 2031...or ever. In fact I'd be willing to bet you up to $50 at 5-to-1 odds that China's coal consumption will never exceed 10 billion tons per year by 2030...or ever.

A somewhat similar situation exists for world oil consumption. From 2000 to 2010 it increased from about 77 million barrels per day to about 87 million barrels per day (an increase of 13%). I know of no one (who knows about the subject) who expects that the world oil consumption could increase by 13% per decade even to 2050, let alone beyond 2050. Fossil oil has to peak before 2050. (It's possible that oil from algae or bacteria or other sources could provide continuing increases in oil production, but oil from such sources would be considered to have essentially zero CO2 emissions, because the lifeforms would consume CO2 before they were converted into oil and burned.)

So it's simply not supportable by geology and resource economics to expect that the coal and oil consumption trends from 2000 to 2014 can continue even to the middle of the century, let alone to the end of the century.