Climate Models’ Tendency to Simulate Too Much Warming and the IPCC’s Attempt to Cover That Up

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

The biggest criticism to emerge so far regarding the new Fifth Assessment Report from the U.N.’s Intergovernmental Panel on Climate Change (IPCC) is that it generally fails to acknowledge how poorly climate model simulations of the earth’s temperature evolution compare with actual observations. If the models cannot accurately simulate known climate variability and change, using them for policy purposes is a fool’s errand.

There are two lines of evidence that converge to show that the climate models are largely failing to accurately simulate observed climate behavior.

The first is a collection of about ten research papers (comprising 16 separate analyses), published in the scientific literature beginning in 2011, which collectively indicate that the earth’s equilibrium climate sensitivity—that is, how much the earth’s average surface temperature rises as a result of a doubling of the atmospheric carbon dioxide concentration—is about 2°C, give or take about 0.5°C (Figure 1). You can find details here.

Figure 1. Climate sensitivity estimates from new research published since 2010 (colored), compared with range of estimates from the climate models incorporated into the IPCC Fifth Assessment Report (AR5; black). The arrows indicate the 5 to 95 percent confidence bounds for each estimate along with the best estimate (median of each probability density function; or the mean of multiple estimates; colored vertical line). Ring et al. (2012) present four estimates of the climate sensitivity and the red box encompasses those estimates. The light grey vertical bar is the mean of the 16 estimates from the new findings. The mean climate sensitivity (3.2°C) of the climate models used in the IPCC AR5 is 60 percent greater than the mean of recent estimates (2.0°C).
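The 60 percent figure in the caption follows directly from the two means. As a quick arithmetic check (the values below are read from Figure 1, not taken from the underlying studies themselves):

```python
# Mean equilibrium climate sensitivity (degrees C per CO2 doubling),
# as read from Figure 1 -- illustrative values, not the raw study data.
mean_recent_estimates = 2.0   # mean of the 16 post-2010 literature estimates
mean_ipcc_ar5_models = 3.2    # mean of the climate models used in the IPCC AR5

# Relative excess of the model mean over the recent-literature mean
excess = (mean_ipcc_ar5_models - mean_recent_estimates) / mean_recent_estimates
print(f"Model mean exceeds recent estimates by {excess:.0%}")  # -> 60%
```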

The second line of evidence is that climate models have simply predicted too much warming, as evidenced, for example, by the lack of observed warming during the past 15, 16, or 17 years (depending on which temperature history you consult) despite climate model projections that the earth’s average temperature should have warmed by just over 0.3°C during that time.

You’d think that these two glaring faults should have made the IPCC tear the whole report up and start over, but that’s not what happened.

Instead, while the IPCC did grudgingly concede that climate model simulations of the past 15 years do, in fact, over-predict the amount of warming that took place over that period, it largely deflects the blame for this mismatch away from the models and onto elements of natural variability, or onto a misquantification of the climate forcings that were input to the models. Blame-shifting in science is usually a bad idea, and the IPCC’s bumbling attempt wouldn’t have passed muster as an undergraduate thesis.

Think that’s a bit harsh? Read on…

The IPCC analyzes 114 different climate model simulations of the evolution of the earth’s average temperature for the 15 year period from 1998 to 2012, and compares those simulations to the observed temperature record. Candidly, the IPCC states that:

…111 out of 114 realisations [sic] show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the [trend in observed temperatures even after accounting for statistical uncertainty in the observed trend].

So far, so good.

But, never fear, says the IPCC, because if you look at the preceding 15 year period, from 1984 through 1998, opposite mistakes were being made:

…during the 15-year period ending in 1998, [the trend in observed temperature] lies above 93 out of 114 modelled trends.

We would venture that in science two wrongs usually don’t make a right, but that is not how the IPCC sees things.

The IPCC goes on to explain:

Due to internal climate variability, in any given 15-year period the observed GMST trend sometimes lies near one end of a model ensemble…an effect that is pronounced in Box 9.2, Figure 1a,b since GMST was influenced by a very strong El Niño event in 1998.

To seal the deal, the IPCC then claims that, over the longer run, these short-term variations all average out and the match between models and observations is outstanding:

Over the 62-year period 1951–2012, observed and [climate model] ensemble-mean trend agree to within 0.02 °C per decade (Box 9.2 Figure 1c; [climate model] ensemble-mean trend 0.13°C per decade). There is hence very high confidence that the [climate] models show long-term GMST trends consistent with observations, despite the disagreement over the most recent 15-year period. [italics in original]

As the IPCC correctly contends, the value of a trend is overly sensitive to its endpoint data (especially over shorter periods). So rather than picking just one cherry that depends on the unusually warm year of 1998, how about we look at the whole tree? Below we show the results for every overlapping 15-yr period during the period 1951-2012 (i.e., starting with the period 1951-1965, 1952-1966, 1953-1967,… and ending with 1998-2012). In Figure 3, the thick red line traces the 15-yr model trend values, the thin red lines bound 95 percent of the model trend values, and the thick blue line traces the evolution of the observed 15-yr trends.

Figure 3. The observed 15-yr moving trend during the period 1951-2012 from the Hadley Center temperature record (blue) compared to the multi-model mean trend (thick red line) and the 95 percent range of individual model simulations (thin red lines). (The model simulations consisted of 108 individual model runs downloaded from Climate Explorer that combined the RCP4.5 scenario with historical simulations).
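The overlapping-trend calculation described above is straightforward to reproduce. Here is a minimal sketch with NumPy, using a synthetic annual temperature series as a stand-in for the Hadley Center record and the model runs (the numbers are illustrative only):

```python
import numpy as np

def moving_trends(temps, start_year, window=15):
    """Least-squares trend (degrees C per decade) for every overlapping
    `window`-year period in an annual temperature series."""
    trends = []
    for i in range(len(temps) - window + 1):
        years = np.arange(start_year + i, start_year + i + window)
        slope = np.polyfit(years, temps[i:i + window], 1)[0]  # deg C per year
        trends.append(slope * 10.0)  # convert to deg C per decade
    return np.array(trends)

# Synthetic stand-in: 62 years (1951-2012) warming at 0.13 deg C/decade
# plus year-to-year noise -- not real observations.
rng = np.random.default_rng(0)
temps = 0.013 * np.arange(62) + rng.normal(0.0, 0.1, 62)
trends15 = moving_trends(temps, 1951, window=15)
print(len(trends15))  # 48 overlapping periods: 1951-1965 ... 1998-2012
```

The same function with `window=20` produces the 20-yr trends discussed later.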

The result isn’t very pretty for the IPCC.

Most of the time, the observed trend is either near or below the average trend from the models. At the recent end of the record, the observed trend has fallen below the model trend for 12 consecutive overlapping periods, and the discrepancy is growing larger. For the past two periods, 1997-2011 and 1998-2012, the observed trend falls below the 95 percent range of modeled trends, a statistical indication that the observed trend is not a member of the modeled set of predictions (i.e., the models do not statistically capture reality as represented by the observations).
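The "falls below the 95 percent range" statement amounts to a simple percentile check against the collection of model trends. A hedged sketch, with made-up numbers standing in for the 108 model-run trends:

```python
import numpy as np

def outside_95pct_range(observed_trend, model_trends):
    """True if the observed trend falls outside the central 95 percent
    of the model-trend distribution (2.5th to 97.5th percentiles)."""
    lo, hi = np.percentile(model_trends, [2.5, 97.5])
    return observed_trend < lo or observed_trend > hi

# Illustrative only: hypothetical model trends (deg C/decade) centered
# well above a near-flat observed trend, mimicking the recent mismatch.
rng = np.random.default_rng(1)
model_trends = rng.normal(0.21, 0.05, 108)
print(outside_95pct_range(0.05, model_trends))
```

An observed trend outside this range is the statistical indication, mentioned above, that the observations are not captured by the set of model predictions.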

Also notice that there are few excursions of the observed trend above the value of the modeled trend—a few periods ending in the late 1960s, mid-1980s, and again for the periods ending in 1998 and 1999. The 1998 endpoint is the one that the IPCC decided to highlight—the fattest cherry they could find. It marks the end of the largest positive discrepancy between observations and models during the past 40 years. Had they chosen to break their analysis in 1997, the observed trend during the period 1983-1997 would have been very close to the multi-model mean, while during the period 1997-2011, it still would have fallen below virtually every climate model simulation.

By carefully choosing the break point in their analysis to be 1998, the IPCC concocts a narrative in which the observed warming trend sometimes falls below the model average, sometimes rises above it, and everything averages out in the end. The reality of the situation is that the small (in time as well as magnitude) positive excursion during the late 1990s is dwarfed by the large (in time as well as in magnitude) negative excursion of the observed trend beneath the model average trend at the end of the record. Two wrongs do not make a right.

In fact, the El Niño-influenced positive blip for the 15-yr period ending in 1998 is largely smoothed out and disappears if the trends are computed over a longer period of time, say 20 years. Figure 4 shows the observed/model comparisons for each overlapping 20-yr period from 1951-2012.

Notice that there are again very few periods when the observed trend is greater than the model mean trend (and the ones that do exist are small); instead, the observed trend consistently runs either near, or below, the model mean trend. And the negative excursion at the end of the record is long and growing, with the observed trend during the most recent 20 years (1993-2012) again falling outside of the model 95 percent range.

Figure 4. The observed 20-yr moving trend during the period 1951-2012 from the Hadley Center temperature record (blue) compared to the multi-model mean trend (thick red line) and the 95 percent range of individual model simulations (thin red lines). (The model simulations consisted of 108 individual model runs downloaded from Climate Explorer that combined the RCP4.5 scenario with historical simulations).

There is no way that anyone could look at our Figure 4 and not conclude that the climate models have a general tendency to predict more warming than is observed—and that this tendency seems to be becoming more distinct (and disturbing) as time goes on. Natural variability isn’t the culprit.

But no such figure appears in the new IPCC Fifth Assessment Report. Instead, the reader is left to take the IPCC’s word for it that natural variability is a primary cause for the recent observed/model mismatch, that the sign of the mismatch routinely swings back and forth from negative to positive, and that soon enough the observations will come back into agreement with the model simulations. None of this is supported by the data in Figure 4.

By not showing all the data, and instead, only that which best supports the narrative the IPCC wishes to spin—that climate models are reliable tools for projecting future climate changes resulting from human emissions of greenhouse gases—the IPCC misleads the public, government entities that defer to the IPCC (such as the EPA and the Supreme Court), and policymakers.

This has got to stop.

The IPCC has outlived whatever usefulness it may ever have had. It is time to disband this central climate “authority” and disperse the assessment of climate science to a broader, more diverse community.