Saturday, October 06, 2012

Anthony Watts discusses a new paper by M.W. Asten in Climate of the Past that estimates the sensitivity – warming induced by a doubling of CO2 in the atmosphere – as 1.1 ± 0.4 °C from oxygen-18 in microfossils.

The paper also contains a fun list of values of climate sensitivity estimated in various papers published between 2004 and 2012.

I will sort the table from the "most alarming" median values to the least alarming ones. If a median value isn't listed, I will take the average of the low and high values and write it in brackets instead. The sensitivities are expressed in °C.

Year  Authors                           Med.    Interval   C.L.
----  --------------------------------  ------  ---------  ----
2010  Pagani et al.                     [8]     7-9        –
2012  Dowsett                           [6]     4-8        –
2012  Hansen, Sato with slow feedbacks  6       4-8        66%
2004  Lea                               5.2     4.4-6.0    95%
2012  Rohling et al.                    3.1     1.7-5      66%
2007  IPCC AR4                          3       2-4.5      66%
2011  Annan, Hargreaves                 [3]     2-4        95%
2012  Hansen, Sato w/o slow feedbacks   3       2-4        66%
2006  Forest et al.                     2.9     2.1-8.9    90%
2010  Kohler et al.                     2.4     1.4-5.2    –
2011  Schmittner et al.                 2.3     1.7-2.6    66%
2008  Chýlek, Lohmann                   [1.8]   1.3-2.3    95%
2012  Gillett                           [1.55]  1.3-1.8    –
2012  Lewis                             1.3     0.8-2.1    90%
2009  Douglas, Christy                  1.1     –          –
2012  Asten                             1.1     0.7-1.5    66%
2011  Lindzen, Choi                     0.7     0.5-1.3    95%
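The sorting rule described above – take the published median where one exists, otherwise fall back to the midpoint of the interval (the bracketed values) – can be sketched in a few lines. The three tuples below are just representative rows from the table, not the full list:

```python
# A minimal sketch of the sorting rule used for the table above:
# entries with no published median get the midpoint of their interval.
papers = [
    ("Lindzen, Choi", 0.7, (0.5, 1.3)),
    ("Pagani et al.", None, (7.0, 9.0)),   # no median quoted -> midpoint [8]
    ("IPCC AR4", 3.0, (2.0, 4.5)),
]

def effective_median(med, interval):
    """Published median if available, otherwise the interval midpoint."""
    lo, hi = interval
    return med if med is not None else (lo + hi) / 2

# Sort from the "most alarming" to the least alarming value.
ranked = sorted(papers, key=lambda p: effective_median(p[1], p[2]), reverse=True)
```

With these three rows, `ranked` comes out as Pagani et al., IPCC AR4, Lindzen-Choi, matching the ordering in the table.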

Note that the median of the median values is about 3 °C. At the top, you see some insane articles claiming that the climate sensitivity may be as high as 9 degrees Celsius. That is clearly incompatible with basic observations we can make: if those figures were true, we would already have seen a warming of about 4 °C, plus or minus a relatively small "noise", relative to the "pre-industrial era".
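The "median of the medians" claim can be re-checked directly from the table; here is a quick sketch, with the values copied from the Med. column (bracketed midpoints included):

```python
# Median of the 17 median sensitivities from the table above (in °C);
# bracketed midpoints are included as ordinary values.
from statistics import median

medians = [8, 6, 6, 5.2, 3.1, 3, 3, 3, 2.9, 2.4,
           2.3, 1.8, 1.55, 1.3, 1.1, 1.1, 0.7]
med_of_med = median(medians)  # 2.9 °C, i.e. close to the quoted 3 °C
```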

While Hansen and Sato belong among the nuttiest people, I think that their idea to quote the figures "without slow feedbacks" and "with slow feedbacks" separately may be rather good. However, I think that with the slow feedbacks included, the sensitivity will be even lower, because the long-term feedbacks are mostly negative.

You should also realize that if only 66% confidence intervals are listed (roughly 1 sigma, although 1 sigma strictly corresponds to 68%), the 95% confidence intervals (2 sigma) are approximately 2 times wider, and vice versa (half the width).
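The factor of roughly 2 between the widths follows from the normal quantiles, assuming the errors are Gaussian; a quick check with the standard library:

```python
# Half-width of a central confidence interval, in units of sigma,
# assuming a Gaussian error distribution.
from statistics import NormalDist

def half_width_sigmas(cl):
    """Number of sigmas covering the central fraction `cl` of a Gaussian."""
    return NormalDist().inv_cdf(0.5 + cl / 2)

# 95% interval is about 1.96 sigma wide per side; 66% is about 0.95 sigma,
# so the 95% interval is roughly 2.05 times wider than the 66% one.
ratio = half_width_sigmas(0.95) / half_width_sigmas(0.66)
```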

Asten – the new paper – and especially Lindzen and Choi are winners not only when it comes to the sensibly low figures. They also have the narrowest intervals, i.e. the "most accurate calculation": the width of their 95% C.L. (or 66% for Asten) intervals is just 0.8 °C. This accuracy claimed by Lindzen and Choi should be another reason for people to focus on their methods and refine them.

The 2006 paper by Forest et al. is at the opposite end of this spectrum: they quote the interval 2.1-8.9 °C, and it is only a 90% interval, so the width of their 95% interval would be around 10 °C. Those "super high allowed values" are due to an unreasonably high prior probability assigned to these "super high values".

It's also interesting to notice that the results are "mostly" incompatible with each other. Most of the pairs of papers you may pick from the list quote intervals that don't even overlap! ;-) This is particularly clear if you compare e.g. Pagani et al., who say it's 7-9 °C, with Lindzen and Choi, who say it is 0.5-1.3 °C.

People who believe in superstitions such as "consensus" could try to build on the average or median of the figures above. But that's like measuring the length of the nose of the Emperor of China whom no one has seen – see Feynman's story. You may repeat a totally wrong value many times and affect the "global average" (or the odds) – much like the MIT roulette players in the picture above. It's much more sensible to look carefully at which of the papers are likely to do a good, accurate, and impartial job in their estimates. Well, it's mostly the papers near the bottom of the list above.

snail feedback (7):

reader Cesar Laia said...

The average of the 2011 and 2012 estimates seems to be around 2 °C. Since this is relatively close to 0 °C, for me they are in good enough agreement with each other (except for the data from Hansen & Sato, which must have something wrong). Everyone used different methods, so something else surely drives global temperature change; for me this is the main lesson from these papers.

I wonder how the numbers could be narrowed. I also wonder how much effort should be put into understanding the differences between the methods. If other effects take us in another direction, global cooling seems possible even with a doubling of CO2. Perhaps much better estimates of future weather are possible from such comparisons, if all those people analyzed the data in an honest way.

I am gobsmacked by the photograph at the bottom of this blog post. This is from MIT? Who are these four people? They appear to have borrowed the Wheel of Fortune and relabeled it. Squinting hard, I can make out that they are showing a range of possible increases in global mean temperature through the year 2100 -- from best case (3-4°C) to worst case (more than 7°C). Why, they must be paid shills for the petroleum industry. Surely the worst case is 1000°C increase, a new Venus! (And where is Vanna White?)

I am perplexed by the way the IPCC has reached its values for climate sensitivity. If I have interpreted the procedure correctly (cf. http://www.ipcc.ch/publications_and_data/ar4/wg1/en/tssts-4-1.html), Figs 22 and 23 show how natural and man-made warming are compared and used to show that CO2 is a major driver of temperature. But this approach is built on the assumption that the warming of the past 100 years is due to humans, and it relies on the firm belief that the temperature data are correct and that only the known natural drivers are accounted for. This means that solar influences other than TSI are not considered.