Friday, November 25, 2011

More on Schmittner

OK, so the Schmittner paper is out, along with a commentary in Science, and I've had a few days to digest it more thoroughly. What I said before about past v future asymmetry still holds true, but there is another point which may be more interesting.

The model results actually don't fit the land data very well, being generally too warm. A key plot is the sensitivity analysis where they compare results when land and ocean data were used separately, versus together. Clearly, the combined analysis looks almost identical to the ocean-only results, and the land-only results are radically different. In fact, they barely overlap with the ocean-only results.

Of course, there is no reason why these results should match exactly, or even closely - remember, they are not estimates of "the pdf of sensitivity" but rather, probabilistic estimates of the sensitivity - but they do need to overlap in order to be taken seriously (if they don't, at least one has to be wrong). The true value has to lie in their intersection, which is rather narrow in probabilistic terms - the 90% range of the land-only pdf is 2.2-4.6C, that of the ocean-only is 1.3-2.7C.
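As a rough back-of-the-envelope check (treating both pdfs as normals fitted to the quoted 90% ranges, purely for illustration - the published distributions are not exactly Gaussian), one can form the normalised product of the two, which is the standard combination for independent estimates:

```python
import math

Z90 = 1.6449  # z-score for the 5th/95th percentiles of a normal

def normal_from_90pct(lo, hi):
    """Fit a normal distribution to a stated 90% credible range."""
    mean = 0.5 * (lo + hi)
    sigma = (hi - lo) / (2 * Z90)
    return mean, sigma

def product_of_normals(m1, s1, m2, s2):
    """Normalised product of two normal pdfs (itself a normal)."""
    p1, p2 = 1 / s1**2, 1 / s2**2  # precisions
    s = math.sqrt(1 / (p1 + p2))
    m = (m1 * p1 + m2 * p2) / (p1 + p2)
    return m, s

land = normal_from_90pct(2.2, 4.6)   # land-only pdf, from the quoted range
ocean = normal_from_90pct(1.3, 2.7)  # ocean-only pdf
m, s = product_of_normals(*land, *ocean)
print(f"product pdf 90% range: {m - Z90*s:.2f}-{m + Z90*s:.2f} C")
```

Under these illustrative assumptions the combined 90% range comes out at roughly 1.8-3.0C - i.e. the intersection really is quite narrow.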

The explanation for this near-disjoint pair of distributions is that the model does not represent the land-ocean temperature contrast well (this is a characteristic behaviour of this sort of model, as the authors acknowledge), so can only fit one set of data at a time. When faced with both, it prefers the ocean, partly because these data are more plentiful, and partly because it is given the prior belief that the land data are less accurate (which they probably are, to be fair). The poor fit to land data then results in the statistical method assigning even less weight to these data through the spatial error term mentioned in the supplementary on-line material, and in the end result they are almost ignored. In the final analysis, the cooling over land (and perhaps also the polar amplification) seems to be significantly underestimated, leading to their rather warm LGM state which is only 3C cooler than the modern (pre-industrial) climate. One might reasonably expect that their future simulations also underestimate the temperature change over land, meaning the sensitivity estimate is on the low side, too.

Jules has also been looking at some of these data recently, particularly in comparison to the PMIP2 experiments - that is, simulations of the last glacial maximum by several state of the art climate models, most of which also contributed to the CMIP3/IPCC AR4 database of modern/future projections. One telling point is that several of the PMIP2 models actually appear to fit the data better than Schmittner's best model, even though these were not specifically tuned to fit the data. Moreover, these models are all clearly colder, in terms of global mean temperature anomaly, than the -3C value obtained in this latest paper. We haven't done a thorough analysis of this yet but I think it is safe to say that there is a significant bias in the Schmittner fit and that the LGM was really more than 3 degrees colder than the present. The implication of this for climate sensitivity is not immediate (since there are also well-known forcing biases in the PMIP2 simulations), but this line of argument also seems to suggest that it may be reasonable to nudge the Schmittner et al values up a bit.

It is still hard to reconcile a high sensitivity with the LGM results, though.

At Planet3.0, Nathan Urban picks this up as a major caveat: http://newscience.planet3.org/2011/11/24/interview-with-nathan-urban-on-his-new-paper-climate-sensitivity-estimated-from-temperature-reconstructions-of-the-last-glacial-maximum/

Predictable wingnut outlets like Forbes aside, the media coverage of this paper was a bit of a train wreck. A big part of the problem is the conflation of climate sensitivity to temperature with temperature sensitivity to CO2, compounded by "burying the lede" in the press release (the lede being that if the temp estimate is right, it means we get all the other climate effects for less of a temp change). IOW, if correct this paper worsens the picture.

Of course Steve, *everything* is always worse than you think, whatever it actually says :-)

One way to reasonably interpret the paper is to think of the temp changes as basically referring much more to the ocean, rather than global, temperature. I.e., ~3C colder ocean at LGM, 1.7-2.8C warmer ocean under 2xCO2. That's probably the right ballpark, anyway.

But then if I actually think that, logically... urk. OK, better not think that, although I should thank Jules and you for showing that things are just as bad as I thought before. :)

Anyway, you should just see me on some of those amateur blogs tamping down the wild speculation, difficult to imagine though that may be.

I suppose the broader point is that temperature change, while it can't be ignored, still gets much more attention than other known changes, so having a smaller number for the former will tend to make things sound even more innocuous even if they're really not. A lot of the public already thinks +3C doesn't sound so bad. I suppose I could blame Wally Broecker for starting things off on the wrong foot, but I'll blame the crap media instead. Plus whoever wrote and approved the press release.

Okay, so start with the two independent probabilistic estimates of the climate sensitivity (land and ocean), using a model known to have trouble with land vs. ocean. In principle, I see two ways of combining them, based on two different assumptions:

1. Assume that the ocean-based estimate is much more likely because there are many more ocean-based observations than land-based observations. Combine the two PDFs by weighting them according to the number of observations. This effectively yields the published land+ocean PDF.

2. Assume that the land-based estimate is, in principle, just as valid as the ocean-based estimate. Combine the two PDFs by weighting them equally. This yields a PDF centered near 2.5-3.0. This is effectively what you did when you noted the true value must lie in their relatively narrow intersection.

It seems to me that adding more data over land or more data over water is unlikely to alter the land-only PDF or the water-only PDF much (testable by data-withholding over the ocean). So I suspect that the paper's way of combining the data (#1 above) is not as good as the other way (#2 above).
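The difference between the two options can be illustrated with a small sketch (again treating the land-only and ocean-only pdfs as normals fitted to the quoted 90% ranges; the 1:4 weighting below is an arbitrary stand-in for observation-count weighting, not the paper's actual weights):

```python
import math

def norm_cdf(x, m, s):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))

def mixture_median(components, weights, lo=0.0, hi=10.0):
    """Median of a weighted mixture of normal pdfs, by bisection on the CDF."""
    total = sum(weights)
    def cdf(x):
        return sum(w * norm_cdf(x, m, s)
                   for (m, s), w in zip(components, weights)) / total
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

land, ocean = (3.4, 0.73), (2.0, 0.43)  # rough normal fits to the quoted ranges
# Option 1: weight by (illustrative) observation counts, ocean-dominated
print(mixture_median([land, ocean], [1, 4]))  # stays close to the ocean median
# Option 2: equal weights
print(mixture_median([land, ocean], [1, 1]))  # lands near 2.5
```

The equal-weight mixture does indeed centre near 2.5, while the count-weighted version is pulled down towards the ocean-only result.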

What do you think, James? I trust your thoughts on this more than my own.

Well the standard thing is that the joint likelihood should be the product of the individual ones, assuming independent errors on the data - but note that the likelihood here is actually a 3-dimensional beast cos they also estimate two error terms (land and ocean).

Given the apparent bias, I would be tempted to include a bias term in the model, for land only. They do consider bias terms but the problem is that it is highly confounded with sensitivity (eg a temp anomaly of -10 may be a large bias and small sensitivity, or vice-versa). However this problem is not so serious if it is only the land that is allowed a bias (or perhaps it could be formulated as a multiplicative factor on the land-ocean ratio). Of course, without doing the experiment, I can't be sure it would actually work...
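The bias/sensitivity confounding is easy to see in a toy forward model (entirely illustrative: the -2 doublings-equivalent LGM forcing and the parameter values here are made up for the example, not taken from the paper):

```python
# Toy forward model: predicted LGM anomaly = bias + sensitivity * forcing,
# with forcing expressed in CO2-doubling equivalents (illustrative value of -2).
def predicted_anomaly(bias, sensitivity, doublings=-2.0):
    return bias + sensitivity * doublings

print(predicted_anomaly(-6.0, 2.0))  # large bias, low sensitivity
print(predicted_anomaly(-2.0, 4.0))  # small bias, high sensitivity
```

Both parameter pairs predict exactly the same -10 anomaly, so a single mean constraint cannot separate them. Restricting the bias term to land only breaks the degeneracy, because the (unbiased) ocean data then pin down the sensitivity while the land bias absorbs the land-ocean contrast error.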

"... This method combines information from a perturbed physics ensemble, a set of international climate models, and observations. Our approach is based on a multivariate Bayesian framework which enables the prediction of a joint probability distribution for several variables constrained by more than one observational metric. This is important if different sets of impacts scientists are to use these probabilistic projections to make coherent forecasts for the impacts of climate change, by inputting several uncertain climate variables into their impacts models. Unlike a single metric, multiple metrics reduce the risk of rewarding a model variant which scores well due to a fortuitous compensation of errors rather than because it is providing a realistic simulation of the observed quantity. We provide some physical interpretation of how the key metrics constrain our probabilistic projections. The method also has a quantity, called discrepancy, which represents the degree of imperfection in the climate model i.e. it measures the extent to which missing processes, choices of parameterisation schemes and approximations in the climate model affect our ability to use outputs from climate models to make inferences about the real system...."

The land curve also seems to have a longer tail on the low end. So combining the land and ocean estimates differently would raise the l+o estimate median and overall range, but still leave the upper end more constrained (in terms of its shorter distance from the median estimate).

So, James, do you think this study might overly constrain the upper end somehow, just as you've previously criticized other studies for the opposite problem? Or is this an irrelevant consideration (very possible)?

It's weird that the response is so multimodal. From the SOM, it seems like the underlying cause of this is the use of an ensemble with only 25 members. That might mean that the real distribution is somewhat undersampled by this experiment. I wouldn't expect that to dramatically change the tails, but it might move the mean around a fair amount.

DC, yes as I've said I think they are clearly too optimistic in the headline result - Nathan Urban basically acknowledges as much in his interview.

Tom, I agree the multimodality is odd, and suspicious. I can't think of a good physical explanation for it, as it seems to require substantial nonlinearity in the climate's response to changes in the sensitivity parameter (at least locally).

James. A question. When you refer to the 6C silliness, is this purely in the context of the Charney definition with fixed boundary conditions (as RC was also noting today in connection with the Schmittner paper)?

When Richard Betts and others start changing the carbon cycle feedbacks, they do produce 6C scenarios (or thereabouts). Do you also view these as unrealistic?

The 6C silliness is almost certainly referring to climate sensitivity of 6C.

Carbon cycle feedbacks would affect observed temperatures increases but not the climate sensitivity.

Probably many choices for the 6C silliness; among them:

CPDN Stainforth et al - how not to write a press release: http://www.realclimate.org/index.php/archives/2006/04/how-not-to-write-a-press-release/ (maybe that is 11C silliness but at least the paper did not claim it was a pdf)

Or Hansen's 3C short term sensitivity = 6C long term sensitivity (maybe that is more 'do we care' than silly)

Or Frame et al uniform prior with high upper cut off to hype the risky end

Or maybe take your pick of (9 out of) 10 studies in IPCC Fig 10.2: http://julesandjames.blogspot.com/2011/11/how-not-to-compare-models-to-data-again.html

and quite possibly others.

(If it is temperature rise rather than sensitivity then there are more possibilities: maybe Mark Lynas's book 6 degrees.)

Chris is right, I'm basically talking about the "Charney sensitivity". Once you add in carbon cycle feedback, things get fuzzier and I haven't formed such a clear opinion. The Hadley centre however is very much at the extreme end due to a well-known dry bias in the amazon region (which means the whole rainforest collapses at the slightest drying). Privately many acknowledge that this is unrealistic, but of course it's a useful hook for more funding and research...

I happened to see Friedlingstein give a fairly low (but positive) estimate of carbon cycle feedback at WCRP, but I don't know if this is a new consensus or not...

I would note, however, that 'the bottom line' is the climate sensitivity including carbon cycle feedbacks. Mitigation and adaptation policies need to be built around this rather than the 'Charney sensitivity'.

When the AR5 comes out, it would be good to see the two sensitivity concepts disentangled.

And on reflection (and as an economist by training), the focus on Charney sensitivity is eerily reminiscent of the blind alley the economics profession went up in the 1970s.

Robert Lucas got the Nobel prize in economics for in effect arguing that macroeconomic models AND policy-making could be built on microeconomic foundations. But the microeconomic foundations assumed economic actors could form rational expectations in turn based on perfect information and frictionless transactions.

But back in the real world, expectations weren't always rational, information perfect and transactions frictionless. So policy makers were given a very 'tidy' tool kit for a world that didn't exist.

I suspect that few policy makers understand the difference between a Charney sensitivity and one that incorporates changes in carbon cycle feedbacks, but it is the latter that they should be crafting policy against.

For the economics profession, guiding policy makers to where the models and theory were most complete (but not a true representation of the world) proved an absolute disaster. I am curious as to whether you agree that the scientific community is following a similar tack?

Well, how much it matters depends a bit on the policy background - while people are talking about 450ppm or 550ppm, the carbon cycle feedback is likely small and in any case all efforts to stabilise at those levels require a strong reduction in emissions - whether the permitted emissions (for a given stabilisation) is a small negative or small positive value in 2050 is hardly crucial.

We are back to the neo classical economists' opening assumption of 'let us assume perfect competition', but in this case it is 'let us assume we plateau at 450 to 550 ppm'.

The IEA's latest WEO has its central 'New Policies Scenario' stabilising CO2-eq at 650 and the 'Current Policies Scenario' at much higher levels. As things stand, the 450 Scenario is very much in the left hand tail of the distribution of emission outcomes.

The WEO also notes a paper by Schaefer et al (2011) that suggests that the New Policies Scenario will give rise to an additional 58-116 ppm through carbon feedbacks on top of the 650.

Moreover, if you take the UK situation, for example, all the pressure is on the government to come off the New Policies Scenario back to the Current Policies Scenario in the face of mounting economic austerity. George Osborne has already been preparing the ground for a retreat from the 2008 UK Climate Change Act by linking UK mitigation actions to emission reduction achievements made overseas.

What will make policy makers tough out taking difficult mitigation paths is their perception of risk. And a large part of their perception of risk is formed by the concept of climate sensitivity.

Now I am sure that the literature is very thin for the carbon cycle feedbacks, but this seems a very strange reason for focusing almost exclusively on Charney sensitivity.

More broadly, I find it very puzzling why so few scientists are comfortable poking about in the tails of climate outcomes. In the financial community we spend an inordinate amount of time concentrating on the tails, even though the tails are where data and theory are thinnest. This is because it is the tail events that get you carted out of the market feet first.

In the risk committee meetings I have attended over the years, probably 75% of the time was spent on the probability tails. If anyone had piped up with the comment that spending so much time and resource on such low risk events was 'alarmist', they would have been regarded as, well, 'silly'.

Ah, it was wonderful to be present more or less at the naming of the "Efficient Climate Hypothesis." :)

Re "bias in the amazon region," those two anomalous droughts and their pinning to North Atlantic SSTs is slightly scary, although I realize it must seem a trifle to those inured to tandeming through the very-nearly-radioactive mean streets of Kamakura nearly every day. The pink cameras are the least of the hazards... :)

I think the analogy is a bit unfair, as the 450ppm is not an assumption, rather a (possible) target. We can choose it if we want, and a concentration target has the advantage that we have a stronger confidence in what it would mean for climate change, relative to if we set an emissions target of say 1GT per year (in which case the concentration could either increase indefinitely, or slowly decline over the next century). In terms of what actually has to be done in the next few decades to achieve the target, I doubt it matters much either way, and any policy is hardly going to last for 50y, so I don't really see that it's important.

50-100ppm on top of 650 actually doesn't amount to very much climate change - only .7W extra, so roughly half a degree extra on a warming of more than 3.
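The arithmetic here can be checked with the standard simplified CO2 forcing formula (5.35 ln(C/C0) W/m², with ~3.7 W/m² per doubling; the 3C-per-doubling sensitivity below is just an assumed round number for the sketch):

```python
import math

F2X = 3.7  # W/m^2 per CO2 doubling (standard approximation)

def co2_forcing(c, c0):
    """Simplified CO2 radiative forcing, W/m^2 (Myhre-style log form)."""
    return 5.35 * math.log(c / c0)

extra_f = co2_forcing(750, 650)       # the extra forcing from +100 ppm on 650
extra_t = 3.0 * extra_f / F2X         # warming at an assumed 3C per doubling
print(f"{extra_f:.2f} W/m^2, {extra_t:.2f} C extra")
```

The extra 100 ppm gives about 0.77 W/m², or roughly 0.6C of additional equilibrium warming, consistent with the "half a degree" figure above.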

Any sort of stabilisation at all requires a radical change in the way we generate energy, and accounting for carbon cycle feedbacks (or not) doesn't alter that fundamental point.

I think it's important to be aware of uncertainty, but also important to ensure that policy is reasonably robust and appropriate for the overwhelmingly probable case that some extremely unlikely event doesn't happen.

James. I fully understand your view that high levels of Charney sensitivity are unrealistic. What I am not sure of is where you stand on the likelihood of us experiencing high levels of warming overall (surely the bottom line for any climate prediction scientist).

In your last post, you throw out a 3C number for Charney sensitivity at the higher end. I am not sure how close to the tail we are with that? Let's assume a 95% interval. You then add in 0.5C to give a carbon-cycle-feedback-inclusive climate sensitivity number of 3.5C.

We can then turn to the likely emissions outcomes. I mentioned that the IEA sees us plateauing at 650 CO2-eq in their New Policies scenario. But the New Policies scenario requires the very active implementation of policies not yet agreed upon. For example, it would require COP17 in Durban to be a success.

Given the current situation, I think we could make a best estimate that emissions will follow a path somewhat worse than the New Policies scenario but somewhat better than their Current Policies scenario. That would give us a plateau of 700-750 ppm-eq before adding in any carbon cycle feedbacks.

So if we take your carbon cycle feedback inclusive sensitivity number and combine it with a near trebling of CO2-eq above preindustrial levels, it appears to me that we are up at 5C without going down the tail of possible outcomes much at all.
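That calculation, under the usual assumption of forcing logarithmic in concentration, is just:

```python
import math

def equilibrium_warming(sensitivity, conc_ratio):
    """Equilibrium warming for a given concentration ratio; sensitivity is
    per CO2 doubling, forcing assumed logarithmic in concentration."""
    return sensitivity * math.log(conc_ratio) / math.log(2)

# 3.5C/doubling (Charney plus feedbacks, as above) and a near-trebling of CO2-eq
print(f"{equilibrium_warming(3.5, 3.0):.1f} C")  # comes out around 5.5C
```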

Given the impacts of such level of warming, it would appear to me that every climate prediction scientist should be stressing risk aggressively in their interactions with policy makers.

However, in the exchanges I read in your blog comments over the years, it appears that Mark Lynas type high end warming scenarios are the source of knowing smirks. From a risk perspective, I am not sure why.

And I seem to remember that your original paper (Annan and Hargreaves 2006) had the top end constrained at 4.5C with a 95% confidence level. Have you come in from that? If not, it would seem incredibly easy to get to 5C plus without getting anywhere close to the tail of possible warming outcomes.

Re "it appears to me that we are up at 5C without going down the tail of possible outcomes much at all.

Given the impacts of such level of warming, it would appear to me that every climate prediction scientist should be stressing risk aggressively in their interactions with policy makers."

Huh? I don't follow.

Surely if 5C is probable, near unavoidable, and imminent action is needed to keep the impacts avoidable without going into unlikely tails, then I would expect scientists to be stressing that catastrophic 5C warming is ***probable*** (and that, incidentally, there are in addition unlikely but even worse risks).

Muted climate scientist expression of opinion could possibly indicate that it isn't really approaching unavoidable but there are political choices about how soon and how aggressively and in what ways to start reducing emissions. If this is the actual situation, then the level of risk should be being stressed.

That is not actually what I am saying. And I am definitely not saying that 5C is probable.

What I am saying is that if we take James' constrained upper end of Charney sensitivity and add in carbon cycle feedbacks then based on current emission trends 5C outcomes are not unlikely.

If 5C plus is not an unlikely outcome, and given the impacts associated with 5C plus, it is a major risk. Pick up a copy of the Journal of Finance and you get academics urging that practitioners pay attention to risks an order of magnitude or two less than this in the financial field.

It is easy to sneer at Mark Lynas' 'fireballs tearing across the sky', but at least he talks openly about 5C plus outcomes. OK so his impacts may be wrong (or at least suffering from a severe case of melodrama) but 5C isn't that difficult to get to.

So given the risk involved, I don't understand why scientists involved in climate prediction are so reticent about emphasising the risks involved with 5C type outcomes.

When we talk about such outcomes it seems to me most useful to switch over to discussing paleo-analogs. Of course that begs the question of how fast, which only the models (eventually) can tell us, but as the saying goes if we keep on in this direction we will eventually end up where we are headed, i.e. on a different planet.

In other news, I see that the bulk of journalists were so exhausted by their efforts on Schmittner et al. that they were unable to give much attention to the far more important permafrost expert elicitation (confirming Schaefer et al.'s figure if I recall the latter correctly) just published in Nature. This is pathetic. It wouldn't be hard to come up with a long list of recent papers of much greater importance that got zip or nearly so for coverage. It's the "man bites dog" syndrome at work, I'm afraid.

And we have yet to hear from that ESS expedition, although that should be soon enough. Let's hope that the methane release trend is small.

"So given the risk involved, I don't understand why scientists involved in climate prediction are so reticent about emphasising the risks involved with 5C type outcomes."

They're not being paid to jump up and down and bang cymbals, are they? Actually I think they've been quite clear. The problem is on the paying attention end.

On one level scientists have been the victims of their own success. Usually by the time a specific impact is identified it's been sufficiently anticipated that the thing itself no longer makes much news.

Ocean acidification actually impacting sea life? Ho hum.

Mountain pine beetles entering the eastern boreal forest? Ho hum.

Major changes in ocean circulation observed, e.g. the Agulhas Current accelerating warming of the North Atlantic and Arctic? Ho hum.

High warming is possible if we pump out enough CO2, which may be augmented to some extent by carbon cycle feedbacks (relative to the no-feedback case which already soaks up a lot of CO2). We will have to keep doing this for a long time, all the while watching the temperature rising more and more rapidly, in order to get to 6C in a century. I don't think we will see methane fireballs in any case.

I think quite a lot of people have been talking about the risks of high warming - Stern, IPCC, and many conferences eg explicitly talk about warming of 4C or more.

I think we need to be a bit more open minded on the issue of climate sensitivity. There is abundant data across the PETM. Earth's surface almost assuredly warmed by at least 6°C; there is little to no terrestrial ice for a major ice-albedo effect; there are no data or arguments to support more than a doubling of atmospheric CO2 (other than the circular reasoning of climate sensitivity). So, here we have the Schmittner et al. study framed on a glacial-interglacial transition with low absolute CO2 and no external carbon forcing, and we derive low sensitivity. Then, we have a time in the Paleogene that spans a rapid warming with clear external carbon forcing, major carbon cycle feedbacks, or both, during a time of presumably high CO2, and we derive an extremely high sensitivity. (The graph by Mark Pagani et al., Science, 2006, nicely shows the problem, although without a solution.)

In my opinion, the community needs to be very careful about defining sensitivity (Charney, equilibrium, earth system …), and about recognising that a range of responses is possible depending on the time-scale of interest (100, 1000, 10000 years), the rate and timing of carbon addition, and the boundary conditions. It would be great to understand and explain this range, as our future lacks any true geological analogs.

"High warming is possible if we pump out enough CO2, which may be augmented to some extent by carbon cycle feedbacks (relative to the no-feedback case which already soaks up a lot of CO2). We will have to keep doing this for a long time, all the while watching the temperature rising more and more rapidly, in order to get to 6C in a century."

James, I have to say I feel less optimistic every time I see something like this or read the news from the latest COP. When exactly will we start correcting course?

Part of the problem is that delay in the present increases the burden on the future people who will actually have to (try to) resolve things, such that they too will find it tempting to delay things yet more. Etc., until the really nasty stuff happens, at which point all possible outcomes are unpleasant.

As J. Dickens will already know, Pagani's latest research points toward not only (Earth system) sensitivity being quite variable depending on the initial climate state (not sure how this complicates our situation since fortunately we just have the one climate state, unless it amounts to an argument for our ESS being higher), but toward Antarctic permafrost being the key factor in the PETM and subsequent Eocene hyperthermals (with, I would assume, clathrates still doing much of the heavy lifting).

But I wonder: Could the Antarctica of that time have been host to shallow clathrate deposits similar to what we see off Siberia today? (Do we know enough about the off-shore topography to say anything?) If so the past may have become a much better guide to the immediate future than I had been thinking.

Thanks to James for the blog site, and to Steve for comments and questions

Well, a range of climate sensitivity, however defined, complicates our situation in two ways. First, while it is true that we are in one climate state at present, all indications are that Earth can exist in multiple climate states. Second, the difference from one climate state to another may not follow a simple linear relationship in terms of climate sensitivity. That would require the various feedbacks to be linear (or to fortuitously add together in a linear fashion), and the geological record does not support this. To cast it another way: why should the climate sensitivity as Earth moved from the last Glacial to the Holocene, accompanied by internal changes in the partitioning of carbon so that pCO2 rose about 90 ppmv, be the same as that for an interglacial world moving to future state X, Y, … with an external forcing of the carbon cycle so that pCO2 rises to Z? (Now, of course, many of my colleagues will say that certain potential biosphere and geosphere feedbacks are probably not important on the 100 year time frame, so we know the climate sensitivity much better on this short time scale, and it seems close to 3°C/doubling of CO2. This may be correct, but it is not a very satisfactory answer for understanding how Earth works over longer times, let alone 3+ generations down the road.)

To be honest, I am not exactly sure where the work by Pagani stands with regards to climate sensitivity. For example, in Pagani et al. (Science, 2006 and several abstracts since), the writing begins by excluding certain mechanisms for carbon release across the PETM (e.g., seafloor methane) on reasons of mass balance (too little), climate sensitivity (too large), and triggering (invokes environmental change), and arrives in the end with other carbon release mechanisms (e.g., permafrost) that have much worse balance considerations, that conform to pre-conceived climate sensitivity, and are carbon cycle feedbacks.

If you like the idea of a HUGE permafrost reservoir causing the carbon injections at the PETM and other early Paleogene hyperthermals, ask about the palm and baobab pollen found in the Arctic and around Antarctica during ETM-2, or how this works when surface ocean temperature is 9 to 14 °C around Antarctica, or how to make large amounts of permafrost without an ice sheet to provide wind-blown loess, or the true carbon masses that must be involved, or…? I think we are a ways off from understanding how carbon cycling in the Early Paleogene worked, although we know to first order what happened: we have a series of massive carbon injections into the ocean and atmosphere; somehow associated with these injections are major rises in Earth's surface temperature and a whole range of other environmental changes.

The shallowest clathrate in any ocean, present through past, is limited by bottom water temperature. If our interpretations for bottom water through time are correct, then the shallowest clathrate would have been around 800 m in the late Paleocene (pre-PETM), which is deeper than the shallowest occurrences today. This is one thing we do know, because it is set by constraints of physical chemistry. Now whether gas hydrates are important to climate change, and whether they are more or less sensitive to being a carbon cycle feedback in the Paleogene or the near future, I have to "pass" without making a really long post. With some irony, my answers would be far more detailed but far less certain circa 2011 than in 2006, and in 2006 than in 2001, now that we have really started studying and arguing about these very issues.