On sensitivity: Part I

January 3rd, 2013 by gavin

And then there are the recent papers examining the transient constraint. The most thorough is Aldrin et al (2012). The transient constraint has been looked at before of course, but efforts have been severely hampered by the uncertainty associated with historical forcings – particularly aerosols, though other terms are also important (see here for an older discussion of this). Aldrin et al produce a number of (explicitly Bayesian) estimates, their ‘main’ one with a range of 1.2ºC to 3.5ºC (mean 2.0ºC) which assumes exactly zero indirect aerosol effects, and a possibly more realistic sensitivity test including a small Aerosol Indirect Effect of 1.2-4.8ºC (mean 2.5ºC). They also demonstrate that there are important dependencies on the ocean heat uptake estimates as well as on the aerosol forcings. One nice addition was an application of their methodology to three CMIP3 GCM results, showing that their estimates (3.1, 3.6 and 3.3ºC) were reasonably close to the true model sensitivities of 2.7, 3.4 and 4.1ºC.

In each of these cases however, there are important caveats. First, the quality of the data is important: whether it is the LGM temperature estimates, recent aerosol forcing trends, or mid-tropospheric humidity – underestimating the uncertainty in these data will bias the CS estimate. Second, there are important conceptual issues to address – is the sensitivity to a negative forcing (at the LGM) the same as the sensitivity to positive forcings? (Not likely). Is the effective sensitivity visible over the last 100 years the same as the equilibrium sensitivity? (No). Is effective sensitivity a better constraint for the TCR? (Maybe). Some of the papers referenced above explicitly try to account for these questions (and the forward model Bayesian approach is well suited for this). However, since a number of these estimates use simplified climate models as their input (for obvious reasons), there remain questions about whether any specific model’s scope is adequate.
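The arithmetic behind these transient estimates is simple enough to sketch. Below is a minimal, illustrative version of the standard energy-balance estimate (all numerical values are assumed round numbers, not taken from Aldrin et al): effective sensitivity follows from observed warming ΔT, net forcing change ΔF, and planetary heat uptake ΔQ as S_eff = F_2x·ΔT/(ΔF − ΔQ).

```python
# Illustrative (not from any specific paper): the standard energy-balance
# estimate of effective climate sensitivity from the historical record,
#   S_eff = F_2x * dT / (dF - dQ)
# where dT is observed warming, dF the net forcing change, and dQ the
# planetary heat uptake (mostly ocean). All numbers below are round,
# assumed values chosen only to show the arithmetic.

F_2X = 3.7   # W/m^2, canonical forcing for doubled CO2

def effective_sensitivity(dT, dF, dQ):
    """Effective sensitivity (degC per CO2 doubling) from observed changes."""
    return F_2X * dT / (dF - dQ)

dT = 0.8   # K, assumed observed surface warming
dQ = 0.6   # W/m^2, assumed ocean heat uptake

# The aerosol uncertainty dominates dF; scan a plausible range:
for dF in (1.4, 1.8, 2.2, 2.6):
    print(f"dF = {dF:.1f} W/m2 -> S_eff = {effective_sensitivity(dT, dF, dQ):.1f} C")
```

Scanning ΔF across its aerosol-driven uncertainty range moves the estimate by more than a factor of two, which is the crux of the transient-constraint problem noted above.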

Ideally, one would want to do a study across all these constraints with models that were capable of running all the important experiments – the LGM, historical period, 1% increasing CO2 (to get the TCR), and 2xCO2 (for the model ECS) – and build a multiply constrained estimate taking into account internal variability, forcing uncertainties, and model scope. This will be possible with data from CMIP5, and so we can certainly look forward to more papers on this topic in the near future.

In the meantime, the ‘meta-uncertainty’ across the methods remains stubbornly high with support for both relatively low numbers around 2ºC and higher ones around 4ºC, so that is likely to remain the consensus range. It is worth adding though, that temperature trends over the next few decades are more likely to be correlated to the TCR, rather than the equilibrium sensitivity, so if one is interested in the near-term implications of this debate, the constraints on TCR are going to be more important.
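To make the TCR/ECS distinction concrete, here is a minimal two-layer (mixed layer plus deep ocean) energy-balance sketch driven by 1%/yr CO2 growth; every parameter value is an assumed, generic choice, not any particular GCM’s:

```python
# Sketch of why TCR < ECS: a two-layer energy balance model under 1%/yr
# CO2 growth (CO2 doubles at year 70, so T at year 70 is the TCR).
# All parameter values are assumed, generic choices for illustration.
import math

F_2X  = 3.7                 # W/m^2 per CO2 doubling
ECS   = 3.0                 # assumed equilibrium sensitivity, K
LAM   = F_2X / ECS          # net feedback parameter, W/m^2/K
C, CD = 8.0, 100.0          # heat capacities (mixed layer, deep ocean), W yr m^-2 K^-1
GAMMA = 0.7                 # mixed-to-deep heat exchange coefficient, W/m^2/K
DT    = 0.1                 # timestep, years

T = Td = 0.0                # surface and deep-ocean temperature anomalies, K
t = 0.0
while t < 70.0:
    F = F_2X * t * math.log(1.01) / math.log(2.0)   # 1%/yr CO2 forcing
    T  += DT * (F - LAM * T - GAMMA * (T - Td)) / C
    Td += DT * GAMMA * (T - Td) / CD
    t  += DT

print(f"TCR ~ {T:.1f} K, well below ECS = {ECS:.1f} K")
```

The deep ocean is still cold at year 70, so it keeps drawing heat out of the mixed layer; that is the whole mechanism by which the transient response stays below equilibrium.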

104 comments on this post.

Guido van der Werf:

January 3rd, 2013 at 12:03 PM

Great overview, thanks. I think it is important to stress that with the current growth of fossil fuel emissions we are above the highest IPCC emissions scenario (RCP 8.5), at least for fossil fuel combustion. If this persists in the future we will be in the 3 degree range in 2100 even with the lowest CS estimates.

January 3rd, 2013 at 12:40 PM

Gavin Schmidt, could you maybe have a look at a catastrophic “paper” by the economist Alan Carlin that got lost in the scientific literature? One of his reasons to claim that “the risk of catastrophic anthropogenic global warming appears to be so low that it is not currently worth doing anything to try to control it” is that he uses a very low value for the climate sensitivity based on non-reviewed “studies”, while ignoring the peer-reviewed work.

[Response: As pointed out by Hank in the comment below, we’ve already wasted enough of our neurons on Carlin. See here. –eric]

January 3rd, 2013 at 1:01 PM

Very illuminating, thank you. I agree with Guido, and would add that it would be helpful to stress how critical constraining ECS is. It’s not necessarily obvious to the uninitiated what a huge effect this ~2ºC uncertainty in ECS estimates has on scenarios that attempt to predict the magnitude and timing of climate change impacts (e.g. the AR5 RCPs). Also I found some possible minor typos:

“strongly dependent [on?] still unresolved issues”

“has been looked at before of course, but [efforts?] have been severely hampered”

“are more likely to follow to be correlated to the TCR” [remove “to follow”?]

It also might be helpful to spell out Aerosol Indirect Effect.

[Response: thanks! – gavin]

January 3rd, 2013 at 1:09 PM

Chris G:

January 3rd, 2013 at 2:26 PM

I think what really matters are the changes we can expect in the world in terms of livability, including the ability to grow adequate food. Using the temperature change since the last glacial maximum as a ruler: if that change is near 6 K, then 2 K warmer than pre-industrial means a certain level of effects; if it is only 4 K, then 2 K of warming means a higher level of ecological change.

The 2 K limit generally accepted as a dangerous threshold would mean something different if the change in temperature since the last GM is 6 K (approximately 1/3 of that change again) versus 4 K (approximately 1/2 of that change again). In other words, if climate sensitivity is toward the low end, 2 K is more dangerous than we currently give it credit for, and arguments for low risk because of low sensitivity are less valid, because they imply that more ecological change occurs for a given temperature change than currently thought.

David B. Benson:

January 3rd, 2013 at 3:11 PM

Quite helpful Gavin. Well done.

Tom Scharf:

January 3rd, 2013 at 4:30 PM

A useful post.

Correct me if I am wrong, but this appears to be walking back the CS numbers a bit. ~3C seems to be heading towards ~2.5C. I am encouraged, as I have been a somewhat vocal critic of the fact that, while models have been overestimating the temps fairly consistently, this somehow wasn’t translating into lower CS estimates or constraining the upper range. Many forcings were being twiddled to account for observations (namely aerosols), but the main CO2 forcing seemed to be the third rail.

[Response: Huh? Forcing is not the same as sensitivity. For reference, GISS-ModelE had a sensitivity of 2.7ºC, and GISS-E2 has sensitivity of 2.4ºC to 2.8ºC depending on version. All very mainstream. – gavin]

January 3rd, 2013 at 4:33 PM

Thanks for this science on sensitivity – a crucial subject for understanding. This leads to questions about micro-sensitivity: people are thinking about their individual impact on global warming. Generalities of the carbon footprint try to package the message but fail. We might want to know the impact of specific actions.

Certainly operating a carbon fueled car has real consequences, although for any one vehicle they are very slight. A single cylinder emission is the lowest unit of micro-sensitivity. One person in a car might have a few million per day – or a pound of CO2 per mile traveled. It’s like pissing in a trout stream, one person may not do more harm than scare away the fish, but with millions along the shores all streaming away all day, pretty soon it is the yellow river of death.

Somewhere we need to measure cognitive sensitivity to human impacts of global warming. Perhaps the visual display would be like a car’s tachometer – it would measure ineffectual use of CO2. The micro-sensitivity meter would indicate how effectively carbon fuel is used to deploy clean energy. It would sit right on the dashboard.

SecularAnimist:

January 3rd, 2013 at 5:20 PM

Tom Scharf wrote: “models have been over estimating the temps fairly consistently”

That’s simply not true.

“In this post we will evaluate this contrarian claim by comparing the global surface temperature projections from each of the first four IPCC reports to the subsequent observed temperature changes. We will see what the peer-reviewed scientific literature has to say on the subject, and show that not only have the IPCC surface temperature projections been remarkably accurate, but they have also performed much better than predictions made by climate contrarians.”

wili:

January 3rd, 2013 at 5:45 PM

This was just posted at SkSc:

“Time-varying climate sensitivity from regional feedbacks

Abstract:”The sensitivity of global climate with respect to forcing is generally described in terms of the global climate feedback—the global radiative response per degree of global annual mean surface temperature change. While the global climate feedback is often assumed to be constant, its value—diagnosed from global climate models—shows substantial time-variation under transient warming. Here we propose that a reformulation of the global climate feedback in terms of its contributions from regional climate feedbacks provides a clear physical insight into this behavior. Using (i) a state-of-the-art global climate model and (ii) a low-order energy balance model, we show that the global climate feedback is fundamentally linked to the geographic pattern of regional climate feedbacks and the geographic pattern of surface warming at any given time. Time-variation of the global climate feedback arises naturally when the pattern of surface warming evolves, actuating regional feedbacks of different strengths. This result has substantial implications for our ability to constrain future climate changes from observations of past and present climate states. The regional climate feedbacks formulation reveals fundamental biases in a widely-used method for diagnosing climate sensitivity, feedbacks and radiative forcing—the regression of the global top-of-atmosphere radiation flux on global surface temperature. Further, it suggests a clear mechanism for the ‘efficacies’ of both ocean heat uptake and radiative forcing.”

Abstract: “Understanding how global temperature changes with increasing atmospheric greenhouse gas concentrations, or climate sensitivity, is of central importance to climate change research. Climate models provide sensitivity estimates that may not fully incorporate slow, long-term feedbacks such as those involving ice sheets and vegetation. Geological studies, on the other hand, can provide estimates that integrate long- and short-term climate feedbacks to radiative forcing. Because high latitudes are thought to be most sensitive to greenhouse gas forcing owing to, for example, ice-albedo feedbacks, we focus on the tropical Pacific Ocean to derive a minimum value for long-term climate sensitivity. Using Mg/Ca paleothermometry from the planktonic foraminifera Globigerinoides ruber from the past 500 k.y. at Ocean Drilling Program (ODP) Site 871 in the western Pacific warm pool, we estimate the tropical Pacific climate sensitivity parameter (λ) to be 0.94–1.06 °C (W m−2)−1, higher than that predicted by model simulations of the Last Glacial Maximum or by models of doubled greenhouse gas concentration forcing. This result suggests that models may not yet adequately represent the long-term feedbacks related to ocean circulation, vegetation and associated dust, or the cryosphere, and/or may underestimate the effects of tropical clouds or other short-term feedback processes.”

That had been my impression, but perhaps this is the result of selective reading on my part.

[Response: I would need to check, but I think this is a constraint on the Earth System Sensitivity – not the same thing (see the first figure). – gavin]
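For reference, a sensitivity parameter quoted in ºC per W m⁻², like the λ in the second abstract above, converts to degrees per CO2 doubling by multiplying by F_2x ≈ 3.7 W/m² (a rough conversion, and, per the response above, this particular constraint may be on Earth System Sensitivity rather than ECS):

```python
# Converting a climate sensitivity parameter lambda (degC per W/m^2)
# into degrees per CO2 doubling: ECS-equivalent = lambda * F_2x.
# The lambda range below is the one quoted in the abstract above.

F_2X = 3.7  # W/m^2 per CO2 doubling (canonical value)

def lam_to_ecs(lam):
    """Degrees C per CO2 doubling implied by a lambda in degC/(W/m^2)."""
    return lam * F_2X

lo, hi = 0.94, 1.06  # degC/(W/m^2), tropical Pacific estimate quoted above
print(f"doubling-equivalent range: {lam_to_ecs(lo):.1f} - {lam_to_ecs(hi):.1f} C")
# -> about 3.5 - 3.9 C, which is why the abstract calls this a high value
```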

January 3rd, 2013 at 8:35 PM

I’m increasingly thinking that what we really need is an estimate of the sensitivity of the system to an injection of carbon dioxide including the feedback from the carbon cycle etc. I suppose that is the Earth System Sensitivity in this terminology. Using sensitivities where carbon dioxide concentrations is an exogenous variable could underestimate the cost of emissions impacts.

January 4th, 2013 at 4:38 AM

Surely the models described are all lagging behind the real world. The CMIP5 models seem to predict an Arctic free of summer sea ice in a few decades but the real world trend is for this to happen in the next few summers.

So why should policy makers care what these models predict as climate sensitivity? I suppose it is an interesting scientific problem but we should bear in mind that most or all of them are on the optimistic side.

[Response: James — thanks. One of my papers disappeared this way too. Mildly annoying! I added a link in the post which we’ll remove once AGU sorts things out. –eric]

Alex Harvey:

January 4th, 2013 at 6:40 AM

Thanks for this interesting post.

January 4th, 2013 at 11:34 AM

If “sensitivity” is the response to a given injection of CO_2, how can we measure this directly when the CO_2 level is constantly increasing?

[Response: That isn’t the point. Sensitivity is a measure of the system, and many things are strongly coupled to it – including what happens in a transient situation (although the relationship is not as strong as one might think). The quest for a constraint on sensitivity is not based on the assumption that we will get to 2xCO2 and stay there forever, but really just as a shorthand to characterise the system. Thus for many questions – such as the climate in 2050, the uncertainties in the ECS are secondary. – gavin]

Ric Merritt:

January 4th, 2013 at 1:14 PM

Geoff Beacon #13: You may indeed be able to cite cases where models are “lagging behind the real world”. Arctic sea ice measurements below a past prediction do constitute such a case. But comparing different *future* predictions of “an Arctic free of summer sea ice” cannot, logically, be cited today as a discrepancy between a past prediction and the real world, as measured. Please don’t confuse these 2 situations, which are quite different. If you want to bet on an ice-free Arctic, by some appropriate definition, by some date in a couple years, you can probably find a place to do it, but that’s a different thing from pointing out how a past prediction missed something in the real world.

I’d expect to see the Arctic essentially free of ice during September within three years.

What’s your bet?

Neven’s Sea Ice Blog has some pieces that will help:

January 4th, 2013 at 5:03 PM

> Ideally, one would want to do a study across all
> these constraints with models that were capable of
> running all the important experiments – the LGM,
> historical period, 1% increasing CO2 (to get the TCR),
> and 2xCO2 (for the model ECS) – and build a multiply
> constrained estimate taking into account internal
> variability, forcing uncertainties, and model scope.
> This will be possible with data from CMIP5 ….

How soon? Is there any coordination among those doing this, before papers get to publication, so you know what’s being done by which group, and all the scientists are aware of each other’s work so they, taken as a group, can nail down as many loose ends as possible?

Jim Larsen:

January 4th, 2013 at 5:10 PM

Volume has a more immediate signal than extent. In other words, measuring extent masks the problem. Since we can now speak in terms of either, it is a disservice for the IPCC to speak only of extent. I suggest the whole sea ice section be re-written with a volume-centric view. I’m betting all those “models more or less worked for extent up to 2011″ would turn into “models were way off on volume through 2012″.

January 4th, 2013 at 5:11 PM

Oops, I see that’s been answered:

http://www.metoffice.gov.uk/research/news/cmip5
“… (CMIP5) is an internationally coordinated activity to perform climate model simulations for a common set of experiments across all the world’s major climate modelling centres….
…. and deliver the results to a publicly available database. The CMIP5 modelling exercise involved many more experiments and many more model-years of simulation than previous CMIP projects, and has been referred to as “the moon-shot of climate modelling” by Gerry Meehl, a senior member of the international steering committee, WGCM…..”

Lennart van der Linde:

January 4th, 2013 at 5:16 PM

Do I understand correctly that this paper suggests a current CS of about 4 degrees C and earth system sensitivity of about 5 degrees, and seems to rule out CS-values lower than 3 degrees?

They also speak about sea level sensitivity as being higher than current ice sheet models show. It seems about 500 ppm CO2 could eventually mean an ice free planet, much lower than the circa 1000 ppm that ice sheet models seem to estimate.

Any thoughts on this approach and these conclusions?

January 4th, 2013 at 7:53 PM

Splendid word, I’d guess a typo, in the Hansen conclusion:

“16×CO2 is conceivable, but of course governments would not be so foolhearty….”

January 5th, 2013 at 4:24 AM

Gavin

This is not the subject, but it seems that in AR5 (sorry, it is the leaked version) the mean total aerosol forcing is about 30% weaker than the same forcing in AR4 (-0.9 W/m2 against -1.3 W/m2).
On this link, http://data.giss.nasa.gov/modelforce/RadF.txt , NASA-GISS provides a total aerosol forcing, in 2011, of -1.84 W/m2.
I think that, while it is easy to reconcile a 3°C sensitivity with -1.84 W/m2, it seems impossible with -0.9 W/m2 (the new IPCC mean forcing); maybe a 2°C sensitivity works better.
So, is there another aerosol effect (different from the adjustment) accounted for by the models, or something else?

[Response: That file is the result of an inverse calculation in Hansen et al, 2011. You need to read that for the rationale. The forcings in our CMIP5 runs are smaller. – gavin]

Paul Williams:

January 5th, 2013 at 7:56 AM

On the studies of sensitivity based on the last glacial maximum, what reduction in solar forcing is used based on the increased albedo of the ice sheets, snow and desert? It doesn’t appear to be outlined in the papers.

Jack Wolf:

January 5th, 2013 at 9:31 AM

This is off topic, but I was wondering about the Alaska earthquake this morning and its impact on the methane hydrates along the continental shelf. Info on this would be helpful.

Dan H.:

January 5th, 2013 at 12:07 PM

Geoff,
My bet would be the opposite. Historically, a new low sea ice extent (area) is set every five years, with small recoveries in-between. My bet would be that 2012 was an overshoot, and that the next three years will show higher extents and areas. The next lower sea ice will occur sometime thereafter.

Lennart van der Linde:

January 5th, 2013 at 5:35 PM

Looking again at Hansen’s submitted paper leaves me guessing his earth system sensitivity in the current state is a little more than 5 degrees C, more like 6-8 degrees. Any other interpretations?

Jim Larsen:

January 5th, 2013 at 7:06 PM

26 Paul W asked, “On the studies of sensitivity based on the last glacial maximum, what reduction in solar forcing is used based on the increased albedo of the ice sheets, snow and desert? It doesn’t appear to be outlined in the papers.”

Yes, the obvious questions that make the most sense are often missing. What’s the total watts/m2 of the initial orbital push from LGM to HCO (totally silent on this), and what’s the total increase in temperature (4-6C?)?

Combine the two and you’ve got a total system sensitivity for conditions during an ice age. I’ve heard that sensitivity for current conditions is probably higher, but regardless, isn’t that the first thing one would want answered about climate sensitivity?

1. What was the initial push historically?
2. What was the final result (pre-industrial temps)?
3. What is the current push?

RC often touches on the last two, but the answer to the all-important first question is rarely (if ever – I don’t ever remember seeing an answer) mentioned even though it seems to be the best way to derive some sort of prediction about the future that doesn’t rely on not-ready-for-prime-time systems.

Has anybody ever heard of an estimate of the initial orbital forcing from LGM to HCO?
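For what it’s worth, the globally averaged orbital push is usually treated as tiny (tenths of a W/m², acting mainly as a pacemaker), so the usual back-of-envelope counts the GHG and ice-sheet/albedo changes as the forcings and divides them into the cooling. A sketch of Jim’s “combine the two” arithmetic, with assumed round numbers (not from any one paper in this thread):

```python
# Back-of-envelope LGM sensitivity calculation with assumed, round numbers
# (the standard textbook form; not taken from any paper in this thread).
# Treat GHG and ice-sheet/albedo changes as forcings, divide into cooling.

F_2X = 3.7             # W/m^2 per CO2 doubling

forcings = {            # assumed LGM-to-preindustrial forcing changes, W/m^2
    "greenhouse gases": 3.0,
    "ice sheets / albedo": 3.5,
}
dF = sum(forcings.values())    # ~6.5 W/m^2 total

for dT in (4.0, 5.0, 6.0):     # plausible range of global LGM cooling, K
    lam = dT / dF                        # degC per W/m^2
    print(f"dT = {dT:.0f} K -> lambda = {lam:.2f} -> ECS ~ {lam * F_2X:.1f} C")
```

The orbital trigger itself doesn’t enter the sum because its global-mean value is near zero; it redistributes sunlight seasonally and latitudinally, which is part of why this “ruler” is only approximate for CO2-driven warming.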

January 5th, 2013 at 8:20 PM

#28–Dan H wrote:

My bet would be the opposite. Historically, a new low sea ice extent (area) is set every five years, with small recoveries in-between. My bet would be that 2012 was an overshoot, and that the next three years will show higher extents and areas. The next lower sea ice will occur sometime thereafter.

Maybe. But didn’t we have a conversation here on RC, not so long ago, about the virtues and vices of extrapolation?

I’m looking at the winter temps from 80 N this year (continuing toasty, relatively), and thinking about ENSO–neutral is now favored through spring–and remembering a) that the weather last year was rather unremarkable for melt and b) we’re still at the height of the solar cycle, more or less.

Throw in a quick consult with some chicken entrails, and I’ve concluded that I wouldn’t bet on Dan’s extrapolation.

Lennart van der Linde:

January 6th, 2013 at 4:20 AM

Jim Larsen #30,
I think Jim Hansen mentions the initial orbital forcing for glaciation-deglaciation to be less than 1 W/m2 averaged over the planet, maybe just a few tenths of a W/m2. The resulting slow GHG and albedo feedbacks are about 3 W/m2 each, in his calculation.

So what happens if the initial GHG forcing now is about 4 W/m2? Would that mean slow feedbacks would total tens of W/m2? Or less? It seems Hansen thinks less, about 4 W/m2 as well, but I don’t really understand why. Does it have to do with the initial orbital forcing being much stronger or effective locally, at the poles?

It seems to me Hansen is really still struggling to understand this himself, and as a consequence his papers are not fully clear yet. Or maybe I just don’t understand clearly enough myself.

You say- “Does it have to do with the initial orbital forcing being much stronger or effective locally, at the poles?”

The northern hemisphere has much more land than the southern hemisphere and they are therefore affected differentially by orbital forcing.

Steve

January 7th, 2013 at 5:23 AM

Dan H #28: I wouldn’t be so sure that 2012 is an outlier. Look at the second animation here. The last few years show ice thickness consistently below previous levels. The pattern is oscillation with a downward trend but around 2007, the previous record year for minimum extent, there’s a big drop, then another one in 2010. With increasingly less multi-season ice, rebuilding previous sea ice extent gets harder and harder. I’m sure some people also thought 2007 was an outlier.

Of course you could be right that there’s some oscillation before it dips again, but I wouldn’t bet on it. There was nothing special about 2012 conditions to have caused a big dip (e.g. SOI index didn’t show any big El Niño events over the year).

Dan H.:

January 7th, 2013 at 7:23 AM

Kevin,
Fair enough. However, looking at the decrease in sea ice minimum over the past decade or so, both the 2007 and 2012 minima crashed through the previous lows in a typical overshoot pattern. Recently, a new low has been set every five years (2002, 2007, & 2012), with modest recoveries in-between. Last year looks remarkably similar to 2007.

Paul Williams:

January 7th, 2013 at 8:50 AM

Doesn’t -3.5 W/m2 from the ice age albedo forcing seem like an awfully low figure?

The Arctic sea ice melting out above 75N would have almost no impact at all if that is the forcing change from glaciers down to Chicago and sea ice down to 45N (at lower latitudes, where the albedo has much more impact).

Paul S:

January 7th, 2013 at 12:26 PM

Paul Williams,

I can’t tell where you got the figure, but -3.5 W/m2 is about right for the current understanding of the “boundary condition” land albedo change between pre-industrial and LGM. In LGM simulations land albedo changes are prescribed (at least as regards ice sheets and altered topography due to sea level; there are also feedback land albedo changes), so they count as a forcing, whereas sea ice is determined interactively by the model climate, so it is a feedback in this framework.

Ric Merritt:

January 7th, 2013 at 1:28 PM

Geoff Beacon #19: To answer your question, if you mean you expect the NSIDC to announce a September arctic sea ice minimum below, say 1M sq km, by 2015, I would bet against, but not a huge amount, because of uncertainty.

This is not due to any denialist illness, or any reluctance to put my money where my mouth is. Over the last several years, when irritated by trolls on DotEarth, Joe Romm’s site, or the like, I have repeatedly offered to bet more than my current middle-class salary, indexed to the S&P 500 at time of settling up, on the course of global temperatures over decades. Strangely enough, I never got a serious bite.

But my previous point, which you kinda ignored, was that future expectations, which are of course what folks make bets about, are fundamentally different from pointing out a difference between carefully recorded past expectations and carefully recorded (probably recent) past measurements. I think the conversation is clearer if we keep that straight.

#36–“Last year, looks remarkably similar to 2007.”

Only in terms of the magnitude of the extent drop. But if there’s one thing I’ve learned about watching sea ice melt, it is that it ‘loves’ to confound.

January 7th, 2013 at 2:36 PM

#37–Maybe, but IIRC, I saw an estimate of 0.7 W/m2 for an ice-free Arctic summer. So, maybe not – though the 0.7 estimate was probably somewhat of a ‘spherical cow in a vacuum’ deal.

David Lea:

January 7th, 2013 at 3:28 PM

@James Annan (#14). Thanks for posting your paper. I think there is a disconnect between the modeling and paleodata community that is affecting your estimates. The data community (geochemical proxies) would argue that we’ve solidly established the 2.5-3 deg cooling level for the deep tropics during the LGM. The MARGO data is dominated by older foram transfer function estimates, which even its most ardent practitioners would agree do not record tropical changes accurately. This is an important point that is affecting a number of recent estimates of sensitivity using MARGO data.

[Response: David, thanks for dropping by. I take it you mean that the Margo data is resulting in underestimates of climate sensitivity? –eric]

David Lea:

January 7th, 2013 at 4:11 PM

@Eric: Yes, that’s the implication. If you look at Fig. 2 in Hargreaves et al, the observational band for LGM tropical cooling they use, based on MARGO, is -1.1 to -2.5 deg C, equating to a sensitivity of about 2.5 deg. Using an estimate of the mean tropical cooling based on geochemical proxies of 2.5-3 deg would yield a sensitivity closer to 3.5 deg (but perhaps Julia will comment).

David B. Benson:

January 7th, 2013 at 5:21 PM

Paul Williams @37 — The ice sheets become dirtier over time.

January 7th, 2013 at 5:32 PM

Do the ‘older foram transfer function estimates’ make different calculations using the same original material? Or is this new field data? How did the geochemists come by the ‘geochemical proxies of 2.5-3 deg’ now favored?

David Lea:

January 7th, 2013 at 6:32 PM

@Hank Roberts #45. I believe that the transfer function estimates used in MARGO are based on the traditional method used in CLIMAP, rather than newer approaches. And I also believe it is largely the same data set used in CLIMAP. As for the geochemical data, it is based on Mg/Ca in foraminifera, alkenone unsaturation in sediments and some sparse data from other techniques such as Ca isotopes, clumped isotopes and TEX86. The -2.5 to -3.0 deg cooling value is my subjective estimate based on knowledge of the data and various published compilations. Although LGM oxygen isotope changes cannot be used to independently assess cooling, they provide a useful additional constraint that is difficult to reconcile with a cooling much less than 3 deg.

This helps (I’d heard some of the terms; I’d have to look up all of ’em again, as most of what I know is decades out of date).

Please go on at as much length as you have patience for.

January 7th, 2013 at 11:04 PM

> affecting a number of recent estimates
> of sensitivity using MARGO data.

Time to invite all the authors whose work is affected to a barbeque?

How hard is it to revise a paper if the author (or reviewer, or editor) decides this change should be made? Simple, or complicated?

Bill Woolverton:

January 8th, 2013 at 2:08 AM

Dan H #28:
Not sure where you get the idea that a record low extent is set every five years. The previous record to 2007 was in 2005. As I recall the 2007 record resulted from very favourable weather conditions, so it would have been unlikely for another record to be set for several years (and it wasn’t). I think we can say that the 2007 melt made it more likely that another record smashing melt season would occur eventually given the right conditions, and that we can say the same about 2012.

Dan H.:

January 8th, 2013 at 11:31 AM

Bill,
Sorry, my bad. On my dataset, the line obscured the 2005 data point. Both the 2007 and 2012 lows were affected by favourable weather conditions, and I concur that it would be unlikely for another record to be set for several years. The data from 2008-2011 fell reasonably well on the linear trend established over the past two decades. I would expect the next few years to follow suit.

Ray Ladbury:

January 8th, 2013 at 12:04 PM

Dan H.,
I would be careful in drawing any conclusions about the temporal dependence of new sea ice minima. The most notable aspect of the graph is the downward trend, and it is arguable that a linear trend no longer cuts it as a fit. What is more, the decline in thickness is even more marked than the decline in sea ice extent, and thin ice is easier to melt. While it is true that weather affects the ultimate decline year to year, I don’t think it is an accident that the last two records haven’t just broken their predecessors, but smashed them. The number of aces in the deck has increased.
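Whether a linear trend still “cuts it” is a straightforward nested-model comparison. A sketch (the extent numbers below are illustrative placeholders shaped roughly like the September record, not actual NSIDC data):

```python
# Compare linear vs quadratic least-squares fits to a September-minimum
# series. The extent values are ILLUSTRATIVE placeholders, not real data.
import numpy as np

years = np.arange(2000, 2013)
extent = np.array([6.3, 6.8, 6.0, 6.2, 6.1, 5.6, 5.9, 4.3,
                   4.7, 5.4, 4.9, 4.6, 3.6])  # million km^2, illustrative

def sse(deg):
    """Sum of squared residuals for a polynomial fit of given degree."""
    coeffs = np.polyfit(years, extent, deg)
    resid = extent - np.polyval(coeffs, years)
    return float(np.sum(resid ** 2))

print(f"linear fit SSE:    {sse(1):.2f}")
print(f"quadratic fit SSE: {sse(2):.2f}")
```

A nested quadratic can only reduce the residual, so the question is not whether it fits better but whether the improvement is large enough (e.g. by an F-test or information criterion) to prefer the accelerating-decline interpretation.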

January 8th, 2013 at 12:53 PM

> on my dataset the line obscurved the 2005 data
But “Dan H.” claimed to be using the data file he pointed to, not a picture

> I concur that it would be unlikely for another record to be set
The familiar “I agree with myself and pretend you said it” bait again

This is the uncanny valley simulation of discourse.

David Lea:

January 8th, 2013 at 1:10 PM

@Hank #49. Not that simple. I agree on getting everyone together, but you can’t go back and revise published papers (fortunately). The way forward is to continue to debate and refine the estimates. In some areas it just takes time to get consensus, but if the problem is solvable, we’ll eventually get there.

[Response: actually you can go back and revise published results using updated datasets and/or calibrations and/or age models. Indeed we should be building archives that allow for that as a matter of course. I would expect that this might be much more cost effective than drilling new cores… ;-) -gavin]

Jim Larsen:

January 8th, 2013 at 3:51 PM

“I would expect that this might be much more cost effective than drilling new cores… ;-) -gavin]”

So you turned something we all want into something “they” can say no to? Thanks…

January 8th, 2013 at 4:57 PM

> David Lea
> …. I agree on getting everyone together

Is there an appropriate umbrella organization (AGU?) that overlaps the two communities? (and hosts barbeques?)

Is there a journal where reviewers are drawn from both modeling and paleo communities?

Can scientists get such early feedback on choices before starting papers, without having their ideas stolen?

> Gavin
> … using updated datasets and/or calibrations and/or age models.
> … we should be building archives

Is there any project to create such archives?
Some sketch of what would be collected, etc.?
(Lists and pointers to lists, rather than copies — Rule One of Databases — if possible)
(I’d guess this might be long discussed but lacking funding — and not yet ready for Kickstarter)

Would authors (and journal editors) cooperate in creating a new layer of science publishing,
to be dedicated to “using updated datasets and/or calibrations and/or age models” for revising/reworking papers?

Would original authors — whose approach was known good — agree with having someone else crank through their
same procedure after the “datasets and/or calibrations and/or age models” were changed? Get credit? Not lose face or funds?

Could recalculating involve citizen/volunteers working with guidance from original authors?

I know reworking previous papers isn’t a high priority for most scientists or grad students.
They ought to be left free to do more interesting and new work.

At the rate the “datasets and/or calibrations and/or age models” are improved — it’d sure be interesting.

MMM:

January 8th, 2013 at 5:23 PM

On sea ice records and the probability of setting new records:

I haven’t done a quantitative analysis, but my guess is that given an old record, “A”, and a new record, “B”: the expectation that a year soon after B will be even less than B (a newer record) is probably smaller the larger the B minus A difference (e.g., reversion to the mean); BUT the expectation that a year soon after B will be less than A should increase with that difference (e.g., there is more confidence in a larger decreasing trend due to Bayesian updating).

(I might also guess that the shorter the time period between A and B, the more expectation there should be that a year soon after B will exceed B)

So, to apply this to 2012: the large 2012 minus 2007 difference means that I’d expect the next record to be more years off than I would have had 2012 barely beat 2007. It will be interesting to watch over the next few years… if we don’t see a new record until 2017, I wonder how many “sea ice recovery!” posts from WUWT we’ll have to endure…
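MMM’s intuition can be sketched with a toy Monte Carlo. Everything here is an assumption for illustration (a linear-trend-plus-Gaussian-noise model with made-up numbers), not a sea ice analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_new_record(trend, noise_sd, record_margin, years=5, n_sim=100_000):
    """Toy model: annual minima follow a linear downward trend plus
    Gaussian noise. Year 0 set a record sitting `record_margin` below
    the trend line. Return the estimated probability that at least one
    of the next `years` years falls below it (a newer record)."""
    record = -record_margin                    # year-0 value, trend-relative
    future = trend * np.arange(1, years + 1)   # the trend keeps declining
    sims = future + rng.normal(0.0, noise_sd, (n_sim, years))
    return (sims < record).any(axis=1).mean()

# A record that barely beat expectations vs. one that smashed them:
p_small_margin = p_new_record(trend=-0.07, noise_sd=0.5, record_margin=0.1)
p_large_margin = p_new_record(trend=-0.07, noise_sd=0.5, record_margin=0.8)
print(p_small_margin > p_large_margin)  # big margin -> quick repeat less likely
```

Consistent with the reversion-to-the-mean argument above, the larger the margin of the new record, the lower the simulated chance of another record in the next few years.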

January 8th, 2013 at 10:19 PM

“Both the 2007 and 2012 lows were affected by favourable weather conditions…”

The only really favorable 2012 circumstance I can think of, weather-wise, was the cyclone. And that was in effect for about a week.

The melt weather otherwise was fairly ‘middle of the road,’ according to assessments I’ve read. Yet the 2007 record was obliterated, and apparently still would have been without the cyclone.

Moreover, as Tamino and others have pointed out, this is all extent, yet volume is in some ways more to the point, as implied by Ray’s comments in #52. The most remarkable year for volume decline was actually 2010, if the PIOMAS results are correct:

This decline isn’t just weird weather–though goodness knows, we seem to be seeing more of that:

I don’t pretend to know what will happen with next year’s minimum. But it is much more likely to be below 2007 than not–and a new record would sadden but not shock me. (If I had to guess odds, I’d probably say 50-50.)

Jim Larsen:

January 9th, 2013 at 2:29 AM

32 Lennart said, “It seems Hansen thinks less, about 4 W/m2 as well,”

Thanks. I wouldn’t bet too wide of Hansen. Was he assuming cold turkey? If so, we’d end up a lot lower than our current 400ppm, so the persistent modern forcing (on which everything else must pile) is lots lower than the current CO2 forcing.

January 9th, 2013 at 4:24 AM

@David Lea #42-43: thanks for the comment. From where I’m sitting, it seems to be more of a disconnect between two sides of the paleodata community :-) I’m not trying to take sides, just using the most recent and comprehensive proxy compilations. You are certainly right that a colder tropical LGM would result in a higher sensitivity estimate.

Paul Williams:

January 9th, 2013 at 6:52 AM

44.Paul Williams @37 — The ice sheets become dirtier over time.

Comment by David B. Benson — 7 Jan 2013
———

Antarctica and Greenland and mountain glaciers seem to stay white enough. Every time it snows, they are back to albedos of 0.8.

Now dirt and material will migrate to the top of a glacier as it is melting back/receding and the edges can even become black. But the main glacial region will still be white as long as it is stable or advancing and snow falls over a long enough period of the year. Sounds like a last glacial maximum.

Lennart van der Linde:

January 9th, 2013 at 9:25 AM

Jim @59,

As I understand Hansen he’s saying: if we double CO2 this century (so up to about 550-600 ppm), that will mean a forcing of about 4 W/m2 and 3 degrees C warming in the short term (decades), and through slow feedbacks (albedo + GHG) another 4 W/m2 and 3 degrees in the long term (centuries/millennia).

David Lea:

January 9th, 2013 at 11:25 AM

@James Annan #60. Your point is a fair one; what’s in print doesn’t always reflect what’s discussed in the halls. But more importantly, I think your paper provides a way forward on the sensitivity problem by providing a plausible scaling between tropical cooling and sensitivity — something that has eluded me in past papers. I am very confident that we can nail down the tropical cooling (if we haven’t already) and, as I said previously, I would be very surprised if it’s much less than 2.5 deg tropics-wide — partly because of the agreement between the various geochemical proxies, partly because of the oxygen isotope constraint. If you adjusted the LGM tropical cooling to 2.8 ± 0.7 deg (my published value from 2000), what would it translate to in terms of sensitivity?

David B. Benson:

January 9th, 2013 at 6:33 PM

Paul Williams @61 — But during LGM it didn’t snow over the entire Laurentide ice sheet. There were two accumulation centers. Around the margins the winds blew loess in great quantities. Much of it lies here in the Palouse but I’m rather sure that much of it ended up on the ice sheet.

Dan H.:

January 10th, 2013 at 1:55 PM

Ray,
The most vulnerable ice has already melted, and was likely enhanced by changes in the AO.

The remaining ice is further from the inflow of warm waters, and closer to the Greenland glaciers. This ice is thicker than the ice that melted previously. Going forward, the melt is likely not to continue linearly, but to slow, as the remaining ice becomes harder to melt.

Nic Lewis:

January 12th, 2013 at 1:27 PM

Gavin Schmidt

I am glad to see that my input into the Wall Street Journal op-ed pages has prompted a piece on climate sensitivity at RealClimate. I think that some comment on my energy balance based climate sensitivity estimate of 1.6-1.7°C (details at http://www.webcitation.org/6DNLRIeJH), which underpinned Matt Ridley’s WSJ op-ed, would have been relevant and of interest.

[Response: Part III. – gavin]

You refer to the recent papers examining the transient constraint, and say “The most thorough is Aldrin et al (2012). … Aldrin et al produce a number of (explicitly Bayesian) estimates, their ‘main’ one with a range of 1.2ºC to 3.5ºC (mean 2.0ºC) which assumes exactly zero indirect aerosol effects, and possibly a more realistic sensitivity test including a small Aerosol Indirect Effect of 1.2-4.8ºC (mean 2.5ºC).”

The mean is not a good central estimate for a parameter like climate sensitivity with a highly skewed distribution. The median or mode (most likely value) provides a more appropriate estimate. The mode of Aldrin’s main-results sensitivity distribution is between 1.5 and 1.6ºC; the median is about halfway between the mode and the mean.
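The separation between the three central estimates for a right-skewed distribution can be made concrete with a lognormal toy example (the numbers below are purely illustrative, not Aldrin et al.’s actual posterior):

```python
import math

# A right-skewed lognormal "sensitivity-like" distribution with
# illustrative parameters: median pinned at 2.0, moderate skew.
mu, sigma = math.log(2.0), 0.45

mode   = math.exp(mu - sigma**2)       # most likely value
median = math.exp(mu)                  # 50th percentile
mean   = math.exp(mu + sigma**2 / 2)   # pulled up by the long upper tail

print(mode, median, mean)  # mode < median < mean for any lognormal
```

For any skewed distribution of this shape the mode sits lowest and the mean highest, which is exactly why the choice of central estimate matters when comparing studies.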

[Response: All the pdfs are skewed – but using the mode to compare to the mean in previous work is just a sleight of hand to make the number smaller. The WSJ might be happy to play these kinds of games, but don’t do it here. – gavin]

I agree with you that Aldrin (available at http://folk.uio.no/gunnarmy/paper/aldrin_env_2012.pdf) is the most thorough study, although its use of a uniform prior distribution for climate sensitivity will have pushed up the mean, mainly by making the upper tail of its estimate worse constrained than if an objective Bayesian method with a noninformative prior had been used.

It is not true that Aldrin assumes zero indirect aerosol effects. Table 1 and Figure 15 (2nd panel) of the Supplementary Material show that a wide prior extending from -0.3 to -1.8 W/m^2 (corresponding to the AR4 estimated range) was used for indirect aerosol forcing. The (posterior) mean estimated by the study was circa -0.3 W/m^2 for indirect aerosol forcing and -0.4 W/m^2 for direct. The total of -0.7 W/m^2 is the same as the best observational (satellite) total aerosol adjusted forcing estimate given in the leaked Second Order Draft of AR5 WG1, which includes cloud lifetime (2nd indirect) and other effects.

When Aldrin adds a fixed cloud lifetime effect of -0.25 W/m^2 forcing on top of his variable-parameter direct and (1st) indirect aerosol forcing, the mode of the sensitivity PDF increases from 1.6 to 1.8. The mean and the top of the range go up a lot (to 2.5ºC and 4.8ºC, as you say) because the tail of the distribution becomes much fatter – a reflection of the distorting effect of using a uniform prior for ECS. But, given the revised aerosol forcing estimates in the AR5 WG1 SOD, there is no justification at all for making the prior for aerosol indirect forcing more negative by adding either -0.25 or -0.5 W/m^2. On the contrary, it should be reduced, by adding something like +0.5 W/m^2, to be consistent with the lower AR5 estimates.

It is rather surprising that adding cloud lifetime effect forcing makes any difference, insofar as Aldrin is estimating indirect and direct aerosol forcings as part of his Bayesian procedure.

[Response: Not sure this is true. I think they are starting to do so in subsequent papers. – gavin]

The reason is probably that the normal/lognormal priors he is using for direct and indirect aerosol forcing aren’t wide enough for the posterior mean to fully reflect what the model-observational data comparison is implying. When extra forcing of -0.25 or -0.5 W/m^2 is added, his prior mean total aerosol forcing is very substantially more negative than -0.7 W/m^2 (the posterior mean without the extra indirect forcing). That results in the data maximum likelihoods for direct and indirect aerosol forcing being in the upper tails of the priors, biasing the aerosol forcing estimation to more negative values (and hence biasing ECS estimation to a higher value).

Ring et al. (2012) (available from http://www.scirp.org/fileOperation/downLoad.aspx?path=ACS20120400002_59142760.pdf&type=journal) is another recent climate sensitivity study based on instrumental data. Using the current version, HadCRUT4, of the surface temperature dataset used in a predecessor study, it obtains central estimates for total aerosol forcing and climate sensitivity of respectively -0.5 W/m^2 and 1.6 ºC. This is a 0.9ºC reduction from the sensitivity of 2.5°C estimated in that predecessor study, which used the same climate model. The reduction resulted from correcting a bug found in the climate model computer code. (Somewhat lower and higher estimates of aerosol forcing and sensitivity are found using other, arguably less reliable, temperature datasets.)

> Dan H. says:…
> The most vulnerable ice has already melted

http://nsidc.org/cryosphere/quickfacts/iceshelves.html
“… Ice streams and glaciers constantly push on ice shelves, but the shelves eventually come up against coastal features such as islands and peninsulas, building pressure that slows their movement into the ocean. If an ice shelf collapses, the backpressure disappears. The glaciers that fed into the ice shelf speed up, flowing more quickly out to sea….”

Kevin,

The minimum Arctic sea ice extent has declined by a little over half from its maximum of the past three decades.

The following animation from MIT shows the sea ice changes. Notice the ice repeatedly melts along the Siberian, Alaskan, and northern Canadian coastlines. The ice around northern Greenland and the northern Canadian islands remains year after year. This is the thick ice to which I was referring. Yes, it is thinner than two decades ago, but it is thicker than the ice around the continents which has melted in the previous summers. Does this adequately explain my previous posts?

Graeme:

January 13th, 2013 at 6:01 PM

I thought James Annan had demonstrated that using a uniform prior was bad practise. That would tend to spread the tails of the distribution such that the mean is higher than the other measures of central tendency. So is it justified in this paper?

January 13th, 2013 at 9:23 PM

Can we quit chasing Dan H.’s red herrings? He is _so_ good at diverting a topic to talking about his mistakes. Paste his claim into Google, and sigh.
Large ice age animation:
The older ice is not up against the shoreline, and is not protected.

January 14th, 2013 at 12:16 AM

“Does this adequately explain my previous posts?”

No, in view of the fact that the thick multi-year ice which formerly made up a considerable proportion of the sea ice has nearly disappeared.

January 14th, 2013 at 1:59 AM

Dr. Joel Norris of the Scripps Institution gave an excellent colloquium presentation titled “Cloud feedbacks on climate: a challenging scientific problem” to Fermilab National Laboratory, May 12, 2010. Please see the archived video and his powerpoint presentation at this link:

January 14th, 2013 at 6:09 AM

New paper mixing “climate feedback parameter” with climate sensitivity… “climate feedback parameter was estimated to 5.5 ± 0.6 W m−2 K−1″ “Another issue to be considered in future work should be that the large value of the climate feedback parameter according to this work disagrees with much of the literature on climate sensitivity (Knutti and Hegerl, 2008; Randall et al., 2007; Huber et al., 2011). However, the value found here agrees with the report by Spencer and Braswell (2010) that whenever linear striations were observed in their phase plane plots the slope was around 6 W m−2 K−1. Spencer and Braswell (2010) used middle tropospheric temperature anomalies and although they did not consider any time lag they may have observed some feedback processes with negligible time lag considering that the tropospheric temperature is better correlated to the radiative flux than the surface air temperature. The value found in this study also agrees with Lindzen and Choi (2011) who also considered the effects of lead-lag relations.”

[Response: Another paper confusing short term variations with long-term shifts. – gavin]

January 14th, 2013 at 8:08 AM

Regarding my #74: On sea ice thickness, here is an unreviewed but sensible discussion/analysis of Arctic sea ice volume and thickness as modeled by PIOMAS. Note particularly the plots of June and September thickness time series.

Dan H.:

January 14th, 2013 at 9:41 AM

Kevin,

Not sure about your contention that the thick ice has nearly disappeared (it could be a difference in our definitions of “thick ice”). I am not disputing that the thicker ice has thinned. Indeed, there has been a general thinning of the entire sea ice pack. The thickness of the remaining multi-year ice, along with its geographic location, will make it more difficult to melt than the ice that was spread across the Arctic and exposed to Pacific and Atlantic ocean currents, along with runoff from freshwater rivers.

Also, using volume to determine when the Arctic may be ice-free suffers from the nature of exponential decay. Volume will decrease faster than area initially, but volumetric decrease will slow as less ice remains (simple mathematics). Hence, any prediction that volumetric losses will continue exponentially is mathematically flawed.

Anyone with a better background in this area who wants to point out the worst mistakes in the paper in #76 can still do so.

January 14th, 2013 at 5:49 PM

#80–read the discussion linked, Dan. You’ll find that ice more than 3 meters thick now forms an insignificant proportion of the total population. It’s true, of course, that the Transpolar Drift tends to accumulate ice in the area we’ve been discussing, but that hardly means that it is going to slow the overall melt any.

As to your second paragraph, nobody knows, at this point, whether volume will continue to follow an exponential curve right down to zero extent. That’s not a mathematical question, but an empirical one–and it will remain empirical unless and until we can model sea ice physically a lot better than we can now. Therefore, what you say in that paragraph is pure bravado, unsupported by any evidence. It would be nice if it were true, but there’s really no reason to think that it actually is.

Paul S:

January 16th, 2013 at 8:46 AM

The total of -0.7 W/m^2 is the same as the best observational (satellite) total aerosol adjusted forcing estimate given in the leaked Second Order Draft of AR5 WG1, which includes cloud lifetime (2nd indirect) and other effects.

There are only a handful of published estimates for total anthropogenic aerosol forcing, including first indirect and cloud lifetime effects. Of these, the smallest best estimate I can find is -0.85W/m^2, which means the reported -0.7 is unlikely to be representative of total aerosol forcing, whatever else it relates to.

wili:

January 16th, 2013 at 12:49 PM

Kevin, the real reason that sea ice volume will likely not reach zero any time soon is that calving from Greenland and from the Canadian archipelago will continue and will likely increase. That will keep some (comparatively small) amount of ice in the sea for a good while.

That is why informed folks, like those at Neven’s Arctic Sea Ice blog, talk instead about time frames for an ‘essentially’ or ‘virtually’ ice free Arctic Ocean–by this they generally mean anything under one million square kilometers of sea ice extent. And many see it as likely that we will reach this level very soon–if not this September, then within the next two or three years.

But you are right that, given the massive failure of earlier modelers, we have to conclude that we can’t really know what is coming at us with any certainty.

Steve Jewson:

January 20th, 2013 at 5:23 AM

Following on from the comments by Nic Lewis and Graeme,

Yes, using a flat prior for climate sensitivity doesn’t make sense at all.
Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that.

Nic (or anyone else)…would you be able to list all the studies that have used flat priors to estimate climate sensitivity, so that people know to avoid them?

Ray Ladbury:

January 20th, 2013 at 9:40 AM

Steve Jewson,
The problem is that the studies that do not use a flat prior wind up biasing the result via the choice of prior. This is a real problem given that some of the actors in the debate are not “honest brokers”. It has seemed to me that at some level an Empirical Bayes approach might be the best one here–either that or simply use the likelihood and the statistics thereof.

Mal Adapted:

January 20th, 2013 at 2:07 PM

Hank:

Can we quit chasing Dan H.’s red herrings?

It’s as easy as not reading them at all, which many of us are already doing. Try it! When catching up on a thread, at the point one’s eyes come to “Dan H. says:”, they simply skip past the entire comment without taking it in. Think of it as a mental killfile.

Steve Jewson:

January 20th, 2013 at 2:28 PM

Ray,

I agree that no-one should be able to bias the results by their choice of prior: there needs to be a sensible convention for how people choose the prior, and everyone should follow it to put all studies on the same footing and to make them comparable.

And there is already a very good option for such a convention…it’s Jeffreys’ Prior (JP).

JP is not 100% accepted by everybody in statistics, and it doesn’t have perfect statistical properties (there is no framework that has perfect statistical properties anywhere in statistics) but it’s by far the most widely accepted option for a conventional prior, it has various nice properties, and basically it’s the only chance we have for resolving this issue (the alternative is that we spend the next 30 years bickering about priors instead of discussing the real issues). Wrt the nice properties, in particular the results are independent of the choice of coordinates (e.g. you can use climate sensitivity, or inverse climate sensitivity, and it makes no difference).

Using a flat prior is not the same as using Jeffreys’ prior, and the results are not independent of the choice of coordinates (e.g. a flat prior on climate sensitivity does not give the same results as a flat prior on inverse climate sensitivity).

Using likelihood alone isn’t a good idea because again the results are dependent on the parameterisation chosen…you could bias your results just by making a coordinate transformation. Plus you don’t get a probabilistic prediction.

Steve

ps: I’m talking about the *second* version of JP, the 1946 version not the 1939 version, which resolves the famous issue that the 1939 version had related to the mean and variance of the normal distribution.
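The coordinate-dependence Steve describes is easy to check numerically. A toy sketch (the Gaussian “observation” of 1/S and all numbers below are made up for illustration, not from any of the studies discussed):

```python
import numpy as np

# Pretend we observe the feedback parameter lam = 1/S with Gaussian
# error: lam_obs = 0.5, sd = 0.2 (invented numbers). Same likelihood,
# two different "ignorance" priors.
S = np.linspace(0.5, 20.0, 20_000)
lik = np.exp(-0.5 * ((1.0 / S - 0.5) / 0.2) ** 2)  # likelihood vs. S

def median_of(grid, density):
    cdf = np.cumsum(density)
    return grid[np.searchsorted(cdf / cdf[-1], 0.5)]

post_flat_in_S   = lik           # uniform prior on S
post_flat_in_lam = lik / S**2    # uniform prior on 1/S, mapped to S
                                 # (Jacobian |d(1/S)/dS| = 1/S^2)

m_S   = median_of(S, post_flat_in_S)
m_lam = median_of(S, post_flat_in_lam)
print(m_S > m_lam)  # flat-in-S pulls the estimate toward higher sensitivity
```

Same data, same likelihood; only the choice of which coordinate gets the “flat” prior differs, and the posterior medians disagree, with flat-in-S giving the higher value.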

Nic Lewis:

January 21st, 2013 at 12:11 PM

Steve, Ray

First, when I refer to an objective Bayesian method with a noninformative prior, that means using what would be the original Jeffreys’ prior for inferring a joint posterior distribution for all parameters, appropriately modified if necessary to give as accurate inference (marginal posteriors) for individual parameters as possible. In general, that would mean using Bernardo and Berger “reference priors”, one targeted at each parameter of interest. In the case of independent scale and location parameters, doing so would equate to the second version of the Jeffreys’ prior that Steve refers to. In practice, when estimating S and Kv, marginal parameter inference may be little different between using the original Jeffreys’ prior and targeted reference priors.

Secondly, here is a list of climate sensitivity studies that used a uniform prior for their main results when estimating climate sensitivity on its own, or that, when estimating climate sensitivity S jointly with effective ocean vertical diffusivity Kv (or any other parameter like those two in which observations are strongly nonlinear), used uniform priors for S and/or Kv.

This includes a large majority of the Bayesian climate studies that I could find.

Some of these papers also used other priors for climate sensitivity as alternatives, typically either informative “expert” priors, priors uniform in the climate feedback parameter (1/S) or in one case a uniform in TCR prior. Some also used as alternative nonuniform priors for Kv or other parameters being estimated.
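For reference, the Jeffreys prior discussed above is built from the Fisher information (a standard textbook statement, not specific to any of the papers listed):

```latex
p(\theta) \;\propto\; \sqrt{\det \mathcal{I}(\theta)},
\qquad
\mathcal{I}(\theta)_{ij} \;=\; -\,\mathbb{E}\!\left[
  \frac{\partial^2 \log L(x \mid \theta)}{\partial \theta_i \,\partial \theta_j}
\right],
```

so that under a reparameterization $\phi = h(\theta)$ the prior picks up exactly the Jacobian factor, $p(\phi) = p(\theta)\,\lvert d\theta/d\phi \rvert$, and the same posterior results in either coordinate system. For a normal distribution with known mean, for example, $\mathcal{I}(\sigma) = 2/\sigma^2$, giving $p(\sigma) \propto 1/\sigma$.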

Steve Jewson:

January 22nd, 2013 at 1:04 PM

Sorry to go on about it, but this prior thing is an important issue. So here are my 7 reasons why climate scientists should *never* use uniform priors for climate sensitivity, and why the IPCC report shouldn’t cite studies that use them.

It pains me a little to be so critical, especially as I know some of the authors listed in Nic Lewis’s post, but better to say this now, and give the IPCC authors some opportunity to think about it, than after the IPCC report is published.

1) *The results depend on the choice of coordinate system*

If the authors that Nic Lewis lists above had chosen different coordinate systems, they would have got different results. For instance, if they had used 1/S, or log S, as their coordinates, instead of S, the climate sensitivity distributions would change. Scientific results should not depend on the choice of coordinate system.

2) *If you use a uniform prior for S, someone might accuse you of choosing the prior to give high rates of climate change*

It just so happens that using S gives higher values for climate sensitivity than using 1/S or log S.

3) *The results may well be nonsense mathematically*

When you apply a statistical method to a complex model, you’d want to first check that the method gives sensible results on simple models. But flat priors often give nonsense when applied to simple models. A good example is if you try and fit a normal distribution to 10 data values using a flat prior for the variance… the final variance estimate you get is higher than anything that any of the standard methods will give you, and is really just nonsense: it’s extremely biased, and the resulting predictions of the normal are much too wide. If flat priors fail on such a simple example, we can’t trust them on more complex examples.

4) *You risk criticism from more or less the entire statistics community*

The problems with flat priors have been well understood by statisticians for decades. I don’t think there is a single statistician in the world who would argue that flat priors are a good way to represent lack of knowledge, or who would say that they should be used as a convention (except for location parameters…but climate sensitivity isn’t a location parameter).

5) *You risk criticism from scientists in many other disciplines too*

In many other scientific disciplines these issues are well understood, and in many disciplines it would be impossible to publish a paper using a flat prior. (Even worse, pensioners from the UK and mathematicians from the insurance industry may criticize you too :)).

6) *If your paper is cited in the IPCC report, IPCC may end up losing credibility*

These are much worse problems than getting the date of melting glaciers wrong. Uniform priors are a fundamentally unjustifiable methodology that gives invalid quantitative results. If these papers are cited in the IPCC, the risk is that critics will (quite rightly) heap criticism on the IPCC for relying on such stuff, and the credibility of IPCC and climate science will suffer as a result.

7) *There is a perfectly good alternative, that solves all these problems*

Harold Jeffreys grappled with the problem of uniform priors in the 1930s, came up with the Jeffreys’ prior (well, I guess he didn’t call it that), and wrote a book about it. It fixes all the above problems: it gives results which are coordinate independent and so not arbitrary in that sense, it gives sensible results that agree with other methods when applied to simple models, and it’s used in statistics and many other fields.

In Nic Lewis’s email (number 89 above), Nic describes a further refinement of the Jeffreys’ Prior, known as reference priors. Whether the 1946 version of Jeffreys’ Prior, or a reference prior, is the better choice, is a good topic for debate (although it’s a pretty technical question). But that debate does muddy the waters of this current discussion a little: the main point is that both of them are vastly preferable to uniform priors (and they are very similar anyway). If reference priors are too confusing, just use Jeffreys’ 1946 Prior. If you want to use the fanciest statistical technology, use reference priors.

ps: if you go to your local statistics department, 50% of the statisticians will agree with what I’ve written above. The other 50% will agree that uniform priors are rubbish, but will say that JP is rubbish too, and that you should give up trying to use any kind of noninformative prior. This second 50% are the subjective Bayesians, who say that probability is just a measure of personal beliefs. They will tell you to make up your own prior according to your prior beliefs. To my mind this is a non-starter in climate research, and maybe in science in general, since it removes all objectivity. That’s another debate that climate scientists need to get ready to be having over the next few years.

Steve
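Steve’s point 3 (the normal-distribution variance example) can be checked directly. A grid-based sketch under assumed conditions (the mean is integrated out under a flat prior, the data are 10 simulated N(0,1) draws, and the grid/seed choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 10)   # 10 data values from N(0, 1)
n = len(y)
s2 = y.var(ddof=1)             # standard unbiased variance estimate

# Marginal posterior for the variance v (mean integrated out under a
# flat prior on the mean), evaluated on a grid, under two priors for v:
v = np.linspace(1e-3, 30.0, 200_000)
log_lik = -(n - 1) / 2 * np.log(v) - (n - 1) * s2 / (2 * v)

def posterior_mean(log_prior):
    log_post = log_lik + log_prior
    w = np.exp(log_post - log_post.max())
    return np.sum(v * w) / np.sum(w)

mean_flat     = posterior_mean(np.zeros_like(v))  # flat prior on v
mean_jeffreys = posterior_mean(-np.log(v))        # Jeffreys-type prior, 1/v

# The flat prior inflates the variance estimate well beyond s2;
# analytically the ratios are (n-1)/(n-5) and (n-1)/(n-3) respectively.
print(mean_flat / s2, mean_jeffreys / s2)
```

With n = 10 the flat prior gives a posterior mean of 1.8 × s², versus about 1.29 × s² for the 1/v prior, illustrating exactly the bias Steve describes.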

simon abingdon:

January 25th, 2013 at 5:42 AM

This thread now appears under “Older Entries”. Maybe the dialogue between Nic Lewis and Steve Jewson merits some continuing attention, unless it is accepted (as Ray Ladbury has confidently asserted) that Climate Sensitivity is now a “mature field” with a trend around +2.8K generally agreed.

simon abingdon:

January 25th, 2013 at 5:57 AM

#66 [Response: Part III. – gavin] Any expected release date?

Ray Ladbury:

January 25th, 2013 at 10:01 AM

Steve Jewson,
I agree that Jeffrey’s Prior is attractive in a lot of situations. However, it is not clear that it would help in this case, is it? I mean in some cases, JP is flat.

Alexander Harvey:

January 25th, 2013 at 2:10 PM

To Steve Jewson:

Steve,

You have clarified many things that I have appreciated but have been unable to express with your clarity. I offer my thanks.

I have been aware that some specific choices of priors are necessary in even the most mundane of statistical issues, e.g. the choice of a prior for the variance (or standard deviation) of the normal distribution. Also that there are desirable properties that should be maintained (invariance under coordinate transformations), and that parameter estimators should be unbiased in at least one case of the mean, median, or mode. Such things are elemental requirements based on the generic class of the problem.

At the risk of disagreeing with you, I do have a problem with the notion of anything being non-informative. To me such priors would be better described as elementally, generically, ideally, or statistically informed, if that gets my intention across to you.

I doubt I have the reserves of wit to grapple with how the JP is derived from Fisher information. For my purposes, and I suspect for those required by the problem at hand, simpler arguments based directly on the need for invariance under coordinate transformations, specifically under a coordinate flip, and perhaps the desire for well-behaved estimators, would suffice and be easier for those such as I to comprehend.

As it happens, I have no objection to completely subjective priors providing people are prepared to hold the ensuing argument, and be clear and transparent in their reasoning, which I can either embrace or dismiss. That said, it seems that the JPs would be the better points for departure (as opposed to flat priors) to argue from.

That 1.9 C for CO2 doubling is not so unreasonable after all…

Dan H.:

January 25th, 2013 at 5:08 PM

nvw,
It seems that nature has a greater influence than first thought. So much for Simon’s claim above.

JCH:

January 25th, 2013 at 7:55 PM

If a dominance of La Nina/ocean variability is causing a hiatus, does that mean climate sensitivity is lower? Doesn’t sound right to me.

JohnL:

January 26th, 2013 at 6:49 AM

#96 from your link:
“When the researchers at CICERO and the Norwegian Computing Center applied their model and statistics to analyse temperature readings from the air and ocean for the period ending in 2000, they found that climate sensitivity to a doubling of atmospheric CO2 concentration will most likely be 3.7°C, which is somewhat higher than the IPCC prognosis.

But the researchers were surprised when they entered temperatures and other data from the decade 2000-2010 into the model; climate sensitivity was greatly reduced to a “mere” 1.9°C.”

Well, doesn’t this just show that the method is not robust? How can a single decade be enough to overturn previous results that much, especially when there are other straightforward methods explaining the differences in trend from empirical data, i.e. Foster/Rahmstorf 2012? I assume that climate sensitivity does not change with time.

January 26th, 2013 at 2:01 PM

Anyone seen the paper in #99? I cannot find a published article…

Nic Lewis:

January 26th, 2013 at 4:32 PM

Ray Ladbury #93
“I agree that Jeffrey’s Prior is attractive in a lot of situations. However, it is not clear that it would help in this case, is it? I mean in some cases, JP is flat”

The form of the Jeffreys’ prior depends on both the relationship of the observed variable(s) to the parameter(s) and the nature of the observational errors and other uncertainties, which determine the form of the likelihood function. Typically the JP is only uniform where the estimation is of a simple location parameter, with the measured variable being the parameter (or a linear function thereof) plus an error whose distribution is independent of the parameter.

Where (equilibrium/effective) climate sensitivity (S) is the only parameter being estimated, and the estimation method works directly from the observed variables (e.g., by regression, as in Forster and Gregory, 2006, or mean estimation, as in Gregory et al, 2002) over the instrumental period, then the JP for S will be almost of the form 1/S^2. That is equivalent to an almost uniform prior were instead 1/S, the climate feedback parameter (lambda), to be estimated.

The reason why a 1/S^2 prior is noninformative is that estimates of climate sensitivity depend on comparing changes in temperature with changes in {forcing minus the Earth’s net radiative balance (or its proxy, ocean heat uptake)}. Over the instrumental period, fractional uncertainty in the latter is very much larger than fractional uncertainty in temperature change measurements, and is approximately normally distributed.

There is really no valid argument against using a 1/S^2 prior in cases like Forster & Gregory, 2006 and Gregory et al, 2002, and that is what frequentist statistical methods implicitly use. For instance, Forster and Gregory, 2006, used linear regression of {forcing minus the Earth’s net radiative balance} on surface temperature, which as they stated implicitly used a uniform-in-lambda prior for lambda. When the normally distributed estimated PDF for lambda resulting from that approach is converted into a PDF for S, using the standard change-of-variables formula, that PDF implicitly uses a 1/S^2 prior for S. However, for presentation in the AR4 WG1 report (Fig. 9.20 and Table 3) the IPCC multiplied that PDF by S^2, converting it to a uniform-in-S prior basis, which is highly informative. As a result, the 95% bound on S shown in the AR4 report was 14.2°C, far higher than the 4.1°C bound reported in the study itself.
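[Editor’s illustration] The effect of the implicit prior described above can be checked numerically. The sketch below uses made-up numbers (the normal estimate for lambda is assumed, not the actual Forster & Gregory 2006 fit): it converts a normal PDF for the feedback parameter lambda into a PDF for S = F2x/lambda via the change-of-variables formula, then shows how multiplying by S² (a uniform-in-S prior) fattens the upper tail and inflates the 95% bound.

```python
import numpy as np

# Assumed illustrative numbers (NOT the actual Forster & Gregory 2006 fit):
# regression yields a normal estimate of the climate feedback parameter
# lambda (W/m^2/K), and S = F2x / lambda.
mu, sigma = 0.9, 0.35   # hypothetical mean/sd of the lambda estimate
F2x = 3.7               # forcing for doubled CO2 (W/m^2)

S = np.linspace(0.1, 20.0, 40001)   # sensitivity grid (K); note the cutoff
lam = F2x / S

# Change of variables: p_S(S) = p_lambda(F2x/S) * |d lambda / dS|
#                             = p_lambda(F2x/S) * F2x / S^2,
# so a uniform-in-lambda prior is equivalent to a 1/S^2 prior in S.
p_lam = np.exp(-0.5 * ((lam - mu) / sigma) ** 2)
p_jeffreys = p_lam * F2x / S**2   # implicit 1/S^2 prior (frequentist result)
p_uniform = p_lam                 # same likelihood multiplied by S^2

def bound95(p):
    # 95th percentile of a gridded, unnormalized PDF
    cdf = np.cumsum(p)
    return S[np.searchsorted(cdf / cdf[-1], 0.95)]

# The uniform-in-S posterior barely decays at large S, so its 95% bound
# sits near whatever cutoff the grid imposes -- the analogue of the
# truncation needed in AR4 -- while the 1/S^2 version gives a much
# lower bound from the same likelihood.
print(bound95(p_jeffreys), bound95(p_uniform))
```

The qualitative behaviour, not the specific numbers, is the point: the same likelihood yields a far higher upper bound once it is reweighted by S².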

Where climate sensitivity is estimated in studies involving comparing observations with values simulated by a forced climate model at varying parameter settings (see Appendix 9.B of AR4 WG1), the JP is likely to be different from what it would be were S estimated directly from the same underlying data. Where several parameters are estimated simultaneously, the JP will be a joint prior for all parameters and may well be a complex nonlinear function of the parameters.

Aaron Franklin:

January 31st, 2013 at 8:20 PM

I’m in need of some clarification on what we should be now using as a GWP for methane.

From Archer 2007:
…so a single molecule of additional methane has a larger impact on the radiation balance than a molecule of CO2, by about a factor of 24 (Wuebbles and Hayhoe, 2002)…
…To get an idea of the scale, we note that a doubling of methane from present-day concentration would be equivalent to a 60 ppm increase in CO2 from present-day, and 10 times present methane would be equivalent to about a doubling of CO2. A release of 500 Gton C as methane (order 10% of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2…
…The current inventory of methane in the atmosphere is about 3 Gton C. Therefore, the release of 1 Gton C of methane catastrophically to the atmosphere would raise the methane concentration by 33%. 10 Gton C would triple atmospheric methane.

– That previous GWP figures for methane need a ×1.8 correction factor…
We should be using a 20-yr GWP for methane of 130 or 180. That is 5.4 or 7.5 times the GWP of 24 that Archer 2007 appears to be using?

So maybe the above should say, looking at a 20-yr period (using the GWP of 100 becoming 180)?:

…..To get an idea of the scale, we note that a [100% increase/7.5= 13% increase] of methane from present-day concentration would be equivalent to 60 ppm increase in CO2 from present-day, and [10 times/7.5= 1.333times] present methane would be equivalent to about a doubling of CO2. A release of [500/7.5=66.7] Gton C as methane (order [10%/7.5=1.3%] of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2……

February 1st, 2013 at 12:06 PM

Aaron Franklin (102)

I wouldn’t go so far as to say that the collective climate science community has completely moved on from the idea, but I’d argue that GWP is a rather outdated and fairly useless metric for comparing various greenhouse gases. It is also very sensitive to the timescale over which it is calculated.
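[Editor’s illustration] The timescale sensitivity is easy to see with a back-of-envelope GWP calculation. This sketch uses direct forcing only: single-lifetime decay for CH4, an AR4 Bern-style impulse response for CO2, and commonly quoted per-ppb radiative efficiencies. Indirect methane effects are omitted, so the numbers undershoot the published IPCC GWPs; treat it as an order-of-magnitude sketch, not the official calculation.

```python
import math

# Per-ppb radiative efficiencies (W m^-2 ppb^-1) and molar masses (g/mol),
# combined into a per-kg forcing ratio of CH4 to CO2.
A_CO2, A_CH4 = 1.4e-5, 3.7e-4
per_kg_ratio = (A_CH4 / A_CO2) * (44.0 / 16.0)

def agwp_ch4(H, tau=12.0):
    # Absolute GWP of a CH4 pulse: integral of exp(-t/tau) from 0 to H,
    # assuming a single ~12-yr atmospheric lifetime
    return tau * (1.0 - math.exp(-H / tau))

def agwp_co2(H):
    # CO2 impulse response a0 + sum(ai*exp(-t/tau_i)), integrated to H
    # (coefficients as in the AR4 Bern carbon-cycle fit)
    terms = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]
    total = 0.217 * H   # the effectively permanent airborne fraction
    for ai, taui in terms:
        total += ai * taui * (1.0 - math.exp(-H / taui))
    return total

def gwp(H):
    # GWP = ratio of time-integrated forcings per kg emitted
    return per_kg_ratio * agwp_ch4(H) / agwp_co2(H)

print(round(gwp(20), 1), round(gwp(100), 1))
```

With these assumptions the 20-yr value comes out roughly three times the 100-yr value (about 50 versus 18), which is the timescale sensitivity in question; AR4’s published values (72 and 25) are higher mainly because they include indirect effects.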

It’s correct that an extra methane molecule is something like 25 times more influential than an extra CO2 molecule, although that ratio is primarily determined by the background atmospheric concentrations of the two gases, and GWP typically assumes that forcing is linear in the emission pulse, which is not valid for very large perturbations. But because there’s not much methane to begin with, it’s not true that 1.33× methane has more impact than a doubling of CO2 (we’ve already increased methane by well over this amount)… a doubling of methane doesn’t have nearly as much impact as a doubling of CO2.
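[Editor’s illustration] The doubling comparison can be checked with the simplified radiative forcing expressions of Myhre et al. (1998). In this sketch the small CH4–N2O band-overlap term is dropped, and the present-day concentrations are rough round numbers I have assumed.

```python
import math

def f_co2(c, c0):
    # Simplified CO2 forcing (W/m^2); concentrations in ppm
    return 5.35 * math.log(c / c0)

def f_ch4(m, m0):
    # Simplified CH4 forcing (W/m^2); concentrations in ppb.
    # The CH4-N2O band-overlap correction is omitted here.
    return 0.036 * (math.sqrt(m) - math.sqrt(m0))

co2_2x = f_co2(2 * 390.0, 390.0)     # doubling CO2 (~390 ppm assumed)
ch4_2x = f_ch4(2 * 1800.0, 1800.0)   # doubling CH4 (~1800 ppb assumed)
print(co2_2x, ch4_2x)
```

Roughly 3.7 W/m² against about 0.6 W/m²: the square-root concentration dependence for methane, on top of its much lower abundance, is why doubling methane delivers only a fraction of the forcing of doubling CO2.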

The key point, however, is the much longer residence time of CO2 in the atmosphere…GWP tries to address this in its own mystical way, but there are much better ways of thinking about the issue. See the recent paper from Susan Solomon, Ray Pierrehumbert, and others.