The AR4 attribution statement

“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”

This is a clear statement that I think is very well supported and correctly reflects the opinion of most climate scientists on the subject (and was re-affirmed in two recent papers (Jones and Stott, 2011; Huber and Knutti, 2011)). It isn’t an isolated conclusion from a single study, but comes from an assessment of the changing patterns of surface and tropospheric warming, stratospheric cooling, ocean heat content changes, land-ocean contrasts, etc. that collectively demonstrate that there are detectable changes occurring which we can attempt to attribute to one or more physical causes.

Yet, in a paper just out in BAMS (Curry and Webster, 2011), this statement is apparently evidence that the IPCC is unable to deal with uncertainty. Furthermore, Judith Curry has reiterated on her blog that the term ‘most’ is imprecise and undefined. For instance:

Apart from the undefined meaning of “most” in AR4 (which was subsequently clarified by the IPCC), the range 50.1-95% is rather imprecise in the context of attribution.

However, Curry’s argument is far from convincing, nor is it well formed (why is there a cap at 95%?). Nor was it convincing when I discussed the issue with her in the comments at Collide-a-Scape last year, where she made similar points. Since the C&W paper basically repeats that argument (as has also been noticed by Gabi Hegerl and colleagues, who have a comment on the paper (Hegerl et al.)), it is perhaps worth addressing these specific issues again.

Let’s start with what the statement actually means. “Most” is an unambiguous adjective (meaning more than half), and ‘very likely’ in IPCC-speak means that the statement is being made with between 90 and 99% confidence (i.e. for every 10 such statements, the scientists expect 9 or more to pan out). Given that some people have found this confusing, it may help somewhat if the contents of the statement are visualised:

Figure 1: Two schematic distributions of possible ‘anthropogenic GHG contributions’ to the warming over the last 50 years. Note that in each case, despite a difference in the mean and variance, the probability of being below 50% is exactly 0.1 (i.e. a 10% likelihood).

The figure shows two Gaussian distributions, both of which have the probability of x being less than 50 at 0.1, i.e. P(x<50)=0.1. If either of them had been the distribution of the estimated increase in global temperatures due to anthropogenic greenhouse gas increases relative to the observed increase, the IPCC statement would have been almost exactly correct (i.e. if x=100*trend_caused_by_GHG/actual_trend). These distributions show a number of key issues that need to be appreciated. First, the actual increase of temperatures purely due to the rise in GHGs is not precisely known (and therefore there is a distribution of potential values). Note that we are presuming that there is a single ‘true’ answer, so the distribution is a measure of our ignorance, not a claim that the answer itself is a random variable.
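The construction behind Figure 1 can be checked numerically. This is a minimal sketch (not the code behind the actual figure): the means of 80% and 100% are taken from the figure, while the standard deviations are back-solved from the requirement that exactly 10% of each distribution lies below 50%.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ N(mu, sigma)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# z-score whose lower tail holds 10% of a standard normal: Phi(-z10) = 0.1
z10 = 1.2815515655446004

for mean in (80.0, 100.0):
    # back-solve the sigma that puts exactly 10% of the mass below x = 50
    sigma = (mean - 50.0) / z10
    p = normal_cdf(50.0, mean, sigma)
    print(f"mean={mean:.0f}%  sigma={sigma:.1f}%  P(x<50%)={p:.3f}")
```

Both curves print P(x<50%)=0.100 despite different means and variances, which is the whole point of the figure.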

Second, the IPCC statement is not a declaration about what the most likely value of ‘x’ is. It states merely that P(x>50%) is at least 0.9. In the two figures, one has the mean value of x at 80%, while the other has the mean value at 100%. Both fit the IPCC statement equally well. Some people have interpreted the IPCC statement by confusing the likelihood of the statement with the actual relative trend (i.e. thinking that the 90% refers to the expected attribution), but that would be a big misreading of the text.

Third, there is certainly a potential for the increase in temperatures due to anthropogenic GHG changes to be greater than the observed trend because we know that there have been both natural (volcanic and solar) and human-caused (reflective aerosols, land use change) factors that are expected to have led to cooling over the post-1950 period (therefore there is no cut-off at 95% of the actual trend). The actual trend will be a function of the warming factors, balanced by the cooling factors. And of the warming factors, the well-mixed greenhouse gas (CO2, CH4, N2O, CFCs) changes are the dominant term (about 75% of the increase in warming factors from 1950; the rest is related to black carbon effects, ozone, etc.).

Fourth, the statement clearly encompasses many different estimates of what the actual trends are being driven by and is not therefore a particularly strong conclusion. Myles Allen (Allen, 2011) points out that during the drafting, the text was changed from ‘contributed substantially’ to ‘most’, and focused on greenhouse gases rather than the total anthropogenic effect specifically in order to have a more quantitative conclusion and more justifiable statement.

Now let’s put some real numbers in here. Attribution is fundamentally a modelling task, and the principal models that can be used are the coupled GCMs – at least to start with. What do they estimate the warming trend from the well-mixed GHGs to have been over the last 50 years? The figure below shows this for some of the GISS CMIP5 models (more model data can be downloaded from the CMIP5 portal):

The 50 year trends (here, from 1956 to 2005, 5 ensemble members) are 0.84ºC (range [0.79,0.92]) for just greenhouse gas forcing, and 0.67ºC (range [0.54,0.76]) for the all-forcings case (in CMIP3, the envelope of the all-forcing trends is [0.4,1.3], or equivalently 0.74 +/- 0.22ºC (1 sigma spread) using 55 individual model simulations – the wider spread reflecting structural variations in the models and forcings). As in the more recent model simulations, the GISS CMIP3 50 year trend using only well-mixed GHG forcings is around 0.1ºC more than the ‘all-forcing’ case (data here).

The actual observed trend depends a little on the dataset used, but is around 0.6 +/- 0.05ºC (1 sigma uncertainty in the OLS fit). If we then estimate the percentage (as illustrated above), assuming a 0.2ºC sigma in the model spread, ‘x’ is roughly 140% +/- 35% (1 sigma). If we interpreted that range as a Gaussian distribution (not really a good idea, but simple enough for illustration), we’d estimate that P(x<50%) would be less than 1% (even less likely than the IPCC AR4 statement allowed for).
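Treating the illustrative x ≈ 140% ± 35% range as a Gaussian, the tail probability quoted above follows in a couple of lines. This is a sketch of the back-of-envelope calculation only, not a formal attribution:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ N(mu, sigma)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# x = 100 * trend_caused_by_GHG / actual_trend, illustrated as N(140, 35)
p = normal_cdf(50.0, 140.0, 35.0)
print(f"P(x < 50%) = {p:.4f}")  # roughly 0.005, i.e. well under 1%
```

The 50% threshold sits about 2.6 sigma below the mean, hence a tail probability far smaller than the 10% the AR4 statement allowed for.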

There are good reasons why the IPCC assessed that the probability was not as low as suggested by the models or any individual attribution paper. Specifically, the overall assessment must take into account potential structural uncertainties that don’t come into the straight model analysis. For instance, the models may systematically be overestimating the GHG-driven trend, they may be underestimating the internal variability, and they may be undersampling the structural uncertainty in making models themselves. The first kind of error would cause an overestimate in the mean of the distribution, while the other factors would cause an underestimate in the variance of the trends – all would increase P(x < 50%). On the other hand, the net forcing is almost certainly less than the effect of anthropogenic GHGs alone, and so that biases the mean of the ‘all-forcings’ trends low, and some of the spread in the trends is related to different models having different forcings (biasing the spread wide). These elements can be quantified during the attribution (using fingerprint scaling, Monte Carlo emulators, etc.), but when they are all taken into account, the difference is less than one might think (it turns out that structural uncertainty likely isn’t being underestimated, and the internal variability in models comfortably spans the range inferred in the real world (Yokohata et al., 2011; Santer et al., 2011)).
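The back-of-envelope estimate can also be redone by Monte Carlo, propagating both the model spread and the observational uncertainty quoted above (0.84 ± 0.2 ºC for the GHG-only trend, 0.6 ± 0.05 ºC observed). This is only a sketch of the sampling idea with those illustrative numbers, not the fingerprint machinery used in real attribution studies:

```python
import random

random.seed(42)
N = 200_000
below = 0
for _ in range(N):
    # GHG-only 50-year trend: 0.84C central estimate, 0.2C spread (as in the post)
    t_ghg = random.gauss(0.84, 0.20)
    # observed 50-year trend: 0.6C with 0.05C OLS uncertainty (as in the post)
    t_obs = random.gauss(0.60, 0.05)
    # the event "GHGs caused less than half of the observed trend", i.e. x < 50%
    if t_ghg < 0.5 * t_obs:
        below += 1

print(f"Monte Carlo estimate of P(x < 50%): {below / N:.4f}")
```

Comparing t_ghg against 0.5*t_obs directly avoids dividing by a noisy denominator; the result again comes out well under 1%.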

Curry and Webster specifically bring up two issues that, they claim, lessen the confidence one should have in the IPCC statement: that the history of solar forcing is uncertain in scale, and that aerosol forcings have a huge error bar. These two statements are true as far as they go – the scale of solar forcing is not tightly constrained prior to about 1960, and the total aerosol forcing and its variation in time is uncertain. But C&W’s specific complaint is that the attribution studies used in AR4 used solar forcing that was too large compared to more recent studies. However, reducing any warming trend associated with solar actually makes the attribution statement more likely, which somewhat undercuts their point.

With respect to aerosols, the key thing to remember is that regardless of the magnitude of the change, the sign of the forcing is almost certainly negative (i.e. the net aerosol effect has been one of cooling). The dominant anthropogenic aerosols are sulphates (derived from the SO2 emitted during the burning of sulphur-containing fossil fuels), which are reflective, and hence cooling. Other aerosols (black carbon, organic carbon, nitrates) are more uncertain, but have a net effect that is smaller.

Now, the statement in AR4 specifically states that the effect of greenhouse gases is more than half of the observed trend, which is actually independent of the effects of aerosols. But with the high probability of aerosols being a net cooling, this increases the ratio of the GHG-driven trends to the actual forced trend.

The final issue is whether the internal variability of the system on multi-decadal timescales has been properly characterised. For instance, it is possible that all the models grossly underestimate the internal variability, in which case any expected trend due to GHGs would be drowned out in the noise. But there is no positive evidence for this at all – as Hegerl et al point out, the estimates of multi-decadal variability in the models and observational records all overlap within their (substantial) uncertainties (arising from the shortness of the record, and the difficulty in estimating internal variability in the presence of multiple forcings). So while it is conceivable that there is a bias, it is currently undetectable, which implies it can’t be that large.

In summary then, the IPCC AR4 statement was a fair, even conservative, assessment. There is an unfortunate tendency to reify the particular statements made by IPCC, since there were clearly other correct statements that could have been made. For instance, it might well have been worthwhile to add a statement about the likely range of the anthropogenic trends (i.e. 80-120% of the actual trend, or similar), so that a better picture of the appropriate distribution could be given (see Huber and Knutti (2011) for examples). But claims that the statement was unsupported, or that it demonstrated that IPCC was ignoring uncertainty, are simply untenable.

The next iteration (IPCC AR5) is now underway, but given the early results of the CMIP5 models (which are on the whole very similar, as discussed at fall AGU), and more recent literature on this issue (see refs below), I see no reason why the conclusions in AR5 will be much different. But if anyone still finds the assessment confusing, they have an opportunity to make their points via the IPCC review process, and the resulting conclusions will likely be clearer because of them.

References

G.S. Jones and P.A. Stott, "Sensitivity of the attribution of near surface temperature warming to the choice of observational dataset", Geophysical Research Letters, vol. 38, 2011. http://dx.doi.org/10.1029/2011GL049324

If a car was speeding toward a cliff with a severed brake line, the driver stepped on the brakes and the car didn’t stop, what would be Judith Curry’s assigned likelihood that the car went over the cliff because the brakes didn’t work?

I have been fascinated by the unfettered responses in Curry’s blog. I wonder if she has ever explained why she almost never responds to comments on her site that are totally beyond even her purportedly ambivalent approach, whereas she is often right on top of any exaggeration of comments that support ACC.
At first I thought that maybe she was taking a pedagogic approach, allowing people to express their opinions and thinking the openness would allow reasonable examination of evidence to rise to the top. After a few years of this clearly not happening, she doesn’t appear to be interested in her site developing an accurate perspective.
My limited view is that either naively, or for ideological reasons, neither she nor Pielke Jr. is willing to accept that there is a rigid mentality among ideologues that does not allow them to modify their views regardless of the mass of evidence against them. Change often comes through a radical reassessment and a bounce to the opposite side: for instance, former Marxists becoming neocons, or the extremely religious becoming hedonist atheists.
To me, the pretty easily verifiable observation – that many denialists consider climate scientists the enemy and as such not part of a rational dialogue – makes having blogs such as Curry’s and Pielke’s rather meaningless.
On the other hand I have learned a lot from their sites and even extreme denialist sites because I have had to search for information in places like RC that I otherwise would not have done.

It must be very frustrating for you, professors, that you must engage in two debates: the public “debate” and the genuine one. This has been said before many times, but to reiterate, thank you.
AR4 is five years old. Why attack it now? To provide a priori justification for similar criticism of AR5?

(1) “(i) to the greatest or highest degree often used with an adjective or adverb to form the superlative (ii) to a very great degree” — Webster’s Ninth Collegiate Dictionary
(2) “in the greatest quantity, amount, measure, degree, or number” — American College Dictionary
(3) “with reference to amount or degree. as a superlative of comparison: greatest in degree or extent… .” — Compact Edition, Oxford English Dictionary.

I don’t know how good a scientist Dr. Curry is, but her lack of understanding of the English language is abysmal (“immeasurably low or wretched” –Webster)! Her papers must be a real joy to read.

Isn’t the real answer to C&W in this remark by physics Nobelist Murray Gell-Mann: “Is it really, really so extremely difficult to persuade people that climate, which is average weather, can have three contributions that add to one another? That is, some cyclical effects, some random noise and a secular steadily rising trend from human activity?”
H/T Dot Earth

The point is that – Yes it is really difficult! Even climate scientists can’t understand that. Forget the random noise and just think about the cyclical effect. At some points it is adding to the rising trend but at others it is subtracting. At present the cyclical effect is preventing the rising trend from being obvious.

Another way to look at this is that natural effects do not have to be adding to global warming. They could be cooling effects, and it is only the increased forcing from the anthropogenic greenhouse effect that has led to the rise in global temperatures. The natural effect could be -30% and the greenhouse effect 130%. The anthropogenic effect does not have to be less than 100%.

So in my book the IPCC are wrong to quote a 90% probability that 100% of the global warming is man-made. They should be quoting 100% probability that >100% of global warming is anthropogenic.

I see three themes in Judy’s work since her descent into madness:
1)A desire to find a “compromise” with the denialosphere, which, while noble in spirit, ignores the fact that truth cannot compromise with mendacity.

2)A strong contrarianism, wherein she will embrace just about any nutty idea so long as it runs against the mainstream.

3)A craving for the adulation of idiots even if it costs her the respect of colleagues.

It appears to me that Judy is staking her legacy on the remote possibility that climate science has been flat wrong for 160 years–perhaps with the confidence that if she is right, she’ll be famous, and if wrong no more forgotten than she would have been in any case.

You find some problems with the BAMS paper here, and the linked Hegerl et al. comment notes specific erroneous statements about AR4. Wouldn’t the review process normally weed out such errors in a short paper, especially when they had also been noted on line?

Comment: I believe this is my first comment on the site. Just wanted to say thanks for the great work. I’ve been reading your posts for the last two years and as a recent graduate student, it’s a real treat to read posts from this blog.

Question: Why use a Gaussian distribution? I realize it skews the tail of the distribution to the right, but it’s not sticking….

“It isn’t an isolated conclusion from a single study, but comes from an assessment of the changing patterns of surface and tropospheric warming, stratospheric cooling, ocean heat content changes, land-ocean contrasts, etc. that collectively demonstrate that there are detectable changes occurring which we can attempt to attribute to one or more physical causes.”

It is my understanding, based on questions that I asked here previously that you kindly answered, and assuming that I recall and understood your answers correctly, that (i) the changing patterns of surface and tropospheric warming are a signature of warming from any cause and not in fact a unique GHG signature; that (ii) ocean heat content changes are an indication of externally forced warming (e.g. solar or GHG) but not uniquely a GHG signature; that (iii) stratospheric cooling in the lower stratosphere is a signature both of anthropogenic ozone depletion and GHG increases whereas cooling in the mid stratosphere is uniquely a signature of increased GHG concentrations; that (iv) land-ocean contrasts are also a feature of any warming whatever the cause.

On the subject of stratospheric cooling, the only feature that you say is a unique signature of anthropogenic GHG emissions, I noted subsequently that Prof. Held wrote about stratospheric cooling due to CO2 in passing at his blog (“Ultra fast responses”, 22 Jan 2012):

“The classic example [see the post for complete understanding of the context] is the cooling of the stratosphere due to increasing CO2. This cooling has no direct connection to the surface/tropospheric warming. If there were a strong negative cloud feedback, say, that prevented the surface/troposphere from warming, the stratospheric cooling would be hardly affected. But this stratospheric cooling does have a substantial effect on N, the energy balance at the top of the atmosphere…”

Do you agree with Prof. Held? Because it seems to be implied that if the choice is between the theories of Prof. Lindzen and others who maintain the existence of strong negative feedbacks in the tropics, then the existence of stratospheric cooling can not be used to decide which theory is right (i.e. Lindzen or the IPCC).

[Response: You are conflating the many steps in the argument. OHC increases indicate that changes are related to a net global radiative forcing. Stratospheric cooling rules out other forcings – like solar – from being the dominant cause. Spatial patterns of warming distinguish between aerosol and GHG effects etc. Non-negligible climate sensitivity is mandated by the paleo-climate record. So while any one single issue might be explainable by some other mechanism, there is no individual other mechanism (or combination) that fits everything to the same extent as the GHGs. Thinking otherwise is clutching at straws. – gavin]

I have also studied the recent paper by Gillett et al. 2011 (Attribution of observed changes in stratospheric ozone and temperature. Atmos. Chem. Phys, 599-609) and this paper concluded that the influence of GHGs on stratospheric temperature can not be detected independently of ozone depleting substances – even in the mid stratosphere. (Have I read the paper correctly?)

[Response: Haven’t read it, but if they are looking at the SSU data, I don’t know how they could come to that conclusion. I’ll look when I get a chance. – gavin]

In summary, then, it appears to me that the strongest valid conclusion that we can draw from the lines of empirical evidence that you refer to is that the earth is definitely warming and the warming must be caused by external forcing. It seems to me that in order to draw the stronger conclusion that the IPCC wants to draw, i.e. that,

“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”,

then we really do require the further assumptions (a) the GCM model results are valid; and (b) that absence of evidence of other external forcings than CO2 is evidence of absence of the same.

Am I correct?

[Response: No. It doesn’t require GCMs – attribution requires a model of course, but not necessarily a GCM. And there is no assumption that there are no other external forcings. Rather you are positing that despite the currently known forcings explaining the situation well, there must be some factor which negates all of that, and an additional unknown forcing or forcings that has exactly the same net effect. I (and the IPCC) find that scenario very unlikely. – gavin]

Curry’s problem with the definition of “most” reminds me of Bill Clinton’s “It depends on what the meaning of the word ‘is’ is.”

The difference is that Clinton’s assertion was valid: “is” has different tenses, resulting in ambiguity in a question. For instance, “Is there a king of France” could be answered either “No” because there is currently no king of France, or “Yes” because there are lists of numerous kings of France, and each person on that list is a king of France. (In Clinton’s case, the question was about a statement by Monica Lewinsky, “There is no sexual relationship of any kind between me and President Clinton” — this was true at the time she said it. You can put this one right up there with Al Gore saying that he invented the Internet — he never said it, but what he did say was true, as is what he said about Love Story and growing up on a farm … also, he won the debate with GWB by at least 10%, according to all the network polls taken that night, with people disgusted by Bush’s dismissive blather about “fuzzy math”; the notion that Gore lost because of arrogant eye-rolling was a false story promulgated by the media and now accepted as fact. Don’t be a gullible anti-skeptic who swallows political propaganda.)

Interestingly, an article yesterday in the UK paper the Mail on Sunday reported that the UK Met Office and the Climatic Research Unit at the University of East Anglia (headed by Professor Phil Jones) have issued a report, based on the data from more than 30,000 measuring stations, that “confirms that the rising trend in world temperatures ended in 1997″. This is not really consistent with the position taken on this and other sites that global warming is still increasing. Are the Met Office and the CRU mistaken?

[Response: Given a choice between the Met Office being mistaken and the Daily Mail mangling a story, I’d bet on the latter. There is plenty of short term variability in temperatures, and short term trends are not predictive of longer term ones. Thus claiming that the trend from 1997 or last Tuesday proves that global warming has stopped is simply not justifiable – Indeed, there is plenty of evidence that the long term trends have not changed much at all – see here for instance. I will have an update post on this later this week. – gavin]

“With respect to aerosols, the key thing to remember is that regardless of the magnitude of the change, the sign of the forcing is almost certainly negative (i.e. the net aerosol effect has been one of cooling). The dominant anthropogenic aerosols are sulphates (derived from the SO2 emitted during the burning of sulphur-containing fossil fuels), which are reflective, and hence cooling. Other aerosols (black carbon, organic carbon, nitrates) are more uncertain, but have a net effect that is smaller.”

The issue, though, is not whether the net effect of aerosols is to cool or heat. The real issue is the direction in which this effect has been heading. It is, for instance, arguable that with the clean-up of dirty power plants in most of the developed world, the negative impact was reduced. Of course, more recently China and India may have reversed this trend.

The IPCC’s method of formulating probabilistic statements seems pretty clear and unambiguous to me. In fact, they are very similar to the PAC (probably approximately correct) bounds that are used in computational learning theory (a rather mathematical branch of statistics concerned with how much can be learned from data). If there were a problem with the ambiguity of statements of that form, then the COLT crowd wouldn’t be interested in them.

It is ironic that someone with such a weak grasp of probabilistic reasoning should be writing papers on the subject, it is a recipe for the Dunning-Kruger effect.

Curry is indulging in some theoretical quasi-philosophical musings. Maybe-this-and-maybe-that kind of stuff.

People interested in the specific scientific questions – what’s the evidence for x – naturally find her assertions infuriatingly vague, and ask for some kind of evidentiary substantiation. And they are almost always disappointed.

Alex Harvey,
You know, when you have to twist logic to that extent to justify your point of view, I’ve found it’s a pretty good indication that your point of view is just flat wrong. Anthropogenic GHGs can explain both the stratospheric cooling and tropospheric warming–in fact they were the basis for predicting these effects well in advance of their observation.

In science we usually subject hypotheses to rather more stringent criteria than the straight-face test.

Re: #22 … Thank you. I’ve become as wearied by the persistent invocations of the Clintonian “is” parsing as I have by the Gore/internet fable. Read the transcript of the interrogation and it’s plain his response was to request a legitimate clarification of what was being asked. OT, I know (although the carelessness exhibited in flaunting the “is” moment is not unlike that of the climate change deniers waving random data).

How can a greenhouse gas cause warming in one layer of the atmosphere but cooling in another? I’ve read the assertions by Dr. Uherek but don’t see compelling logic to support the claim. If CO2 “traps” heat (in the troposphere), then it will trap heat everywhere.

[Response: This is the danger of taking analogies too far. What CO2 does is absorb energy at specific frequencies depending on how much of that energy is around, and emit energy at those same frequencies as a function of the local temperature. In the troposphere, there is plenty of upwelling IR in the right range, and so CO2 does a lot of absorption and the radiation to the surface from the GHGs warms the surface more than it would have been. In the stratosphere, there is less radiation in those specific bands, and so increases in CO2 increase the emission more than the absorption, thus cooling those layers. – gavin]
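The competition between absorption and emission described in the response can be caricatured with a deliberately crude single-layer energy balance (all numbers below are invented for illustration, and the sketch omits the absorbed upwelling IR entirely): a stratospheric layer heated at a fixed rate by ozone absorbing UV cools more efficiently as its CO2-driven IR emissivity rises, so its equilibrium temperature falls.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_T(Q, emissivity):
    """Layer heated at a fixed rate Q (standing in for ozone UV absorption)
    and cooled by grey-body IR emission: Q = emissivity * SIGMA * T^4."""
    return (Q / (emissivity * SIGMA)) ** 0.25

Q = 10.0  # W m^-2, an invented heating rate for illustration
for eps in (0.10, 0.12):  # more CO2 -> larger effective IR emissivity
    print(f"emissivity={eps:.2f}  T_eq={equilibrium_T(Q, eps):.1f} K")
```

Raising the emissivity from 0.10 to 0.12 drops the equilibrium temperature by several kelvin: more CO2, more efficient emission, cooler layer. The troposphere behaves differently because there the absorbed IR term, neglected here, dominates.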

Permit me a slight tweak. Attribution, writ large, requires a logic structure (engineers call these “fault trees”), but not a model, per se. In this case, the model is used to bridge the gap between the physics of GHGs and predictions about future climate. In the vast majority of fields of inquiry, the gap between theory and prediction is bridged with empirical results via input/output testing. Ideally, the empirical results eliminate all possible root causes until only one survives. This is still a “very likely” attribution, since input/output testing results can be misinterpreted. Unfortunately, climate science does not have the luxury of input/output testing of the climate, and so relies almost exclusively on models to yield predictions. With attribution assessed with model output rather than empirical test output, there is simply no way to assess probability in any meaningful way.

[Response: Your distinction between ‘input/output’ testing and ‘models’ is lost on me. A GCM is a model, but so is the equation y = mx+b used to fit data to a straight line. So is any sort of input/output testing that engineers use. Indeed, all of science and engineering depends on models to relate theory to empirical observations. There is nothing distinct or different about climate. The challenge of course is that the timescales are very long, so we cannot test all predictions in real time. The same is true of much of engineering of course. Although it would be ideal, when building a bridge, to drive heavier and heavier trucks over it until it collapses, and then rebuild the bridge with a sign saying what the load limit is, bridge engineers don’t do that. They use prior observations and yes, models, to calculate probabilities. And yes, those probabilities are meaningful.–eric]
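To make the point concrete that even y = mx+b is a model: the sketch below fits an OLS trend with its 1-sigma slope uncertainty, which is the same kind of machinery behind the 0.6 ± 0.05 ºC observed-trend estimate in the post. The data series here is synthetic (invented trend and noise levels), purely for illustration.

```python
import random

random.seed(0)
years = list(range(50))
# synthetic "observations": 0.012 C/yr trend plus interannual noise (invented numbers)
temps = [0.012 * t + random.gauss(0.0, 0.1) for t in years]

n = len(years)
xbar = sum(years) / n
ybar = sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx
intercept = ybar - slope * xbar

# residual variance -> standard error of the fitted slope
resid2 = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(years, temps))
se_slope = (resid2 / (n - 2) / sxx) ** 0.5

print(f"50-yr trend = {50 * slope:.2f} C  (slope {slope:.4f} +/- {se_slope:.4f} C/yr)")
```

The fitted slope recovers the known input trend within its quoted uncertainty, which is exactly the sense in which a simple regression "model" relates theory to observations.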

The attribution of stratospheric temperature changes to CO2 and ODS changes is quite different from the traditional approach, i.e. attribution to CO2 and prescribed ozone changes. The latter is simpler, but it neglects the ozone-temperature feedback; Gillett et al. tried the former, but they were not (yet) able to untangle the influences of CO2 and ODS.

Ian@23 The Met Office has responded to the Daily Mail’s misrepresentation

“Today the Mail on Sunday published a story written by David Rose entitled “Forget global warming – it’s Cycle 25 we need to worry about”.

This article includes numerous errors in the reporting of published peer reviewed science undertaken by the Met Office Hadley Centre and for Mr. Rose to suggest that the latest global temperatures available show no warming in the last 15 years is entirely misleading.”

The “Cycle 25″ stuff I think originates with David Archibald at WTF, ending with a chart displaying a linear extrapolation predicting Cycle 25 will have almost no sunspots. The hype omits mention of Archibald and WTF and attributes it to “NASA” — devious, aren’t they?

“It is important to note that it is always risky to extrapolate linear trends; but the importance of the implications from making such an assumption justify its mention. …. if a large number of sunspots with magnetic field strengths greater than 3000 Gauss do appear, then the extrapolated PDF will be shown to be erroneous. We will see in the coming months and years.”

aside — has anyone used the CMIP data to chart what temperatures would have been -without- the increased use of fossil fuels (removing both the greenhouse warming and the sulfate cooling effects)? It’s worth remembering that natural variation (with or without a sunspot extended minimum) is significant.

[Response: Yes. The CMIP protocol includes “historicalNat” runs which are only using volcanoes and solar as 20th C forcings. I’ll make a figure when I get a chance. – gavin]

[Response: What a strange comment. It mixes up projections with hindcasts, conflates the IPCC statement with a strawman that no other factors have any effect, and finishes up with an apparent claim that because there is month to month variability in the MSU data, the long term trends are not attributable. Plus the confusion about the dominant factors in MSU4 (hint, it isn’t CO2). – gavin]

I was struck by the margin of caution between the AR4 attribution statement, p(x<50%) < 10%, and what you got from the simplified attribution exercise above, p(x<50%) < 1%. Would this reflect the actual margin between that statement and the confidence levels in the formal attribution studies it sums up, too?

I sometimes find myself having to explain that “expert judgment” in this context does not mean simply a poll of scientists’ subjective views, but comes out of doing the math. Would it also be appropriate to describe the expert judgment as ten times more “conservative” than the calculations on which it is based?

At what level of certainty are the terms “could”, “might” and “should” become “can”, “shall” and “does”? If AGW scenarios are so certain, why are they not predictions?

[Response: Because there are a huge number of paths that society might take into the future which will affect emissions. Thus all of the specific model simulations, which rely on scenarios of socio-economic-technological change, can’t possibly be definitive. What can be definitive are statements like, all else being equal, increasing GHGs will lead to significant temperature changes and consequent changes in many aspects of the climate. But note that certainty of a statement is inversely proportional to the detail contained within it. – gavin]

If AGW is not a theory, not even up to the level of a hypothesis, then uncertainty turns the demonstrative or imperative into the conditional or even the subjunctive. Are we not there yet, with “very likely” and “95%”? What would it take to remove the conditionals? I understand that science always has error bars, but if the error bars are actually small, the “science” is really now “applied engineering”.

If AGW is both settled and certain, I expect, as for Einstein’s “scenarios” of relativity, that there are hard statements to be made about other events and the future that can be checked for accuracy. If the reason is that climate science is young and the subject, complex, then the certainty is not high. “Very likely” is “Possible”, not even “Probable”. We expect little from something possible, though, like drinking contaminated water in the Rocky Mountains, the possibility is sufficiently alarming to make us stick to bottled fluids.

There appears to me to be a disconnect between the actual certainty of process and outcome and the ability to make predictive statements based on the certainty proposed. This is not to say that we should not act on the potential outcome, but that we should be realistic about the actual threat. If they are settled and certain, then let’s test them. If, after 24 years, we still can’t make predictive statements about the next decade, such that neither the nominal nor the catastrophic case can be ruled out, we should know.

IPCC AGW scenarios are Noahian deluge, Ark-worthy, with the “very likely” and “95%” certainty subtext. But unlike God’s sudden storms, we are facing an incremental, progressive threat (even if, with time, things get worse). Let’s see what and when things will happen.

You can’t reasonably have it both ways: both settled/certain and impossible to predict.

That post by Pielke senior is just idiotic to the point of being offensive.

Nice smack down of Curry and Webster by Hegerl et al. What a waste of their time having to refute such bad science.

I am still in disbelief that BAMS published Curry and Webster’s paper; also, after reading Hegerl et al.’s response, I do not know how Curry and Webster made it through peer review. So much for gatekeeping.

I haven’t read the C&W paper, but I did look at her Santa Fe power point slides. :) Curry has stopped being a scientist long ago. A scientist concerned about the uncertainties would try to quantify them. A propagandist would simply raise them.

This was in the context of Dr. C’s agreement with Dr. Muller’s incorrect understanding of what the IPCC actually said in the original IPCC PR statement. (In fact, at that time I called Dr. Muller a liar; strong words, yes, but by then I had had just about enough of the “I Don’t Know” five-posts-per-week nonsense that Dr. C loves to cultivate.)

No correction was ever offered for their incorrect statements (Dr. C’s and Dr. Muller’s).

But The Three Stooges (note that there were actually more than 3 Stooges) did make their presence known at that time. Quite humorous, in a “bite me” sort of way.

“You can’t reasonably have it both ways: both settled/certain and impossible to predict.”

Actually you can: a trivial but very familiar example is weather prediction, which is based upon very settled science yet doesn’t work (well) more than a few days ahead.
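The settled-but-unpredictable point can be shown with the simplest chaotic system going. This toy sketch (my own, not anything from an actual weather model) iterates the logistic map from two starting points differing by one part in a million:

```python
def logistic(x, r=3.9):
    """One step of the logistic map: a fully deterministic ('settled') rule."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # initial conditions differing by only 1e-6
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial error grows roughly exponentially until it is as large
# as the signal itself: the equation is exact, but the forecast is not.
print(max_gap)
```

The governing rule is known perfectly, yet a microscopic error in the starting state swamps the prediction after a few dozen steps, which is exactly the weather-forecast situation.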

An example unrelated to chaotic behaviour comes from my own experience in a previous life, when I was expected to predict where a Soviet satellite containing a nuclear power plant would come down.

Such a satellite orbits the Earth in 1.5 hours, while the Earth rotates underneath once every 24 hours. A fair metaphor for a roulette wheel… and it is indeed impossible even to tell which ocean or continent the thing will come down over, until the very last orbit!

Yet, would you say that celestial mechanics is not ‘settled science’?
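A back-of-envelope version of why the impact point is unpredictable (my own numbers, purely illustrative): the ground track of a 90-minute orbit sweeps about 4 degrees of longitude per minute, so even a modest uncertainty in the decay time smears the predicted impact point around the whole planet.

```python
ORBIT_PERIOD_MIN = 90.0                 # ~1.5-hour low-Earth orbit
track_rate = 360.0 / ORBIT_PERIOD_MIN   # 4 deg of along-track motion per minute

# Suppose the re-entry time is known only to within +/- 45 minutes
# (optimistic, days in advance, given atmospheric-drag uncertainty):
uncertainty_min = 45.0
spread_deg = 2.0 * uncertainty_min * track_rate
print(spread_deg)  # 360.0: the possible impact points cover the entire orbit
```

The celestial mechanics is as settled as science gets; it is the drag-driven timing uncertainty, amplified by the fast-moving ground track, that makes the location unpredictable until the last orbit.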

But yes, it is possible to make certain predictions in climatology too. Not very useful ones, but solidly certain all right. Like: temperatures will not fall back to pre-1970s levels over the coming decade. Or over the following one. Or the following. Or at any foreseeable time — unless we learn to stop emitting, and start actively pulling CO2 down from the atmosphere.

Gavin Schmidt and the RC team – thank you for the clear explanation of why C&W 2011 and Dr Curry’s personal statements on the matter of the IPCC attribution statement are in error, and particular thanks for making it clear and sufficiently plainly worded that a layman can understand it. Conversely (and I admit it may just be my own limitation) I find Dr Curry’s copious blog posts on this issue impenetrable, despite her responses to calls for clarification. As it turns out, the word “most” is not nebulously woolly but means… err, well, more than half, much as I always thought it did, and “very likely” does mean “look, I wouldn’t call it a dead cert but I’d be gobsmacked if it didn’t turn out that way”. Certainly it seems certain people are certainly uncertain about “uncertainty”.