Trends, change points & hypotheses

The article provides a good overview of the debate. Some summary excerpts:

Is it really true that global temperatures have not risen since 1997?

The simple answer is: they have risen, but not by very much. “Our records for the past 15 years suggest the world has warmed by about 0.051C over that period,” said the Met Office. In layman’s terms that is 51 thousandths of a degree.

One [dataset], held at the National Climate Data Centre (NCDC), run by America’s National Oceanic and Atmospheric Administration, suggests that global temperatures rose by an average of 0.074C since 1997. That’s small, too — but it is another rise.

A third and very different data set is overseen by John Christy. . . “From 1997-2011 our data show a global temperature rise of 0.15C,” he said.

Overall, then, the world has got slightly warmer since 1997. Perhaps the real question is: why has it warmed so much less than was predicted by the climate models?

For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.

Some scientists appear to be warning we will fry, while other sources fear we will freeze.

How we interpret the 20th century temperature data has implications for how we project future temperature variability and change.

Climate trend statistics and graphs

So, how should we analyze the recent time series of global or local temperature? Various blog posts have attempted to instruct us on this matter:

An argument for change-point analysis and analysis of partial time series is provided by Raymond Sneyers: Climate Chaotic Instability: Statistical Determination and Theoretical Background [sneyers environometrics].

Abstract. The paper concerns the determination of statistical climate properties, a problem especially important for climate prediction validation. After a brief review of the time series analyses applied on secular series of observations, an appropriate method is described for characterizing these properties which finally reduces itself to the search for existing change-points. The examples of the Jones North Hemispheric land temperature averages (1856–1995) and of the Prague Klementinum ones (1771–1993) are given and results discussed. Relating the observed chaotic character of the climatological series to the non-linearity of the equations ruling the weather and thus climate evolution, and presenting the example of a solution of the Lorenz non-linear equations showing that non-linearity may be responsible for the instability of the generated process, it seems justified to conclude that there are severe limits to climate predictability at all scales.
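As an aside, the core of a change-point search is easy to sketch. Below is a minimal least-squares version that scans every possible break and keeps the one minimizing within-segment variance; Sneyers' actual procedure is more sophisticated (sequential tests, significance levels), and the series here is synthetic, not a real temperature record:

```python
import numpy as np

def single_change_point(y):
    """Return the index k that best splits y into two constant-mean
    segments, by minimizing the total within-segment sum of squares."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_k, best_sse = None, np.inf
    for k in range(2, n - 1):  # require at least 2 points per segment
        left, right = y[:k], y[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Toy series: the mean shifts from 0.0 to 0.3 at index 50
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(0.3, 0.1, 50)])
print(single_change_point(y))  # close to 50
```

A detected break like this is exactly the kind of structure a climate-shifts reading treats as a regime change, and a trend-plus-noise reading treats as ordinary variability.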

Three competing hypotheses

Consider the following three hypotheses that explain 20th century climate variability and change, with implied future projections:

I. IPCC AGW hypothesis: 20th century climate variability/change is explained by external forcing, with natural internal variability providing high frequency ‘noise’. In the latter half of the 20th century, this external forcing has been dominated by anthropogenic gases and aerosols. The implication for temperature change in the 21st century is 0.2C per decade until 2050. Challenges: providing a convincing explanation of the 1910-1940 warming, explaining the flat trend between the mid-1940s and mid-1970s, and explaining the flat trend for the past 15 years.

II. Multi-decadal oscillations plus trend hypothesis: 20th century climate variability/change is explained by the large multidecadal oscillations (e.g. NAO, PDO, AMO) with a superimposed trend of external forcing (AGW warming). The implication for temperature change in the 21st century is relatively constant temperatures for the next several decades, or possible cooling associated with solar activity. Challenges: separating forced from unforced changes in the observed time series, and the lack of predictability of the multidecadal oscillations.

III. Climate shifts hypothesis: 20th century climate variability/change is explained by synchronized chaos arising from nonlinear oscillations of the coupled ocean/atmosphere system plus external forcing (e.g. Tsonis, Douglass). The most recent shift occurred in 2001/2002, characterized by flattening temperatures and more frequent La Niñas. The implication for the next several decades is that the current trend will continue until the next climate shift, at some unknown point in the future. External forcing (AGW, solar) will have more or less impact on trends depending on the regime, but how external forcing materializes in terms of surface temperature in the context of spatiotemporal chaos is not known. Note: hypothesis III is consistent with Sneyers’ arguments re change-point analysis. Challenges: figuring out the timing (and characteristics) of the next climate shift.

There are other hypotheses, but these three seem to cover most of the territory. The three hypotheses are not independent, but emphasize to varying degrees natural internal variability vs external forcing, and an interpretation of natural variability that is oscillatory versus phase locked shifts. Hypothesis I derives from the 1D energy balance, thermodynamic view of the climate system, whereas Hypothesis III derives from a nonlinear dynamical system characterized by spatiotemporal chaos. Hypothesis II derives from climate diagnostics and data analysis.

Each of these three hypotheses provides a different interpretation of the 20th century attribution and has different implications for 21st century climate. Hypothesis III is the hypothesis that I find most convincing, from a theoretical perspective and in terms of explaining historical observations, although this kind of perspective of the climate system is in its infancy.

Cherry picking data, or testing alternative hypotheses?

Back to the issue of cherry picking data, and interpreting the temperature time series for the past two decades.

Is the first decade+ of the 21st century the warmest in the past 100 years (as per Peter Gleick’s argument)? Yes, but the very small positive trend is not consistent with the expectation of 0.2C/decade provided by the IPCC AR4. In terms of anticipating temperature change in the coming decades, the AGW dominated prediction of 0.2C/decade does not seem like a good bet, particularly with the prospect of reduced solar radiation.

Has there been any warming since 1997 (Jonathan Leake’s question)? There has been slight warming during the past 15 years. Is it “cherry picking” to start a trend analysis at 1998? No, not if you are looking for a long period of time where there is little or no warming, in efforts to refute Hypothesis I.

In terms of projecting what might happen in coming decades, Hypothesis III is the best bet IMO, although it is difficult to know when the next change point might occur. Hypothesis III implies using 2002 as the starting point for analysis of the recent trend.
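How much a computed trend depends on the chosen start year, which is the crux of the cherry-picking charge, is easy to demonstrate. Here is a sketch on a made-up "warm then flat" series (the numbers are illustrative, not a real dataset):

```python
import numpy as np

def decadal_trend(years, temps, start_year):
    """OLS slope in degrees C per decade from start_year to the end of the series."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    mask = years >= start_year
    return 10.0 * np.polyfit(years[mask], temps[mask], 1)[0]

# Synthetic anomalies: ~0.17 C/decade warming up to 2001, flat afterwards
years = np.arange(1980, 2012)
temps = np.where(years <= 2001,
                 0.017 * (years - 1980),
                 0.017 * (2001 - 1980))
for start in (1980, 1997, 2002):
    print(start, round(decadal_trend(years, temps, start), 3))
```

The same series yields a healthy trend from 1980, a small one from 1997, and essentially zero from 2002, which is why the choice of start year is itself a statement about which hypothesis you are testing.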

And finally, looking at global average temperatures makes sense in context of Hypothesis I, but isn’t very useful in terms of Hypothesis III.

And none of this data analysis is very satisfying or definitive owing to deficiencies in the data sets, particularly over the ocean.

IMO, the standard 1D energy balance model of the Earth’s climate system will provide little in the way of further insights; rather, we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models, and explore the complexity of a coupled nonlinear climate system characterized by spatiotemporal chaos.

“For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

Previously I focused on what is misleading about the headline.

Now Judith asks me about an extract. What’s misleading is the bit I have put in bold. As we’ve seen already, even in AR1 the predictions cited above conclude with the explicit statement:

The rise will not be steady because of other factors.

Leake’s articles on climate are nearly always highly misleading. Your extract is no exception. Leake explicitly suggests the IPCC proposed temperatures would rise steadily. The IPCC, on the other hand, explicitly says the opposite.

It bothers me, frankly, that you don’t acknowledge how misleading Leake is in his presentation. Decadal scale variation is an important question worth looking at; and Leake makes a total hash of it.

Having explicitly told you what was misleading in the extract you provided from Leake, let’s go on to what is misleading in your own comments, Judith.

You say:

Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming.

First, citation please. Where is this prediction for two decades? Do you mean the 1992 supplement cited earlier in this thread with a prediction to 2025? That’s more than 30 years. Or something else? WHICH report?

Second. What’s this “without warming”? Even Leake doesn’t make that mistake. The issue is 15 years with a small trend, not with no warming at all.

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

I haven’t reviewed all the AR4 statements, but I tend to agree with Anteros that the issue isn’t enormously important, although I think Chris Ho-Stuart is technically correct. Interdecadal variability is a well recognized climate reality. If a 0.2C/decade average out to 2050 was predicted, that wouldn’t preclude a lack of warming in some particular decade. For the separate AR4 2007 claim of 0.2C/decade for “the next two decades”, am I wrong in interpreting that to mean the interval from 2007 to 2027? Clearly, we haven’t proceeded far enough into that interval to judge that prediction.

Judith, that would be very disappointing. I aim to be robust, but fair; and I’d really like to help you get better support for the discussions here. (SkyDragon; sorry I have been slow on that!)

I’d like you to reconsider the above please… but I won’t refrain from challenging what I think is misleading or wrong. I would hope that isn’t the problem! Take me seriously, please.

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

Good! That’s a citation we can use, going to AR4, published in 2007.

Problem is… that ISN’T going to help with the slow down over the last 15 years. You said earlier:

Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20. What is the problem here?

But AR4 was published in 2007! So no, there HASN’T been 15 years “without warming” (or with reduced warming) since the prediction you are citing.

Don’t just run away from that. Talk to me. I won’t pull punches where I think you are wrong, but I’m not going to just dismiss or insult you.

I said in another comment that I expect the recent lull to change and warming to accelerate somewhat. That’s not just my prediction. That’s based in part on an expected change in conditions that impacted the earlier 15 years. TSI is likely to go up. ENSO is likely to start warming things up a bit faster. If this doesn’t happen, then I’ll seriously have to review my position.

It’s really confusing following this when there are different reports being cited all over the place. When you spoke of predictions bearing upon the 15 years just past, I was pretty sure you must have been referring to reports from before that lull.

The standard conclusion, as I have always understood it, is that warming is not steady, and that lulls over a decade or more are common. I expect — in line with the AR4 extract you are citing — that the recent lull is going to show up as a lull, with stronger warming before and after. A rise of 0.2 C/decade (or better, of between 0.15 and 0.3) over the next two decades (from 2007, if you like!) sounds pretty sensible to me.

Summary.
(1) Leake’s initial question is misleading. IPCC predictions recognize the existence of changes in the pace of warming over those time scales.
(2) Leake’s claim of IPCC meaning “steady rise” is flatly contradicted by the actual IPCC report.
(3) The expectation in AR4 (2007) of warming over the coming two decades is not falsified by slower warming over the period before it was published.

“Problem is… that ISN’T going to help with the slow down over the last 15 years. You said earlier:

Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20. What is the problem here?

But AR4 was published in 2007! So no, there HASN’T been 15 years “without warming” (or with reduced warming) since the prediction you are citing.”

The projection of 0.2C per decade is based on the scenarios. The scenarios used projected forcing from at least the year 2001 going forward; the SRES were published in November 2000.

So, if we want to compare the projection of 0.2C to observations, 2001 is probably the most defensible starting point. Since 2001 the observations fall outside a 95% confidence interval for a 0.2C projection. That can happen for a variety of reasons.

1. Some of the models run too hot. For example, the mean estimate for sensitivity is 3C; more than half of the models have sensitivity higher than this.

2. Emissions did not track with A1B projections, or other forcings did not track with projections.

3. Rare events happen, and on shorter time scales they are more likely to occur.

Finally, there is no hard and fast minimum number of years required to reject the projection. There is simply a probability that one can calculate. For example, if observations ran 10C cooler after 5 years, or 10C warmer after 5 years, we would be right to conclude that something was amiss with the models.
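That "probability one can calculate" amounts to putting a confidence interval on the observed trend and asking whether the projected rate lies inside it. A minimal sketch follows (synthetic data, not real observations; the white-noise interval here is naive because it ignores autocorrelation, so a real test would be more conservative):

```python
import numpy as np

T_CRIT_95 = 2.262  # two-sided 95% Student-t critical value for 9 d.o.f. (n = 11)

def trend_with_ci(years, temps, t_crit=T_CRIT_95):
    """OLS trend in C/decade with a naive white-noise 95% confidence interval."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(years, temps, 1)
    resid = temps - (slope * years + intercept)
    n = len(years)
    sxx = ((years - years.mean()) ** 2).sum()
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)  # stderr of the slope
    return 10 * slope, 10 * (slope - t_crit * se), 10 * (slope + t_crit * se)

# Synthetic near-flat anomalies for 2001-2011 (NOT real observations)
rng = np.random.default_rng(1)
years = np.arange(2001, 2012)
temps = 0.4 + 0.003 * (years - 2001) + rng.normal(0.0, 0.05, len(years))
trend, lo, hi = trend_with_ci(years, temps)
print(f"trend {trend:.2f} C/decade, 95% CI [{lo:.2f}, {hi:.2f}]")
print("0.2 C/decade inside CI:", lo <= 0.2 <= hi)
```

Whether 0.2 C/decade falls inside depends on the noise level and the interval width, which is why short windows rarely give clean rejections.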

The problem is that there hasn’t been enough attention paid to WHY the models are not tracking observations more accurately. For example, the new NCAR model looks to be even more out of whack with observations.

A review of AR4 projections indicates that there is a case to be made for models running too hot. That possibility hasn’t been addressed, investigated, or eliminated. Preliminary AR5 results are still running too hot in TLT, as Santer recently showed at AGU 2011.

“For the separate AR4 2007 claim of 0.2C/decade for “the next two decades”, am I wrong in interpreting that to mean the interval from 2007 to 2027? Clearly, we haven’t proceeded far enough into that interval to judge that prediction.”

That means we can wait until 2030 to see if the models run in the year 2000 were correct. Until then, why trust them?

The other point is that the model results published in 2007 actually start in 2001, so that’s where you want to start your test.

All that said: if 2012 turns out to be 10C cooler than 2001, what would you conclude about the models? Would you say that it’s too early to render a judgement? 5C cooler? 1C cooler?

The bottom line is this. The models were run with historical forcing up to 2000 and projected forcing after that. Second, differences in projected forcing don’t change outputs until 20+ years down the line. The models clearly can’t exhibit any behavior they want in those 0-20 years and still hit the projections that close 30 years out. At all times along the process (0-20 years) we can surely note whether the models are falling above or below the observations. And we can note that the longer the models run hotter than observations, the more rapid the warming will have to be to catch up and hit the window.

“But AR4 was published in 2007! So no, there HASN’T been 15 years “without warming” (or with reduced warming) since the prediction you are citing.”

A) See Fig SPM.5 from the reference. Model projections start 2000.

B) See Fig SPM.5 from the reference. The trends for all of the scenarios for the period 2000-2040 are effectively linear, similar to or lower than the trend 1997-2000 that informed the model start point, and on the scale of the predicted 0.2C increase there is no variability to speak of in any of them.

The model projections cannot be reconciled with the last 15 years of flat temps, regardless of when you start.

To achieve the predicted 0.4C rise, we would need to see about 25 years’ worth of 1990s-style warming in the next five years.
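The arithmetic behind this catch-up claim is straightforward to check; the ~0.017 C/yr "1990s-style" rate below is an illustrative round number, not an official figure:

```python
# If a predicted 0.4 C rise over two decades has not appeared after 15 flat
# years, the remaining 5 years must supply all of it.
predicted_rise = 0.4                 # C over 20 years, i.e. 0.2 C/decade
years_remaining = 5
required_rate = predicted_rise / years_remaining   # C per year needed now
nineties_rate = 0.017                # illustrative 1990s-style rate, C per year

print(required_rate)                   # C/yr required, i.e. 0.8 C/decade
print(predicted_rise / nineties_rate)  # ~24 "1990s years" of warming needed
```

That is roughly four to five 1990s-years of warming compressed into each remaining year, which matches the "about 25 years' worth in five years" framing above.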

The believers cannot stand having the tenets of their faith challenged. Chris, as we see, likes to call those who point out problems in his faith ‘liars’. For AGW believers who are so allegedly obsessed with communication, it is ironic how often they retreat to simply…denying…what other people say and declaring them untruthful.

Steve Mosher – Ordinarily I wouldn’t add to the excessive column space already devoted to this not very important point, but since you addressed your comment to me, I’ll respond.

Chris Ho-Stuart is correct in criticizing Leake’s claim that the warming has been less than predicted. The various IPCC curves that have been cited are all drawn with the understanding that decade to decade variation from them is something to be expected – the curves are smooth simply because there’s no way of knowing which decades will vary from the projected mean and in which direction. Not to belabor the point, but predictions of 0.2/decade average out to mid century can’t be invalidated by data from the first decade of the century. Predictions of 0.2 per decade for a specified two decades (“the next two decades”) can be invalidated by data from 2007 to 2027, but not by anything that hasn’t gone beyond early 2012. The fact that projected curves were drawn starting in 2001 isn’t a test of “the next two decades”.

You are right that if 2012 is 10 degrees colder than 2001, the models will be in trouble. And so will the rest of us. The same will be true if it’s 10 degrees hotter.

The most important point I wanted to make was in the first sentence. This is inconsequential stuff, as is obvious if one looks at the last 100 years rather than the last 10 to 15. Quibbling about it seems to me to be more about scoring points than understanding what is going on now, or will in the future. With that in mind, I’ll try to refrain from getting caught up in the arguing if these points continue to generate further comments, and to respond only if something new and important is added to the discussion.

“Chris Ho-Stuart is correct in criticizing Leake’s claim that the warming has been less than predicted. The various IPCC curves that have been cited are all drawn with the understanding that decade to decade variation from them is something to be expected – the curves are smooth simply because there’s no way of knowing which decades will vary from the projected mean and in which direction.”

Honestly. It’s like watching someone try to play Twister. Here’s a graph:

Chris Ho-Stuart: The problem is that Leake said the rise would be steady, and the IPCC said it would NOT be steady.

Did IPCC language allow for the possibility that the rate of increase might possibly be way below the predicted rate for 50 to 100 years? I put it this way because of the vagueness in the phrase “would NOT be steady”. A large number of us interpreted the IPCC language as excluding the possibility of next to no warming for a period of 15 years. Had they seriously considered what has happened as a possibility, warnings of disaster would have been less shrill.

Thus, I think that Leake and Curry have interpreted the IPCC language fairly.

curryja: Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20.

Almost for sure, had the IPCC anticipated what we have as a real possibility, they’d have written differently. For example, they might have written, “there may be 15 – 30 years of no warming before the overall warming trend resumes, and 0.2C/decade is the anticipated mean rate over the century.” But they didn’t.

Thus, I think that Leake and Curry have interpreted the IPCC language fairly.

James Evans – The link and graph you refer to are interesting, and I think might help readers better appreciate Chris Ho-Stuart’s conclusion that predictions can’t yet be made accurately for individual decades, which is why Leake’s criticism of the IPCC model-based curves was misguided.

The models you refer to are very different from those cited by the IPCC. They involve the DePreSys approach to decadal climate modeling, based on the premise that better decadal forecasts will be possible if more attention is devoted to model initialization. The latter is not a major focus of the GCMs cited by the IPCC because over multiple decades, their projections converge toward the same trajectory from different initial conditions.

The graph you linked to shows that the DePreSys attempts may be a step in the right direction, but still have a long way to go. Notice for example, the great deviation in the hindcast due to the 1991 Pinatubo eruption, which obviously couldn’t be anticipated by better initializations.

Basically, the point made by Chris is pretty universally understood within climate science. Average temperature anomalies can’t be expected to anticipate actual values for any single decade, and are intended to be interpreted over multiple decades. It does seem to me that a great deal of time is being wasted here arguing about that, and could be better spent on assessing the relationship between interval lengths and signal to noise ratios.

Isn’t it noticeable that this is an observation [that a decade is harder to predict/less meaningful than longer periods] that should be made equally by people wherever they are on the climate spectrum?

Although it is tempting, I do find it tiresome that partisans jump on the tiniest ‘trend’ as containing large amounts of ‘meaning’. It could almost be used as a test of partisanship – anybody claiming [clearly] unjustified significance could be put in a sin-bin of ‘not to be taken seriously for a month’.

The key issue here is the length of such pauses that is “allowable” by H1. The IPCC and its proponents are emphatic that the flat, cool trend from the mid-1940s to the mid-1950s is not natural variability, but anthropogenic aerosol forcing. So pauses of 10-15 years are now expected, but not pauses approaching 30 years?

So pauses of 10-15 years are now expected, but not pauses approaching 30 years?

As I understand it, pauses of 30 years would be expected, albeit very rarely. Although it would certainly raise questions about the accuracy of models, the existence of such a period wouldn’t disprove AGW nor IPCC predictions of decadal averages; only those predictions that are explicit about specific 30 year time periods would be disproven.

Why did you omit the following even though you spoke of predictions that were right next door?

“The IPCC and its proponents are emphatic that the flat, cool trend from the mid-1940s to the mid-1950s is not natural variability, but anthropogenic aerosol forcing.”

Judy – I’m somewhat familiar with the IPCC reports (mostly AR4, less so for earlier ones), but I haven’t seen that claim. Could you cite the exact section and words where the IPCC emphatically attributes the temperature fluctuations between the mid-1940s and mid-1950s to anthropogenic aerosols while excluding natural variability? My own reading of the evidence is that much of the fluctuation during that interval was due to natural unforced variability from internal climate dynamics, with aerosols perhaps adding some cooling after 1950 but not necessarily a major player before 1950 nor necessarily an exclusive player from 1950 to the mid-1950s. If you could quote the exact IPCC assertion in this regard, it would be helpful.

I’ve just been reading back issues of Isaac Held’s blog and came across this gem:

“The model results give a hint of mid-century flattening, which is typically attributed to an increase in cooling aerosols, although not as pronounced as in the GISS curve, nor exactly contemporaneous with it. Is this just part of the random variation inherent in the model runs, or can some of the flattening be attributed to changes in WMGGs?

Isaac Held replied (March 29, 2011): There is some flattening of the CO2 evolution prescribed here (between 1935-1945) which is based on Etheridge et al.”

BillC – Thanks for quoting my question to Isaac Held. The possible CO2 “flattening” was pre-1945 and therefore largely irrelevant to the abrupt post-1945 dip before the curve flattened out between about 1950 and 1976. The reason for a reduced CO2 rate of rise was probably not due to a reduction in emission rates, but it may have reflected carbon cycle feedbacks that slightly altered the balance between atmospheric CO2 and terrestrial and oceanic sinks. Of the sinks, ENSO phenomena appear to play a significant (but transient) role in altering terrestrial CO2 uptake, but I don’t know how well that correlates with those early observations. The early 1940s were characterized by strong El Niños that probably contributed to the spike around 1945, and whose cessation contributed to the post-1945 decline, possibly in combination with PDO changes.

MattsStat – Your point 1 hasn’t been ignored but has been addressed by Chris, me, and others. Please see our comments. Your point 2 hasn’t been ignored either, but doesn’t change the principle that predictions for individual decades aren’t useful at the current state of our ability to predict. The large scale interdecadal variability is apparent in the climate record of the past 100 years and is not a recent phenomenon. That too is encompassed in our earlier comments.

When you read through this thread of comments it seems clear that some people have blind faith in the modelling approach used by the IPCC.

This faith persists in spite of the evidence: the very approach of averaging the results of many models demonstrates that there is no particular model that can be relied upon for accuracy, and none of the models’ outputs match observations.

The problem here is the intentional ambiguity in the IPCC reports. They amass hundreds of pages of scientific research. They assume a basis for all this, the radiative heat absorption by CO2 (this is in their founding documents), and produce massive summaries, generally including long-term ordinary linear regression inappropriately applied to a time series, and then make a statement such as “an increase of 0.2 deg C/decade”. Hidden away is the caveat “The rise will not be steady because of other factors.”

Pardon me, but you can’t hammer away at an argument that CO2 is the cause of the temperature increase for page after page and then cover your ass with a few lines here and there that “the rise will not be steady”, or the effect of clouds is poorly understood. Statements like this deserve at least as much page space as the other arguments because they point out major weaknesses that are not fully assessed.

In this context Leake’s statement “The implication was that temperatures would rise steadily, not with 15-year gaps.” is a perfectly reasonable takeaway. That was a major implication to my mind in almost every paper I’ve read that predicted global warming. In most cases the authors seamlessly slid from solid conclusions into “this is not inconsistent with CO2 causing global warming” and some statement about rising temperatures. So don’t be too surprised when people pick up on the implied conclusions that are so forcefully expounded.
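One concrete way ordinary regression misleads on temperature series is autocorrelation: persistent (red) noise makes the naive trend uncertainty look smaller than it is. Below is a minimal sketch of the common effective-sample-size adjustment (the AR(1) data are synthetic; published analyses, e.g. Santer-style trend comparisons, are more careful):

```python
import numpy as np

def trend_stderr(years, temps):
    """Return (naive, adjusted) standard errors of the OLS trend, where the
    adjustment shrinks the sample size using the lag-1 autocorrelation of
    the residuals: n_eff = n * (1 - r1) / (1 + r1)."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(years, temps, 1)
    resid = temps - (slope * years + intercept)
    n = len(years)
    sxx = ((years - years.mean()) ** 2).sum()
    se_naive = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
    se_adj = se_naive * np.sqrt((n - 2) / max(n_eff - 2, 1.0))
    return se_naive, se_adj

# Trendless AR(1) "red noise" (synthetic, NOT real data)
rng = np.random.default_rng(2)
n, r = 120, 0.6
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = r * noise[i - 1] + rng.normal(0.0, 0.1)
se_naive, se_adj = trend_stderr(np.arange(n), noise)
print(se_adj > se_naive)  # red noise widens the honest uncertainty
```

With white noise the two standard errors roughly coincide; with red noise the naive one overstates confidence, which is the technical core of the "OLS inappropriately applied to a time series" complaint.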

Fred Moolten: MattsStat – Your point 1 hasn’t been ignored but has been addressed by Chris, me, and others. Please see our comments. Your point 2 hasn’t been ignored either, but doesn’t change the principle that predictions for individual decades aren’t useful at the current state of our ability to predict. The large scale interdecadal variability is apparent in the climate record of the past 100 years and is not a recent phenomenon. That too is encompassed in our earlier comments.

Those points that you restate so clearly are the particular points why looking at the last 100 years instead of focusing on the last 15 would be a misdirection. We know already that interdecadal variability is great and that predictions for individual decades are not useful. However, the prediction (scenario, hypothesis, whatever) is tested by the data that came after it was made, and the data since the prediction have diverged from the prediction more than was expected by the people who made the prediction. It is possible for the 50 year prediction to be more accurate than the 15 year prediction, but until such a potentiality has been actually demonstrated to be true, every year that the data diverge from the prediction discredits the theory on which the prediction rested.

Your points clearly express why the sentence that I quoted was a misdirection, i.e. a bad recommendation.

“Notice for example, the great deviation in the hindcast due to the 1991 Pinatubo eruption, which obviously couldn’t be anticipated by better initializations.”

I can’t see anything special about that dip in temperatures. There have been many similar dips in the global temp graph. (Just look at it.) That particular dip is associated with Mt Pinatubo, because it helped dig the models out of a hole at the time. Which huge volcanic eruptions caused the other dips?

The residual warming in the 50 years to 2000 was about 0.08 degree C. The IPCC is wrong – the models are wrong – because they missed this mode of internal variability without which no sense can be made of any trend.

The whole box and dice of global warming is totally kaput – it needs a fundamental rethink. The maximum rate of warming in the 20th century was 0.08 degrees C/decade from all other factors. Can we use the 20th century natural variability to predict 21st century natural variability? I don’t think so.

The [Glaciers and ice caps] (GICs) rate for 2003–2010 is about 30 per cent smaller than the previous mass balance estimate that most closely matches our study period. The high mountains of Asia, in particular, show a mass loss of only 4 ± 20 Gt yr−1 for 2003–2010, compared with 47–55 Gt yr−1 in previously published estimates.

Current estimates suggest there are about 12,000 to 15,000 (glaciers) in the Himalayas and about 5,000 in the Karakoram. Of these thousands of glaciers, only 15 have been measured on the ground to see if they are gaining or losing ice overall. Despite the scarcity of data, trends are emerging. “It is pretty clear that the Himalayan glaciers have been losing mass, with markedly greater loss in the past decade than earlier,”. . .

“Glaciers in the Himalaya are receding faster than in any other part of the world and, if the present rate continues, the likelihood of them disappearing by the year 2035 and perhaps sooner is very high.”

Whatever happened to IPCC as a review of the science?

The uncertainties in the Himalayan glaciers appear to be greater than IPCC’s claimed confidence!

It is simply not true that there is an “IPCC hypothesis” that is falsified by a short term (scale of a decade or so) variation that is somewhat above or below the general trend.

Short term variations like this ARE a matter of scientific interest and hypothesis and competing ideas. They are not a matter of a clear consensus. And neither does the IPCC make strong claims or hypotheses on them — other than the statement that they ARE comparatively short term and that we expect the longer term trend to continue upwards.

It is also a misrepresentation of what the IPCC does to speak of IPCC “hypotheses”. The IPCC is not a research body. They don’t do scientific work. They summarize it. They make statements with associated confidence levels, based on the combined work of a lot of scientists, but these are not in the form of a “hypothesis”, but a conclusion. Whether you agree with them or not, the distinction matters.

The post does not claim that the AGW hypothesis is falsified solely by the recent lack of warming. It merely points out two competing hypotheses. (The falsification involves other factors, in my view.) Calling it the IPCC hypothesis makes sense because their endorsement is the focus of the debate.

If a particular prediction was required, the post could have used the prediction from the FAR that if there were few or no steps taken to limit greenhouse gases, temperatures would rise by 0.3 degrees per decade. This was predicted to mean a rise of 1 degree C by 2025.

Of course, times have changed, but if we pretend those predictions were never made, how can we learn from them?

Judith, the prediction is not for a rise of 0.2 every decade. If you think there is a prediction, for heavens sake quote it.

I don’t see anything which I would call a “citation” to any such prediction. A cite is more than just “the IPCC says”. You give some other cites, but not one for the alleged IPCC prediction. You AND Leake confuse the magnitude of the long term change expected with a prediction that applies to the last ten years.

While I wrote the previous, Judith — for the first time — did give a citation: “In the AR4”. You can do better than that for such an enormous report, Judith! — but then you say “out to 2050” — which underlines the very point I am making.

You reinforce that Leake’s original question refers to a prediction which simply does not exist.

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections. {1.2, 3.2}

Model experiments show that even if all radiative forcing agents were held constant at year 2000 levels, a further warming trend would occur in the next two decades at a rate of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios. Best-estimate projections from models indicate that decadal average warming over each inhabited continent by 2030 is insensitive to the choice among SRES scenarios and is very likely to be at least twice as large as the corresponding model-estimated natural variability during the 20th century. {9.4, 10.3, 10.5, 11.2–11.7, Figure TS.29}

“Joshua | February 7, 2012 at 7:47 pm |
IPCC prediction: 0.2C/decade during the first half of the 21st century. This prediction is cited in Leake’s article and also in my post.
What year is this? How many years in a century?”

A1. 2012. Score 100%.
A2. 100. Score 100%.

Firstly I’d refer you to my comment below. I think if push comes to shove, that’s where I’ll place my vote – not enough info to say very much at all. Certainly nothing meaningful.

However, the predictions of the FAR are dramatic, and assessing the reasonableness of dramatic predictions is very different from looking at a noisy signal over an even shorter period of time that isn’t doing very much at all, and asking what the signal is saying for itself.

As I said, it is easier to discern trends if the signal to noise ratio is quite high. The same point is true of predictions. As the FAR predicted a very strong signal, it is easier to identify whether the signal is other than predicted.

Back to my initial point – I wouldn’t go anywhere so far as to say that it was a failed or wrong prediction. What I would say is that it isn’t doing very well so far. Which isn’t of course saying very much at all.
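The signal-to-noise point above can be made concrete with a small simulation. Everything in it (a trend of 0.02 degC/yr, year-to-year noise of 0.1 degC, and the window lengths) is an illustrative assumption, not an estimate of any real climate parameter; the aim is only to show how much wider the spread of fitted trends is for a 15-year window than for a 50-year one:

```python
import random

random.seed(42)

def estimate_trend(years, values):
    """Ordinary least-squares slope of values against years."""
    years = list(years)
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(years, values))
    var = sum((x - mx) ** 2 for x in years)
    return cov / var

def simulated_series(n_years, trend=0.02, noise_sd=0.1):
    """Toy temperature series: a linear trend plus Gaussian 'weather' noise."""
    return [trend * t + random.gauss(0, noise_sd) for t in range(n_years)]

# Fit trends to many synthetic series and compare the spread of the estimates
for window in (15, 50):
    slopes = sorted(estimate_trend(range(window), simulated_series(window))
                    for _ in range(1000))
    lo, hi = slopes[25], slopes[975]  # central ~95% of fitted trends
    print(f"{window}-yr window: fitted trend in [{lo:+.3f}, {hi:+.3f}] degC/yr "
          f"(true trend +0.020)")
```

With these assumptions the spread of 15-year fitted trends is comparable in width to the trend itself, so a flat or even negative 15-year fit is unremarkable, while the 50-year spread is several times narrower.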

I piped up because Chris Ho-Stuart was going on about a lack of predictions from the IPCC. Of course everybody forgets that up until 2001, they were spraying predictions around like confetti [ish..]

Anteros — I did not say a lack of predictions. I said no prediction for rise over the last decade. That’s because I DO know pretty well what the IPCC reports say.

The IPCC simply does not have a prediction for short term changes like that. You’ve not shown anything other than the longer term predictions, along with explicit recognition that there are expected to be unpredictable short term variation, on the scale of decades.

This post starts out by saying it has warmed less than the IPCC predicted. No prediction is cited. That is because it doesn’t exist.

Now, I said that if a prediction was wanted the FAR could be used – as indeed it can. The last 15 years are relevant to that and I think it is entirely reasonable to say that there has been less warming than the IPCC predicted. Since 1990, or 1995 or whenever.

It’s not a major point and I don’t think it means very much – as I say elsewhere. But it is true that there has been less warming than the IPCC predicted – I think it is unreasonable to deny it irrespective of caveats and short periods of time.

It sounds like a desperate attempt to defend something that doesn’t need to be defended.

Why don’t you say that the IPCC changed its prediction from 0.3C per decade to 0.2C per decade in 1995 [which it did] when it realised its estimation of climate sensitivity [among other things] was too high?

Thanks Judith… I AM familiar with the AR4, of course; but it should still be cited properly and any predictions quoted more accurately. Leake got it wrong. Your extract confirms it.

It is, of course, true that warming over the last decade has been less than the long term trend. There’s nothing particularly surprising about that, in the sense that we don’t have the capacity to predict at that level with any confidence.

Understanding these short term changes in rate is an important and legitimate question. It’s a fair guess that the next few years will see a speed up in warming again. TSI is increasing and the ENSO appears to be moving back towards a push in extra heating; but that’s more of an educated guess than a strong consensus supported prediction. There are also other factors, like aerosols, which continue to be very tough to model. We’ll see.

The main thing I wanted to underline is that Leake was distorting the nature of predictions. As is his wont, I might add.

“For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

It would have been more clear to state that this projection of warming from the AR4 applied to the first two decades, such as in this statement of the IPCC

“For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.”

but since we are talking about the period 2000–2011, there doesn’t seem to be anything misleading in Leake’s statement as far as I can see.

Anteros, you’re still barking up the wrong tree completely. This whole thing is about a warming lull over the last decade. Leake and Judith are both using the latest IPCC reports. You should too. And you should pay attention to the recognition from all the reports that short term changes from decade to decade exist and are not predictable at present, and were not predicted back in 1992 either.

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at 2000 levels, a further warming of about 0.1°C per decade would be expected.

In the earlier TAR report IPCC was a bit less specific, projecting a range of 0.15°C to 0.3°C per decade.

What actually happened?

Greenhouse gases continued to rise unabated, BUT instead of a warming there was a net cooling of the globally averaged land and sea surface temperature anomaly (HadCRUT3).

For the most recent decade, this was around -0.1°C.

In Las Vegas, Macao, Atlantic City, Monaco or anywhere else, IPCC would have lost the bet.

So the IPCC models made a lousy forecast.

But the real question here is:

If IPCC models cannot even predict the temperature of the next decade, why are we to put any confidence whatsoever in their ability to project temperatures for the next several decades – or even century?

Answer: We should be extremely skeptical of any model-based temperature projections cited by IPCC.

(1) Better handling of uncertainty and ranges of outcomes. AR1 mentions ranges, but expressing the prediction as a single number obscures that. AR4 does better.
(2) CO2 forcing is less. Back in 1992, the CO2 forcing was estimated at about 6.3 W/m^2 per natural log CO2. By AR3 a more accurate value of 5.35 had been obtained, mainly from consideration of the shortwave interactions as well as longwave.
(3) Sensitivity estimates haven’t actually changed all that much. The range has narrowed a little, but sensitivity remains something between 2 and 4.5 degrees per doubling; or about 0.5 to 1.2 degrees per W/m^2 forcing.
(4) There are now more models. In 1992 the model results quoted probably depended overmuch on a GISS model, which then had sensitivity on the high side. (Caveat: transient response sensitivity is probably more useful than equilibrium sensitivity for looking at shorter scales and I’m not so sure of the numbers obtained in 1992.)

Judith, Leake’s question was phrased as follows: “Why has it warmed so much less than the IPCC predicted?” That’s highly misleading, because in actual fact the IPCC did NOT predict how much it would warm over the scales Leake is considering.

Leake also asks:

Overall, then, the world has got slightly warmer since 1997. Perhaps the real question is: why has it warmed so much less than was predicted by the climate models?

That also is just silly. Climate models show variations over these time scales just like the real world does. The difference is that there’s no correlation in those short term variations. One model might have a slow down from 1990 to 2000; another from 2015 to 2025. The models predict, if anything, that you are going to get unpredictable short term increases and decreases.

Leake includes sensible quotes in his article, but he continues to make — and emphasize in his headline — the absurd implication that the models, or the IPCC, are making predictions that clash with the observed small scale slow down. That’s just flatly false.

I repeat. Examining and explaining decadal scale changes is a perfectly good and sensible open question. The phenomenon is real. The problem is real.
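The claim that individual model runs contain pauses while the multi-model mean does not can be sketched with a toy Monte Carlo. The trend, noise level and run count below are illustrative assumptions, not values taken from any actual climate model:

```python
import random

random.seed(0)

TREND = 0.02     # assumed underlying forced warming, degC/yr (illustrative)
NOISE_SD = 0.15  # assumed internal variability, degC (illustrative)

def model_run(n_years):
    """One toy 'model run': the same forced trend plus independent noise."""
    return [TREND * t + random.gauss(0, NOISE_SD) for t in range(n_years)]

def decadal_changes(series):
    """Temperature change over each successive 10-year window."""
    return [series[i + 10] - series[i] for i in range(len(series) - 10)]

runs = [model_run(50) for _ in range(20)]
ensemble_mean = [sum(r[t] for r in runs) / len(runs) for t in range(50)]

# Individual runs routinely contain flat or cooling decades, because the
# noise is large relative to a decade's worth of forced trend...
flat_in_runs = sum(1 for r in runs for d in decadal_changes(r) if d <= 0)
print("flat/cooling decades across 20 individual runs:", flat_in_runs)

# ...but in the ensemble mean the uncorrelated noise largely cancels, so the
# pauses all but disappear and the projection looks deceptively smooth.
flat_in_mean = sum(1 for d in decadal_changes(ensemble_mean) if d <= 0)
print("flat/cooling decades in the ensemble mean:", flat_in_mean)
```

The design point is that the pauses in different runs are uncorrelated in time, so averaging removes them; no single run ever looks like the smooth ensemble-mean curve.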

“For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

On reflection, this was not good advice. For it is stupid academic squabbles like this that continue to harm the reputation of climatology and climatologits in the public mind.

Keep on squabbling guys. Whatever the exact weight put upon appendix 7 subsection 3 caveat 7, the general public have been led to believe – by constant propaganda from climatologits and their political allies for a decade or more – that we live in a dangerously warming world and so must immediately make sacrifices and do counter-intuitive things for the good of the planet.

I do not recall that the take-home message of AIT was that warming was going to be a sort of on/off/maybe-next-year phenomenon, but rather that it was happening now, was real and was dangerous.

Maybe it is different in the US, but in the UK at least we have a well-founded and deep suspicion of salesmen who get you to sign up to something for its many benefits, and only discover years later that the small print buried deep in the Appendix means that the policy doesn’t apply when you need it most.

So Mr Chris H-S, keep on pointing out that on a close reading of subsection 7, clause 7 para 16 (as amended by subsequent resolutions as needed) means that what the IPCC said wasn’t exactly what they meant and so they have suddenly invented academic wriggle room. Shout it loud from the rooftops! Writhe and rend your raiment about how tough the press are on you and how a journalist this time hasn’t presented your case in the most favourable light.

Then point me to all the writings in the last twenty years where you and colleagues have been equally loudly shouting that warming was only going to be intermittent, that the idea of ever-increasing warming was wrong, that Al Gore had vastly overstated the case and about how the sceptics got that right.

When you can produce an extensive library of such documentation I’ll be happy for you to consider yourself vindicated.

If the IPCC does not intend people to believe that they have made short term projections, then they need to change the way they draw graphs. If they truly believe that the next 10 years are a total mystery then they need to stop drawing graphs that tend to convey that message. It’s not that hard to be clear about this. They go the extra mile to make sure that some graphs are not misunderstood; they should extend this to all their presentations. If they truly have no idea what the next 10 years will hold, but are confident about 20 years from now, they should clearly say so when they present charts.

The IPCC publishes a chart like this. Look at the care they take in the legend.

“Figure 10.4. Multi-model means of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th-century simulation. Values beyond 2100 are for the stabilisation scenarios (see Section 10.7). Linear trends from the corresponding control runs have been removed from these time series. Lines show the multi-model means, shading denotes the ±1 standard deviation range of individual model annual means. Discontinuities between different periods have no physical meaning and are caused by the fact that the number of models that have run a given scenario is different for each period and scenario, as indicated by the coloured numbers given for each period and scenario at the bottom of the panel. For the same reason, uncertainty across scenarios should not be interpreted from this figure (see Section 10.5.4.6 for uncertainty estimates).”

They took care in the legend to advise people NOT to interpret discontinuities in the lines. Did they take care in the legend to tell people not to take the short term lines seriously? No. If they really have no clue about the next 10 years, then they need to show that and explain that.

Steven Mosher, I don’t see the problem with the graph you mention. It includes a shading envelope that indicates a range of possibles. It has a horizontal scale on which ten years doesn’t even show (the tick marks are 50 years apart). You look and you immediately see that the increase over 10 or 15 years is small by comparison with the shading width.

Communication can always be improved, and I don’t want to get sucked into defending the IPCC as perfect communicators. I’m simply saying that Jonathan Leake was incorrect to speak of the IPCC making predictions that conflict with the lull in trend in the last 15 years. Inferring unstated predictions from a graph is invalid, whether the legend warns against this or not.

The IPCC, as Chris points out, makes it clear that warming is not expected to occur in a linear fashion. This point is also made time and time again by climate scientists. If the general public doesn’t understand this then it might be in part due to poor communication by climate scientists and journalists, but what the general public might believe is not the issue here – this is a forum for people who actually take an active interest in the subject so there should be an expectation that they are rather better informed than the average man on the street, especially if they are going to make confident pronouncements about the supposed flaws in the IPCC position (and other things). So in order to argue about whether the IPCC is right or wrong on the subject it’s necessary to make a bit of effort to understand what the IPCC is actually saying. Of course if the purpose is simply to find reasons to say the IPCC is wrong then I guess it’s not so important.

Why don’t you say that the IPCC changed its prediction from 0.3C per decade to 0.2C per decade in 1995 [which it did] when it realised its estimation of climate sensitivity [among other things] was too high?

It would be completely incorrect to say that, so there’s a good reason ;) The best estimate and range of climate sensitivities used in the SAR are exactly the same as in the FAR. The difference in the projections is due partly to lower emissions scenarios (less CO2, methane and CFCs in particular), and partly to the introduction of aerosols into the scenarios.

‘The IPCC, as Chris points out, makes it clear that warming is not expected to occur in a linear fashion. This point is also made time and time again by climate scientists.’

I’ll look forward to seeing the evidence to support your assertion that the point is made ‘time and time again’ by climatologists. Because it certainly doesn’t gel with my memory. Seems to me that the emphasis on this point has only recently been remembered once it has become ever clearer that the temperatures are stubbornly failing to do what they are told.

So I’m sorry, but without further confirmation of an extensive library of quotations and presentations where this point has been drummed home, I have to provisionally put the ‘time and time again’ assertion in the same pigeon hole as Tony Blair’s ‘as I have said many times before and made my position absolutely clear’.

This, of course, was pure Blair speak for ‘f..k me I’ve never thought of that before and need a second or two to dream up something plausible’.

And you might want to reflect on this helpful remark from Richard Betts of the Met Office and the IPCC, commenting today at Bishop Hill

‘I think the problem is that an own-goal has been scored in the communication of climate model projections. They are normally presented with the results smoothed over time, or as decadal means, for clarity of presentation. This has meant that it was not at all obvious that natural variability is important in the shorter term’

which is a polite (as ever from Richard) and face-saving way of saying ‘Its a fair cop, guv. You’ve got me bang to rights’

Chris hijacked my post, but the point remains. Multiple hypotheses are in play. If the AGW hypothesis advocates claim it will take decades to resolve this issue, then so be it. Suspend all action in the interim. How is 2040 for a decision date? Sorry, but the absurdity is showing.

Are you serious? Read Chris’s arguments above – are you really saying that you have never seen them made before? Maybe you should try actually listening to what scientists say and giving their arguments proper consideration rather than automatically dismissing them out of hand, then they might come as less of a surprise next time.

1. They seem primarily to rely on appealing to the ‘disclaimers’ that the IPCC published…like those boring bits on TV ads that say things like ‘your results may differ’, ‘applications subject to status’, ‘no guarantee express or implied’, and most importantly

‘this advert is probably a load of s**t but we hope you won’t notice’.

Any organisation that has to rely on such get-out clauses to counter charges that their product or service does not remotely perform as advertised merely shows that they are disreputable and not to be trusted. More at the shoddy end of e-Bay rather than the John Lewis of scientific organisations.

2. He then proceeds via some devious route to try to pull more wool over our eyes. That legalistically, the IPCC is only the summariser of information provided by others. So any mistakes in the basic data aren’t the IPCC’s fault and it is, therefore, completely blameless for anything published under its name.

Which might just possibly have some traction if the group of people doing the summarising were completely independent of those doing the basic research. But they aren’t. They are the same people. By definition. The IPCC makes a point of picking the people who are its authors from exactly the same community as those doing the research.

So when, for example, researcher PJ (as we may call him) writes that MM’s paper on some old teleconnection shit is the hottest thing since sliced bread and conclusively proves that Thermageddon is expected two weeks come next Michaelmas, he is reviewing his long-standing research buddy’s work. No wonder that he gives a seal of approval. And, surprise, surprise, we will find MM saying nice things about PJ also.

The IPCC can’t just wriggle away from its responsibility by pleading that the review was written by an ‘independent’ guy, and any consequences are nothing to do with them.

Even with all the deficiencies above, I’d have a bit more sympathy if people like Chris H-S had a demonstrable track record of loudly shouting that the trend would be intermittent going back some years.

And they haven’t.

Instead we’ve had such lunacies as the Mad Woman from the Met Office going on national TV to assure us that last year’s cold and snowy winter in the UK was a direct consequence of global warming. I don’t recollect H-S – or indeed yourself – protesting that she was wrong and that this merely showed that warming has paused.

We had the man in 2000 telling us that because of global warming, snow in the UK would be a thing of the past. I don’t recall his immediate rebuttal that he had been misquoted and that really it might be a whole generation before any such effects came into play.

You claimed earlier that climate scientists have made this ‘intermittency’ point ‘time and time again’. And I asked you to provide some documented verification that they have indeed regularly and firmly stressed this point.

So far you do not seem to have been able to. Maybe it is still in preparation?

But hey, what do I care? Shenanigans like this word-chopping a la Ho-Stuart merely serve to reduce the climatologits credibility yet further. As Betts from the Met Office so nicely put it:

‘It’s been clear to me for a while that the field of climate science has a lot of work to do in regaining trust’

I don’t consider myself to be an expert by any means but in the few years I have been taking an interest in the subject of climate change I have tried to educate myself as much as possible about the various scientific arguments surrounding the subject, and one thing that has constantly been impressed upon my mind is that when there is a long term trend caused by increasing GHG levels there will be periods when it is masked (or accentuated) by short term natural variability. And this is reflected in individual model runs, but since the timing of events such as El Nino/La Nina, volcanic eruptions etc. is unpredictable, when projections are made based on ensemble runs these will tend to average out and the projection will show a fairly steady trend.

Now funnily enough I don’t bookmark every interesting web page I visit or every informative blog comment I see, but here are a couple of examples

but it’s a point that has come up countless times in discussions on the various climate blogs I visit, including this one. I simply can’t believe you are unfamiliar with this argument. And to point it out isn’t to seek some kind of disclaimer or get out clause, it is an absolutely valid point and an essential part of the argument about how the IPCC projections compare to actual observations. As I said above, even if (and I’m not saying this is the case) scientists have been poor at communicating this point to the public there is no excuse for anyone who actually takes an interest in the subject to the point where they feel competent to make confident pronouncements on the state of climate science and the reality or otherwise of (C)AGW not to be aware of it. If you want to pass judgement on an issue and have your opinion taken seriously you have a responsibility to actually make an effort to understand it.

Regarding David Viner’s comment I think that if he was quoted correctly then it was a silly thing to say, although the kind of winters we have had in the last couple of years have certainly been less common than they used to be. I’m not familiar with the particular comment from his female colleague which you refer to but the notion that global warming and its side effects such as the large reduction in the extent and volume of arctic sea ice could have a significant impact on atmospheric circulation patterns is one I have seen raised on occasions and doesn’t seem inherently implausible. It is interesting that in the last couple of winters when we in the UK were suffering unusually (by recent standards anyway) severe conditions other parts of the NH such as Greenland and eastern Canada were enjoying unusually warm weather. I find it interesting that you automatically assume her to be incorrect, I wonder what you base this assumption on.

‘For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected’ (p12)

In a big box highlighted by a tasteful background colour to stand out from the rest. The first and most important box on the topic ‘Projections of future changes in climate’.

This is the message that the ‘climatology consensus’ wanted the politicians and the public and the press to take away with them. This was their projection. Even if they read nothing else at all about climate change, this is what they wanted Blair and Bush and their officials to know. This is what they expected to be included in the front page articles in the London Times and the NYT and the WSJ. This was what they wanted to be the discussion point on the TV and the radio.

No caveats. No ifs or buts. No ‘your results may differ’. No cautionary notes. A definite, unequivocal statement. The summation of 20-odd years’ work by thousands of people and tens of billions of dollars of public money.

If the IPCC had wanted to give a different message, there are many other ways it could have phrased this paragraph. But it didn’t.

Leaving aside for now the question of how effectively the IPCC communicated the expected nature of future rising temps, what would be your expectation based on your understanding of the science?

Assuming for argument’s sake that the IPCC’s calculation of the long term trend was broadly correct, would you expect temps to rise in a more or less linear fashion or would you expect there to be periods when temps were flat or even falling?

correct. the ipcc does not do science. its documents are not scientific. I think that citing AR4 as a source is no better than citing wikipedia. further if we want to evaluate ipcc documents the standards we need to apply are standards like . . .
Steven, Dr. Curry, there is a point to what Fred and Chris have been about. But I don’t think they understand what it means in terms of the discussion here. From the figure you and Steven posted and the write up in AR4, the IPCC claim is that by 2030 it will be such and such, and that the anthropogenic influence will be twice the natural variation. It is true that the scenarios are from 2000, so using 2000 is a good choice. At 2030, the anomaly is expected to be about 0.7 C, but starts at 0.2 for year 2000. 1/3 of that is 0.167 C. That offset is the maximum that natural variability can account for at 2030. The estimate for 2011 is about 0.5 on the graph. So, accounting for all the natural variance for the first 11 years since 2000, the anomaly has to be about 0.333 C. But the trend from 2000 is showing us at about 0.28 C at this time on the same baseline, unless I have been wrongfooted.

So maybe saying what Leake said is not exact, but it is accurate, indicating the models are running high by about 0.016 C per year. This agrees with other posters such as Lucia, in general and approximately. So, taking into account what was actually said at 10.3 and 10.4 in AR4, it cannot be claimed at present that the models have not been “falsified.” And as pointed out, it would be expected over such a short number of years that such could occur, and that the final trend could actually be higher than what the AR4 stated. It is too early to claim victory, but not too early to point out that the odds are that the models are running high compared to what the IPCC actually stated.

“(4) There are now more models. In 1992 the model results quoted probably depended overmuch on a GISS model, which then had sensitivity on the high side. (Caveat: transient response sensitivity is probably more useful than equilibrium sensitivity for looking at shorter scales and I’m not so sure of the numbers obtained in 1992.)”

hmm. ModelE has a sensitivity of 2.7. Not sure what you are referring to.

This model, which turned out to have a lower sensitivity, was developed around 1997 I think. If you read one of the Hansen 1997 papers he talks about using a new model with a lower sensitivity. Before that it was something like 4–4.5ºC.

However the projections in the FAR were not very dependent on the spread of model sensitivities – the ‘best estimate’ was produced by comparing model experiments with observations and scaling to infer a climate sensitivity of 2.5ºC (2.1ºC if compared to current 2xCO2 RF formulation).

There are many trends found in nature and the works of man that have the characteristics described for temperature. They are sine waves of varying amplitudes with sawtooth irregularities.

Should the IPCC have used better description of the waveform their models create? Yes, but I’ll bet they didn’t really expect general readership to get involved.

If the frequency of the temperature trend is the long term forcings (mostly from things that happened to the ocean 800 years ago) and the sawtooth irregularities are what we can measure with satellites, ARGOs and the occasional thermometer at the airport, this waveform looks like a gazillion (pardon the technical description) others.

If you are convinced that you have separated the sawtooth irregularities from the actual signal, then you can set them aside when doing major calculations.

If on the other hand there is some uncertainty as to what forms part of the sawtooth variation and what is part of the underlying signal, you need to pay pretty close attention to all components of the information you receive.
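A minimal way to make that separation explicit, assuming the period of the short-term component is already known, is a centred moving average whose window matches that period. The series below is entirely synthetic (a hypothetical linear trend plus an 11-step sine standing in for the ‘sawtooth’):

```python
import math

def moving_average(series, window):
    """Centred moving average; drops window//2 points at each end."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# Toy series: a slow "signal" (linear trend) plus a fast periodic "sawtooth"
n = 120
signal = [0.01 * t for t in range(n)]
sawtooth = [0.2 * math.sin(2 * math.pi * t / 11) for t in range(n)]
series = [s + w for s, w in zip(signal, sawtooth)]

# Smooth with a window matched to the assumed period of the fast component
smoothed = moving_average(series, 11)

# Because the window spans exactly one cycle, the periodic part averages to
# zero and the smoothed series recovers the underlying trend almost exactly
err = max(abs(smoothed[t] - signal[t + 5]) for t in range(len(smoothed)))
print(f"max deviation of smoothed series from true trend: {err:.2e}")
```

The cancellation is exact only because the window spans exactly one cycle. If, as the comment says, there is uncertainty about what belongs to the sawtooth, the fast component leaks into the recovered signal and the separation is no longer clean.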

Chris is now deploying the AGW whack-a-mole defense: When things are going the way believers want, the IPCC is the paragon of climate science and those who dispute that are denialist scum. When the IPCC gets in trouble, the same believers claim it never even makes a prediction, and those who claim otherwise are liars.

Chris Ho-Stuart: It is also a misrepresentation of what the IPCC does to speak of IPCC “hypotheses”. The IPCC is not a research body. They don’t do scientific work. They summarize it. They make statements with associated confidence levels, based on the combined work of a lot of scientists, but these are not in the form of a “hypothesis”, but a conclusion. Whether you agree with them or not, the distinction matters.

That is nonsense. The hypothesis is there whether you want to assign it to the IPCC or not.

Short term variations like this ARE a matter of scientific interest and hypothesis and competing ideas. They are not a matter of a clear consensus. And neither does the IPCC make strong claims or hypotheses on them — other than the statement that they ARE comparatively short term and that we expect the longer term trend to continue upwards.

That is a nice clear statement of the hypothesis. If the short-term lower-than-predicted temperature record continues, then the hypothesis will be discredited. Right now, all we can say is that the prediction based on the hypothesis does not have a demonstrated record of accuracy.

Right now, all we can say is that the prediction based on the hypothesis does not have a demonstrated record of accuracy.

If there was no prediction for the first 12 years of the 21st century, then it seems a bit misleading, in 2012, to say that we do not have a demonstrated record of accuracy for predictions of average temperature change for the 21st century, or through 2050, or even through 2025.

Of course, I would expect that if temperatures during these 12 years had increased at a rate consistent with the predictions of average increase through longer time periods, some would say it was evidence that the predictions were accurate.

The predictions are information. The record of temperature trends over the past 15 years is information.

The implications of the predictions that were made are worthy of discussion. And discussion of the meaning of the trends and predictions, and implications, without a mention of the caveats that were made:

“The rise will not be steady because of other factors.”

is not particularly useful. I would rate it as being of about the same order of meaning as a discussion of the hypothesis without discussing the temperature trends subsequent to the predictions that were made.

So the question I would have is why didn’t Judith or Leake mention the caveat the IPCC put in right there along side the predictions they (Judith and Leake) spoke of?

Joshua: If there was no prediction for the first 12 years of the 21st century, then it seems a bit misleading, in 2012, to say that we do not have a demonstrated record of accuracy for predictions of average temperature change for the 21st century, or through 2050, or even through 2025.

Whatever you wish to call them, they have no demonstrated record of accuracy. They were presented to the public as though they were accurate descriptions of what would happen imminently and persistently without immediate action. Only after it was clear that they were wrong was there the increased “clarity of communication” that they did not really rule out 12 years of nearly no increase in mean temp, and that they were not intended to tell us what would really happen without action.

What’s more, it’s still extremely important to act now (we have been warned) because the “non-prediction” now is that the non-warming can’t last, though the warming may not be “steady” by some post-hoc redefinition of steady.

Chris Ho-Stuart has been attempting to define “non – steady” in a way that no one took it when the IPCC report was written.

In the eyes of the general public you are making a distinction without a difference.

We/they do not really give a toss about whether the declaration is made by the IPCC in its capacity as the IPCC or by the individual members of the IPCC in their individual capacities and then summarised by those self-same members in their capacities as members of the IPCC. It doesn’t matter one jot which hat they are wearing at the time. They are all climatologists, and (dare I say) members of the consensus.

The crucial point that you are all dancing around and refusing to confront is that the idea of ‘Trust us, we’re climate scientists’ has taken another huge battering in the public mind. The last two years – since the Blessed Liberation of the Climategate 1000 and the Gods dumping snow and other s**t all over Copenhagen – has seen endless further revelations that the theories are ‘incomplete’ (at best) and that climatologists are no more trustworthy than the average Joe Sixpack,…and in some cases quite considerably less so.

And – in Europe at least – several unusually harsh winters explained away by the faithful as yet more evidence that global warming is real and that we’re all freezing because the planet is getting dangerously warm (?) are in danger of turning you all from the high status of ‘trusted advisers’ a few years back into laughing stocks.

You should be very worried by this, because, government funded as you all are, when hard financial times come – as they have – the easiest way to make cuts is to take away money from the softest targets. And climatology is now one of those.

I am just amazed that you collectively have no response other than to reassert that you are right and deserve to be trusted. And now to start legalistically rewriting history when your predictions don’t turn out right. I don’t remember there being any such uncertainties when you were proclaiming ‘The Science is Settled’ and other such BS.

And it is not good enough to say individually ‘it wasn’t me guv’. ‘Al Gore misrepresented my views’, ‘We always knew that the warming would stop’ and all that. Until a few weeks ago you were all proudly boasting about how much of a consensus there was. It was (occasionally) your ‘killer punch’. 97% of you all agreed. The flipside of 97% agreement is that 97% of you also have to take the rap.

So keep on arguing about the exact wording of who said what to whom and when – and whether they were acting in their individual capacity or collectively or as members of the consensus. It really doesn’t matter any more. The general public will look on with amused bewilderment as you try to argue that black is white, that hot is cold and that you deserve our trust.

From Hero to Zero is but a short downhill slide. And you guys are starting your descent and accelerating like the slope of a hokey stick.

“The general public will look on with amused bewilderment as you try to argue that black is white, that hot is cold and that you deserve our trust.”

I think you are absolutely right, Latimer. They will wonder why climatology measured average temperature at the surface yet compared it with solar fluctuations in W/m² at the top of the atmosphere. Peaches and plums.

If they had only measured the incoming solar energy’s percentage of distribution through an atmosphere with a mixed albedo.

In complete contrast to the point I want to make, RSS have published in the last 24 hours their data showing precisely no warming at all since 1997.

They also, for those who have been waiting to celebrate for a long while [genuine alarmists who want to be proven wrong] show that for the last 15 years – since the beginning of February 1997, there has been global cooling.

Perhaps this isn’t in contrast to my point after all, which is that such things are essentially meaningless. The globally averaged temperature anomaly has a tiny modicum of virtue solely because there is precious little else. To quote thousandths of a degree is insanity, whereas a tenth or two is just a basic misunderstanding of noise, averages, chaotic systems and the vaguest of measuring coverage.

It seems to me to make some sense to say that the 20th century saw a rise in temperature of approximately three quarters of a degree. But that seemingly included three 30-year periods that were different to what came before and after. And even then, these observations are barely discernible from a realistic distance.

It strikes me as a little irrational – though very human – to attempt to extract genuine meaning from 15 years of data. A third of a century? Possibly, maybe, just about – depending on the strength of the signal, but tempting as it is, I think staring hard at messy little bits of noise (from less than half of that time) hoping to see signs and wonders is a little too much to ask.

Check this out from Richard Lindzen – not because it is partisan [it isn’t, in this context] but because he uses visual means to make the very same point as I have tried to do.

Very compelling visuals in the Lindzen video, Anteros, thank you for posting that.

However one has to wonder whether there was anyone in that audience both competent in statistics and willing to challenge Lindzen on the following omission from his presentation.

If each point in the right slide is obtained as the average of 100 more or less normally distributed points in the left slide, the error bars shrink by a factor of sqrt(100) = 10. Lindzen did not mention this.

I took Lindzen to be implying, both by this omission and his subsequent remarks, that in fact they don’t shrink, and that it is therefore misleading to zoom in on the right by a factor of sqrt(n) (n the number of points on the left producing one point on the right) without also increasing the length of the error bars in proportion.
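The sqrt(n) point is easy to check numerically. A minimal sketch (with hypothetical numbers, not Lindzen's actual data): average batches of 100 normally distributed points and compare the spread of the averages to the spread of the raw points.

```python
import random
import statistics

random.seed(42)

# Each point on the "zoomed out" plot is the mean of n noisy points.
# The spread of those means shrinks by roughly sqrt(n) relative to
# the spread of the raw points.
n = 100          # points averaged into each plotted value
trials = 2000    # number of averaged points to simulate
sigma = 1.0      # standard deviation of the raw noise

raw = [random.gauss(0.0, sigma) for _ in range(trials)]
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(trials)]

sd_raw = statistics.stdev(raw)
sd_means = statistics.stdev(means)
print(f"raw sd ~ {sd_raw:.3f}, sd of {n}-point means ~ {sd_means:.3f}")
print(f"ratio ~ {sd_raw / sd_means:.1f} (theory: sqrt({n}) = 10)")
```

So if the error bars on the right slide really were left unchanged from the left, the zoomed-in view would overstate the uncertainty by roughly a factor of ten.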

Now imagine that McIntyre was in the audience. Would he have raised this point with Lindzen at question time, or would he have passed over it in silence?

Now further imagine that the speaker had been Mann instead of Lindzen, with the exact same talk, slides, and emphases, and ask again what would McIntyre have done.

It would be a very interesting poll to see who believes McIntyre would be just as likely to have raised this point with Lindzen as with Mann, and who believes otherwise. Especially if McIntyre himself were among those polled.

Non-linear and non-ergodic systems do not produce data that can be used for prediction. It may be possible, however, to separate some of the Earth systems which interact to produce climate and climate change, and some of these systems may well prove to be ergodic; any non-linearities may yield to discretisation techniques.

It may well be the case that only the systems that produce forcings, e.g. volcanic eruptions, solar winds, sunspot activity and the like, could be non-ergodic but still capable of yielding scenarios for climate modelling purposes.

“IMO, the standard 1D energy balance model of the Earth’s climate system will provide little in the way of further insights; rather we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models, and explore the complexity of coupled nonlinear climate system characterized by spatiotemporal chaos.”

No doubt about that; even a basic 3D model would be years ahead of the game.

The whole of Climate Chaotic Instability: Statistical Determination and Theoretical Background assumes a partial argument. Rather religiously, actually. It offers three hypotheses that explain 20th century climate variability and change, but nothing about 21st century hypotheses. It concludes that only the three investigated scenarios have merit in the scientific debate over the atmosphere.

What is its purpose, if only to reflect on a possibly invalid theory? The consensus is very much alive. No mention at all of any contrarian perspective.

It allows only three scenarios in the mix. What about macro-climatology?
As there are no verses in the book about it, scientists don’t have to consider it and can remain in a bliss of rhetoric.

Is Climate Science such a religion that it must issue fatwas against scepticism and logic? I can easily understand how difficult it was for Galileo to shift an incorrect scientific paradigm.

Climate Scientists can fall off the end of the earth whilst reason will remain firmly planted on this beautiful mother of an Earth.

If we have a predicted rate and we have real data, can we not work out the minimum degree of ‘noise’ in the system? If the model states 0.2 degrees per decade, then true minus modelled gives us the current noise. From this random noise we can work out the likelihood of different temperature swings, again based on true minus modelled. We can then see if we can fit 1900-2012 without a slope and see the probability of it occurring at random.
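A rough Monte Carlo sketch of that idea, with illustrative numbers only (model trend of 0.2 C/decade and white residual noise with sd 0.1 C are assumptions, and real interannual noise is autocorrelated via ENSO, which would make flat stretches more common than this suggests):

```python
import random
import statistics

random.seed(0)

# If the model says 0.2 C/decade and the residual noise is white with
# sd 0.1 C, how often would a 15-year least-squares trend come out
# flat or negative purely by chance?
trend_per_year = 0.02   # 0.2 C/decade expressed per year (assumed)
noise_sd = 0.1          # assumed interannual noise, C
years = 15
trials = 10_000

def ls_slope(series):
    """Simple least-squares slope of series against 0..n-1."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(series)
    sxx = sum((t - xbar) ** 2 for t in range(n))
    sxy = sum((t - xbar) * (y - ybar) for t, y in enumerate(series))
    return sxy / sxx

flat_or_cooling = sum(
    ls_slope([trend_per_year * t + random.gauss(0.0, noise_sd)
              for t in range(years)]) <= 0
    for _ in range(trials)
)

print(f"fraction with zero/negative fitted trend: {flat_or_cooling / trials:.4f}")
# Under the white-noise assumption this fraction is tiny; relaxing that
# assumption (autocorrelated noise) inflates it considerably.
```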

In order to derive a temperature shenomaly dT, one needs to know an initial temperature T1 and an end temperature T2.

So I ask, what was the global temperature T1 in the year 1750, 1850, 1900 (you choose)? How was it arrived at, and what a shmuck you are if you think I’m going to accept an answer to hundredths of a degree.

AGW stands on the pillars of a SHMUCK SHENOMALY.

Highly educated doctors and PhDs but not an ounce of common sense amongst them. It’s bloody well embarrassing.

Obviously they should have put error bars on the 0.2 degrees per decade. If you use two years a decade apart, and the interannual variability is several tenths of a degree, you are not going to get an 0.2 degree trend very accurately. I don’t know how people are using 1997 to compute a trend, but they should average at least a few years on each end to reduce the error bars to something where 0.2 degrees would be detectable. I always recommend at least a 10-year average which gets rid of solar cycles too (hint: not good starting at a solar max and ending in a min 15 years later). Using decadal averages the last decade was 0.15 degrees warmer than the previous one, and the error bars are actually smaller than the trend.

Statisticians, and I’m not one, should realize that when the detrended standard deviation is 0.1 degrees (which it is close to), you can’t get an accurate trend of tenths of a degree per decade from two years separated by ten years. A zero trend is just as likely as an 0.2 degree trend if the real trend is 0.1. Averaging more years reduces the standard deviation by the square root of the number of years, so by the time you average ten years the standard deviation for a decade is down to 0.03 degrees. Now you can get a trend from two decades with much smaller error bars, and a trend of 0.1 degrees would be more likely to be seen.

What is special about an annual average? A decadal average is just as useful and has smaller error bars in addition to removing sunspot cycles quite well. Maybe you prefer the raw daily or hourly data?
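The arithmetic behind the decadal-average argument can be laid out explicitly, using the ~0.1 C detrended standard deviation quoted above (the numbers are the commenter's, the error-propagation steps are standard):

```python
import math

# Standard error of a decadal mean, assuming ~independent annual values
# with sd 0.1 C, and of the difference between two decadal means.
sd_annual = 0.1   # detrended interannual sd, C (from the comment above)
n_years = 10

se_decade = sd_annual / math.sqrt(n_years)   # sd shrinks by sqrt(10)
se_diff = math.sqrt(2) * se_decade           # error of a difference of two means

print(f"standard error of one decadal mean: {se_decade:.3f} C")
print(f"standard error of a decade-to-decade difference: {se_diff:.3f} C")
# A 0.15 C decade-to-decade change is then ~3 standard errors, i.e.
# distinguishable from noise, whereas two single years a decade apart
# (error ~0.14 C on the difference) are not.
print(f"0.15 C difference in units of that error: {0.15 / se_diff:.1f} sigma")
```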

While there may be some die-hards who haven’t gotten the word yet, we know now that our climate cannot be successfully modeled.

Too many uncertainties

Too many unknowns

Too much chaos.

For a good treatise on WHY model predictions – especially those covering longer time periods – do not work, and why “experts” have a worse chance of predicting something correctly than “non-experts”, read Nassim Taleb’s The Black Swan.

Manacker, you convey a mix of Luddite, Malthusian, and Cornucopian perspectives in the way you express exactly what you would like to see. This opinion of yours is on the Luddite side.

Taleb’s book is not a dire warning of hopelessness. It is in fact a motivating influence and call-to-arms for engineers, scientists, mathematicians, and statisticians to get their act together. No one should be constrained to using Normal or Gaussian statistics any longer for environmental models. The natural world contains many fat-tail behaviors that were previously ignored because Normal thin-tail statistics were the way that we were taught.

That’s what Taleb was saying. Uncertainties are bigger than we think, but it has no relevance to actually making predictions. You just have to use the correct fat-tail statistics.

Unfortunately, as Taleb wanted to sell books, he didn’t put a lot of math details into The Black Swan. It was left to the astute readers to figure this out. I use Taleb’s ideas heavily and they should be part of any uncertainty analysis toolbox.
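A toy illustration of the thin-tail/fat-tail distinction (my own sketch, not an example from The Black Swan): draw the same number of samples from a Gaussian and from a fat-tailed Student-t distribution with 3 degrees of freedom, and compare the most extreme values each one produces.

```python
import random

random.seed(1)

# Compare extremes from a thin-tailed (Gaussian) and a fat-tailed
# (Student-t, 3 df) distribution with the same number of draws.
N = 100_000

def student_t(df):
    # t variate = standard normal / sqrt(chi-square(df)/df);
    # chi-square(df) is Gamma(df/2, scale=2)
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)
    return z / ((chi2 / df) ** 0.5)

gauss = [random.gauss(0.0, 1.0) for _ in range(N)]
fat = [student_t(3) for _ in range(N)]

print(f"Gaussian:     max |x| = {max(map(abs, gauss)):.1f}")
print(f"Student-t(3): max |x| = {max(map(abs, fat)):.1f}")
# The Gaussian maximum stays in the 4-5 sigma range; the fat-tailed
# sample routinely throws out events an order of magnitude larger --
# Taleb's "black swans" that thin-tail models rule out by construction.
```

The point for environmental models is that an estimator or risk bound calibrated on the Gaussian column badly understates how extreme the fat-tailed column can get.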

Classifying my statement into some arbitrary categories you have picked out does not change the fact that the models cited by the IPCC were unable to correctly project the temperature for the first decade of this century, as I stated (and as Jonathan Leake points out).

Since climate models have demonstrated that they are unable to forecast our climate one decade into the future, should we have any confidence in their ability to project climate changes over several decades?

A simple YES/NO answer is OK, followed by a one sentence reasoning for why you chose this answer.

It was not about climate change at all – it was simply about the utter futility and absurdity of trying to make long-term predictions in chaotic systems with more unknowns than knowns.

No. There is not a lot of “math” in the book (Taleb is not a “nerd” – and the book wasn’t written for “nerds”).

But there is a whole lot of “common sense”, which (unfortunately) is missing in the projections of future climate change being sold by IPCC, starting with the failed projection of 0.2 degC warming for the first decade of the century (topic of this thread) and going on to the forecasts of 1.8 degC to 4.0 degC warming by the end of this century.

If climate models are no good at predicting the future even ten years out, what use are they at all?

I think the word ‘even’ here suggests you are leading yourself astray in a belief that short-term prediction should be easier than long-term prediction. Analogously, you could ask why quantum mechanics is useful if it can’t predict the outcome of a single experiment.

Short-term prediction carries a couple of major complications I can think of right now:
1) dependence on initial conditions to define the evolution of internal variability mechanisms. Similarly to weather forecasting, efforts can be made to set up a model to match initial conditions at a certain point in time, but they are likely to break down pretty quickly because we lack the quantity and quality of data to be precise enough in the setup (and possibly because the chosen model does not accurately produce variability similar to that observed on Earth).
2) possibility of influence from unpredictable factors (volcanic eruptions, solar variability).

These factors carry less importance on multi-decadal timescales, in a probabilistic sense at least. A string of very large volcanic eruptions or a long period of extremely low solar activity would carry some significance but are not likely occurrences within the frame of, say, 50 years. I can’t remember where, but I’ve seen it discussed that a ‘sweet spot’ for climate projections would be about 30-50 years. Shorter than that, unpredictable factors can have a considerable effect. Longer than that, the particular scenario (i.e. what humans will do) becomes an important factor and there is also the potential for dynamic ‘surprises’.

On a more general point, projections of the future are just one possible use of GCMs. They are also indispensable tools for exploring factors which affect climate.

‘you could ask why quantum mechanics is useful if it can’t predict the outcome of a single experiment’

That you phrase the question that way shows that you don’t know much about QM.

But my question about models still stands. Let me phrase it another way.

We have spent something like $100 billion on climatology in the last 25 years. The purpose has been for us to understand better what will happen in the future wrt climate. And the final outcome of all that $100 billion is the climate models. Everything else is just part of the ‘scaffolding’ that goes into the construction of those models.

And it is apparent that they don’t work very well – if at all – on the stuff we want them to do. It is even beginning to seem likely that they can never be made to do the things we would like them to do. That the nature of the climate system means that it is as insoluble a problem as the behaviour of an individual wave/particle in the QM world.

So, before we write off our $100 billion as just money wasted, I wonder if there are any side benefits of these models that we can point to and say ‘well at least we got ……’. Much like some think that going to the moon was a total waste of money but that we got teflon saucepans as a spin off.

So – are there spin offs from climate modelling? Have we found out (by accident perhaps) anything useful from them?

And the final outcome of all that $100 billion is the climate models. Everything else is just part of the ‘scaffolding’ that goes into the construction of those models.

I’m now wondering what your understanding is of what climate models are, how they are built and how they work. Your characterisation simply doesn’t make sense to me – in many ways climate models are the starting point for research into the climate system.

Climate models are built using quite simple (or in some cases, not quite so simple) rules such as the law of gravitation, the Planck function, the ideal gas law, pressure gradients etc. More recent models incorporate atmospheric chemistry, e.g. methane oxidising to CO2. The large scale complexity that emerges from these rules is due to the number of different objects/forces which are interacting according to them.

I was listening to a Feynman lecture the other day and he was talking about a particular theorem related to Quantum Mechanics which had been known for about twenty years but never tested (this was in the 60s). The mathematics described by this theorem, when applied to a real situation, became so complicated that, at the time, the theoretical consequences couldn’t be calculated. Without an ability to model the consequences of a theory it can’t be tested. This is what climate models, GCMs in particular, offer – the ability to explore the consequences of physical laws that we think are having an effect on climatic systems so that they can be compared with observations. For example, if you ran a GCM without simulating the rotation of the planet there would be a huge difference in weather and climatic patterns.

Where modelled consequences clearly don’t match observations the differences can be used to explore what’s missing or not quite right – perhaps the modelled elevation of land in certain areas is not quite right, causing a difference in the flow of wind currents, or maybe the grid resolution of the model is too coarse for certain features to properly resolve.

Depending on what the problem is found to be the model can be improved or the error in the model can be quantified and taken into account in any analysis involving it.

And it is apparent that they don’t work very well – if at all – on the stuff we want them to do.

They seem to produce a generally good approximation of Earth’s climate. Not sure what you want them to do.

So, before we write off our $100 billion as just money wasted, I wonder if there are any side benefits of these models that we can point to and say ‘well at least we got ……’

Firstly, even if it is the case that $100bn has been spent on climate research very little of that would have gone on climate model development. Probably the largest expense in modelling would be purchasing and upkeep of the supercomputers used to run them.

Regarding the side benefits, well the end game for climate science would be the potential for geoengineering, of our own planet or perhaps another one in the distant future. The various space programs may eventually be able to get us to another planet but the chances of encountering a planet habitable to humans would be greatly improved if we can cause it to be habitable. Along the way climate research has aided in vast improvements to forecasting of weather + El Nino and Monsoons.

Thanks, I think I understand – and have always understood – how models are constructed. Many years ago I was briefly involved in similar efforts, so I am not a complete newbie.

But you misunderstand the ‘we’ that I am using. ‘We’ in this case are the taxpayers. The people who ultimately pay your grants and bills and expenses and all that. And we (in that sense) only really fund climatology because we’d like an answer to the question about whether the climate is really changing in ways that might be detrimental to humanity, and if so when it will happen and how much it will be. And maybe to help to give some ideas about what (if anything) we can/need to do about it.

We ask you to find this out on our behalf and give you a pot of money, expecting you to come back with the answers.

And you haven’t.

Instead you’ve constructed a load of models (do we really need more than 20?) that can’t even tell us about the climate a few years out. They may be extremely intellectually interesting, crafted by the finest minds (though everything I read tells me that they are more thrown together like a heap of junk and that it’s a miracle if they can be run twice without major real-time surgery because the coding and methods are so archaic) and beautiful in their elegance.

But all those things are irrelevant. They do not do the job we have paid for them to do. They do not fulfil your side of the contract. You have had $100 billion dollars; we have got nothing. They are junk. If you were a commercial organisation you’d be so deep in lawsuits as to be drowning.

As to your belief that only a small part of the $100 billion went directly on climate modelling, that is about as daft as saying that the cost of going to the Moon was only the cost of the Lunar Module, since that was the only bit that actually got there. All of climatology – satellites, paleo, philosophy, datasets – whatever it may be, is spent in the end to support the models: to help you guys make better models and to answer the questions posed above. Splitting out one particular area of specialisation and saying ‘well we didn’t get all the cash, it must be Joe down the hall who did’ is a cop out. It is all money for climatology whatever its precise allocation within the system.

And your spin offs onto other planets are so far into the future as to be little more than a wishlist. That we can now forecast some weather events better is indeed good news. But could we not have got the same result more quickly and more cheaply by just improving weather forecasting?

Economic times are harder. Budgets are under pressure. In all government expenditure there is increasing pressure to deliver excellent value for money. This is as true of ‘research’ as it is of welfare or the military or anywhere else in public service.

Seems to me that you guys have been left alone with your sandpit for far too long developing whatever caught your fancy and have taken your eye off the big picture. We don’t give you all this money to write papers, or to Kill the Deniers or to go to conferences. We give it to you to solve a particular supposed problem. And you haven’t done so.

Time to start either getting the effort back on the right track or to admit defeat and resign yourselves to the fact that it simply can’t be done. Your choice – but one you will have to make soon.

“Our records for the past 15 years suggest the world has warmed by about 0.051C over that period.”

I have my doubts that scientists can measure global average temperature to within tenths of a degree at any given time. Does anyone seriously believe that “we” actually know the trend over 15 years to within 5 hundredths of a degree? Seriously?

I believe the instruments in question are tree rings. And they work by a mystical process called ‘teleconnection’. Somehow they are able to give a precise record of temperatures hundreds of miles from their location.

The theory of teleconnections has been written up by Drs C. H. Arlatan and S. H. Yster and is often cited in the climatology literature. Usually just after the horoscope page.

Free at last, free at last; thank God Almighty, hypothesis III is free at last. We can actually talk about it. We can discuss the math of Tomas Milanovic and the physics of Robert Ellison. The Tsonis paper can now be read and reread as a construct that is an alternative to the trace gas radiative transfer model. Is this really the promised land?
Now really, I do have some tidbits that I have harbored, wondered about, and seem to fit with the implications of Hypothesis III:
VS on Bart Verheggen’s blog in March 2010 demonstrated that for the time series of 1880 to 2008, temperature fell within natural variation, clarifying for me the falsehood of “unprecedented warming” in the late 20th century. The other tidbit was an observation: global temperatures responded in a homeostatic way to perturbations from the volcanic eruption of Mt Pinatubo and the El Nino of 1998. Each time the temperature was forced up or down, global temperatures returned to their previous baseline. Homeostatic mechanisms, as applied to climate change, would mean that climate sensitivity is very low, i.e., near zero. Therefore arguments and calculations of climate sensitivity, particularly at the 3.5 C guesstimate of the IPCC, didn’t make sense to me. Now climate scientists, other than the Team of course, can pursue identifying the precursors of abrupt climate changes and eventually be able to predict them. Pursuing this line of research is more likely than not to lead to better decadal weather and climate forecasts. Can we put the jibberjabbing aside for a while and concentrate on some science?

Which pretty much applies to climate science. The part about CO2 is correct, but the part about positive forcings is yet to be proven correct and the part about it all leading to disaster is most certainly overblown to the point of being science fiction rather than science fact.

Mt. Pinatubo generated particulates which eventually fell to the ground due to gravity.

Yes, I think the conceptual error can be described by analogy to the basic laws of motion. The OP is making an assumption that volcanic eruptions apply a force to planetary temperature, which is then free to do as it likes within the reference frame of the planetary climate system, and that something appears to be causing it to flip back into place.

The reality is that the volcanic eruption applies a force, through the release of reflective aerosols into the stratosphere, moving the planetary temperature, but then a force of equal magnitude is applied as the aerosols are scrubbed out of the atmosphere. There are no clear homeostatic implications – flipping back into place is simply an expected consequence of the sum of forces.

The Times articles – I saw it in The Australian – had temperature graphs that show monthly data. It all peaked in early 1998 as a result of the 1997/98 ENSO dragon-king. A dragon-king – I have said before – is an extreme event associated with a chaotic bifurcation. An extreme ENSO event happened at the 1976/1977 ‘Great Pacific Climate Shift’. So we have a couple of examples in the record that are associated with climate shifts at decadal scales. They are linked to oceanographic and global hydrological shifts that are fundamentally important to human societies and the natural world. Oceanographers and hydrologists have been researching these things for decades. My own journey began in 1990 when I read an article on Flood Dominated and Drought Dominated Regimes in north-east Australia. The article (Erskine, W.D. and Warner, R.F., 1988, Geomorphic effects of alternating flood- and drought- dominated regimes on NSW coastal rivers. In R.F. Warner ed.) was inspired by an observation that rivers changed form in the late 1970’s from a high energy braided form to a low energy meandering form.

ENSO determines 80% of temperature variability in the tropics (McLean et al, 2009) and 70% globally. So the record is complex to start with. There is decadal variability to ENSO and the Pacific more generally and they are associated with the trends of cooling and warming seen in the 20th century. Here is a graph from which this background variability has been removed.

It shows moderate warming of 0.08 degrees C/decade in the 50 years to 2000. One thing to keep in mind is that the background is considerably more variable than we have seen in the 20th century. This can be seen in, for instance, the 11,000-year ENSO proxy.

There is complex and dynamic behaviour at all scales – but in one sense it is very simple. It is all about energy: energy in, less energy out, equals the change in energy stored in Earth’s climate system. The problem is the data – SORCE, CERES and ARGO all start post-2000 – so they miss the critical shift around the end of the last century. Here is the RSS plot – if you start from the big La Niña in 2000 there is a temperature rise for the decade. Extra warmth is found in CERES and in the ARGO deep-ocean data. The problem is still the shortness of the record and how much useful information it contains about even short-term events.

There is data at ISCCP-FD and at the Earthshine project that shows the 1990 shift. My feeling is that the cloud-related shift to a cooling influence will intensify as La Niña events increase in intensity and frequency for the next decade or three.

There are also intriguing suggestions that UV changes (not TSI) at the poles are implicated in mid latitude climate change (Lockwood et al 2010). Could this be the missing factor in little ice age dynamics regardless of whether the Thames freezes?

A day after China barred its airlines from complying with what many consider a tax, the head of the International Air Transport Association (IATA) warned that several nations view the EU scheme as an “attack on sovereignty”.

“Non-European governments see this extra-territorial tax as an attack on their sovereignty,” International Air Transport Association (IATA) director general Tony Tyler said in a speech to the European Aviation Club.

This issue has been canvassed previously. From my memory, several posters from the EU (or maybe England) supported the new EU tax on the basis that it only added about $30 to a trans-Atlantic flight.

I pointed out that this tax covered non-EU land and oceans on long-haul flights. I now have an answer on the additional cost of a return Sydney-EU-Sydney flight: about AUD$700 (Qantas flight fares, February 7).

I should also point out that the Euro is a cot case (and the British pound is closer to this than is comfortable), so AUD$700 is considerable.

I really hope that retaliation occurs. Perhaps we may see the Chinese buying old European castles at knockdown prices and turning them into fried rice outlets :)

Not a single reference to Earth Orientation Parameters in the article & comments. Every day it’s looking more & more like no one or almost no one who participates in this forum is serious about understanding natural climate variations. Particularly concerning is the apparently popular notion that hypercomplexity (in the mathematical sense) cannot be simple.

Ride ’em cowboy. Gets so the irregularities in the Earth’s rotation is as hard to ride as a bucking bull from the western plains. No wait – that’s the bourbon whiskey. Maybe you should give up drinking. As for hypercomplexity – all we need is good ole Max Ent and a power distribution – gives us a fat head or a fat tail. I just keep getting the 2 mixed up. We don’t need no stinkin’ dynamical complexity, high faluting butterfly talk, bifurcated phase space and a whole lot of city slicker glop about autocorrelation and dragon-kings. A cowboy just needs a fat head (fat tail?) to express the untold vicissitudes of the soul. Dang nat city slickerts will fall for it every time.

It’s a shibboleth. What part of mad theory didn’t you understand? What the hell have hypercomplex numbers to do with anything in the real world? Oh for Christ’s sake. Either say something sensible or amusing.

Now I know where that missing heat went. To the deep pressured depths of the oceans.

“The pressure of the atmosphere and bodies of water, has the general effect to render the distribution of heat more uniform. In the ocean and in the lakes, the coldest particles, or rather those whose density is the greatest, are continually tending downwards, and the motion of heat depending on this cause is much more rapid than that which takes place in solid masses in consequence of their connecting power. The mathematical examination of this effect would require exact and numerous observations. These would enable us to understand how this internal motion prevents the internal heat of the globe from becoming sensible in deep waters.”

– General Remarks on the Temperature of the Terrestrial Globe and the Planetary Spaces; by Baron Fourier.

Not joking about complex numbers Chief. Quite the contrary. This is one of the most serious problems in the whole climate discussion. Ignorance won’t make it go away, even if the ignorance comes from “experts”.

Tsonis et al. (2007) is little consolation for skeptics. His definition of a major climate shift in 1976 occurred after a lull such as this one. However, his shifts and lulls have an amplitude of 0.1 degrees, and are therefore washed out in a longer-term warming trend. Anyway, according to Tsonis, we would be due for another shift that will increase the temperature rapidly.

I linked to a graph of Tsonis ‘washing out’ the natural variability in the 20th century – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/ – the residual is about 0.08 degrees C/decade and I am a little tired of repeating myself for people who are psychologically unable to process this fact.

There are 2 relevant papers – amongst a plethora of excellent work.

A new dynamical mechanism for major climate shifts
Anastasios A. Tsonis, Kyle Swanson, and Sergey Kravtsov

‘This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature.’

The periods last 20 to 40 years in the proxy record – and I don’t think that the 20th century is an adequate handle on the limits of natural variability. You’re a cowboy that gets dragged kickin’ and screamin’ to the dance Jim.

Here is the discussion regarding the above question in the climate emails:

1) … to argue that the observed global mean temperature anomalies of the past decade falsifies the model projections of global mean temperature change, as contrarians have been fond of claiming, is clearly wrong. but that doesn’t mean we can explain exactly what’s going on.

2) Here are some of the issues as I see them: Saying it is natural variability is not an explanation. What are the physical processes? Where did the heat go? We know there is a build up of ocean heat prior to El Nino, and a discharge (and sfc T warming) during late stages of El Nino, but is the observing system sufficient to track it? Quite aside from the changes in the ocean, we know there are major changes in the storm tracks and teleconnections with ENSO, and there is a LOT more rain on land during La Nina (more drought in El Nino), so how does the albedo change overall (changes in cloud)? At the very least the extra rain on land means a lot more heat goes into evaporation rather than raising temperatures, and so that keeps land temps down: and should generate cloud. But the resulting evaporative cooling means the heat goes into atmosphere and should be radiated to space: so we should be able to track it with CERES data. The CERES data are unfortunately wanting and so too are the cloud data. The ocean data are also lacking although some of that may be related to the ocean current changes and burying heat at depth where it is not picked up.

3) we can easily account for the observed surface cooling in terms of the natural variability seen in the CMIP3 ensemble (i.e. the observed cold dip falls well within it). So in that sense, we can “explain” it. But this raises the interesting question, is there something going on here w/ the energy & radiation budget which is inconsistent with the modes of internal variability that leads to similar temporary cooling periods within the models.

II. Multi-decadal oscillations plus trend hypothesis: 20th century climate variability/change is explained by the large multidecadal oscillations (e.g. NAO, PDO, AMO) with a superimposed trend of external forcing (AGW warming). The implications for temperature change in the 21st century are relatively constant temperatures for the next several decades, or possible cooling associated with solar. Challenges: separating forced from unforced changes in the observed time series, and the lack of predictability of the multidecadal oscillations.

Based on the data, the oscillation is predictable for the whole temperature record as shown:

Heh. One of the jibes of warmists about skeptics is that they appeal to utter unpredictability and chaos and uncertainty to dismiss all “scientific consensus” re climate. Now, backs to the wall, it is that same “unpredictability” to which they now must have recourse to justify the utter lack of scientific “falsification testing” validation of their Great Cause.

The approach I use in applying physics is to remind myself of the fundamental laws as often as possible. This helps to rule out all sorts of impossible scenarios. The basic law of energy transfer establishes the long-term trend.

Girma | February 8, 2012 at 2:00 am | Reply
…
Challenges: separating forced from unforced changes in the observed time series, lack of predictability of the multidecadal oscillations.
I have long considered that the “forced/unforced” terminology is suspect. In a complex recursive system, the distinction is rather arbitrary. It’s pretty hard to sustain in the face of the long-range observation that a dominantly CO2 atmosphere in deep pre-history was steadily and gradually turned into rock and hydrocarbons (mostly by life), and is now trickling back into play, at VERY slow relative rates.

“rather we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models ….”

Theorem: The steady-state dissipation of a thermodynamic system due to an energy flux between two isothermal surfaces equals the maximum rate of work possible for a Carnot engine operating between these same temperatures given the same energy input.

from which it directly follows that the maximum temperature change possible for a 3.7W/m2 forcing is 1.44K. This is merely a limit, not a solution. Unfortunately, the theorem’s derivation is mathematical, not rhetorical.
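For comparison with the 1.44 K figure, here is a minimal sketch of the standard no-feedback (Planck) response dT = F / (4·σ·T³). This is a textbook ballpark calculation, not a check of the quoted theorem; the 255 K (effective emission) and 288 K (mean surface) baselines are assumed representative temperatures.

```python
# Standard no-feedback (Planck) response to a radiative forcing F:
# linearize F = sigma*T^4 around a baseline T, giving dT = F / (4*sigma*T^3).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
F = 3.7            # canonical forcing for doubled CO2, W m^-2

dT_255 = F / (4 * SIGMA * 255.0**3)   # at the effective emission temperature
dT_288 = F / (4 * SIGMA * 288.0**3)   # at the mean surface temperature

print(f"no-feedback dT at 255 K: {dT_255:.2f} K")   # roughly 1 K
print(f"no-feedback dT at 288 K: {dT_288:.2f} K")   # roughly 0.7 K
```

Both values sit below the 1.44 K limit claimed above, which is consistent with it being a bound rather than a solution.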

Ian, how about looking at a time slice of the last 15 years? I calculate from the data (UAH data here for lower troposphere) a linear regression trend of 0.085 C/decade, with a 95% confidence range of 0.023 to 0.147 C/decade. (Simple white noise model calculated with Excel; a more sophisticated model taking autocorrelation into account would give wider error bars.)

Or the 30 years trend? 0.165 C/decade, with 95% confidence range of 0.144 to 0.187

So that data suggests that the short term 15 year trend is indeed slower than the longer 30 year trend, though the support of that inference is not particularly strong. It also confirms that the trend is still for warming, whether taken over either 15 or 30 years.
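The white-noise regression described in the comment above can be sketched as follows. This is a minimal illustration on a synthetic monthly anomaly series – the seed, noise level, and trend value are assumptions for demonstration, not the actual UAH data.

```python
import numpy as np

# Synthetic 15-year monthly anomaly series: a known trend plus white noise.
rng = np.random.default_rng(0)
years = np.arange(180) / 12.0               # 15 years of monthly time steps
true_trend = 0.0085                          # deg C / year  (0.085 C/decade)
anoms = true_trend * years + rng.normal(0.0, 0.15, years.size)

# Ordinary least squares fit of anomaly vs time.
slope, intercept = np.polyfit(years, anoms, 1)

# Standard error of the slope under the white-noise assumption;
# an AR(1) or similar autocorrelation model would widen this interval.
resid = anoms - (slope * years + intercept)
se = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())
half = 1.97 * se                             # t(0.975, df=178) is about 1.97
lo, hi = slope - half, slope + half

print(f"trend = {slope*10:.3f} C/decade (95% CI {lo*10:.3f} to {hi*10:.3f})")
```

Running the same fit over a 30-year window would shrink the interval roughly in proportion, which is why the 30-year trend above carries much tighter error bars than the 15-year one.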

He found hundreds of errors. When he pointed them out, IPCC officials simply brushed them aside. Stunned, he asked himself, “Is this the way they approached the climate assessment reports?”

Vahrenholt decided to do some digging. His colleague Dr. Lüning also gave him a copy of Andrew Montford’s The Hockey Stick Illusion. He was horrified by the sloppiness and deception he found.

“Stunned.” “Horrified.” Yes, if you read this book you will be appalled, astounded, dumbfounded, horror-struck, overwhelmed, shocked, etc. at the lies, fraud, deceit, dishonesty, inaccuracy, misrepresentation, etc.

Someone should write a computer program to crank out reviews in this vein of books and articles critical of climate scientists like Santer, Jones, and Mann, in case that’s not what’s already happening here. The thesaurus offers plenty of good words, and there’s a vast range of suitable phrases, with “should be incarcerated for wasting billions of dollars of taxpayer money” and “the biggest fraud since Piltdown man” barely scratching the surface.

We’ve been here before, plenty of times by now. Those who regard this sort of thing as a contribution to our deeper understanding of the climate have a way of resolving scientific differences that is rarely practiced in serious scientific circles.

Vaughan Pratt: We’ve been here before, plenty of times by now. Those who regard this sort of thing as a contribution to our deeper understanding of the climate have a way of resolving scientific differences that is rarely practiced in serious scientific circles.

Ah. but those who regard this sort of thing as a contribution to our deeper understanding of climate-related political debate are making a serious point that should not be ignored.

@MattStat Ah. but those who regard this sort of thing as a contribution to our deeper understanding of climate-related political debate are making a serious point that should not be ignored.

Matt, what are you saying here? Apologies in advance if I’ve misunderstood you.

I will grant you that political debaters should not ignore what to them is a serious point. But how about scientists who don’t feel they have anything to offer the political debate? Are they expected to understand the bitterness of the global warming pill, or explain the consequences of not taking it?

Wouldn’t science be better off if those who had never come near making it to the debate team, but who had gotten science grades good enough to get them into a good or even great school, were allowed to continue what they find themselves good or great at, and let those good or great at political debate focus their talents on the climate debate?

In a fair fight, those who don’t accept AGW could go about proving it false by finding competent gladiators for their side and challenging the other side to match them with their best gladiators. Courts of law are organized around that principle.

But that’s not how the AGW protesters have been going about it. Instead of issuing a challenge to the other side they’ve created their own kangaroo court by kidnapping those scientists they figured would be most hapless when out of their element, arming them with the same weapons their gladiators were trained on, ridiculing them, and then demanding before a massed crowd of onlookers that they show themselves undeserving of that ridicule.

The crowd that loved the Roman circus 2000 years ago is just like the crowd today that screams their approval when they see blood drawn in the arena, literally then but figuratively today. It’s like throwing Christians to the lions, with the climate scientists as the Christians and the faux scientists as the lions. Lions may not be competent scientists, but they’re far from dumb animals.

Today’s crowd can’t see the blood, but they can still smell it, and they love it!

Just to be completely contrarian about recent temperatures, I don’t buy the connection with ocean oscillations since it seems to me they’ve been pretty flat since 1990.

To quote Mug Wump on the long-running Amazon discussion group Global warming is nothing but a hoax and a scare tactic, “It’s the Sun, stupid” which he repeats ad nauseam.

We’re just now coming out of an odd-numbered solar cycle, namely 23, and embarking on 24. When exiting the even ones the temperature doesn’t go down much (no idea why, but it seems to be correlated with the magnetic alignment of the solar wind—this phenomenon has been going on at least as long as the 162-year HADCRUT3 record, eight evenly spaced instances). Also the exit from 21 was quite weak. 19 was stronger, but the last exit comparable to 23 was 17, which from 1940 to 1950 went down an impressive 0.2 °C after factoring out all other thermal impacts. In comparison cycle 23 only went down about 0.16 °C, not as strong as cycle 17 but enough to almost exactly cancel the CO2-induced warming, while the ocean oscillations stayed out of the picture as noted above.

Meanwhile the CO2-induced warming hasn’t gotten any weaker, and adding in the likely rise for cycle 24 should produce an impressive amount of warming during the decade 2010-2020! Also I don’t expect the ocean oscillations to remain flat for much longer, but to start going up after an extremely cool spell in the early 1970s, which will add yet further to the temperature in 2020.

Just my two cents (three if you count the ocean oscillations). Climate skeptics should feel free to chime in with their customary “expect an impressive amount of cooling during 2010-2020.” Check back here in a decade to see who was right.

You’re a bug-eyed loony bin. The only thing you’ll be checking into in a decade is a rehab clinic. If you want to provoke ding bat arguments for your amusement in response to your own deliberately ding bat arguments, you’ve come to the wrong place. This is such a shallow facade that it shows your utter contempt for lesser mortals who don’t share and can’t possibly appreciate your inestimable worth. I think you’re a ding bat with pretensions of idiocy. You are not even an idiot – you pretend to be an idiot when in reality you are just a brain-fried chimpanzee. You think you can play these games for your amusement and the bamboozlement of the herd. Everyone else is too polite or disinterested in your crapola to call you on it – but I will tell you what a crapulous piece of work you are with no dissembling at all.

Calm down.
Currently scientists figure all their laws and theories are absolutely correct (even though their models are crashing and burning).
What I find amazing is that as soon as some scientist comes out with a calculation, a model is built around it and it is used instead of our real planet’s parameters.

Do you know what 48 degrees latitude is?
It is actually a very important number!
This is where velocities and centrifugal force separate, as the angle of the planet in rotation is too steep and too slow to pull water south.
If you were to actually generate an orb and rotate it slowly while pouring water, the water would always want to go to the poles. Rotate it quickly and water flies off at the equatorial region, or anyplace at a 90-degree angle to the axis of rotation.

These theories are all temperature-data related and do not include any parameters in motion or with physical changes. Strictly temperature data.
The models are huge mistakes. Here is an example:
“Scientists say that the planet’s axis has shifted based on their models.”
Okay, let’s look at this with the actual planet. The core and axis are deep in the planet and incredibly dense. The crust floats on a magma cushion.
The actual event is that the crust has shifted and NOT the axis.

Science has generated many idiocies such as this – from not reviewing theories as technology changes, to the fact that 95% of science is still considered unexplored. Yet no one wants to look at anything which may affect the current consensus and the funding it generates.

It is interesting he would quote Mug Wump – ‘AGW is nothing but a hoax,’ thread on Amazon because that thread is immortalized on Board Reader as more emblematic of censorship.

For example, you see on Board Reader–e.g., “As Dr. Pielke, Senior has said, in a period when the oceans are cooling there is no global warming during that period. The oceans have been cooling according to the same methodology that the global warming alarmists would presume to use to elevate their conjecture of man-caused global warming from superstition to …”

But when you click on the link to the Amazon thread you see the above post was deleted by Amazon based on complaints from the Amazon community.

It is interesting he would quote Mug Wump – ‘AGW is nothing but a hoax,’ thread on Amazon because that thread is immortalized on Board Reader as more emblematic of censorship.

Was this anything more than some disgruntled commenter complaining at some point on Board Reader that Amazon kept deleting their comments? You portray it as a universally accepted fact, which is news to me. “Emblematic of censorship” is unsupportable libel.

What amazes (and sometimes even annoys) me about that long-running thread (heading towards 40,000 comments?) is just how few comments Amazon deletes from it. Incandecentbulb, can you name even one climate blog that deletes fewer comments? Do you really believe Climate Etc. deletes fewer objectionable comments than Amazon? (Judith, have you made that comparison?)

On Tamino’s Open Mind, for example, I’ve tried posting what I (like almost everyone who posts anywhere) had considered to be detailed and irrefutable scientific facts. This was not to make a nuisance of myself there, mind you – I hadn’t even heard of that blog before – but only to defend myself against attacks on me that had been posted there, that were brought to my attention weeks later, and that were clearly unsupportable. The moderator apparently mistook me for a “zombie” of the kind Steve Sullivan is allergic to, whose presumed goal in life is to make the lives of AGW alarmists miserable, and deleted what I wrote, saying that he “wasn’t going to argue with me” while declining to retract his attacks.

That Tamino thinks “Open Mind” is an appropriate name for his blog reflects poorly on his understanding of the concept.

Both sides of the climate debate maintain equally closed-minded blogs, such as WUWT, Greenfyre’s, Bishop Hill, RealClimate, JoNova, ScienceOfDoom, etc. On all these blogs, if you disagree with their basic premises, then no matter how logically consistent the basis for your disagreement there’s an automatic presumption of guilt until proven innocent. While I have less experience with Lucia’s The Blackboard and Tamsin Edwards’ AllModelsAreWrong, they seem better in that regard.

But I digress. My main point was just to defend Amazon against what seemed to me a particularly unjust criticism. (Disclaimer: through no fault but my own I have no interest, vesting or vested, in any part of Amazon itself.)

Color me firmly in Hypothesis II territory, which few seem to have staked out. Well, we will see – it is going to be interesting. The IPCC types are certainly resisting shifting toward this ground, though it would seem a natural shift in the face of future lack of warming (it describes my thinking to some extent). Meanwhile, the chaos guys may yet rule the day – but I’m betting against them. My money’s on flatness for the rest of this decade. Check in with you all in 2020.

billc, you are such a rebel :) I was thinking a comparison of II to III would be a good approach. You are still going to end up with a range of about 0.52 to 5.2 by latitude with a mean of about 1.48 C.

That’s just my estimate of course, I should leave the cipherin’ to the real mathematicians :)

billc, the way I see it, II is like the old farmer’s almanac. The better the past data and the length of the records are, the better it is for making predictions. III is geared toward determining the changes and the causes of the changes. Both have limits, volcanoes and stuff like that.

As long as both don’t have the same limits, comparing the two should highlight anomalies, the volcanoes and stuff, to improve the efficiency of each.

There will never be a perfect solution with either method, but comparison should improve the degree of confidence and point out the more significant unknowns.

Atmospheric circulation is an oxymoron to temperature data as it is the movement of our planetary gases.
Velocities and centrifugal force of our planet has generated a very fascinating phenomenon of circular motion. From the creation of snowflakes to the creation of tornadoes all require circular motion.

I have read the lead into the thread, and most of the comments, and I confess I am unimpressed. It seems to me that there are two vital issues, neither of which seemed to have been looked at.

The first issue is, has anyone detected a CO2 signal amongst the noise of the temperature/time graph? I have seen no evidence of ANY CO2 signal. If it is there, then where is it?

The second issue to me is the utility of any hypothesis. The reason for hypotheses is to be able to predict what will happen in the future. This then provides a basis for determining which hypothesis is likely to be correct. So given that there are three hypotheses, what do these predict will happen to global temperatures into the future? Then we can get the future data and compare prediction with actuality.

It seems to me that the most likely hypothesis is that there is no CO2 signal; increasing CO2 levels have a negligible effect on global temperatures. What we are witnessing is temperature governed by a series of phenomena, most of which we simply don’t understand.

Every movement and way of thinking and acting that takes on enough gravity to be named will ultimately be analyzed based on ‘trends, change points & hypotheses’ but only after-the-fact by dispassionate chroniclers of the past. We can only guess about the future but my guess is that years from now AGW theory will be seen as the Chevy Volt of science.

I say the current global mean temperature record for 1998 for UAH will not be exceeded before 2020.

why UAH? because HadCrut3 is not going to be updated. it’s moved to #4 or whatever. It may be adjusted upwards and get a closer match to GISS. Spencer has indicated a small downward adjustment for very recent temps in the new version of UAH to come soon.

Since it seems unlikely that here in the US we are going to change much in the way of policy, AGW or not, for quite a while, I can wait until 2020 or beyond.

Sure I’ll take on shorter term bets (in quatloos of course) but they are less important. I guess it’s like all the little races that happen before the Derby, versus betting on the Derby itself, not that I’ve ever been.

External forcing (AGW, solar) will have more or less impact on trends depending on the regime, but how external forcing materializes in terms of surface temperature in the context of spatiotemporal chaos is not known.

How is that statement reconciled with the trend of significant warming over the entire 20th century?

With a prolonged cooling regime – negative PDO, AMO and reduced solar – small, long-term impacts increase in relative significance. Conductive cooling, for example, decreases less than radiant forcing since one is a 4th-power function and the other is nearly linear. The conductive impact may only be 1/20 of the radiant, but over 20 times the period it balances the radiant reduction.

It is the total energy transfer over each time scale that is the issue.
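The 4th-power versus linear point above can be illustrated with a minimal sketch. The conductive coefficient here is a hypothetical value chosen only to match the “1/20 of the radiant” figure in the comment; it is not a measured quantity.

```python
# Radiative loss goes as sigma*T^4, so its sensitivity to temperature is
# d(sigma*T^4)/dT = 4*sigma*T^3, which grows with T. A linear conductive
# flux k*deltaT has a constant sensitivity k, independent of T.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0          # assumed baseline temperature, K

rad_sens = 4 * SIGMA * T**3   # ~5.4 W m^-2 per K of cooling at 288 K
k = 0.27                      # hypothetical conductive coefficient, W m^-2 K^-1

print(f"radiative sensitivity ~ {rad_sens:.2f} W/m^2/K; "
      f"conductive ~ {k:.2f} W/m^2/K ({k/rad_sens:.0%} of radiative)")
```

So per degree of cooling the radiant term falls roughly twenty times faster than this assumed conductive term, which is the asymmetry the comment is pointing at: the smaller linear term only matters when integrated over a much longer period.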

Joshua, CO2 increased in the last 100 years rapidly with respect to the multi-century feedbacks. The longer-term, multi-century feedbacks are trying to catch up. If we continue producing more CO2 at the same rate, then the 20th-century trend would hold for the next century, with the same natural variation imposed on the trend – Girma’s plots.

Problems are: was the starting point for the last century at the natural-variation mean? At what point are we really on the CO2 forcing curve? And what longer-term natural variables of significance exist?

If the 1900 to 2000 mean is the true global temperature average, then there has been about 0.4 C of warming per century or 0.04 C per decade. That does not sound as frightening as 0.2 C per decade does it?

When I think of something “trying to catch up” I think of a person trying to make a convincing argument or my dog trying to get me to take her for a walk (as she’s doing right now, in fact).

I have a harder time understanding how multi-century feedbacks can “try” to do anything.

Problems are: was the starting point for the last century at the natural-variation mean?

Even if the impact of ACO2 is stronger than the most concerned predict, human impact on the environment will not affect the mean temperature of the planet for a very, very, very long time. The mean will most significantly be determined by the temperature of the planet over the billions of years prior to human existence.

What I’m trying to understand with some level of specificity (I know it’s tough to be specific without going over my head) is how Hypothesis III is reconciled against the longer-term trend of temperature increase over the 20th century.

If you prefer comments from those who are scientifically literate, I’ll quote from Steve’s comment below:

But even if correct it does not mean that the long term underlying trend is not forcing dependent.

Joshua, there is nothing partisan about that comment. 0.2 C is a worst case, 0.04 is a best case, the range is likely between the two without some unexpected event.

“Catching up” is a fact of life in a dynamic system. When you hit the brakes on your car, you have to allow for stopping distance that changes with your velocity, road conditions, tire conditions and whether the girl on the side of the road is cute or not :) (reaction time).

Using the car for another analogy, if you start it once a week, it starts, if your ignore it for a month, it may start, ignore it for a year and you need a mechanic, ignore it for 5 years and you really need a mechanic. Entropy is a bitch!

Tsonis said that the CO2 signal is superimposed on a longer-term trend, which is correct. There is an impact due to CO2. There is also an impact due to agriculture, development, deforestation, black carbon etc. I have no clue how much is due to each, but some, like agriculture, seem to have improved conditions, and some, like development, seem to have made things worse. I just want a better feel for the impact of each before I decide the best place to spend the money. Right now, it looks like dealing with black carbon and land use provides the bigger bang for the buck.

0.2 C is a worst case, 0.04 is a best case, the range is likely between the two without some unexpected event.

Except if “the CO2 signal is superimposed on a longer term trend,” and we can’t really determine what is causing that longer term trend (and thus can’t determine at what point that imposition of the ACO2 signal will be swamped by long term trends), and we have growth in ACO2 emissions, wouldn’t we expect that the magnitude of the impact of the ACO2 signal will increase?

In the long run, we’ll all be dead. Looking at long term trends, (say, over billions of years), the signal of ACO2 would, I imagine, not be detectable. That doesn’t really speak to the importance of short term trends to people who live lives orders of magnitude shorter than billions of years.

Anyway, I’ll read over your responses again and see if I can manage to understand the hypothesis.

That is a comparison of the Central England Temperature to Siberian tree rings on the Taymyr Peninsula. There is a new study out suggesting that the Little Ice Age was caused by tropical volcanoes. England is influenced by the Gulf Stream current, so it is a fairly decent indication of ocean heat content – not great, but not too bad. The Taymyr tree rings are a fairly good indication of growing conditions near the Arctic. The only time that both seem to jibe is after 1814 or so.

The industrial revolution was the birth of the agricultural revolution. Lots of land was cleared for wheat following the inventions of the steel plow and the wheat combine. Right now, 1% of the total surface of the Earth is planted in wheat, rice and corn, the big three grain crops. A doubling of CO2 will cause about a 1% change in forcing.

Do you think it is possible that agricultural land use could be responsible for 50% of the warming since 1814?

In the context of 2-3 million years, it is obviously completely insignificant in a mathematical sense. But that doesn’t mean that it is completely insignificant w/r/t the impact of climate in how people live their lives.

Why not consider billions of years? I was really thinking of the period when humans evolved, but complex life forms certainly existed much earlier. The paleontological evidence also indicates that the Earth was much warmer than it is today and, of course, much colder at times as well.

This seems to support the hypothesis that there are strong negative feedbacks at work keeping the Earth’s climate within certain bounds conducive to maintaining life. This could just be an accident, but many believe that it is not.

Actually, that’s not quite accurate. I think that within a range of probabilities, ACO2 might be an answer. As I understand it, the “consensus” opinion is based on quantified probabilities of that theory of cause-and-effect.

Now I can understand why you, as an individual, might think I’m making assumptions that I’m not making. But when it happens in an often repeated pattern in these pages, I have to question why that happens.

I would imagine that there is some combination of factors in play – but considering probabilities, I’d have to guess that one of them, at least sometimes, is a willful intent on the part of some “skeptics” to impose certainty onto statements I make that don’t express certainty. I see it happen often when some “skeptics” misrepresent the “certainty” of the “consensus” perspective on AGW. You know, the whole “They said that the ‘science is settled’ kind of meme.”

Again – I’m not saying that I put you into that category. I’m just wondering why a mistaken assumption on your part (about what I assume) is something I find so frequently.

Joshua I don’t think it’s willful, I think people project onto you from others perceived to be on your team like Robert, Andrew Adams etc. Heck on Collide-a-scape you told Michael Tobis you were on his team. Liar ;)

I think that maybe sometimes it happens because I set people up (so I can nail them on making false assumptions).

But either way, it doesn’t reflect very well on “skeptics” as a group. Either they are suckers easily set up by someone of inferior intelligence, or they are prone to false generalizations rooted in inattention to detail.

Tobis is on my tribe in some ways, and not in other ways. Kind of depends on how you define tribe.

And of course, there’s always the “I wouldn’t want to be a member of any group that would have me as a member” line of thinking.

If you want to keep talking about yourself and analysing people’s reactions to you,…

When people consistently make incorrect assumptions about what I assume, I comment on it. I think that the pattern is instructive. When people address comments to me about my assumptions or motivations – as you have done in the post above, I respond.

Here’s a little logic question for you. If you think that I shouldn’t be responding to comments that people address to me, what is the single most effective thing that you can do in response?

Ponder that a bit and get back to me with an answer. I’ll tell you if you’re right.

“High frequency ‘noise’” plus uncertainty in the data can plausibly account for the dip around 1910 and the bump around 1940. Even if it only accounts for a small part of these two “features” (say 0.2C for 4-5 years), the perceived excessive “warming 1910-1940” and “the flat trend between the mid-1940s and mid-1970s” look far less significant.

And “explaining the flat trend for the past 15 years” is easier because the trend isn’t flat and the period is very short.
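The point about short windows can be illustrated with a toy simulation: an assumed steady warming trend plus AR(1) “red” noise (all parameters below are invented for illustration, not fitted to any dataset). Fifteen-year trend estimates scatter widely around the true value even though the full record recovers it:

```python
import random
from statistics import mean

def ols_slope(y):
    """Ordinary least-squares slope of y against 0..n-1 (per time step)."""
    n = len(y)
    x = range(n)
    xbar, ybar = (n - 1) / 2, mean(y)
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

random.seed(0)
TREND = 0.02                          # assumed trend, degC per year
years, phi, sigma = 120, 0.6, 0.15    # AR(1) persistence and noise amplitude
noise, temps = 0.0, []
for t in range(years):
    noise = phi * noise + random.gauss(0.0, sigma)
    temps.append(TREND * t + noise)

# 15-year windows: short-term trends scatter widely around the true value
short = [ols_slope(temps[i:i + 15]) * 10 for i in range(0, years - 15, 15)]
print("15-yr trends (degC/decade):", [round(s, 2) for s in short])
print("full-record trend (degC/decade):", round(ols_slope(temps) * 10, 2))
```

Some of the 15-year windows typically come out near zero or even negative while the century-scale estimate sits close to the assumed 0.2 degC/decade.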

Hypothesis 3 is currently at the level of numerology. But even if correct it does not mean that the long term underlying trend is not forcing dependent.

“High frequency ‘noise’” plus uncertainty in the data can plausibly account for the dip around 1910 and the bump around 1940

Is that really specific enough to qualify as an “account”? How do you get such precise dates out of such vague priors?

The dates of the solar cycles are very precisely known, allowing dates like these to be obtained with at least an order of magnitude better accuracy.

The declines after the odd-numbered solar cycles 13, 15, and 17 were all strong. Furthermore the bottom of the first of these, in 1908, was within 5 years of the deepest trough of the ocean oscillations since 1850, which was in 1913, which is why 1910 was so cold. And the highest peak of the ocean oscillations during the same 162 years was in 1938, a mere 25 years later, and three years after that, 1941, saw the hottest peak of the solar cycles in that period, namely number 17 (number 13 having been the second hottest back in 1899), whence the 1940 bump. (Note that I’m treating the so-called “ocean oscillations” as a single organic whole rather than as independent AMO, PDO, etc.)

The second hottest ocean peak was 1877, which in conjunction with the fairly hot solar cycle number 11 peak accounts for the high temperatures around 1880.

There is less to long term global climate than meets the eye; in fact it is disturbingly simple given our collective prejudices about the complexity of the climate. As one of the students pointed out at our weekly lunch meeting today, this is not so surprising when you consider that the global behavior of the Sun’s output is very simple by comparison with the very complex local behavior of individual sunspots. Climate is not just an incomprehensible muddle; it is a mix of simple big things and complicated little things, kind of like Gulliver strapped to the ground with lots of Lilliputians climbing over him.

Regrettably I can’t say that shorter term phenomena like ENSO are simple: they’re well beyond my ken, and I’m happy for now to agree with anyone who wants to claim they’re complex. However I don’t see them as having any significant influence on phenomena of duration much more than a decade, and I therefore discount them as having little relevance to long term climate forecasts.

Attention has been drawn by Judith to three “features” of the temperature record without any formal assessment of whether the features are real or are just a figment of the mind being drawn to two cherries – one at around 1910 and another at around 1940.

These features are being attributed to some sort of non-specific “regime change” that forces climate into a different state.

But two of the features are accentuated only by two short-term periods of what could be “high frequency ‘noise’”, which is apparently explainable by “Hypothesis I” – i.e. a short term period when some of the “natural” cooling or warming influences converge.

If you look at datasets with different coverage (e.g. land only, northern or southern hemisphere only, US only) then these “features” look very different, suggesting that they would be sensitive to changes in coverage of the datasets.

None of this, though, takes anything away from ongoing warming driven by ongoing increases in forcing as Hypothesis 3 is only concerned with shifts away from the longer term forced trend. It doesn’t displace “AGW theory”.

We have two periods of apparent warming, 1910-1940 and 1970-2000. They have about the same slope and are of about the same length. They are both pronounced enough to be considered a trend.

You then say that the first ‘could be attributed to short term periods of high frequency noise’, but are adamant that the second is due to some other cause (I guess CO2 emissions).

Maybe Occam’s Razor is no longer fashionable, but when you have two very similar phenomena occurring within the same system, isn’t it a wise idea to have as your starting point the idea that they are the same thing happening twice? (I fully accept that your investigation may in fact show them to be different, but such examples will be the exception rather than the rule.)

Because it seems to me that to really demonstrate a proper understanding of climate you have to be able to explain both periods with exactly the same rigour and within a comprehensive theory.

You can’t really say …’We’ve done oodles of work on the recent stuff and have totally convinced ourselves that the only possible cause is CO2’ and then just dismiss the earlier period as ‘could be high frequency noise’. It will not take a Hercule Poirot to smell a rat and to conclude that your homework really hasn’t been done, nor your comprehensive (it’s all down to CO2, stupid) theory submitted to any sort of real test.

So please explain once more how you come to two very different conclusions about the two periods in question. You may also recall that – as a one-time chemist – I just love to see experimental evidence rather than vague generalised hand-waving.

The 1910 to 1940 warming was preceded by 0.2C of cooling and followed by 0.2C of cooling.

Therefore, if the 1910 minimum were high frequency Hypothesis 1 cooling applied to a warmer background and the 1940 peak were high frequency warming on a cooler background then what you are left with is almost monotonic (forced?) warming from 1890 to 2011.

Pre-war temperature also has the potential for being noisier because the sampling covered less of the globe, so it is more prone to confluence situations, where such confluences drive cooling/warming in the observed locations and the opposite in the non-observed locations (cf. the difference between HadCRUT3 and GISTEMP).

Post-war we simply have an ongoing trend of warming that is slower at the start of the period, then is fast in the 1980s and 1990s and is then slower – that’s 60 years.

The two periods are not the same because the second period is longer, has a warmer base-line, is better observed and appears more consistently and more clearly in, say, the land-only data. So a naive application of Occam doesn’t seem appropriate.

Of the two interpretations I prefer mine as being less speculative than assuming a “climate shift” of no known cause. And, again, the climate shift idea does not deal with the longer term warming trend.

But your discussion of the 1910-1940 period still relies on two ‘if’s, with no supporting evidence, a general assertion about observational techniques a while back and then on your own preference for your explanation compared with any other.

Fair enough, and you are, of course, entitled to your opinion just as anybody else is. But it’s not what I would call a rigorous proof. It is a plausible hypothesis, but unless I have missed an essential point it is no more than that.

And this is deeply worrying. For we are told that the explanation of the more recent period of warming has been unequivocally shown to be primarily caused by the increase in CO2 concentration in the atmosphere. And from this proof follow all sorts of other scientific, political and economic consequences.

I also opined that to show that the climate is really well understood, you need to be able to demonstrate that you have a numerically based explanation of both periods of warming that use the same conceptual model.

But if the best explanation you can advance for Period 1 is so woolly and vague, how certain can we be of the answer for Period 2? Surely the test should be almost trivially simple. You take the model you have for Period 2 – the unequivocal one with all the bits about CO2 – rewind it back to 1910 and lo! if the model is right, out pops the curve for 1910-1940 and the cooling after that, and so on.

This is not difficult to do and the ‘correct’ answer would be powerful and persuasive evidence that you really have got a theory that effectively covers periods when CO2 could not be a strong influence. By implication it would also mean that your understanding of all the other possible influences would be pretty much on track.

This is so obvious a thing to do that I am pretty bemused as to why it doesn’t seem to have occurred to anyone. You have all the tools and techniques, all the equipment, all the staff, all the expertise… and a success in that demonstration would indeed be a triumph for your theory.

So why not? What is holding you back? Or is it one of those pieces of work that never get published for fear of ridicule? That maybe somebody somewhere did try it but couldn’t get the required answers? And rather than risk the wrath of the elders decided that the best thing was just to stay schtumm?

Of course there are two perhaps unresolvable “ifs”. We cannot go back in time to see in more detail what happened in 1910 and 1940.

However, we cannot therefore attribute it to regime change either.

My point is, really, that the “ifs” are not in the slightest bit implausible. As Chris Ho-Stuart has pointed out below the divergences between the various temperature datasets in the past decade or so are as large as the divergence in the SST dataset that would be required to explain the two features that have given rise to the apparently similar length warming trend of 1910-1940.

Until there is better evidence for “regime change” then theory 1 seems sufficient to me. I think Occam would agree.

Evidence required for H3 would be what? A prediction of a change in some as yet unobserved feature of the climate? Observation (I use that word casually) of regime change events in high-end climate model simulations?

I made a suggestion as to exactly how you could resolve the issue in a way that potentially gives you a win-win triumph for your theory. Easy enough to do, and you have all the resources available. If you can run a model for 1980-2000, you ought to be able to run it for 1910-1940… and, if it is any good, you will be able to show good agreement with the old observations. If not…

But if, for some reason unfathomable to those of us who pay the bills for your organisation you find yourselves unable to do so, then yours is just one explanation among a multitude of others.

I do not think it is implausible… indeed I’d probably lay a bob or two that it may be at least partly on the right lines. But the point I have been labouring to make is that having a plausible explanation only gets you to the starting line. It does not get you onto the podium. To do that requires a lot more work.

Thinking back to my university viva days and the terrifying array of gowns and professorships there arrayed, I could just imagine the dialogue:

Difficult examiner #1: Mr Alder, we understand that you have a theory about climate. Please describe it to us.
LA: Well sir, I have done a lot of work on the period 1980-2000 and I can conclusively show, to my satisfaction, that it’s all due to the evil gas carbon dioxide. There are lots and lots of computer programmes that demonstrate this.
DE#2: Indeed. We shall want to look at those very carefully. No doubt they are available for public scrutiny?
LA: Mostly, sir.
DE#3: Turning now to the similar period between 1910 and 1940 for comparison… what caused that warming, according to your programmes?
LA: Please sir, I don’t know, sir. I’ve never tried them.
DE#1: Why not, young man?
LA: Never occurred to me, sir, but it’s all down to the noises, isn’t it? Anyway it was a dead long time ago and nobody cares.
DE#2: Not very scientific, Mr Alder. We will adjourn to consider our verdict.

Such model runs have already been done and are reported in the IPCC report. But they don’t help with short term variability because the natural variability of even two identical planets would cause temperatures to diverge.

Is just ‘being plausible’ a high enough standard of proof? Because I’m sure that we could all come up with plausible explanations. Surely science – and especially experimentation – is the technique we have learnt to use to distinguish between explanations that are merely plausible and those that are actually true.

Steve, Latimer is making a similar argument to mine. You talk about “plausibility,” but without a concrete definition of “high” in “high frequency” and a concrete algorithm for converting priors into posteriors, the dates of highs and lows you point to are just handwaving based on direct observation of those lows and not an inference from correlated observations.

Certainly one can say an observation is self-explanatory, for example the rising of the Sun each morning. However it is not customary to consider self-explanation as scientific explanation. For the latter one tends to look for a non-vacuous and quantifiable correlation with observations of other phenomena. Correlation with high frequency is non-vacuous but you haven’t quantified it.

I’m surprised that Judith raises the question of whether or not global warming has stopped while presenting a graph like this.

I’m not a climate scientist but I think I can recognise a peak when I see one. And the temperature in this graph hasn’t peaked! Certainly I’d be very happy if my share investment graph looked like this too!

May not have peaked, but looks like a small plateau to me. Whether this is the summit or not remains to be seen. But there sure hasn’t been any pronounced upward trend in the last few years.

Which raises the interesting question of what happened to
a. cause the warming period of 1910-1940 and
b. stop it.

Because without bomb proof numeric explanations of those events – using exactly the same theories as are used to explain today’s events – it is clear that we do not have a solid understanding of climate, however much those professionally involved would wish us to believe that they have.

I agree also. A trend doesn’t have to be a change in sign; just a change in slope does nicely.

Funny about the 1910-1940 period. I brought that up on realclimate some time ago. I was not particularly impressed with the answer. It seems you can cherry-pick solar reconstructions that show a couple of tenths of a degree of warming, but not a couple of tenths of a degree of cooling. Lean 2000 for warming, Lean 2008 for no warming. Picking which one applies seems a bit complicated to me.

Latimer’s response is the best so far to your initial question Joshua. I was going to try to respond but since I declared my support for Hypothesis II (HII, etc. cause I’m tired of typing that word) I didn’t. However thinking about Captain’s response to my initial comment I realize that I think HIII is informative and don’t wish to dismiss it except to restate my expectation that the chaotic change points will not be dominant.

Joshua, I think you should rephrase, because the problem with significant warming over the 20th century is the mid-century rise and fall, such that even the IPCC does not attribute early-century warming to ACO2.

I do think that over the long term, ACO2 will increase its influence. But I am skeptical that we have proof of its significance yet. It seems likely that coming decades will show less warming than 1970-2000, IMO. The smoking gun for at least SOME contribution from CO2 is the magnitude of the 1970-2000 warming and our understanding of the relative strengths of forcings to date.

I do allow for the possibility that chaos-based shifts will alter this picture. What I don’t have is the mathematical understanding to use this in any way, and it seems to me even the most math-literate here (eg Tomas Milanovic) don’t profess to be able to use it to make predictions.

I do hope someone can help me understand what I referred to earlier as “high level physical manifestations” of these chaotic shifts. Again, ocean regime changes affecting cloudiness and therefore albedo, seem the most obvious, but I imagine there are others.

You cannot change a person’s mind by just telling them something one time. You must tell them more than once and point them toward data that supports your position. Even this doesn’t work if they don’t read and think.

“The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual,…It is imperative that the Earth’s climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate..”

David Douglass is a leader on this subject, and I look forward to his next paper that will illuminate this issue further.

‘Large, abrupt climate changes have affected hemispheric to global regions repeatedly, as shown by numerous paleoclimate records (Broecker, 1995, 1997). Changes of up to 16°C and a factor of 2 in precipitation have occurred in some places in periods as short as decades to years (Alley and Clark, 1999; Lang et al., 1999).’ http://www.nap.edu/openbook.php?record_id=10136&page=10

The explanation for abrupt changes in the last 10,000 years is that they have been smaller and less persistent than paleoclimatic abrupt changes. But they are there – and they are evident even within the extreme limitations of the instrument record.

Peter said, “There is a strong need to better understand the underlying physics of the Earth systems in play and to explore the many dynamic relationships that the available data may show.”

It is a touch wishful to model the impact of a 0.028% change in a mixed gas composition when you are not all that sure what the other 99.972% does. Course, that might just be a little simple minded on my part.

VP..to clarify… To take out “human influence” I was merely referring to Roger’s sentence, so thus I could agree 100% with the sentence. Not to eliminate it altogether from further study! It certainly is a factor to be examined but not to the exclusion of numerous other (more important, IMO) factors.

(FWIW, IMO the human one is more important, but I’m well aware that’s a distinctly minority view on this blog.)

@cd: It is a touch wishful to model the impact of a 0.028% change in a mixed gas composition when you are not all that sure what the other 99.972% does.

As far as trapping IR goes, which is the concern about CO2, we have an extremely accurate idea of what the other 99.972% does, thanks to the comprehensive HITRAN line spectra tables and our understanding of the current rates of growth of all the surface-temperature-relevant gases.

But you knew that, so perhaps you had some other point in mind that I’ve overlooked.

I. IPCC AGW hypothesis:
This one will be proven wrong in the next decade(s). Since the annual atmospheric CO2 growth will decrease with the cooling, consensus will be forced to rethink the “all of the CO2 increase is caused by anthropogenic emissions” hypothesis. If the decadal average for 2010s is lower than the 2000s average (~2 ppm/year), the paradigm will have to shift. I assume the BAU emissions scenario. I predict lower than 1.5 ppm/year decadal average for the 2010s.

Independently from the CO2 attribution, global cooling will disagree with the CO2GW hypothesis and the “climate sensitivity estimates”. Zero or infinitesimal or even negative is not ruled out. Not to forget nonsensical.

II. Multi-decadal oscillations plus trend hypothesis:
This is a bit undefined and fuzzy. I disagree with:
– the large multidecadal oscillations (e.g. NAO, PDO, AMO) being unforced. There is some good evidence of the oscillations correlating with solar oscillations. Some of it may be system response, but it can’t be separated at this point.
– the trend is linear and unchanging. The trend itself is another oscillation.

III: Climate shifts hypothesis:
I don’t really understand this one.

My hypothesis:
Most of the global climatic change is caused by solar oscillations. These are coupled with orbital oscillations (solar, Earth’s and other planets’). We know too little of the mechanisms and modulations of energy transfer between Earth and space (solar system mainly). We can NOT calculate the balance. It would be like taking the Drake equation seriously. Meanwhile, we can recognise the basic patterns and maybe even predict (multi)-decadal global climate changes.

Ok, but now let’s use the CDIAC historical data for the past few decades for all anthropogenic CO2 contributions to the atmosphere (mainly fossil fuel consumption, flaring, cement, and land use), to work out how different the 2010s would look from the past five decades if your prediction came true.

Since 1958 the steady rise in atmospheric CO2 comes very close to an equally steady 40% of what the CDIAC says the total human emissions of CO2 come to.

You didn’t say anything about emissions decreasing, so let’s assume business as usual for emissions. If those don’t depart significantly from the curve, then at the 40% rate the average annual increase in atmospheric CO2 between 2010 and 2020 will be 2.6 ppm per year. (Currently it’s 2.35 ppm and by 2020 it will be 2.9 ppm.)

For your forecast of a decrease from an average of 2.6% to an average of 1.5% to come true, that 40% rate is going to have to drop to 40*1.5/2.6 = 23%.

That is, you’re predicting that the 40% rate of retention of our emissions, which has remained very steady since we started measuring atmospheric CO2 carefully in 1958, will within the space of a single decade drop to 23%.
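The arithmetic in that last step can be written out explicitly (the figures below are copied from the comment itself, not recomputed from CDIAC):

```python
# Arithmetic check of the airborne-fraction argument above.
airborne_fraction = 0.40      # observed share of emissions staying airborne
predicted_ppm_per_yr = 1.5    # the forecast decadal average
bau_ppm_per_yr = 2.6          # business-as-usual expectation at the 40% rate

implied_fraction = airborne_fraction * predicted_ppm_per_yr / bau_ppm_per_yr
print(f"implied airborne fraction: {implied_fraction:.0%}")  # ~23%
```

That is the ~23% retention figure the comment derives: the forecast only comes true if the historically steady 40% fraction drops by nearly half within a decade.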

Are you thinking of 0.2 ºC/decade as any of unreasonably low, or unreasonably high, or about right?

The last 18 years of the annualized BEST data, starting shortly after Pinatubo, show a steady increase of 0.362 ºC/decade that doesn’t contain any obvious indication of letting up. (Note that this is not over a mere decade but rather over Santer’s proposed minimum of 17 years.)

Of course the BEST data didn’t appear until 2011, and it’s for land (which is where over 99% of humans live so it’s more relevant to us than sea temperature), but if it’s at all reliable it would appear to be showing that 0.2 ºC/decade is way too low by nearly a factor of two!

Anyone who thinks it couldn’t possibly be increasing at 0.36 ºC/decade should be screaming bloody murder about the BEST data being totally fraudulent. Right? ;)
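As a rough plausibility check on whether 0.36 and 0.2 ºC/decade are even statistically distinguishable over 18 annual points, here is the textbook OLS slope standard error, assuming independent year-to-year noise with a guessed σ of 0.15 ºC (both the σ and the independence assumption are mine; autocorrelation would widen the error bar):

```python
import math

# Standard error of an OLS trend over n equally spaced annual points with
# independent noise of standard deviation sigma (assumed, not from BEST).
n, sigma = 18, 0.15
se_per_year = sigma * math.sqrt(12 / (n**3 - n))
se_per_decade = 10 * se_per_year
print(f"trend standard error: +/-{se_per_decade:.2f} degC/decade")
```

On those assumptions the gap between 0.36 and 0.2 is roughly 0.16 ºC/decade against a standard error near 0.07, i.e. a bit over two standard errors: suggestive rather than conclusive, and less so once red noise is allowed for.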

A reduction in relative humidity can occur even though water vapor pressure is increasing if temperature is warming sufficiently. Hence, decreases in relative humidity occur at stations experiencing the largest temperature increases in winter and spring as shown in Fig. 7. The strong correlation between increasing temperature and decreasing relative humidity trends agrees with that found by Vincent (Vincent et al; 2007)

Oooooops, so tell me again: how many GCMs have physics which match the observed reduction in relative humidity with rising temperature and the consequent negative water vapour feedback?

Dude, I don’t think the observed reduction in relative humidity with rising temperature implies a negative water vapor feedback, just a less strongly positive water vapor feedback than has otherwise been postulated.
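A back-of-envelope check shows the two effects are mutually consistent: using the Magnus approximation for saturation vapor pressure, the seasonal trends quoted in the study under discussion (+0.2 ºC/decade temperature, +0.07 hPa/decade vapor pressure) do imply falling relative humidity even as vapor pressure rises. The baseline temperature and RH below are my illustrative assumptions:

```python
import math

def es_hpa(t_c):
    """Saturation vapor pressure (hPa), Magnus/Bolton approximation."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

t0, rh0 = 10.0, 0.70          # assumed baseline station climate
e0 = rh0 * es_hpa(t0)         # baseline vapor pressure (hPa)

t1 = t0 + 0.2                 # one decade of the quoted warming trend
e1 = e0 + 0.07                # one decade of the quoted vapor-pressure trend
rh1 = e1 / es_hpa(t1)
print(f"RH change: {100 * (rh1 - rh0):+.2f} %/decade")
```

The result is a few tenths of a percent of RH lost per decade, the same order as the reduction reported, because saturation vapor pressure grows faster with warming than the observed vapor pressure did.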

I think this linear vs. nonlinear change distinction is perhaps a false dilemma.

It’s rather like the theory of ‘punctuated equilibria’ in evolutionary biology. The fossil record shows periods of relative stasis punctuated by periods of relatively rapid change. Some have taken this fact to undermine the theory of natural selection. But it is all a question of scale. When you look closely at the periods of rapid change, they too are commensurate with natural selection. The theory relies upon ordinary speciation, and thus the morphology proposed is a form of evolutionary gradualism, in spite of the name.

Do we need a new non-linear paradigm of the climate?

I don’t know the answer to that question, but I am curious as to what such a paradigm would even look like.

For example – How could the climate paradigm change without a fundamental change in our understanding of the relevant physics? That won’t happen just because we ‘decide’ to adopt a new paradigm – It would require convincing evidence that our current physical theories are false. There is a huge difference between false and uncertain.

i still keep thinking the place to start is explaining how it might affect the cloud response. the absolute magnitude of that is so large. maybe i should keep on banging this drum. i’m sure CH thinks he’s answered it but us dummies can’t figure out what he’s really saying.

At the top of atmosphere it is all very simple. Energy in less energy out equals the change in energy stored in the Earth system over any period.

In the maelstrom of the planet, things change abruptly from one state to another, very much like any deterministically chaotic system – in electrical circuits, ecologies, economics, populations and climate models.

‘AOS models are members of the broader class of deterministic chaotic dynamical systems, which provides several expectations about their properties (Fig. 1). In the context of weather prediction, the generic property of sensitive dependence is well understood (4, 5). For a particular model, small differences in initial state (indistinguishable within the sampling uncertainty for atmospheric measurements) amplify with time at an exponential rate until saturating at a magnitude comparable to the range of intrinsic variability.’ http://www.pnas.org/content/104/21/8709.full Warning – this paper is a very hard slog.

What is important is what changes in the energy dynamic – and clouds are a big part of this in the short term. Largely because – I think – of changing sea surface temperature both in the Pacific and Atlantic. It makes sense to me that clouds form over cold seas and dissipate over warm. The observations at both the surface and from satellites seem to support that – see some references here for instance – http://judithcurry.com/2011/02/09/decadal-variability-of-clouds/

As a concept – climate is a single global system with tremendous energies cascading through powerful systems. It comprises hydrosphere, atmosphere, cryosphere, ecosphere, heliosphere and pedosphere. The reductionist error is to consider the parts in isolation. The new paradigm is to consider the whole as a complex dynamical system. How that is done is another question – but the fact that it is difficult doesn’t matter to God or the universe.

Cet – old Buddy – you want it both ways do you? Not that there is anything wrong with that – it explains the identity crisis at least.

Chaotic systems are extremely sensitive in the region of a bifurcation but insensitive elsewhere. A small push tips the system over into a new state entirely – so rising temperature could for instance push the system into abrupt cooling in as little as a decade.

‘Most of the studies and debates on potential climate change have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. But recent and rapidly advancing evidence demonstrates that Earth’s climate repeatedly has shifted dramatically and in time spans as short as a decade. And abrupt climate change may be more likely in the future.’ http://www.whoi.edu/page.do?pid=12455

But will the politics ever survive random cooling over 30 years? No one will ever believe it again. And there will be you and me on our own – Cet old buddy – and the Woods Hole Oceanographic Institution – saying: but wait, climate is hypersensitive in the region of a chaotic bifurcation. What’s not to understand? I guess lonesome is just part of the iconic nature of being a cowboy.

“I think it’s very interesting that we are now considering the possibility of rapid “episodic and abrupt” climate change, and yet we are constantly and confidently told that climate sensitivity is low.”

Agree that this is the big gotcha to all the “natural variation” skeptical views. They want to ascribe changing temperatures to internal stimulation, which however implies a certain touchiness, or sensitivity, to the strength of the stimulation. The more sensitive the climate, the less internal stimulus is needed to get it to adjust.

Now take that sensitivity to external radiative forcing. That same sensitivity is still there, but now it is responding to the external stimulus instead of an internal source. I mentioned this in a top-level post to this blog a couple of months ago: http://judithcurry.com/2011/11/29/wht-on-schmittner-et-al-on-climate-sensitivity/
The physical analogy is doing a random walk in a shallow energy well. If the climate doesn’t have a steep well wall, it can wander.

All told, Hypothesis III is a cop-out, and the term “chaotic attractor” at its root is a fancy euphemism for an energy well.

To improve the heat transfer between fluid boundaries you can increase the turbulent flow, which increases both the molecular contact rate and the rate of diffusion in the fluid. Simple right?

ENSO is a product of changing turbulent flow rates between the ocean and the atmosphere. The ENSO is defined as the change in temperature for a few boxes in the Pacific ocean. There are a bunch of boxes and a bunch of ocean. All you have to do is figure out the thermal impact of the change in relative velocities for each and every box, then you have a fair start.

Then there is the long term laminar flow puzzle. Heat exchanged in the polar regions cools the sea water to approximately 4 degrees C. That water sinks in a slow, more viscous flow, exchanging very little heat and producing a thermal boundary in the deep oceans. The rate of thermal diffusion into and out of that layer is extremely slow. Once you have the turbulent flow problem solved, jump on the laminar flow problem.

It should be a piece of cake to solve the transitional flow problems. Oh wait, I think there is a prize of some kind for solving that problem.

Let’s see, convection in the atmosphere can be laminar, transitional and/or turbulent, it must impact the radiant transfer as well, since there is horizontal convection which can be laminar, transitional and/or turbulent. That horizontal convection can be below, at or above the average radiant boundary layer.

I am sure a simple up/down model can figure out all that is going on.

So there is nothing new about non-linear issues in fluid dynamics, it is still a difficult problem. Climate science has yet to scratch the surface.

Over 1/4 billion hourly values of temperature and relative humidity observed at 309 stations located across North America during 1948-2010 were studied. The water vapor pressure was determined and seasonal averages were computed. Data were first examined for inhomogeneities using a statistical test to determine whether the data was fit better to a straight line or a straight line plus an abrupt step which may arise from changes in instruments and/or procedure. Trends were then found for data not having discontinuities. Statistically significant warming trends affecting the Midwestern U.S., Canadian prairies and the western Arctic are evident in winter and to a lesser extent in spring while statistically significant increases in water vapor pressure occur primarily in summer for some stations in the eastern half of the U.S. The temperature (water vapor pressure) trends averaged over all stations were 0.30 (0.07), 0.24 (0.06), 0.13 (0.11), 0.11 (0.07) C/decade (hPa/decade) in the winter, spring, summer and autumn seasons, respectively. The averages of these seasonal trends are 0.20 C/decade and 0.07 hPa/decade which correspond to a specific humidity increase of 0.04 g/kg per decade and a relative humidity reduction of 0.5%/decade.
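The abstract above describes choosing between a straight-line fit and a line-plus-abrupt-step fit to screen for inhomogeneities, but does not name the exact test used. As a hedged sketch, one common way to make that choice is a BIC comparison between the two models (the BIC criterion here is my assumption, not necessarily what the study used):

```python
import numpy as np

def step_vs_line(y):
    """Fit a straight line, and a line plus an abrupt step, to a series.

    Returns (break_index, delta_bic). delta_bic > 0 means the step model
    is preferred by BIC, flagging a possible inhomogeneity such as an
    instrument or procedure change.
    """
    n = len(y)
    t = np.arange(n, dtype=float)

    def bic(resid, k):
        # BIC up to an additive constant: n*log(RSS/n) + k*log(n)
        return n * np.log(np.sum(resid ** 2) / n) + k * np.log(n)

    # Plain linear fit: 2 parameters (slope, intercept).
    coef = np.polyfit(t, y, 1)
    bic_line = bic(y - np.polyval(coef, t), 2)

    # Line plus a step at each candidate break: 3 parameters.
    best_b, best_bic = None, np.inf
    for b in range(2, n - 2):
        X = np.column_stack([np.ones(n), t, (t >= b).astype(float)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        cand = bic(y - X @ beta, 3)
        if cand < best_bic:
            best_b, best_bic = b, cand
    return best_b, bic_line - best_bic
```

Series where the step model wins would be set aside, and trends computed only for the homogeneous remainder, as the abstract describes.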

There are several interesting theories and I read many comments from posters who have “studied the climate” and “deeply understand” various aspects of the climate and are quite sure what drives the climate and the relative importance of CO2.

Simple question– does anyone’s theory or relative understanding really matter until they can demonstrate it is accurate by having it modeled and then showing that the model matches observations over a reasonable period? Longer periods of matching observed conditions will lead to higher confidence in future predictions being accurate.

Trusting models that do not match observed conditions is called faith (faith that something will change in the future to make them accurate).

It is accepted that radiosonde-derived humidity data must be treated with great caution, particularly at altitudes above the 500 hPa pressure level. With that caveat, the face-value 35-year trend in zonal-average annual-average specific humidity q is significantly negative at all altitudes above 850 hPa (roughly the top of the convective boundary layer) in the tropics and southern midlatitudes and at altitudes above 600 hPa in the northern midlatitudes. It is significantly positive below 850 hPa in all three zones, as might be expected in a mixed layer with rising temperatures over a moist surface.

There are a number of global warming objections that the data must be wrong, attributing the differences to slow sensor response etc.

interestingly, and as discussed in some of the comments on the page you link to, it appears the trend in SLR is slowing, even if you try to avoid cherry picking by truncating the curve in 2010. Just by a little. It’s been discussed before. It’s interesting to think of what the combination of a flattening in SLR and simultaneously land-atmosphere GW might mean. Maybe nothing. Maybe the missing water is in space!

People worry about one molecule of manmade CO2 per ten thousand molecules of other stuff, including the three molecules of natural CO2.
An asteroid hit the earth and killed the dinosaurs and other species, changed the orbit and spin axis and earth survived. We have caused some changes, but our changes are tiny compared to natural changes.

The furniture on the deck of the Titanic needs to be re-examined in detail, plans carefully made and redone, and rearranged again and again until we get this furniture finally right once and for all. But then again we shall have to rearrange it because my salary depends on this furniture!

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” – Upton Sinclair

Climate models are very complicated. There are sections of code from many different disciplines. No one person understands all the disciplines. Whoever puts this all together does not understand all the different disciplines. No one in the different disciplines understands how this all works together. Since all the disciplines had an input, they all accept the output as absolute truth. This is the Gospel of Climate Science.

I think anyone who has an interest in following the weather could have told you that.

Not unless you have a different definition of weather from Mark Twain: “Climate is what you expect. Weather is what you get.”

There do indeed appear to be severe limits to climate predictability at all weather-relevant scales. But how does that extend to all scales? How would you go about showing that we have no hope of predicting the global temperature averaged over the period 2075-2100 to within say a degree?

There are 2 answers to that – hombre – and they both involve model ‘plausibility’.

‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ Irreducible imprecision in atmospheric and oceanic simulations – James McWilliams

The first plausibility criterion requires billions of dollars and thousands of times more computing power.

‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ A UNIFIED MODELING APPROACH TO CLIMATE SYSTEM PREDICTION
by James Hurrell, Gerald A. Meehl, David Bader, Thomas L. Delworth, Ben Kirtman, and Bruce Wielicki

The second plausibility criterion is much simpler to implement.

‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’

As running simulation ensembles across systematically designed model families would require billions of dollars and thousands of times more computing power – we simply decide subjectively what a plausible solution looks like after the fact.

As for how accurate this is – seriously – who really gives a rat’s arse.

Yee hah – yippee-ki-aye – I’m the climate cowbot.
Dude – get lost – we want a shiny billion $ superbot,
Not a lonesome, iconic, laconic, ironic dusty cowbot.
Shibboleth – it’s just a game I play for fun – it’s just a…

I think Mr Clemens has basically said the same thing I did – just better expressed.

And the quote I referenced said ” severe limits”, not impossibilities. I’m not taking the position that modeling is worthless. I do believe that modelling should not be the primary basis for policy, as it isn’t yet good enough to tell us what is happening.

I do believe that modelling should not be the primary basis for policy, as it isn’t yet good enough to tell us what is happening.

Agreed. I further believe that the current approach of modeling the equilibrium states and treating the transitions between them as only of secondary interest is doomed to failure, because for a long time now and for the foreseeable future we’re in a transition with no equilibrium state within a century of today’s date.

‘I haven’t lost my temper in 40 years; but, Pilgrim, you caused a lot of trouble this morning; might have got somebody killed; and somebody oughta belt you in the mouth. But I won’t. I won’t. The hell I won’t!’

The bushwhacking SOB is reduced to making snide comments from the sidelines. They are surprisingly free of any science at all, for one with such a limited capacity for wit.

“Meanwhile, we can recognise the basic patterns and maybe even predict (multi)-decadal global climate changes.”

Quite so.

Poleward and/or more zonal jetstreams mean that the troposphere is a bit warmer as the rate of energy flow through the troposphere increases.

Equatorward and/or more meridional jetstreams mean that the troposphere is a bit cooler as the rate of energy flow through the troposphere declines.

But either way the total system energy content fails to change because BOTH are NEGATIVE system responses to anything that tries to take the system away from the energy content dictated by atmospheric surface pressure and solar input.

Add to that the 60 year oceanic cycling and the millennial solar cycling from MWP to LIA to date and that gives a pretty good idea for predictions.

We currently are going into a cooling oceanic phase for 20 to 30 years, but it is a bit early to be at the solar cycle peak, though the current solar quietness suggests that possibly the peak arrived early and we are now on the way down. There could still be a few active solar cycles to come though.

Meanwhile the jets are more meridional/equatorward than they were so the troposphere is currently cooling and more clouds from more wavy jets is reducing the amount of solar energy getting into the oceans which will intensify La Nina dominance for so long as the sun stays quiet.

I suspect that that will turn out to be a better guide to the future than the current GCM output.

“One of the ways people deal with too much data/information is to make a simplifying story. A good simplifying story would be Newton’s theory of gravity, which in its simplest form ignores friction and other minor factors”

Did Craig say that? My engineer’s heart shrivels into dust – and I stand vindicated. Friction is routinely included in calculations involving the laws of motion. Really – the only things not included in the Newtonian Laws of Motion are relativistic effects – and that is entirely forgivable as – when ambling along the trail on my pony – relativity only really matters when we are being chased by cougars.

I wonder how many are aware of the policy implications of what is happening.

If models are not useful on a decadal timescale, such that they can predict strong warming for a period of minimal or even no warming, then what use is there for models? What government (apart from North Korea…) would make it difficult for people to heat up their homes in the next decade with the explanation that it is going to be warm in 2070 anyway?

People do not average out their lives across decades or centuries: each and every one of us has to go through each and every day first.

If I freeze to death today at -10C, I will not enjoy the warmth of July at +30C even if the average is +10C, perfectly compatible with human life. The same can be said of plants and animals. If I plant an olive tree in my London garden, it will die of cold in February even if the yearly average is in theory just enough to make olive trees survive in the open. If a nasty mosquito species migrates from warmer places during an August heatwave, but cannot survive the following winter, it will not be around until the next migration opportunity during a future heatwave.

A purely statistical, multi-year approach to modelling the climate is in theory useless for policymaking (similar considerations could be made for non-regional projections, but that is too long a story here – read “How Space-Time Digested AGW” if interested). And if we end up with 15 years of incorrect projections without even a volcano for an excuse, then whatever physical explanation there is, policymakers would be much wiser in keeping climate scientists at arm’s length.

Chris Ho-Stuart | February 8, 2012 at 3:10 am | Steven mosher, I don’t see the problem with the graph you mention. It includes a shading envelope that indicates a range of possibles. It has a horizontal scale on which ten years doesn’t even show (the tick marks are 50 years apart).

To paraphrase E.S…. As you can’t read a simple chart, maybe you should take my Graph lab/class.

Let us grant that someone has proposed 3 theories, as in the head post. The standard way to proceed would be to see if any of them could be disproved. IPCC has made no effort to disprove II and III, which have sufficient explication (in my opinion) to be called at least plausible. In fact, when it is pointed out that the GCMs are running hot, it is claimed that “oh no, we said II is possible with decadal fluctuations”–but if that is allowed, how does IPCC know that the warming from 1980 was not itself just (or partially) a natural fluctuation?

This raises an interesting question, which is how the IPCC (or anybody else for that matter) should falsify hypotheses II and III, which although they are at least plausible, make no testable predictions, unlike hypothesis I. Has anybody made projections for future climate with an unambiguous statement of uncertainty that would allow the projections to be falsified by the observations?

Hypothesis I also has the advantage that it can be used to make a model based on physics, rather than statistics, that can at least explain past climate. I am not aware of any physical model based on hypotheses II or III that can explain 20th Century climate without the enhanced greenhouse effect.

Another reason that hypotheses II and III are not as plausible in my view is that for them to be correct, our understanding of radiative physics etc., which has been tested experimentally and by observations (e.g. spectra of outbound IR radiation), must be fundamentally wrong. This is possible, but unlikely – scientific revolutions do occur, but not as frequently as individual scientists just getting things wrong.

Girma; the point we are making is that climate models DON’T predict the timing of short term (decadal) pauses or accelerations. What model runs show is that the rate of change is not steady on that scale, but that the underlying main trend shows up over a scale long enough to smooth out the unpredictable short term variations up and down.

Having a slow down over 15 years is in line with what we should expect according to conventional climate science. What conventional science can’t do is predict when slow down or speed up occurs. The graph you show is explicitly for “means”. Reading the report shows that this is the output of an ensemble; not a single model. It does not include and does not try to include short term variation around the trend (which is not predictable); it shows the mean.

Your “comparison” is apples and oranges. You are comparing a single data line (the observed line) with a mean line. What you SHOULD compare with is a smooth of the data; a moving average, perhaps. The scale of the window should be something from 15 to 20 years. Recent work (discussed here at Climate Etc in this commentary by Judith: Santer on timescales of temperature trends) has been more precise, identifying 17 years as the window over which the trend is showing over the short term variation.

So to compare like with like, take a 17 year moving window average of the data. You’ll find it’s within the right ball park for the IPCC projections from models.

Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.

In the above statement, the IPCC used a 15-year period from 1990 to 2005 for trend calculation.

Here is the trend for this period from 1990 to 2005 => http://bit.ly/wULkoQ
This trend is a warming of 0.24 deg C per decade, as stated above.

Now, let us check the above statement against the trend for the 15-year period from 1995 to 2010.

Here is the trend for this period from 1995 to 2010 => http://bit.ly/wFhfXH
This trend is a warming of 0.11 deg C per decade.

This is outside the range between about 0.15 deg C and 0.3 deg C per decade.

Chris, does this result weaken or strengthen confidence in the near-term projections?
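The per-decade figures being traded back and forth here are just ordinary least-squares slopes scaled to °C per decade. A minimal sketch of the calculation, using synthetic numbers rather than the actual HadCRUT series behind the links:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """OLS slope of annual temperature anomalies, in deg C per decade."""
    return np.polyfit(years, anomalies, 1)[0] * 10.0  # deg C/yr -> deg C/decade

# Illustrative only: a synthetic 16-point series warming at 0.11 C/decade,
# the figure quoted above for 1995-2010.
years = np.arange(1995, 2011)
anoms = 0.011 * (years - 1995)
print(round(trend_per_decade(years, anoms), 2))  # 0.11
```

Feeding in the real annual anomalies for 1990-2005 or 1995-2010 reproduces the kind of numbers quoted in this exchange.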

The four years leading up to 1995 included a solar max, but the four years leading up to 2010 were in a deep minimum. Do you think that makes a difference, or should we just ignore solar cycles in all this?

Girma, the equilibrium climate sensitivity (estimated at about 3C per CO2 doubling; or about 0.8 C per W/m^2) is not related to the rate of increase, but to how far the increase goes until the Earth is back in energy balance. You are again comparing apples and oranges.

If we take the commonly accepted interpretation of the IPCC statements as projecting 0.2 Deg C/decade temperature increase, then over the 15 year period from 1997 it would have been reasonable to expect a 0.3 Deg C increase. Instead, a 0.051 Deg C increase was measured, which could be interpreted to imply (assuming the IPCC argument is correct) a natural negative variation of 0.249 Deg C over the period, or 0.166 Deg C/decade. If it is accepted that a negative natural variation of 0.166 Deg C/decade can occur, surely it is not unreasonable to accept that a 0.166 Deg C/decade positive natural variation could occur from time to time.

To reach the canonical value of 0.2 Deg C/decade over the canonical 30 year period starting from 1997, it’s clear that temperatures would need to increase by 0.549 Deg C or 0.366 Deg C/decade over the next 15 years. Let’s see.

(By the way, a 0.2 Deg C/decade increase would imply a temperature increase that would fall within the stated limit of 2 Deg C temperature increase to avoid catastrophic climate change).
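The catch-up arithmetic in this comment checks out; as a quick sketch:

```python
# If the projection is 0.2 C/decade over the 30 years from 1997, and only
# 0.051 C was measured in the first 15 years, what must the next 15 do?
target_30yr = 0.2 * 3.0                  # 0.6 C expected over 30 years
measured_15yr = 0.051                    # Met Office figure quoted above
remaining = target_30yr - measured_15yr  # warming still needed
print(round(remaining, 3))               # 0.549 C over the next 15 years
print(round(remaining / 1.5, 3))         # 0.366 C/decade required
```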

“Measured” in what sense? One can easily measure a substantial decline by careful choice of endpoints.

Over the past two years various people have pointed out on the basis of how weather has varied over the past several decades that projections based on less than a decade are unreliable. In Feb. 2010 I proposed 15 years as a minimum, 6 months later Tamino proposed the same number, more recently Santer has proposed 17 years.

If you go with 17 years you will see 0.36 °C/decade in the recently released BEST land-temperature data from Richard Muller’s group at Berkeley. Furthermore there is no sign of any flattening in that data during the past decade.

Is it not a single pattern of a warming of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years? WHY NOT?

When you include those two offsets of ±2 °C I would have to agree with you.

If you increase them to ±20 °C it looks so flat that I’m not sure I can see any warming trend at all.

However if you decrease them to ±0.2 °C then it becomes easier to see differences that aren’t so visible with your suggested ±2 °C. In particular I would say on the basis of that last graph that you are quite right about the rise from the 1st peak to the 3rd peak, but quite wrong about the rise from the 1st peak to the 2nd, which is a lot less than your proposed 0.5 deg C every 30 years, and from the 2nd to the 3rd, which is a lot more.

This difference is also present in your original graph, and to the same degree, it just isn’t as obvious when you zoom way out like that.

I’d say on the basis of your original graph, when looked at more closely, that this warming trend is not following the straight line you suggest but is bending upwards. WHY NOT?

Yes but Vaughan, surely you are aware that choosing a 17 yr period with a start date of 1994 right at the bottom of the Pinatubo cooling effect is a nice big fat sweet juicy red cherry pick:) It is difficult to see any significant long term trend with a series containing extreme weather outliers including Pinatubo and the mother of all El Ninos in ’98.

choosing a 17 yr period with a start date of 1994 right at the bottom of the Pinatubo cooling effect is a nice big fat sweet juicy red cherry pick:)

This would be a fair objection if I had chosen this out of a range of alternatives. However I was trying to make the point that this is what you get when you try to include as much as possible of the allegedly flat 2000-2010 period while still meeting the Santer 17-year criterion. With those two constraints I had no alternative. (One can extend 2010 very slightly to 2010.2, but no further because of the insane outliers in the last two months of the BEST data!)

But since you think I’m cherry-picking, it would only be fair to let you cherry-pick a 17-year period (or longer if you prefer) from the BEST data that makes the opposite point. Go for it!

Note that I composed three moving-average filters of widths 12, 10, and 8 months in order to remove all traces of < 12 month high frequency noise. That combination turns out to be an extremely effective filter for that purpose, for reasons that are best explained in terms of side lobes in the frequency domain if anyone's interested. If you don't remove the high frequency noise it becomes easier to cherry pick from among 17-year windows, so to that extent I'm making it as hard for you to cherry-pick as I made it for me. Just didn't want to give either of us an unfair advantage.
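The cascade of 12-, 10- and 8-month moving averages described here is easy to reproduce; a sketch assuming monthly data (one useful property: the 12-wide boxcar alone exactly nulls a pure 12-month cycle, and the shorter boxcars push down the side lobes that let sub-annual noise leak through a single 12-month mean):

```python
import numpy as np

def cascaded_smooth(x, widths=(12, 10, 8)):
    """Compose centred boxcar (moving-average) filters of the given widths.

    Cascading 12-, 10- and 8-month boxcars drives the side lobes of the
    combined frequency response far down, so high-frequency (< 12 month)
    noise is suppressed far better than by a single 12-month mean.
    """
    y = np.asarray(x, dtype=float)
    for w in widths:
        y = np.convolve(y, np.ones(w) / w, mode="valid")
    return y

# A pure 12-month cycle is annihilated to floating-point precision.
months = np.arange(240)
cycle = np.sin(2 * np.pi * months / 12)
print(np.max(np.abs(cascaded_smooth(cycle))) < 1e-9)  # True
```

Note the composite kernel spans 12 + 10 + 8 - 2 = 28 months, so the smoothed series is correspondingly shorter than the input.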

I’ve taken the time to download some data and calculate the trends. I’ve used the global temperature data from GISS, NCDC and Hadcrut3. I’ve also taken the lower atmosphere data from UAH, which is closely related. I’ve calculated trends using Excel. The spreadsheet also includes BEST data for comparison. I’ve not quoted results here, since it measures land only, which gives stronger warming trends throughout.

Here are trends for the most recent 15 years, in C/decade. That is 1997 through 2011 inclusive.
Hadcrut3: 0.01
UAH: 0.10
NCDC: 0.05
GISS: 0.10

This is even less warming than Leake mentions. But if I take the 15 years to July 2011, I get this:
Hadcrut3: 0.05
UAH: 0.11
NCDC: 0.08
GISS: 0.12

That’s a bit closer to numbers in Leake’s article; it’s possible he’s working from numbers he obtained some months before publication. But in general, the 15 year trend is going to vary a fair bit over different time periods. There’s still a lot of influence from short term variation on that scale. This is not some new excuse to explain any failure of predictions; it’s been a stock standard part of climate science for the last twenty years.

The 95% confidence on these trends is roughly 0.05. That’s using the regression confidence, without any consideration of confidence on the underlying data. A crude confidence guide just to give a ballpark idea. This means that the data does show, with confidence, a short term trend which is less than the longer term trend we’ve seen over recent decades. That’s not particularly surprising, and (conventionally) most scientists expect the stronger long term trend to continue to be apparent over this century at least. If the recent lull persists another ten years, then there might be reason to look askance at conventional expectations. The current data, however, is consistent with conventional expectations.

It has been suggested that the recent lull over the last 15 years makes it unlikely that the next 5 years will allow for a strong warming trend over 20 years. We’ll see about that in five years time; in the meantime, here are trends for the 20 years just past (1992 through 2011 inclusive). (Regression confidence limits about 0.04)
Hadcrut3: 0.16
UAH: 0.21
NCDC: 0.16
GISS: 0.21

Those numbers match the expected trend of about 0.2 C/decade.

There is no IPCC prediction for the trend over the last 15 years. There IS, on the other hand, an expectation that the main underlying trend is about 0.2 C/decade or so; where “about” can be read as 0.15 to 0.3. It’s well understood and explicit (not explicit enough for everybody it seems, but certainly not a hidden footnote either) that this trend is not expected to show over short windows of time; but only over windows of time long enough to smooth untrended variations.

There is still a fair bit of up and down seen in the 20 year trend when I look at the data. But any 20 year window from about 1980 to now, in any of those datasets, shows a trend somewhere from 0.15 to 0.25 C/decade. I expect that to continue; just as I expect to continue to see substantially greater and smaller trends over a 15 year window or less.

I have adapted this comment to be a post at the experimental SkyDragon forum. In that post I include also links for all the data files, for the spreadsheet, and images of plots of how trend varies with time. Basically I plot the trend value for a window centered at a given time, for every possible window of that length. Hence you can see how the 15 year or 20 year trend (or others) varies over time. You can find it at Recent trends in global temperature. If you would like to discuss further, I’ll continue to read both here and in the forum.
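The "trend for every possible window" plots described here can be sketched in a few lines, with synthetic data standing in for the linked files:

```python
import numpy as np

def rolling_trends(years, anoms, window):
    """OLS slope, in deg C per decade, of every length-`window` span."""
    return np.array([
        np.polyfit(years[i:i + window], anoms[i:i + window], 1)[0] * 10.0
        for i in range(len(years) - window + 1)
    ])

# Sanity check: a steady 0.2 C/decade series gives 0.2 for every window.
years = np.arange(1950, 2012)
steady = 0.02 * (years - 1950)
print(np.allclose(rolling_trends(years, steady, 15), 0.2))  # True

# Add a multidecadal wiggle and the 15-year trends scatter around 0.2,
# while longer windows hug the underlying trend more closely.
wiggly = steady + 0.1 * np.sin(2 * np.pi * (years - 1950) / 30)
t15, t20 = rolling_trends(years, wiggly, 15), rolling_trends(years, wiggly, 20)
```

Plotting `t15` and `t20` against window-centre year gives exactly the kind of trend-variation picture described in the comment.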

Girma, this also relates to your question. You don’t expect 15 years to show up the long term trend. You do expect to see the long term trend start to dominate over short term untrended variation over longer windows, like 20 years. Those numbers are not an excuse for failed predictions. They are a feature seen in the data. It’s ALWAYS been understood that untrended variations make short windows not much good for looking at a global warming signal; that’s the nature of climate.

I know you don’t agree with the models. That’s your prerogative. But it is just wrong to say that the recent 15 year lull conflicts with IPCC expectations.

Chris
This means that the data does show, with confidence, a short term trend which is less than the longer term trend we’ve seen over recent decades.

Thank you for that.

That’s not particularly surprising

I disagree.

Because of the following:

Yeah, it wasn’t so much 1998 and all that that I was concerned about, used to dealing with that, but the possibility that we might be going through a longer – 10 year – period of relatively stable temperatures beyond what you might expect from La Nina etc.
Speculation, but if I see this as a possibility then others might also.

Girma, you show an email in which a conventional climate scientist considers the possibility of a ten year lull. Apparently you take this to mean that lulls are NOT considered possible in the conventional picture. Boggle.

Yes… so? La Nina etc (the major factor in “1998 and all that”) is not the only short term influence, and scientists know this. Such lulls have occurred before. They aren’t a surprise and we are not good at predicting them, and you’re STILL citing a climate scientist who is considering the possibility of a 10 year lull. That’s SUPPORTING my position.

Chris, the basic problem is that sensitivity is an abstract concept that has little to do with reality.

David, let me suggest a different possibility. Like everyone else in the world you have no clue what climate sensitivity is, and therefore have no clue whether it is abstract, concrete, or even meaningful.

So equally you have no clue as to whether it has anything to do with reality. In your complete ignorance of the concept, for all you and anyone else in the world might know, it might have everything to do with reality.

If I may continue the piling on Wojick. He doesn’t seem to realize that abstractions help us understand reality. Just because a concept is abstract does not mean it doesn’t reflect reality.

“By analogy, abstractly one can throw a potato chip as far as a baseball, but don’t bet on it. In short, the focus on radiative physics is misguided.”

Again poor Wojick does not understand what the word abstract means. In this case, he probably meant to say hypothetically. Radiative physics is not abstract, as it is clearly the mechanism by which the planet maintains its average temperature. So Wojick is saying that modulation of the radiative transfer leading to temperature change is only a hypothetical premise.

On the one hand that’s merely definition 3 here out of 5 definitions of this adjective.

On the other, in my reply to David I was assuming it was the one he intended, since I’m not aware of his ever having used that adjective with any of the other four meanings.

To back up WHT’s point, temperature is a great example of a quality that is concrete to the Man In The Street but abstract to a physicist. So abstract in fact that a molecule can’t run a temperature. A gas cannot have a temperature unless it consists of at least two molecules.

The MITS reasons that one molecule moving at ten times the average speed of air molecules at sea level must be much hotter than average, but this only shows a lack of appreciation for how something like temperature becomes meaningless without an abstraction on which to base it.

Edim, the 50-year trends in Hadcrut3 are positive over every 50-year span you pick last century. Although it does get very close to zero in the middle.

1900-1949 inclusive has a trend of +0.100

It goes up to a local high of +0.102 over 1903-1952; this is actually the culmination of a steady increase going into the previous century.

It then declines fairly steadily to a minimum of +0.003 over 1930-1979

Amazing. What a coincidence. That’s the 50 years you picked out!

It then increases again, to a maximum of +0.141 over the most recent 50 years.

This is consistent with the physical factors driving climate that we know of. The major factors in the early part of the century appear to be a mix, coming out of the little ice age. The middle decline is mostly (we think) from anthropogenic aerosol pollution. Enhanced greenhouse has been a factor all along, but that really took off from around 1970 or so, and is now easily the dominant forcing. The 50 year trend will, I expect, continue to increase, as it sheds the tail of that mid century cooling window.

The physical expectation is that the trend will continue upwards. Observations so far have confirmed the predictions of warming made some 30 years ago. Recently we’ve been getting more specific predictions into the future, and as you keep an eye on trends you can (potentially) falsify the expectations from present physical understanding.

Present understanding is that there is to be a persistent warming trend throughout this century; though (as has been noted consistently) this is not expected to be a steady trend. The expectation is that the main on-going trend should be apparent over 20 year windows or more; with 15 years or less showing a combination of this underlying trend with shorter term untrended variation.

Steven, what do you make of 0.13 °C/decade? That’s what I figure for the impact of rising CO2 in 1990. By 2000 AGW was up to 0.16 °C/decade. Currently I figure AGW is running at 0.2 °C/decade.

For those who’ve just tuned in, some people define AGW to be the impact of rising CO2 brought on by our land use changes, conversion of fossil fuels to energy, and flaring of same. I’m one of those people, why shouldn’t I be? Why shouldn’t anyone be? If you have a different definition of AGW, by all means put it on the table and let’s discuss it.

Now the Keeling curve shows atmospheric CO2 to have been increasing extremely steadily. Hence with that definition of AGW it’s impossible for the impact of rising CO2 to be all over the place. That impact has to be rising very steadily.

Which is the case for my AGW numbers above.
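The arithmetic behind “AGW rising steadily because CO2 rises steadily” can be sketched directly: with forcing logarithmic in CO2, the implied warming rate is the transient response per doubling times the current doubling rate of CO2. The sensitivity figure below is an assumed, illustrative value, not a measured one:

```python
import math

def agw_rate_per_decade(co2_growth_per_yr, transient_sensitivity=2.0):
    """Warming rate implied by logarithmic CO2 forcing:
    rate = S * d(log2 CO2)/dt, where S is an ASSUMED transient
    sensitivity in deg C per CO2 doubling."""
    doublings_per_yr = math.log(1.0 + co2_growth_per_yr) / math.log(2.0)
    return transient_sensitivity * doublings_per_yr * 10.0  # deg C/decade

# CO2 growing ~0.5%/yr gives roughly 0.14 deg C/decade under this assumption.
rate_2000 = agw_rate_per_decade(0.005)
```

Since the Keeling curve’s growth rate creeps upward only slowly, the rate implied by this kind of calculation creeps upward equally smoothly, which is the point being made above.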

As several million people have pointed out, CO2 isn’t the only thing driving global temperature or it would be rising as smoothly as CO2.

Well, duh.

Anyone conflating natural fluctuations in global temperature with those attributable to rising atmospheric CO2 is living in a state of sin.

You must separate the natural and anthropogenic contributions to global temperature. Ideally you would do so to a precision of well under a millikelvin. If you can’t come even close to that level of precision then you can’t claim to understand climate change, because climate change works with very small variations in temperature, in case you hadn’t noticed.

We should, however, anticipate that the warming we see at present will continue, and that we still have a say in how strong that warming will be; and plan accordingly, without panic.

The reason we should have high confidence in this conclusion is not extrapolation of trends or curve fitting, but the physics of what drives temperatures and climate. This is a matter which tends to get lost in discussions such as those of Leake. Of the various “hypotheses”, only number 1 has a solid testable basis. The others invoke all kinds of unexplained cycles and shifts without any suggestion of what could be driving them, and face the major problem that there is no indication of those cycles going back beyond the instrument record. That, combined with the idea that climate is insensitive and doesn’t change much in response to forcings, becomes merely oxymoronic.

Manacker, my reply was basically that we are not limited to observations of temperature. It’s physics which leads us to think that this isn’t just cycles.

Looking at nothing at all but the temperature trends, without any consideration of what is actually causing them, would remove entirely my whole basis for expecting the 30 year trend to remain comparatively strong warming.

I know Girma’s point was that the temperature data doesn’t (by itself) give strong evidence for a persistent ongoing trend. I agree that we need more. I think we HAVE more, and said so. I know we have more, because it’s really the physics which is more my particular interest than statistics on trends.

The problem with your reliance on ‘the physics’, is that the pesky climate doesn’t seem to be behaving the way your theory tells us it should. And I’m disappointed to see that your response isn’t to go back to the physics and see what you’ve missed, but merely to reiterate that it’ll all work out fine in the end.

H’mm

Scientific history is littered with examples of ‘comprehensive theories’ that were 90%+ correct, but with just a few little problems in some dark corners. And it is the investigation of those troublesome phenomena that can lead to interesting and new insights. Einstein’s work on the photo-electric effect and discovery of the quantised nature of radiation is a classic example.

So colour me unconvinced that you really understand this climate system. And colour me even more unconvinced by the argument that though it is impossible to forecast the climate 5 years away, you are perfectly capable of doing so 50 or 100 years out.

Unless and until you can give a better understanding of the recent plateau in temperatures, it seems to me that you have a lot more work to do. And ‘It’s the carbon dioxide, stupid!’ may prove to be a far too simplistic theory.

You cite “physics” (rather than “physical observations”) as the basis for the postulations leading to the CAGW premise.

By this, I suppose you are referring to greenhouse theory or climate sensitivity hypotheses used in the climate model simulations.

These are great, but as a rational skeptic, I would like to see empirical data to validate the hypotheses (which you call “physics”).

If the “physics” tell me one thing, but the “physical observations” are showing me something else, I’ll go with the physical observations, especially in a science that is still in its infancy, such as climate science.

Until these hypotheses (your “physics”) are validated by empirical evidence, based on real-time physical observations or reproducible experimentation, they remain uncorroborated hypotheses.

This is how “physics” (and all other sciences) work, Chris. Hypotheses and theories are tested against empirical evidence. If the empirical evidence shows that they can pass repeated falsification attempts, they can become “corroborated hypotheses” and eventually “reliable scientific data”. The CAGW hypothesis has not passed this test.

Show me (and Girma) the empirical evidence to support the CAGW hypothesis, i.e. that human GHG emissions have been the primary cause for past global warming and that this represents a serious potential threat to humanity and our environment unless these emissions are curtailed dramatically.

Well blow me down! You’re suggesting that Chris H-S goes out and does some experimenty thingies!

Surely you know by now that such things are strictly discouraged in climatology. Chris would soon find his career at an end.

The Theory is Correct. The Observations are Wrong. Long live The Theory. A bas les experimentalists! The Glorious and Infallible IPCC has Written the One True Truth. Speak with One Voice And Praise The Theory. Shun and Dismiss the Well-Funded Big Oil Running Dog Denier Scum……….

The problem with your reliance on ‘the physics’, is that the pesky climate doesn’t seem to be behaving the way your theory tells us it should. And I’m disappointed to see that your response isn’t to go back to the physics and see what you’ve missed, but merely to reiterate that it’ll all work out fine in the end.

Latimer, you’re just wrong there. The climate IS behaving consistently with what theory tells us. The people who say it isn’t are consistently distorting what expectations have actually been given.

There are some glitches between theory and data here and there, sometimes resolved by better theory, sometimes by correcting data problems. That’s normal in all kinds of science, by the way. But we haven’t been looking at those problems here. The “problem” being raised HERE — a 15 year lull — simply IS NOT a conflict with theory.

The point is — which I HAVE emphasized many times now — that the physical theories are incomplete. It’s not a solved or finished problem. They don’t, for example, adequately deal with short term variation. The short term variation is quite CONSISTENT with theory to date, but the theory is not at present able to predict it. (It may never do so beyond a narrow window, much as weather prediction is strongly limited in scope; this is where chaos shows up.) Physical modeling of many factors, like cloud, the carbon cycle, or solar cycles, remains unsolved, and limits the skill of predictions.

There are a whole HEAP of open questions in climate science, and we (or rather working scientists) are going back to the physics again and again and again. What on earth makes you think they are not??

Manacker says:

You cite “physics” (rather than “physical observations”) as the basis for the postulations leading to the CAGW premise.

By this, I suppose you are referring to greenhouse theory or climate sensitivity hypotheses used in the climate model simulations.

These are great, but as a rational skeptic, I would like to see empirical data to validate the hypotheses (which you call “physics”).

Greenhouse theory is “theory” in the same sense as relativity is “theory”. It’s really very well understood physics with ample direct empirical confirmation, and actually one of the simplest and best solved problems in climate.

Sensitivity isn’t a “hypothesis”. The hypotheses concern values for sensitivity. The most important constraints here are actually observational. Physical theory is not able to model the Earth well enough to give a definite prediction for sensitivity. That’s why it’s invariably given as a range of possible values.

What the physical theory gives you quite definitely is that there IS a strong persistent contribution to heating up the planet. What it doesn’t give you particularly well is how much response there will be (sensitivity) and also what other local changes around the globe occur and how short term chaotic changes take place as the whole system settles to the changing conditions.

There are, however, strong empirical reasons for lower bounds on sensitivity, which is where you get some fun debates from those who propose — in spite of all evidence to the contrary — a low value. Major positive feedbacks contributing increased sensitivity do have a well understood physical basis, but it is not complete. There’s a whole heck of a lot more going on, and so theory does not tell you a particular value.

In discussions here, some people don’t seem to understand how sensitivity is defined; they mix up equilibrium sensitivity with transient response sensitivity, or try to read equilibrium sensitivity from a temperature trend. That’s not falsifying theory. That’s just needing to learn a lot more before you can even follow discussions. The problem is I’m engaging here with people who operate at many levels.
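The equilibrium-versus-transient distinction mentioned here can be illustrated with a toy one-box energy balance model. Everything in this sketch is an illustrative assumption (a 100 m ocean mixed layer as the only heat reservoir, a feedback parameter λ in W m⁻² K⁻¹), not a climate-model result:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def realised_fraction(years, mixed_layer_m=100.0, lam=1.25):
    """Toy one-box model C dT/dt = F - lam*T under a step forcing F.
    Solution: T(t) = (F/lam) * (1 - exp(-t/tau)), with tau = C/lam.
    Returns the fraction of the EQUILIBRIUM response (F/lam) realised
    after `years`. lam and the mixed-layer depth are assumed values."""
    heat_capacity = 4.18e6 * mixed_layer_m  # J m^-2 K^-1 (seawater ~4.18 MJ m^-3 K^-1)
    tau_years = heat_capacity / lam / SECONDS_PER_YEAR
    return 1.0 - math.exp(-years / tau_years)
```

With these numbers tau is about a decade, so a 15–20 year trend reflects only part of the equilibrium response; a real deep ocean stretches this out much further. That is why reading equilibrium sensitivity straight off a short temperature trend understates it.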

I don’t expect to convince you to change your mind all at once; nor can I possibly go into a full account of all the data and theory that is being applied — even if I knew it all, which I don’t.

What I do know a fair bit about is the radiative physics in the atmosphere; at the level of a keen student, not a researcher.

‘The people who say it isn’t are consistently distorting what expectations have actually been given.’

Nope. 100% the wrong way about. Arse about face.

If we have got the wrong end of the stick (and I by no means accept that we have), then the fault lies with you, not with us. If you have ‘given us the expectations’, and we haven’t received them correctly, then it is your responsibility to make sure that we have them correctly, not ours to decode your mumbo jumbo into things that we interpret to your satisfaction.

That is the basis of sound communication…YOU need to check that the message as sent has been received correctly…

As to ‘distorting expectations’, please show a consistent and frequent library of publications/presentations/public speeches designed for the general public where you and your colleagues have placed as much emphasis on the variability of recent temperatures as you have on the ‘inevitable’ long-term warming.

Show me the portfolio of letters to the press where you have pointed out that stories discussing dangerous global warming should be moderated with the caveat that it may not actually warm for up to a generation, but that we should still be very concerned.

Show me the transcripts of the TV appearances where you and colleagues have appeared to reassure the public that the strange weather event du jour isn’t directly caused by global warming, but is just part of natural variation.

I’ll take a wee wager with you that you can’t do any of those things. And the reason is that they never happened. Until very recently – prompted by the article in the Sunday Times and other similar ones – the idea that ‘climate scientists have always known there will be a slowdown or stopping of temperature rise, but it’s all taken care of within our theory, so we still need to do something now’ has been a dirty little secret hidden in a dark place and rarely if ever let out for public examination. One can speculate why – not wishing to give the funding institutions an excuse to withdraw their largesse must be a strong hypothesis. And not wishing to halt an otherwise desired political ‘green’ process is another.

But whatever the reason, you guys have NOT made it clear that these effects are likely to occur. And accusing us of deliberate misinterpretation is no way to escape your responsibility to have done so.

It is no wonder that the Man from the Met Office writes that he believes that there is a lot of work to do to regain the trust in climate science. With daft arguments like this the amount of work needed becomes ever greater.

Chris, the basic problem is that sensitivity is an abstract concept that has little to do with reality, and the policy issue is about reality. To take an extreme example, suppose CO2 levels double but we fall into an ice age and the temp goes down 10 degrees. Is the sensitivity minus 10? No, of course not, because other factors are at play. The point is that radiative physics are only one aspect of climate dynamics, and the other aspects may matter more as far as reality is concerned. By analogy, abstractly one can throw a potato chip as far as a baseball, but don’t bet on it. In short, the focus on radiative physics is misguided.

CHS: In discussions here, some people don’t seem to understand how sensitivity is defined; they mix up equilibrium sensitivity with transient response sensitivity, or try to read equilibrium sensitivity from a temperature trend. That’s not falsifying theory. That’s just needing to learn a lot more before you can even follow discussions. The problem is I’m engaging here with people who operate at many levels.

How many levels are you operating at? If one, how do you define climate sensitivity? If multiple levels, how are you different from anyone else here?

Personally I operate at one level: I “try to read equilibrium sensitivity from a temperature trend” as you put it. Since I apparently to need to learn a lot more before I can even follow discussions, if you could convince me you had a more reliable approach you would have my full attention!

“Vaughan Pratt | February 10, 2012 at 2:01 am |
Girma, when would you say was the last time Earth experienced a rise of 15 °C in a mere millennium? What proportion of the planet’s species might fail to adapt to such a sudden change?
You are advocating business as usual, right, Girma? I’d hate to misrepresent you on that little detail.”

Vaughan,

Do you think the following link to a pie chart graphic still represents business as usual in the wider climate-science community?

It is all those bad humans’ need for fossil fuel energy that is going to exterminate them. Their bad behavior is going to make temperatures rise by 20 deg, coupled with natural spatio-temporal chaotic systems.

‘Shorter Latimer. I bear no responsibility for any errors in my opinions. It’s all your fault’

But we weren’t discussing my opinions… for which I, of course, bear full responsibility and am happy to do so.

We were discussing instead this remarkable phrase from Chris Ho-Stuart.

‘The people who say it isn’t are consistently distorting what expectations have actually been given’

Ever played the game ‘Chinese Whispers’? In which one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first.

Apart from being tremendous fun at children’s parties, it tells us something interesting and useful about communication in general. And that is that if you really want a message to be correctly received, you need to build in a way of checking that it has been; otherwise it will accumulate errors. They may be caused by noise in transmission, by genuine misunderstanding, by inattention, by copying errors, by misprints, by language difficulties, by cosmic rays. But whatever their cause they are inevitable.

So designers of communication systems pay special attention to ways of checking that the message has got through correctly. To use a relatively simple example: watch any episode of Air Crash Investigation and you will see that the pilot always reads back his ATC instructions, so that both parties know they have been received as intended. This strategy of send/receive/check is universal in good comms systems design.

Note especially the requirement that communication must be a two-way process. It is not

‘I send, you receive’, but

‘I send, you receive, you send, I receive, we both check’
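In software terms, the send/receive/check loop described here is a digest-plus-readback protocol. A minimal sketch (the function names are illustrative):

```python
import hashlib

def send(message):
    """Sender computes a digest so the eventual readback can be checked."""
    return hashlib.sha256(message.encode()).hexdigest()

def readback_ok(sent_digest, readback):
    """The 'I send, you receive, you send, I receive, we both check' step:
    the sender verifies the receiver's copy against the original digest."""
    return hashlib.sha256(readback.encode()).hexdigest() == sent_digest
```

The pilot’s readback of an ATC clearance plays exactly the role of `readback_ok`: the sender verifies the receiver’s copy instead of assuming the transmission succeeded.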

So for CH-S to blithely state that we haven’t correctly received (and indeed have wilfully distorted) the message we have been given really highlights a number of points about climatology communication.

Many climatologists have not grasped the two-way nature of communication at all. They are still stuck with the ‘I send, you receive’ model.

We can see this, for example, in the website ‘Real Climate’, which is entirely dedicated to such a one-way proposition. Its strap line, ‘Climate science from climate scientists’, does not augur well for its being a two-way street. And its comments policy only reinforces that idea.

With no corresponding check of perception, they have no way of knowing what people’s expectation will be at the end of the game. And if it is hugely inconsistent with what they thought they had given, it really is of little value to blame the recipient.

For it is they themselves who have chosen to restrict themselves to the one-way communication model. They who wilfully and deliberately choose to ignore contrary views – dismissing any objections with derision. They who choose the megaphone, not the telephone, as their way to disseminate their ‘message’. And, as in this case, when that mechanism fails, as it has here, and when it causes them acute public discomfort, it really ill-becomes them to blame the receiver for the failure.

To reiterate: For my opinions I am happy to take full responsibility.

But if, as they think, their message has not been received correctly by the masses, only the climatologists themselves bear the blame.

@Markus Yep, definitely 74.8% anthropogenic and only 25.2% natural.
I’d hate to misrepresent you on that little detail.

If you found an error I’d greatly appreciate your drawing it to my attention. If not then I don’t understand the point of your remark, unless you’re finding those numbers a bitter pill. If you’re interpreting the chart as standard deviation instead of variance then I would certainly agree with you that under that interpretation the chart would be wildly inaccurate.

Your recent reasoning in a three-component analytic model of long-term climate change, to me was illogical in several respects,

Do you mean it was wrong or that you didn’t follow it? Happy to fix either one.

but the biggest bias was your misanthropism.

I’ll have to pass on that one. Understanding how “but the biggest” can serve as a connective between topics whose relationship has not been made clear is above my pay grade.

your ad-hominem attack was no more than a condition of your preconceived ideals.

Again I’m not following. An ad hominem argument is an appeal to a negative trait that is unrelated to the argument. How is someone’s inability to accept that their reasoning is circular irrelevant when that’s what I’m complaining about?

Being a bank robber is a negative trait, but calling someone who’s robbing a bank a bank robber is not an ad hominem attack because it’s a relevant trait.

In my 58 years hanging out with Aristotle, I’ve never seen a better protagonist with the fallacy than you.

Just think how much more we will all know in 10 years…and then again in 20 about the relative strength of solar and anthropogenic forcing. What better time than a Maunder minimum #2 to see first hand if attribution studies have even come close to identifying the underlying forcing from the increase in greenhouse gases.

The last decade has been an interesting one. True, no continued increase in temps, though some pretty warm years. Will a new instrument-record warm year or two occur in Solar Cycle 24, perhaps near the max, especially if even a moderate El Niño sets up? How will complete skeptics of any AGW spin such an occurrence? And what of the long-term, multidecadal decline in Arctic sea ice? How might a sleepy sun for several decades impact that? Stay tuned…for the most exciting period in the history of climate study is about to begin!

It seems that some of us, at least, will have learned something about climate and climate change. I hope that having a little knowledge of these issues will serve to make us more aware of what potential impacts our actions, and those of other external influences, could have on climate in particular and on our environment generally.

Wisely stated: “Just think how much more we will all know in 10 years…and then again in 20 about the relative strength of solar and anthropogenic forcing”

This is precisely the point I was trying to make to Chris.

We simply DO NOT KNOW whether it will continue to cool slightly as it has over the past 11 years, or start warming again.

We don’t even know WHY it has warmed and cooled slightly in roughly 30-year half-cycles over the long-term record.

We think we know (but aren’t sure) why there was an underlying warming trend of around 0.6 degC per century – but IF it continues to cool slightly over the next several years despite ever-increasing CO2 levels, we will have to revise our theory on that, as well.

Yes. It is an exciting time for climate science – actually much more so than the much-ballyhooed IPCC late 20th century “poster period”, because we will get a real-life test of the CAGW hypothesis and assumptions, which are principally based upon that period.

If one picks 1998 as the “start date” for the current cycle of “lack of warming”, one arrives at an essentially flat trend (Met Office tells us: “Our records for the past 15 years suggest the world has warmed by about 0.051C over that period” ).

If one takes only the past 10 years (2002-2011) one arrives at a more significant cooling rate of 0.1C per decade.

There are good arguments for NOT starting in 1998 (a record high, strong El Nino year).

Starting in 2002 (or 2001) gives a more pronounced trend, but only over a shorter time period.

Cherry-picking?

Max

PS The fact of the matter is it is cooling slightly, as it did from around 1941 to around 1970 (not warming imperceptibly, as Met Office would have us believe).
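How much the start date matters for a short trend is easy to demonstrate: put an outlier year (such as the 1998 El Niño spike) at the start of the window and the OLS slope swings sharply. A sketch with synthetic numbers, not the actual HadCRUT3 values:

```python
import numpy as np

def trend_from(start_year, years, temps):
    """OLS trend in deg C per decade using data from start_year onward."""
    mask = years >= start_year
    return 10 * np.polyfit(years[mask], temps[mask], 1)[0]

# A flat series with a single hot spike in 1998:
years = np.arange(1995, 2012)
temps = np.where(years == 1998, 0.9, 0.4)

cool = trend_from(1998, years, temps)  # spike at the start drags the trend down
flat = trend_from(1999, years, temps)  # drop the spike year: trend is zero
```

The same flat-plus-spike series reads as “cooling” from 1998 and “no change” from 1999, which is exactly why the choice of start year dominates these arguments.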

That’s going from 1989-1999 inclusive at the top, to 2004-2011 inclusive, at the bottom. You might like to play with the spreadsheet I’ve made available at SkyDragon to do these calculations quickly.

Swings around a bit, doesn’t it? That’s why we DON’T use the 10 year trend as you have done. That’s why it’s just silly to say the Met Office is trying to make us believe what isn’t “actually” the case. The Met Office, after all, specified the time span. What they actually said (via Leake’s article) is this:

“Our records for the past 15 years suggest the world has warmed by about 0.051C over that period,”

Why would you pick a short term, which is even LESS reliable as an indicator of what’s coming, as what is “actually” happening now? I call BS on that. (BS being “Bad Science”, of course. :-)

What’s “actually” happening isn’t a trend over any window. Next year might be warmer or cooler; the changes “now” aren’t given by ANY trend. The trend over a window is a diagnostic, used for testing hypotheses. What is ACTUALLY happening now is that the atmospheric greenhouse effect is getting stronger; and at the same time the circulations of water and air and heat and cloud and so on around the globe are going on their merry chaotic way, meaning that we are going to have unpredictable short term variations while there is a continual flow of heat into the ocean from the energy imbalance between what is being emitted and what is being absorbed.

It’s the physics that matters. The behaviour of temperature, along with a lot of other observations, is all backing up the general picture of the planet shifting climate to get into balance with the new atmospheric composition. It’s not just extrapolating trends. It’s classic conventional science, digging into material causes, and forming and testing hypotheses.

Well, you excuse the models from explaining short-term variance and trends. That leaves long-term. Which, of course, can’t be validated till a long term has passed. So we’ll just have to trust the models are right!

Nice work if you can get it.

But I dooone thang sew. Trust in the models seems to be a VERY expensive wager.

Let me give you my opinion as to WHY the Met Office cherry-picked 1998 (instead of 2002): so they could still claim a warming trend, even a statistically insignificant one, rather than (oh horrors!) a cooling trend, and then (wink-wink) remind us that the trend period started in a record high El Niño year.

The IPCC projection of +0.2C per decade warming looks much worse when compared to actual -0.1C per decade cooling than it does when compared to +0.03C per decade warming – right? [Error is only 0.17C rather than 0.3C per decade.]

And even that +0.03C per decade warming sounds a bit better when expressed as +0.05C warming over a 15-year period – right? [Casual observers might conclude that the IPCC error was only 0.15C per decade.]

Chris, that is what cherry-picking and word-smithing is all about, and Met Office as well as IPCC are experts at both.

They DIDN’T cherry pick anything. They were asked about the recent trend over 15 years, and gave the answer.

Good heavens man! That’s about as unfair an accusation as you could possibly give! They didn’t pick the window length at all.

The Met office DOES NOT pick any particular window as being the “actual trend”.

Careful studies of data, such as that by Santer cited previously, attempt to give a scale to the window length where you start to see the underlying trend. That’s not cherry picking either; that’s making a testable inference based on looking at all the data; one that will be confirmed or falsified as time goes by.

Climatereason, the spreadsheet I have provided at the thread in “SkyDragon” (Recent trends in global temperature) allows you fairly quickly to produce plots which give trends for four well known global datasets (GISS, NCDC, HadCrut3, UAH) plus one land only dataset (BEST). It allows you to enter a window length (in months) and the plot will then show the trend over every possible window of that length for each of the datasets. You can see how the trends increase and decrease with time.

Is this “authoritative”? Science isn’t actually about “authoritative” answers. It’s when you get a consilience of many lines of evidence that you begin to think science has a handle on something.

Anyhow, those four datasets are all pretty close. There are differences and I know a bit about WHY there are differences, but that is not “authority”. That’s part of the messy business of science and replication and testing and falsification and so on.

But in general I think we can have confidence that the four together give a fair picture of how temperatures are changing over time. (It’s worth understanding what is actually being measured here — know how an “anomaly” is defined and calculated. Another topic.)

I’ve given plots over there which show trends from moving windows of 15 and 20 years. You can easily generate others with the spreadsheet. The sources of data are given with links. Here’s a direct link, for example, to the image of a plot for trends with a 20 year window. linky. The vertical scale is the trend value, the horizontal scale is time, and each plot point represents the trend for the 20 year window centered at the given time.

For example… the purple line is for HadCrut3. It shows a local high of 0.237 C/decade at the time 1994.333. This means that the 20 year window from April 1984 through to March 2004 inclusive shows a trend of +0.237

Since then, the window trends have declined. The most recent 20 year window is Jan 1992 through Dec 2011, and that has a trend of +0.155

The plot tracks how the trend changes as the window moves across the last century up to now.

I don’t give it as “authority”. If there are bugs, I want to know. If anyone wants to repeat the calculation, I give links to the data I used. But I do commend this as my best effort, which I think is correct, and which people can look at or use.

The SkyDragon thread has more comment, links and the spreadsheet itself.

The four datasets are not pretty close, nor are they of equal credibility. I like UAH because it is a measuring instrument, not a complex statistical model. It shows no warming from its beginning (1978) until 1997. It shows no warming from 2001 until now. But the second flat line is higher than the first flat line. The step up occurred during the big ENSO.

There is no trend here, just a step function. No physical trend whatsoever. No evidence of GHG warming whatsoever. The other datasets disagree, of course. The point is that there is no simple evidence such as you claim. I personally think the UAH data is sufficient to falsify AGW. But in no case is it simple.

There is no trend here, just a step function. No physical trend whatsoever. No evidence of GHG warming whatsoever. The other datasets disagree, of course.

If it will set your mind at ease, David, the other datasets actually agree. For example HADCRUT3 is just a step function whose steps average 27 years in length. Those steps are even longer on average than the ones in the UAH dataset. No physical trend whatsoever. No evidence of GHG warming whatsoever. The other datasets don’t disagree, they agree. You’re absolutely right about climate being flat.

Incidentally the six HADCRUT steps show a net decline of 0.036+0+0.012+0.01+0.03+0.067 = 0.155 °C when you add them up. How does that compare to the net decline of the UAH steps?

I personally think the UAH data is sufficient to falsify AGW.

As David has made clear on this and many occasions, his standards as to what it will take to convince him that AGW is false are very high. I personally think that for most skeptics a cold day in July is sufficient to falsify AGW. Imagine David’s uncertainty before the UAH data became available.
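Vaughan’s rejoinder is easy to check numerically: a staircase whose individual steps are perfectly flat still carries a positive overall OLS trend, because the rise lives in the jumps between the steps. A synthetic sketch (the step values and lengths are illustrative, not the HADCRUT3 ones):

```python
import numpy as np

years = np.arange(1970, 2012)                  # 42 years
steps = np.repeat([0.0, 0.15, 0.30], 14)       # three flat 14-year "steps"

# Within each step the OLS slope is exactly zero...
within = [np.polyfit(years[i:i + 14], steps[i:i + 14], 1)[0]
          for i in (0, 14, 28)]

# ...yet the series as a whole warms at roughly 0.1 deg C/decade.
overall = 10 * np.polyfit(years, steps, 1)[0]
```

So “each segment is flat” and “the whole series warms” are entirely compatible, which is the point of the sarcasm above.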

No, I had not seen that paper. It looks interesting, and the general notion makes good sense. That’s not a judgement on the paper or the work, which I haven’t read. I’ve seen similar kinds of work on shorter period changes associated with the Pacific Southern Oscillation.

PS. Don’t forget the next sentence after “I agree that we need more.” in the paragraph you partially quote.

The next sentence is “I think we HAVE more, and said so.”

It is the combination of all lines of evidence and tested theory which is the basis for the educated folks (that is, folks educated in climate science in particular) giving useful information on that technical subject, concerning what we do actually know about climate and what it’s doing.

Whether it is “scarey” or not is beside the point. The aim is to give what information we have. Which does indeed confirm about as well as science confirms anything that the planet is heating up primarily because of human caused changes to the atmosphere. That’s not just a guess. It’s the overwhelming conclusion supported using all available evidence and indicated by best available physical theory.

How much it will heat up and what attendant patterns of change will be seen around the globe is not so definite. That’s important information wanted to plan for the future, and that’s being worked on. Conclusions are being drawn, though generally they are more tentative, as befits the more limited level of scientific support for such conclusions.

Look at the figure again. It is NOT showing temperature, but the “THC anomaly”. That is, according to Mann et al, the THC will be contributing a small cooling effect. Actually, it should be “Knight et al”; Knight is the first author.

This is a proposal for part of the contribution to natural variability above and below the main trend… but an unusually long period contribution, which is indeed most interesting.

What the paper ACTUALLY says of the overall temperature trend is seen in the conclusion. Here are the last two sentences of the paper:

This natural reduction would accelerate anticipated anthropogenic THC weakening, and the associated AMO change would partially offset expected Northern Hemisphere warming. This effect needs to be taken into account in producing more realistic predictions of future climate change.

That “partially offset” means that Knight et al are proposing that the main warming trend is greater than the quasi-periodic THC/AMO contribution to global temperature. Hence the paper is not predicting cooling, but that this effect will mask some of the warming, leading to a reduction in the warming trend over that scale.

This fits pretty well with what I had suggested earlier, where I anticipated an upcoming 20 year trend of 0.15 to 0.20 C/decade… the low end of IPCC projections. But my guess was simply based on extrapolation, not on a particular physical theory. I don’t use that to prejudge Knight et al’s idea, even though it fits pretty well with my existing perspective. The paper will need to stand or fall on its own merits.

Knight et al are not proposing an alternative to the main drivers of global temperature. They are proposing a specific factor contributing to secondary and untrended variation, which (if correct) might allow for better mid-term forecasts on the scale of several decades.

It doesn’t falsify the IPCC projections, for two reasons.
(1) First, it’s not well confirmed. It’s a proposal; not a refutation of all alternatives.
(2) Second, it’s not really an alternative anyway. It would, if it holds up, give a constraint on the mid-term 20 year projection, to let it be nailed down a bit more tightly, towards the lower end of expectations. Instead of “about” 0.2, it would be “a bit under” 0.2.

When I did the same thing above, I got called a “lukewarmer”, which gave me a chuckle. :-)

Why do most of the global mean temperature peaks lie on a straight line?

Why do most of the global mean temperature valleys lie on a straight line?

They don’t, as far as I can see.

If you think you can see some kind of pattern by eyeballing a graph, the proper thing is to get some kind of sensible significance test. That’s not easy; you should really check with a professional statistician. I’m not one of those. Sorry.
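To make the "get a sensible significance test" advice concrete, here is a minimal sketch (mine, not from the thread) of an ordinary least-squares trend with a naive normal-approximation p-value. Real temperature series are serially correlated, which makes this naive test overconfident; that is exactly why the commenter suggests consulting a professional statistician.

```python
import math
import numpy as np

def trend_with_p(years, temps):
    """OLS slope (degrees C per decade) and a naive two-sided p-value.
    Assumes independent errors, which real climate data violate,
    so treat the p-value as optimistic."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    n = len(years)
    x = years - years.mean()
    slope = (x @ temps) / (x @ x)              # degrees per year
    resid = temps - temps.mean() - slope * x
    se = math.sqrt((resid @ resid) / (n - 2) / (x @ x))
    t = slope / se if se > 0 else float("inf")
    p = math.erfc(abs(t) / math.sqrt(2))       # normal approximation
    return slope * 10.0, p                     # per-decade slope, p-value

# Hypothetical series: a 0.2 C/decade trend plus 0.1 C annual noise
rng = np.random.default_rng(0)
years = np.arange(1980, 2010)
temps = 0.02 * (years - 1980) + rng.normal(0, 0.1, len(years))
slope_dec, p = trend_with_p(years, temps)
```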

Why do these two global mean temperature boundary lines parallel?

Because that’s how you defined them; they’re the same line with different offsets.

If you want to make a non-trivial comparison, you need to actually calculate two lines independently, and then see if they are parallel. But what definition would you use?

As it stands, note that your upper line has a couple of outlier peaks well above it, and the lower one… doesn’t.

Why is the slope of these boundary lines equal to the trend for the whole data from 1880 to 2010?

As before… they are the same line, just with different offsets. It’s because that’s what you’ve chosen to plot.

This is not a good place for these kinds of questions; this is really more about learning a bit of simple statistics.

Prof. Curry stated “Yes, but the very small positive trend is not consistent with the expectation of 0.2C/decade provided by the IPCC AR4”. The RealClimate analysis demonstrates unequivocally that this statement is factually incorrect. This is because Prof. Curry failed to consider the uncertainty around the expected trend. This may not have been stated explicitly in the summary for policymakers, but perhaps that is because it was a summary for policy makers.

Blustering about what the observations may or may not show in the next decades does not change the fact one iota that Prof. Curry was simply wrong about this.

Joshua: I assumed it was because I wasn’t sophisticated nor intelligent enough to understand that the answer to my question was obvious.

Not quite. It’s because, whatever your actual sophistication and intelligence, you write stupidly on purpose. As you wrote in another post, you try to prove people wrong using the technique of “Socratic Dialogues”, which in this venue are pointless.

It is a modification of the IPCC figure, the original of which is seen here: Figure TS.26, AR4.

The original figure is a comparison of predictions with data.

Your figure is a modification. You’ve somehow REMOVED the data from the original figure, and added in your own version of the data.

That’s not honest.

It’s also technically incompetent on two levels.

First, the projections are of central tendency, which means that they should be compared with smoothed data, filtering out short term variations. That is indeed what the original figure uses. You’ve removed that, and replaced it with unsmoothed data. Note that the original still shows the unsmoothed data points with black dots, and the smooth with a black line. That’s the major problem.

Second, the modified data is not quite aligned right. It seems to be shifted down a little.

The correct and more honest thing to do would have been the following.

KEEP ALL the original figure, including the observational data already supplied. Editing the image to remove parts of it, without even saying that’s what you’ve done, is reprehensible.

Extend with additional data available since publication, making sure it is correctly aligned with the existing plot, and identified clearly as added to the original. Purple is fine.

Add in a new longer smooth data line, using “decadal averages” as done in the original figure. That’s what you should compare, not raw data.

Do all this, and you’ll find (to my own total lack of surprise) that there’s no falsification at all.

Since the IPCC has not ever predicted 10 year trends, which is how you get “cooling”, you are, again, flatly wrong. The IPCC expectations are for a warming trend that shows up over longer time scales. On short scales, trends are expected to be substantially above and below the persistent long term trend. The IPCC expectation is for a trend of about 0.2 C/decade. That trend should be sought over windows of 20 years or more. The “about” in this case means something from 0.15 to 0.3.

I personally would bet on the trend for 2000-2020 to be a bit under 0.2, which is still in line with the given uncertainty levels.
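The point that 20-year windows are far more reliable than 10-year windows for detecting a ~0.2 C/decade trend can be illustrated with a quick Monte Carlo sketch. The numbers are assumptions for illustration only: a steady 0.02 C/yr trend plus independent 0.1 C annual noise, which if anything understates year-to-year persistence in real data.

```python
import numpy as np

def trend_spread(window_years, sims=2000, trend=0.02, sigma=0.1, seed=1):
    """Standard deviation (C/decade) of fitted trends over a window,
    for a synthetic series: linear trend + white annual noise."""
    rng = np.random.default_rng(seed)
    x = np.arange(window_years)
    slopes = []
    for _ in range(sims):
        y = trend * x + rng.normal(0, sigma, window_years)
        slopes.append(np.polyfit(x, y, 1)[0] * 10)  # C/decade
    return float(np.std(slopes))

spread10 = trend_spread(10)
spread20 = trend_spread(20)
# Trends over 10-year windows scatter far more widely than over
# 20-year windows, so short windows routinely show "pauses".
```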

You are responsible for choosing to cite that image, no matter what clown produced it. The person who produced it doesn’t actually know what is being predicted, or more likely (given that the original smooth of observations was deliberately removed) has deliberately obscured the matter. If you didn’t produce the image, then it would seem you’ve been sucked in badly. It most definitely does NOT show any falsification.

To the extent you think it does, you are merely refusing to use what is actually explicit in IPCC expectations. It is most certainly not, and never has been, trends over 10 years!

1. Is there anybody here who can admit that when the stated uncertainty of the projection is considered Prof. Curry’s assertion is incorrect?

Have you got some sort of circular reasoning problem, when your ad hominem attack was no more than a condition of your preconceived ideals?

“On both sides of the climate debate, the test that people seem to be applying as to whether their reasoning is logical is whether it leads to conclusions they already held.”

Your recent reasoning in a three-component analytic model of long-term climate change was, to me, illogical in several respects, but the biggest bias was your misanthropism.

“In my line of work, which for the last 35 years has been logic, this is known as circular reasoning.”

“My experience during that period in persuading people that they are using circular reasoning is that it is utterly impossible to do so. People have beliefs, and they simply refuse to attempt to imagine the opposite. Which is what you have to do in order to debug your reasoning.”

Argumentum ad populum has been your argument here. In my 58 years hanging out with Aristotle, I’ve never seen a better protagonist with the fallacy than you.

‘For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected’ (AR4 WG1 SPM p12)

There is no discussion of uncertainties. No ‘if’s or ‘but’s. The whole para is highlighted to stand out from the rest of the text. And it appears right slap bang under the heading ‘Projections of Future Changes in Climate’.

This was their take home message to Bush and Blair and Merkel and Putin and other world leaders. To the press and the rest of the media. And to the interested general public. This was written in plain(ish) language for a lay audience to understand. There can be no ambiguity. This is the prediction.

‘Latimer – what do you think the word ‘about’ means in the passage you have quoted?’

I think it means exactly what its common usage means.

No use trying to wriggle. The statement is unequivocal. It was deliberately meant to be easily read and easily understood. The Summary for Policy Makers is not a legal document where every word can be parsed and analysed. It is there to influence policymakers.

And it clearly states that they expect 0.2C warming per decade. Not 0.1C or 0.0C.

May I suggest that the source of confusion here is not over “about”, but whether a rate of “about 0.2 per decade” can be read as “about 0.2 for every decade”.

The “0.2 per decade” is simply the magnitude of the trend; the unit is degrees per decade. It’s not an indication that every decade will see about that rise. The suggestion is that over TWO decades, you will see a rate of increase of about that magnitude.

That prediction fails if the trend over the two decades is substantially different from 0.2.

The prediction does not say you can look at the trend over one decade and see that trend. I won’t quibble over whether it could be worded more clearly; but the meaning is nevertheless the same as it has been for every IPCC report ever. It’s always been recognized that there’s lots of short term variation over a decade or so. It’s always been explicit that the rise is not expected to be steady.

‘It’s always been recognized that there’s lots of short term variation over a decade or so. It’s always been explicit that the rise is not expected to be steady’

Chris, this is the Summary for Policy Makers. Guess who it is aimed at? Yep, Policy Makers. The clue is in the name.

And who might the policy makers be? Senior politicos and their staffs primarily. Why was this summary written, rather than just dump AR4 in its entirety on their desk and say ‘there you go matey.. it’s all in there’? Because these guys are busy people, have limited time to read things, probably have a zillion other important things to worry about and may have only limited interest in the topic. So, like an ad on the telly, it has to be short and sweet and cover the main points of a topic that the recipient may have only limited (or no) background knowledge about.

It may well be that in geeksville, arizona ‘it has always been recognised’ that there will be short term variation. For those whose careers are funded just to obsess about the last jot and tittle of every word in the report this may be common currency (though historical records of this being so seem to be hard to come by).

But this is not the case for the occupant of the White House, or the Elysee Palace or 10 Downing Street or wherever it may be. The ones at whom the SPM is aimed. They likely know little and care less about the historical conventions of the IPCC and its implicit caveats. They just have a document on their desk called ‘Summary for Policy Makers’, and quite reasonably expect that it should give them a quick read and a decent understanding of the key points of the topic in hand. Probably enough for them to incorporate something about it in a speech and answer 1st level questions in Parliament or a press conference without making complete arses of themselves.

And this document is made available to the general public as well (a good thing to do). So the interested layman might take exactly the same approach. That if he reads this he can hold his own with the regulars of the Dog and Duck or over coffee at work on the topic.

I’ve written before about how misguided it is for you to blame the recipient for getting your message ‘wrong’. Here, it seems you compound the mistake bigtime.

You do your cause no good just by leaping up and down and shouting ‘you’re all too stupid to understand what we tell you, scum’. Especially when you don’t seem to have taken the slightest trouble to move beyond megaphone communication.

Despite your wish that policy makers are as stupid as you pretend to be, here is what the SPM says to start,

“The basis for substantive paragraphs in this Summary for Policymakers can be found in the chapter sections specified in curly brackets.”

And are there “curly brackets” after the paragraph you keep partially quoting????

Yes;
“For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7} ”

I am so used to the unit “degrees per decade” that it would never even occur to me to take that as meaning “degrees for every decade”, or an indication of a time frame. Especially when the time frame was explicitly given as two decades in the same sentence.

The trend over 10 years varies enormously. It’s been negative a number of times in recent decades, and also very high: over 0.4 at times. It’s not something you can reasonably predict.

There are two wordings of the prediction you are speaking about. In the technical summary, they say:

Committed climate change (see Box TS.9) due to atmospheric composition in the year 2000 corresponds to a warming trend of about 0.1°C per decade over the next two decades, in the absence of large changes in volcanic or solar forcing. About twice as much warming (0.2°C per decade) would be expected if emissions were to fall within the range of the SRES marker scenarios.

That phrasing seems okay: “degrees per decade over the next two decades”. The “per decade” is the unit; the “over the next two decades” is the window.

In the summary for policy makers, the phrasing is slightly different. They say:

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

Using “over the next two decades” might have been better than “for the next two decades”, but at this point I think the more serious problem is a willful determination to take the wrong reading.

I don’t doubt there are curly brackets all over the place. Just like there are references in academic papers and in some popular history books. They are there to guide the very interested reader to further material. Big deal.

A Summary should be what it claims to be. If you really need to read all the other stuff because you are otherwise missing really important stuff, then it ain’t a summary, it’s a ramble. This is clearly entitled ‘Summary’, and does not say ‘BTW you guys need to read all the rest as well because I just can’t be arsed to write a proper summary’.

You and Chris H-S keep falling into the trap of trying to say that what was written and published for all the world – and all the world leaders – to see doesn’t mean what it actually means. That there is some sort of code known to the cognoscenti of the IPCC that makes the language mean something different from its common meaning. That we should all have known that the author had his fingers crossed when he wrote it. Or that it was the Third Wednesday after the Feast of Walpurgis and Beltane, so that words take on a different meaning.

And then, to compound the felony, you wander about claiming that it’s all our fault. Words almost fail me, so I will use an abbreviation.

‘I am so used to the unit “degrees per decade” that it would never even occur to me to take that as meaning “degrees for every decade”, or an indication of a time frame’

You’re doing a great job of emphasising my oft-repeated point about communication. You – as somebody involved in this professionally – may have some specific meaning that you are so used to that you never even notice it. And maybe that is fine for discussion with your immediate colleagues. Every field has its jargon, so this is no surprise.

But when you are writing for a wider sphere, you must be just as careful with your language and phrasing as I hope you are with the numbers.

In this case they were writing for non specialists in an attempt to make some sense of all the technical stuff in the rest of the report. It matters not a jot what you take it to mean. What matters is what the intended audience takes it to mean. And it is quite reasonable for them to expect that the language used will be in common usage and the meanings will be the common meaning.

And if a phrase or sentence or paragraph is possibly ambiguous, it is the author’s responsibility to make sure that such ambiguity is eliminated. It should not be the reader’s task to try to guess which meaning was intended.

The common meaning of ‘about 0.2C per decade’ is perfectly clear. If you choose to interpret it differently that is your choice. But you cannot accuse those who see it differently as wilfully distorting it.

Yr “Yes, it’s utterly stupid to declare what the AR4 says by only invoking the summary and ignoring the details in the technical section.”

O. K. Michael, I got it. When dealing with you and your pals–the “team”–we must always read the fine print. You’ve made your point.

And we also now know, thanks to you, Michael, that any decision-maker that relies on the weasel-worded summary statements appearing in your profession’s most prestigious guide for policy-makers is “stupid”, unless that policy-maker has also read and digested every fine-print detail of the whole report.

And finally, we know, Michael, those cleverly-worded, summary statements in the AR4 report are just a sound-bite friendly, agit-prop resource. You know, the sort of “good stuff” the team’s Big-Green trough-masters can draw on to whip-up decorative, “scientific” justifications for their CAGW scams, as needed. Like I said, I got it.

Dikran,
No, what RC has done is toss up a lot of bull dust, and you believers, having an appetite for bull dust, think it is wonderful.
RC is a pure propaganda site, and your quoting them is no better than some low level apparatchik of the USSR quoting Pravda.
I look forward to hearing how RC explains away the fact that the IPCC was completely wrong about Himalayan glaciers: http://www.guardian.co.uk/environment/2012/feb/08/glaciers-mountains

It is deeply sad that whenever in this debate an assertion is shown to be unequivocally incorrect, rather than accept that it is incorrect, the response is an attack on the source or an attempt to move the discussion onto another topic (Himalayan glaciers).

RC would be stupid to present a false analysis. The AR4 model runs are all archived and publicly available, so if they misrepresented them it would be straightforward to expose the falsehood. The ball is in your court: present an analysis that proves RC wrong. I am a rational man, I am swayed by logical reasoning, but I am not swayed by ad hominems or rhetoric or bluster.

It is deeply sad that you confuse what takes place at RC as conclusive of anything other than Schmidt’s bloviation.
RC has been, by your definition, stupid for quite some time.
Your defense tactic is ridiculous.

Dikran,
I followed your link from the RC page. I acknowledge that 2011 fell within the uncertainty range of the model estimates, although the range was 0.8C wide. It is too soon to tell the accuracy of the models in the AR4 report. More interestingly, temperatures have been below Hansen’s Scenario C since 2003. This is the scenario where draconian cuts would keep atmospheric CO2 concentrations constant at year 2000 levels (the best case scenario, which he felt was highly unlikely). Hansen has stated that the cause for this is the large increase in sulfate aerosols resulting from Chinese coal burning. RC glosses over these disparities, and instead focuses on trends longer than 15 years. A few posters asked about what would happen if the current 15-year trend (-0.009C/decade, CRU3) continued, but Gavin seemed to deflect these.
My question to you is how long would such a trend need to last before we begin to rethink these models? Gavin seems to think that not much will change this year.

DanH, the models need rethinking continuously; that is the way science works, and indeed is what the climate modellers do. They are always looking to include more physics so that the models become a better representation of reality. The models tell us the likely consequences of our actions based on our best understanding of the physics.

The key point here is that the observations are consistent with model projections. I have no objection to people criticising the models as long as the objection has a basis in fact. This one doesn’t, and promulgating it is merely reducing the signal to noise ratio of the discussion, so it is in the best interests of both camps to drop it.

I’m not sure you’re getting the point of the uncertainties: it’s not unlikely for a climate system with a sensitivity of 2-4.5ºC to show relatively little warming across a given short interval (e.g. 11 years) even when the average rate of warming, across a longer period encompassing that interval, is about 0.2ºC per decade.

Therefore, you can’t make any significant statements about sensitivity from this data.
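The "not unlikely" claim can be quantified with a toy Monte Carlo. The noise model below is an assumption for illustration: independent 0.15 C annual noise around a 0.2 C/decade trend. Real ENSO-driven variability is autocorrelated, which would make flat spells even more common than this sketch suggests.

```python
import numpy as np

# How often does an 11-year window show no warming even when the
# underlying trend is 0.2 C/decade?
rng = np.random.default_rng(42)
sims, window = 5000, 11
x = np.arange(window)
flat = 0
for _ in range(sims):
    y = 0.02 * x + rng.normal(0, 0.15, window)  # trend + assumed noise
    if np.polyfit(x, y, 1)[0] <= 0:             # fitted slope <= 0?
        flat += 1
frac_flat = flat / sims
# A non-negligible fraction of 11-year windows shows zero or
# negative trend despite the persistent underlying warming.
```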

You also might be interested to know that the IPCC range of uncertainty is not stratified by sensitivity across this short period. Some of the lowest trends come from higher sensitivity models, some of the highest trends come from low sensitivity models.

I should note that I do think 0.2ºC/Decade is probably a small overestimate though this is more likely due to the overly large forcings in most of the models rather than having any clear implications for sensitivity. I think the current warming rate is closer to ~0.15ºC/Decade. One thing that hasn’t been mentioned earlier in the thread is the word ‘about’, which has historically been used by the IPCC in a way that should really be written as ‘ABOUT’ in big letters.

PaulS said, “I should note that I do think 0.2ºC/Decade is probably a small overestimate though this is more likely due to the overly large forcings in most of the models rather than having any clear implications for sensitivity. I think the current warming rate is closer to ~0.15ºC/Decade. One thing that hasn’t been mentioned earlier in the thread is the word ‘about’, which has historically been used by the IPCC in a way that should really be written as ‘ABOUT’ in big letters.”

Agreed, ~0.15 with approximately +/- 0.15 natural variability. That is a huge difference in initial estimates and would make a huge difference in planning for and the cost of preparing for the future. That’s the point.

This plot was from the IPCC AR4, modified by Girma, with an uncertainty range added by myself, smilie wearing shades :)

Definitely not long enough for a confident trend, but it looks like we are ABOUT to establish one with major policy implications.

Of course, how creatively AR5 handles the Antarctic, mid-tropo, lower strat and tropics would also have major policy implications.

Regarding what you were saying about ‘falsification’ of scenarios one interesting thing to look at is what scenario we followed over the past 11 years. In terms of emissions the A1B / A2 pathways are pretty close, but if you look at estimates of total radiative forcing change (e.g. from GISS) there is zero increase since 2000. This means that the scenario we have effectively followed is the ‘Year 2000 Constant Concentrations’ one, projecting about 0.1ºC/Decade.

Dikran,
If the models are constantly being reworked to include the latest research, then why are so many people still using the AR4 model scenarios, which are about six years old? As I mentioned earlier, the observations are only consistent with the models because the models have such a large uncertainty associated with them. One would think that they would rework the models before the observations fall outside the 95% confidence level.
Similarly to Paul,
It appears that even 0.15C/decade is too high. While a 30-year trend line will yield a similar value (0.16), shorter or longer time frames do not. (The 15-year trend is essentially 0, the 60-year trend is 0.11, and the 90- and 120-year trends are 0.07C/decade.) Some posters are critical of the short time interval, and rightly so, but why neglect the long term trend in favor of an intermediate timeframe? A long term warming of ~0.7C/century would experience short-term rates of both 0.15 and 0C/decade.

While a 30-year trend line will yield a similar value (0.16), shorter or longer time frames do not. (The 15-year trend is essentially 0, the 60-year trend is 0.11, and the 90- and 120-year trends are 0.07C/decade.) Some posters are critical of the short time interval, and rightly so, but why neglect the long term trend in favor of an intermediate timeframe?

Because the theory isn’t that the climate should be warming at a certain rate by decree over any chosen timeframe. The theory is that climate will warm in proportion to changes in radiative forcing over time (+ equilibrium ‘pipeline’ warming). I’ll post the GISS forcing diagram again. Note that the forcing increase since 1950 is about 3 times that from 1880 to 1950, hence the theory would expect a greater rate of warming over the past 60 years compared to the past 130 years. There is also a, less clear, acceleration at around 1970 so again, we would expect the past 40 years to have a greater trend than the past 60.

Likewise, see my previous post about RF change since 2000. The GISS estimate suggests there has been no net RF change over this period.
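The proportionality argument above can be made concrete with back-of-envelope numbers. Every figure below is an assumption chosen for illustration (not the actual GISS forcing data); only the ~3x ratio comes from the comment itself.

```python
# Toy check of "warming rate scales with forcing rate".
F_early = 0.4          # W/m^2 forcing increase, 1880-1950 (assumed)
F_late = 3 * F_early   # roughly 3x larger since 1950, per the comment
response = 0.5         # C per W/m^2 transient response (assumed)

rate_early = response * F_early / 7.0  # C/decade over 7 decades
rate_late = response * F_late / 6.0    # C/decade over 6 decades
# Under these assumptions the later period's expected warming rate
# is several times the earlier one, matching the qualitative claim.
```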

DanH, the reason that we are still using the AR4 models is that organising a consistent set of scenarios and getting a large number of modelling groups to coordinate to produce the multi-model ensemble is a large effort, which detracts from the time required for research. There will be a new set of model runs for the next IPCC report and I understand that work on this is already underway.

Dikran, if you take the IPCC model range, you include models that predict 1.5K increase for the 21st century. So to say that the observations are “within” the model uncertainty is in my book a virtually meaningless statement. I think Hansen’s 1988 scenarios were clearly wrong on 2 fronts. He overestimated the percentage of emissions that would remain in the atmosphere, by a very large margin actually. And his model had a high sensitivity. In my book, that is definitive evidence that Hansen was unduly alarmist. In fact, the error bars on the models are probably greater than 100% of the values for such things as temp anomaly. And that’s the problem we should all focus on, not endless debates about how we can “adjust” the data so it’s consistent with our theory and models.

Mr. Marsupial, perhaps you can help me out with this sensitivity thing. Seems the Antarctic is not warming because CO2 needs water vapor to work. The tropics are not warming as much because they have too much water vapor for CO2 to work properly. The mid-latitudes are not warming as expected because ? At least the northern high latitudes are working. That kinda contradicts what Arrhenius predicted.

Since “we” know more than Arrhenius, is there a modeled output by latitude that matches what is going on?

I’m sorry, silly me. I thought that with Hypothesis II the latitudinal sensitivity of the past 120 years might indicate natural variation’s impact on the climate sensitivity to CO2 increase. Then an average sensitivity based on the latitudinal trends being 1.48C per doubling might be some indication of future response to CO2, which appears to be somewhat less than 0.2C per decade, though still within the confidence interval of the model predictions, just closer to scenario C.

you don’t have to read RC, just go to the IPCC model archive and plot the 95% credible interval for the A1B projection for yourself, and plot the observations on the same axes. You will find that the observations are consistent with the models and hence Prof. Curry’s assertion is incorrect.

One of the main issues with Dr. Curry’s statement is the instrumental record and which scenario is used to determine if H I is or can be falsified. Lucia has been on this subject for over a year now, comparing the various data sets to projections; in her opinion, the data is verging on falsification of H I.

As you know, falsifying H I with the large uncertainty is not an easy thing to do. Some say it is impossible to falsify. However, since the 0.2C per decade appears to relate to the BAU scenario, there is a possibility that that scenario is falsified.

So do you consider it valid to falsify “a” scenario, or just the particular scenario you happen to select?

It isn’t difficult at all. If the observations (within their stated uncertainty) lie outside the stated uncertainty of the projections, then that would falsify the models. Yes the credible interval is broad, but that is because there are large uncertainties involved.

However, that is beside the point. Prof. Curry has claimed that the observations are not consistent with the IPCC projections. If you plot the projections and their uncertainty, then it is clearly not the case. I really don’t understand why some can’t accept this.
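The consistency criterion being argued about here reduces to an interval-overlap check. A minimal sketch with hypothetical numbers (the intervals below are made up for illustration, not the actual AR4 values):

```python
def consistent(obs_trend, obs_err, proj_low, proj_high):
    """Crude check: does the observation's error bar overlap the
    stated projection interval?  All values in C/decade."""
    lo, hi = obs_trend - obs_err, obs_trend + obs_err
    return hi >= proj_low and lo <= proj_high

# Hypothetical example: a small observed trend with a wide error bar
# can still be consistent with a projection interval of 0.15-0.3,
# while a clearly negative trend would not be.
print(consistent(0.05, 0.12, 0.15, 0.3))  # True
print(consistent(-0.2, 0.1, 0.15, 0.3))   # False
```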

Short answer to the question is “Yes, there is indeed modeled output by latitude that matches what is going on.”

At least, in the general terms you are using.

Model data is available as gridded data, which allows you to get the numbers for latitudes if you want. The group at NASA also give you all kinds of plots from model runs using “ModelE”, including plots of all kinds of data by latitude; temperature included.

If I am reading the model plot correctly, 1 degree of warming has not happened in the Antarctic, and the tropics appear to be projected higher than observed. The sensitivity by latitude that I calculated differs a bit from that model.

Now is that because that model does not consider natural variation, or perhaps because the radiative physics is off a touch?

Mr. Marsupial said, “It isn’t difficult at all. If the observations (within their stated uncertainty) lay outside the stated uncertainty of the projections, then that would falsify the models.”

Business as usual estimates both the emissions and the response. That means there are two layers of uncertainty. If you consider only the response, I would say it is falsified; barely, but falsified. Perhaps a more specific post is required, because the observations agree with H II more than with H I.

By separating the scenarios from the models it would be easier to determine what is falsified. I am confident some models should be either falsified or corrected.

As you know, falsifying H I with the large uncertainty is not an easy thing to do. Some say it is impossible to falsify.

I’m assuming that by ‘falsifying HI’ you mean ‘falsifying the range in the IPCC’s scenario-based projections’. It really isn’t that difficult. All that needed to happen was a ~0.2ºC drop in global temperatures since 2000 (or a ~0.8ºC increase). That hasn’t happened, ergo the IPCC range is not ‘falsified’. It’s like saying it’s impossible to falsify the theory of gravity because all this stuff keeps falling down.

However, since the 0.2C per decade appears to relate to the BAU scenario, there is a possibility that that scenario is falsified.

The uncertainty that Dikram is talking about is for a BAU scenario (A1B). Observations are currently within the IPCC range, ergo the range isn’t falsified.

Paul S said, “I’m assuming that by ‘falsifying HI’ you mean ‘falsifying the range in the IPCC’s scenario-based projections’. It really isn’t that difficult. All that needed to happen was a ~0.2ºC drop in global temperatures since 2000 (or a ~0.8ºC increase). That hasn’t happened, ergo the IPCC range is not ‘falsified’. It’s like saying it’s impossible to falsify the theory of gravity because all this stuff keeps falling down.”

Or not rise with the projection. With current sensitivity estimates of 2C or less for a doubling, that is “likely”. So it is more like falsifying the gravitational constant because things don’t fall as fast.

Which, if I were in the climate modeling business, I would be considering, instead of splitting hairs.

Capt Dallas, the model is not a perfect match. It does show some of the variation by latitude you expect, but the magnitudes may differ somewhat. There is discussion of the limits on how well the model matches distributions of change over the globe in the associated journal article, prominently linked in the pages I gave previously. Section 2 lists and discusses the known deficiencies. Read it for yourself, please.

I thought the discussion was on whether H I will be falsified and if H II and H III might be worth consideration.

Since the projections are based on the model simulations that indicate approximately 0.2C per decade, the modeled warming in the Antarctic and the tropics appears higher than observed, and the trend in the tropics since 1994 is only 0.04C per decade, it appears likely that H I will be falsified. Perhaps a better look at the observations will help.

@Dikran marsupial Is there anybody here who can admit that, when the stated uncertainty of the projection is considered, Prof. Curry’s assertion is incorrect?

Judging by the following quote, apparently there is such a person:

@Dikran marsupial This is because Prof. Curry failed to consider the uncertainty around the expected trend.

But why should such a person be believed, when the rule of inference they appear to be applying seems to be: if RealClimate doesn’t know something, then JC doesn’t know it either.

Now admittedly that’s how RC prefers to reason about these things, quite understandably of course, that’s how we all reason. But where in the spectrum from logical to arrogant would you say that line of reasoning lies?

On both sides of the climate debate, the test that people seem to be applying as to whether their reasoning is logical is whether it leads to conclusions they already held.

In my line of work, which for the last 35 years has been logic, this is known as circular reasoning.

My experience during that period in persuading people that they are using circular reasoning is that it is utterly impossible to do so. People have beliefs, and they simply refuse to attempt to imagine the opposite. Which is what you have to do in order to debug your reasoning.

It is easy to get mixed up following the discussion. Just the discussion alone between you and Girma – it appears both of you are correct, which makes me think either you are both talking past each other about different things, or the nature of the data is that it can be manipulated almost any way you want.

I guess in the end it doesn’t really matter, as I believe we are getting warmer and I’m willing to accept that anthropogenic causes can be a significant factor. I just haven’t seen anything that qualifies as good science that shows it is something to be concerned about. Arguing about starting points for statistical analysis of the temperature data is nothing more than an academic exercise. It is interesting and may further our understanding, but it certainly is not a clarion call to action on the part of governments.

Is it just inherent in some large-scale version of Heisenberg’s Uncertainty Principle? Or because we don’t know how to write the code? Or some other reason?

Because it seems to me that before we spend another cent on this endeavour we need to have a very good idea of exactly what we can achieve. And that if the answer is that the models will never be good enough to have a decent idea about future temperatures, we should defund the lot immediately and spend the money on something useful instead.

The models are irreducibly imprecise because of sensitive dependence and structural instability arising from the intrinsic nature of the multiphasic Navier-Stokes partial differential equations: they are crazy little, type 3, deterministically chaotic bastards. And people say they can’t understand me. What the hell is the freakin’ problem with them?

These explanations are all very dubious. I thought solar forcing was negligible (at least that is the dogma of all the IPCC reports). Sulphate aerosols’ influence is essentially unknown even according to the IPCC. ENSO, I thought, couldn’t produce a net multi-decadal trend because of conservation of energy; we have had multiple posts from Fred on this subject. Of course, when an explanation is required, one resorts to things that are essentially unknown. This is just astrology and not science. And by the way, what caused the Little Ice Age and the Medieval climate optimum? If the models are as good as you say, they should be able to tell us. It seems pretty clear to me that the bulk of the evidence shows that the models are overpredicting warming, especially if you use the lower troposphere satellite data.

I much liked your attempt at differentiating the hypotheses used to establish a theory of climate dynamics.
I would like to use this differentiation to identify the physical background more accurately.

You wrote: “I. IPCC AGW hypothesis: 20th century climate variability/change is explained by external forcing, with natural internal variability providing high frequency ‘noise’” and “Hypothesis I derives from the 1D energy balance, thermodynamic view of the climate system”.

The operative word here is 1D.
To stay rigorous, Hypothesis 1 is not really that the variability = GHG signal + noise.
The real Hypothesis 1 is that the climate system can be robustly and deterministically predicted by a 1D model.
This Hypothesis has necessary consequences.
– only energy balance matters (here comes the school of people saying that the system is trivially simple because it exchanges energy only by radiation)
– only “equilibrium” matters (here comes the school of people who compare the system to a small ball slightly moved away from its equilibrium position inside a spherical bowl)
– space doesn’t matter (this is a tautology, because if a 3D system can be reduced to 1D and still predicted, then the “neglected” 2D obviously didn’t matter)
– from the above it also necessarily follows that everything that happens in the real 3D world can only be noise (here comes the school of people who say that everything averages out)

Interestingly, you will have noticed that 99% of the comments here are resolutely 1D, and many are even totally unable to understand the difference between a 3D world and a 1D model. This gives us some of the funniest comments, which boil down to saying things like “What can possibly be complex about multiphasic Navier-Stokes? How can that be relevant to anything?”

The analogy to this school of thinking comes immediately to mind, and I am sure that you will understand what I mean because you come from fluid dynamics, even if, unfortunately, it will be wasted on most commenters fond of the 1D hypothesis.
2D and 3D Navier-Stokes.
Why is 2D N-S easy?
Because the vorticity is conserved in the inviscid limit.
Of course it is not conserved for 3D N-S, and we live in a 3D world.
So people who learned only 2D N-S would never understand why N-S is really hard, and why we can’t correctly explain things that are easy to explain in a 2D world.

To summarize: the 1D climate hypothesis is by definition unable to explain anything that happens in the neglected 2 dimensions, and must rely on the axiom that all these 3D phenomena do not matter. This cannot be proven in the frame of this theory and must be postulated.

I did not really understand Hypothesis 2. It seems to me like a curve-fitting exercise where periodic signals are superposed on a linear signal. The whole exercise still happens in 1D, though, and that’s why it would be just an avatar of Hypothesis 1.

Here it is not really a hypothesis but a physical reality.
I hope nobody would defend the idea that the system doesn’t obey the dynamical equations transcribing energy, momentum and mass conservation (e.g. Navier-Stokes & Co).
And obviously it happens in a 3D world.
As obviously, the fields interact and are coupled in a non-linear way.
The Lyapunov coefficients are clearly >0.
So this is the only way to take the system seriously.
Sure, it is much harder to solve than unrealistic 1D linear models, but since when did Nature care about what was hard and what was easy to solve?
For me this is the only serious paradigm with some physics inside, and it is a disgrace that there are still people who don’t understand it.
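The positive-Lyapunov claim is easiest to see on a toy system. The sketch below estimates the largest Lyapunov exponent of the logistic map, a standard chaotic benchmark (not a climate model), by averaging log|f′(x)| along a long orbit; at r = 4 the exact value is ln 2 ≈ 0.693:

```python
import math

def lyapunov_logistic(r, x0=0.2, n=100_000, discard=1_000):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x) by
    averaging log|f'(x)| = log|r*(1 - 2x)| along a long orbit."""
    x = x0
    for _ in range(discard):          # let transients die out first
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic(4.0))  # close to ln 2 ~ 0.693: positive, chaotic
print(lyapunov_logistic(2.5))  # close to -ln 2: orbit settles onto a fixed point
```

A positive exponent means nearby trajectories separate exponentially, which is the "sensitive dependence" invoked above; a negative one means the system forgets perturbations, as in the ball-in-a-bowl picture.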

Seems to me that the point at which you should crow about predictions of future temperatures being wrong is the point at which any known magnitude of year-to-year fluctuation (anomalies?) would still leave the overall trend outside the projected range. (I believe that mosher has said some things along similar lines).

In other words, if 2012 were as much warmer than 2011 as 1997 was compared to 1996, and still the overall trend would not fall into the predicted range, then there are some serious problems with the predictions. That logic could be extended to two year differences, or ten year differences, etc.

In other words, if repeating the previously observed degree of variability would still not bring the observations into the predicted range, then it would seem to me reasonable to assert that the predictions were in error.

Of course, you’d also have to account for any established trend of increased year-to-year, or two-year to two-year, or decade-to-decade variability.

Can some “skeptic” take pity on me, read what I just wrote, and clear up my silly attempt at understanding how to evaluate the validity of the IPCC’s “predictions.”
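For what it is worth, the test described in the comment above can be written down directly. A rough sketch (plain OLS trend; the bounds `lo` and `hi` are placeholders for whatever projected range is under discussion): could one more year, swinging by as much as the largest year-to-year change already on record, still bring the overall trend into the predicted range?

```python
def trend_per_decade(values):
    """Ordinary least-squares trend of annual values, in units per decade."""
    n = len(values)
    mean_x = (n - 1) / 2.0
    mean_y = sum(values) / n
    sxy = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(values))
    sxx = sum((i - mean_x) ** 2 for i in range(n))
    return 10.0 * sxy / sxx

def rescuable(values, lo, hi):
    """Could one more year, jumping up or down by as much as the largest
    year-to-year change already seen, put the trend inside [lo, hi]?"""
    biggest = max(abs(b - a) for a, b in zip(values, values[1:]))
    return any(lo <= trend_per_decade(values + [values[-1] + jump]) <= hi
               for jump in (biggest, -biggest))
```

If `rescuable` returns False even for the largest swing on record, ordinary variability cannot reconcile the predictions with the data, which is essentially the criterion being proposed.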

Joshua, you are absolutely correct. Girma’s graph, the URL of which I have lost, makes the issue crystal clear. Temperatures have been rising since 1850 or so at a rate of 0.06 C per decade, with nearly all temperatures lying within +/- 0.25 C of the trend line. The recent pause in temperature rise still lies within this +/- 0.25 limit. When the actual observed temperature falls outside these limits, as predicted by the proponents of CAGW, then, and only then, will I start to worry. If Girma sees this, I am sure he still has the URL.

Needless to say, what I have just written has been repeated ad nauseam for years. It is just that people like yourself seem to ignore it.
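The criterion above (a fitted trend line plus a ±0.25 band) is easy to make mechanical. A minimal sketch; the band width is the one described in the comment, and the input would be annual anomalies:

```python
def within_band(values, band=0.25):
    """Fit an OLS straight line to annual anomalies and report whether
    every year stays within +/- `band` of that line."""
    n = len(values)
    mean_x = (n - 1) / 2.0
    mean_y = sum(values) / n
    sxx = sum((i - mean_x) ** 2 for i in range(n))
    slope = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(values)) / sxx
    return all(abs(y - (mean_y + slope * (i - mean_x))) <= band
               for i, y in enumerate(values))

# A perfectly linear rise never leaves the band; a one-year spike of a
# full degree does.
print(within_band([0.01 * i for i in range(100)]))  # True
print(within_band([0.0, 1.0, 0.0, 0.0]))            # False
```

On this style of reasoning, worry begins only when `within_band` first returns False for the full record.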

I didn’t see a question. You seem to want skeptics to prove that something is correct, whereas we mostly point out limitations in the science: showing that the IPCC/CAGW view does not rest on a solid base. In this instance, the IPCC projections are shown to be unsubstantiated by subsequent events, so there is no good reason to believe the forecast for later decades.

Ignorance is the hardest state to recognize and admit to. We document the ignorance (i.e. limits of the knowledge) and you want us to turn it into an alternate certainty. The only proper way to do that will be to continue the research until there is more evidence of all kinds. I give you “Raymond T. Pierrehumbert’s book ‘Principles of Planetary Climate’ is mostly correct, but inaccurate in detail, and the details require more study”. You want proof that some other presentation of details is correct, but every presentation is inaccurate in details. This seems to cause you anguish. I attribute to you a belief like “The consensus has to be correct because there is no strongly supported alternative”, but I give you “The consensus is full of cavities, is not strongly supported, and there is no strongly supported alternative.”

MattStat,
Joshua is working far too hard to avoid such an obvious point.
His is a tar baby defense: to ask insincere questions, or to make non-questions, and to bog down everyone involved.
Just let him twist and troll.
AGW apologists have to resort to whack-a-mole, tar baby and shouting defenses, but their favorite is to simply ignore what they do not like, fail to respond, and then claim only their views are valid.

I didn’t see a question. You seem to want skeptics to prove that something is correct, …

Good point. The question was whether or not my construction (to the extent that it was even understandable) was correct. The “question,” such as it was, was to ask whether or not there is a flaw in my thinking. Not to prove that something is correct, but to show me how it is incorrect.

This seems to cause you anguish. I attribute to you a belief like “The consensus has to be correct because there is no strongly supported alternative”,

That isn’t how I see it. It is interesting that no matter how many times I correct people in that regard, they still take my opinion to be the one you describe above.

My “belief” (more like general sense of how it works rather than a belief – I think that “belief” is too strong a word) is that absent hard proof otherwise, it isn’t irrelevant that a “consensus” of expert opinion says that a certain interpretation is probably correct. I don’t think a “consensus” is dispositive in any way – in the sense that you attribute such a belief to me. Because I can’t evaluate the science for myself, I have to look at it as playing the probabilities. It doesn’t help the “skeptical” cause when smart people such as yourself, who understand the science much better than I, make simple mistakes of attribution w/r/t my beliefs.

Here’s the thing. I read threads like the “Sky Dragon” thread, or posts up at WUWT where very smart and knowledgeable “skeptics” say that AGW is impossible, and then I read other “skeptics” say that I should disregard such arguments because they are outliers, and then the same “skeptics” turn around and say that I shouldn’t disregard their opinions simply because they are outliers. Do you see the problem?

In fact, I don’t disregard any opinion because it is an outlier. I look at the information available and make a best guess.

That all said – if you can figure out what I was attempting to say in the post (I’m not sure it makes any sense), I would appreciate it if you could either confirm the logic or explain where it goes wrong.

It is not a problem, hence “the problem” describes something that is not there. Skeptics do not agree about very much except a few generalities: the absorption/emission spectra of gases measured in laboratories; the laws of thermodynamics, etc. They dispute each other’s reasons for being skeptics.

My “belief” (more like general sense of how it works rather than a belief – I think that “belief” is too strong a word) is that absent hard proof otherwise, it isn’t irrelevant that a “consensus” of expert opinion says that a certain interpretation is probably correct. I don’t think a “consensus” is dispositive in any way – in the sense that you attribute such a belief to me.

You have put the burden of proof on the wrong side. In science, the burden of proof lies with those promoting complex theories. That’s because most complex theories have been false as proffered, or else have required decades of work to clear up the details. Thus, a “consensus” of the experts is irrelevant. Until the cavities have been filled, there is no solid foundation for any policy, and odds are that the consensus is wrong.

I find “burden of proof” arguments about as useless as arguments about what constitutes an ad hom. Both sides always think that the other side has the burden of proof.

It is not a problem, hence “the problem” describes something that is not there.

So you say. However, I have read many an argument at WUWT between smart and knowledgeable people, arguing with absolute certainty, that their opposition is arguing in favor of a perspective that fundamentally defies the laws of nature. Now I can’t evaluate the science of their arguments – all I can do is: (1) evaluate it when they seem to make illogical assertions in a non-scientific domain and use that as information about their proclivity to allow biases to influence their thinking and, (2) evaluate, based purely on probabilities, whether they seem to be an outlier among other smart and knowledgeable people, realizing, of course, that being an outlier is not dispositive to anything.

Thus, a “consensus” of the experts is irrelevant.

So you say, but I doubt that you truly live your life in accordance with that statement. My guess is that if you had an illness that you didn’t know anything about and had no personal experience with evaluating, you would do the absolute best you could to understand and consider the opinions of experts, but at some point you would consider the preponderance of expert opinion in making a treatment decision. Of course, the preponderance of expert opinion wouldn’t necessarily be dispositive, but you would consider it a factor in your decision making. I find it hard to believe that you would say it is “irrelevant.”

Anyway, we’re not likely to make further progress on this. You made an incorrect assumption about my belief. That is what it is. I’d still appreciate it if you’d give me feedback on my post that started this mini-thread.

Joshua: I find “burden of proof” arguments about as useless as arguments about what constitutes an ad hom. Both sides always think that the other side has the burden of proof.

Yet you clearly stated where you believe the burden of proof lies. Have you forgotten the order of these comments?

However, I have read many an argument at WUWT between smart and knowledgeable people, arguing with absolute certainty, that their opposition is arguing in favor of a perspective that fundamentally defies the laws of nature.

Debate or discuss with them, quoting them exactly.

I’d still appreciate it if you’d give me feedback on my post that started this mini-thread.

That was poorly expressed. Write what you believe to be a logical development whose conclusion you believe to be well-supported.

I suggest that the key issue is whether or not we have confidence in the models that forecast dire conditions 100+ years into the future. Temperature as a function of CO2 levels is only one characteristic predicted by the models, and the “actuals” are coming in lower than the models predicted.

The issue then goes to the other characteristics that the models predicted: all of them. Let’s compare observations to forecast results. It appears that by any reasonable measure the models are doing very poorly. Given that the models are doing poorly to date, what is the basis for the fears about what the same models predict for 100 years into the future?

Maybe conditions will change, but there is no reason now to think that is inevitable or even probable. How does a reasonable person accept the predicted outcomes based on what we know today?

Sure, as we stretch out the time horizon, it is entirely possible that the implications of any short-term inaccuracies, across different projections of impacts, would grow exponentially. That is a legitimate concern, IMO, but it doesn’t justify exploiting short term divergence from projected long-term trends when caveats about short-term trends were provided. (whether those caveats were sufficiently stressed or not is another matter).

The issue then goes to the other characteristics that the models predicted: all of them. Let’s compare observations to forecast results. It appears that by any reasonable measure the models are doing very poorly.

Could you be more specific? Are you referring to sea level change? Glacier melting? Frequency of extreme weather events? All of the above? Are you referring to scientifically qualified predictions based on modeling where observations fall outside of error ranges? As far as I can tell, in each of those areas the debate takes basically the same shape as with temperatures: “skeptics” claiming that short-term observations disprove long-term predictions and “realists” saying that the short-term variability is within long-term error ranges. Are you referring, for example, to the statements about glaciers melting in the Himalayas?

Plenty of members of both organizations had many chances to stand up and protect dissent. Many did not, even when they realized the wrong they were protecting. This part of the conversation isn’t even worth discussing; it’s a social fact of record.

The real issue, and the taboo, is: what culture do all of these groups have in common that creates the kind of willful blindness that existed in the AGW movement and the reaction to it?

I believe you referenced this Climategate email link, if indirectly, just the other day;

Consider the date of the email, the parties and how arrogant, tired and boring the debate was even then.

Debating the topics of temperature sets and political consensus as serious science arguments ended for me long ago. While I have my own nuances on the topics, Dr. Lindzen best represents my views on why these topics are a waste of time (while I still respect the exercise of M&M and others concerning the abuse of these records and data);

At some point it’s just silly to debate flawed (worthless might be a better word) temperature records that, even if assumed accurate, tell us very little about “climate” in the complex meaning of the term. The debate itself reflects the child-like, simple-minded nature of the science of AGW and its advocates. While I’m sure that many skeptics are well intended in this engagement, there is clearly a validation to the AGW minions that are well represented on this thread. Better to spend time over the silly insider views of “consensus” and nearly worthless land data records than face the real issue, which is the political culture and motivations that are the actual drivers of the movement.

Lindzen was 25 years ahead that long ago; he’s 25 years ahead today. The science is limited in reality, but the politics is and was very, very real. Do you seriously believe either the leadership of the APS or the RS is any more qualified or informed on the issue at hand? Of course you don’t; no sane person could. There is only one way to explain their posture, and that is the cultural rot of politics that they in fact share with most of the vocal “consensus” and the larger climate science community.

Boycotts and resignations are helpful; addressing the political culture specifically is more helpful. This site is largely a backwater for the standards of political obfuscation regarding topics that are maintained by the moderator and largely supported by skeptics. Useful idiots they often are.

Do you think any of those links are going to change APS or RS leadership positions?

cwon14 you write “Do you think any of those links are going to change APS or RS leadership positions?

I have a bridge to sell you.”

I respectfully disagree. The positions of the Royal Society and American Physical Society, together with dozens of other learned societies are so obviously anti-science that sooner or later, they are going to have to change their positions. So, no, I cannot be certain that those particular links are going to change things. But things ARE going to change; sometime. And maybe a discussion of the subject might speed up the process.

One of these days something will happen and the RS and APS will have egg all over their faces. However, I cannot predict what will actually cause this to happen. It happened before with the RS. There was a disagreement when lightning conductors were first invented. The Americans and British had different designs; one was a spike and the other a ball on top of the building being protected. Which was which I have forgotten. We now know that it does not make a blind bit of difference, as long as you have a large conductor on top of the building. The RS, of course, supported British industry. Need I say more.

It’s good to be optimistic, Jim, some of the time. The toady green infiltration into physical science, its dependency on debt finance and the usual trappings of government excess are becoming far more clear to more people. That’s part of the reform movement. That the AGW left tends to be one-note, boring and totalitarian in nature is also becoming more widely understood. Ultimately it’s this issue that is the driver of real reform at trade and professional associations.

A more honest and direct discussion of political specifics in these matters will help advance reform. In the third link from WUWT, which is a scream by the way, we get to visit FULL STUPID on display from the APS. The actual political motives of the party in question are not referenced directly, but everyone (around our circles) knows. Only some idiotic decorum or convention prevents direct discussion of the APS AGW leader and the uber-left-wing agenda he is shilling for and is part of. It’s this sort of make-believe that drags this topic on forever, while huge wastes of time discussing, of all things, meaningless temp data crowd this forum. The same time is being wasted at the APS, all by letting one side control a technical narrative that is nonsense while obfuscating a political narrative that is most relevant to the story. Even Dr. Lindzen is far too gracious in the RS reform article.

We see this pattern here on these boards all the time. We accept silly rhetorical standards, and the pace of change remains dismal. Byers has nothing to worry about at the APS while the wimp factor of skeptics remains this high, for example. How obvious are their political motives? How obvious is it that they are not discussed directly and specifically, as they should be?

Your links illustrated it perfectly. Expect nothing if this is a general demeanor and rules of discussion going forward.

I doubt that the APS, NAS, RS or any other venerable society will itself have “egg on its face” as the scientific support for CAGW unravels further.

It will simply be a matter of changing out the political leadership (early resignation) with someone who sees the ongoing climate debate more as a true scientist and less as a politician.

[It is actually tragic to see a renowned Nobel Prize winning scientist blow his reputation by making silly political proclamations about a science of which he is totally ignorant, as cell biologist Sir Paul Nurse of the RS did.]

The societies can then quietly issue new statements, not based on defending the IPCC “consensus” position, but based on the status of the “science” at that time.

I apologize in advance regarding the temperature set reference; I realize you aren’t on this thread whining about it as so many are. I linked the two topics to your post in error. I should have just focused on the questions your links raise.

They are good links, worth reading, and I appreciate the sentiments, but I stand by my conclusion. Nebulous, shilly-shally conversations and points aren’t going to change them.

Girma, no they haven’t. You don’t appear to understand how to even test IPCC projections properly, let alone describe them accurately or directly.

If you try to spell out a projection, and the nature of a refutation, then it will be easier to explain where you are going wrong. That’s been done many times. Is that why you aren’t now even TRYING to state what projection you think is falsified or describe what observation falsifies it?

I understand your point, I’ve followed the thread. On the other hand would you consider the concept of “being dragged down to their level”?

Arguing the temperature sets is rolling in the mud with the swine on the other side. The whole notion of “climate” being reduced to “warmer” or “colder” was in itself an idiot’s delight from the very beginning. It’s one of the main reasons I rank climate science near the bottom of the universe of science fields. I have more respect for “experts” in vitamin hyperbole.

I respect McIntyre and the shooting down of the bogus hockey stick. I see the magic bullet of exposing fraud in the temp data, but on the other hand skeptics are also validating a red herring: that short-term temp records really matter when in fact they don’t. Nothing about this topic explains cause, for example. That it’s flat or down near peak CO2 production might seem like fun to expose, but you are also perpetuating a core warmer myth: that temperature is really important in driving climate, when it could well be the other way around. Same for CO2 levels, the other bag of bogus shells.

You know when you are having extended conversations with Joshua or Martha shows up it’s worth a review.

Agree with all the above, Sol, but I’d just comment that RSS and UAH are not measuring the same thing as the others. They provide an estimate of temperature through a deep slice of the lower atmosphere, while the others measure air temperature about 2 m above the surface.

The lower troposphere temperature shows much larger and more rapid swings, which means that you generally need a longer window to reveal the longer term trend. They are also shorter records overall, so it’s not going to be easy to calculate appropriate window sizes as has been done for the surface record. There are also major sources of error and uncertainty with piecing together satellite measures, which is why RSS and UAH are quite different from each other, despite being based on exactly the same underlying raw microwave data. People seem to think satellite data is somehow “more reliable”. For temperature of the lower atmosphere, it’s actually substantially less reliable; for a number of reasons. The extraction of a temperature value is certainly one heck of a lot more mathematically complex.

That is an oxymoron. The IPCC projections are NOT for 10 year windows. They are for the central tendency of the rise over a longer period. Twenty years is enough for the central tendency of the trend to show up reasonably well, though of course there’s still a bit of up and down due to the unpredicted short-term variations apparent in shorter windows.

Be that as it may:

The most recent 20 year windows I gave previously. HadCRUT3 (which is known to be biased a little low) gives 0.155 as the most recent trend.

GISS gives 0.206

Lots of people seem to like the UAH data; that gives 0.206 also

I like to keep an eye on the NCDC construction. That gives 0.164

Since you like HadCrut3, the last time you could get a 20 year window showing a trend of less than 0.1 was 1977-1997.

From 1979-1999 onwards, the HadCRUT 20 year window has never gone below 0.15.

Will it do so in coming years? Possibly… 0.15 is after all at the low bound of expectations. But note that HadCRUT4 is coming out soon, which is likely to give better global coverage, of the Arctic in particular, and that is likely to give results aligning more with other datasets that already take the full globe into account.

Using 15 year windows is just wrong. That will show up too much of the unpredicted short term variation. You get 15 year windows with a trend below 0.15 from time to time in all the data sets, and this is not in conflict with expectations.

I agree with your point about length of trends, and generally would go even further..

You mention that –

From 1979-1999 onwards, the HadCRUT 20 year window has never gone below 0.15.

I keep an eye on the trends since 1990 for the sole reason that this was the year of the IPCC FAR. I’d therefore make the observation that since then, HadCRUT3 has a trend of 0.14 (as does RSS).

There are of course caveats aplenty to be had. Those with a ‘cooling’ agenda might point out that had Pinatubo erupted in 2009 rather than 1991, the trends would have been less than 0.1.

I’m not making great claims for these things, but saying that even a 20yr moving window has its limitations, as I’m sure you know. One obvious example of this is the likely trend of 1997-2006. Barring very dramatic changes, it will have a very low positive trend. I don’t think this is significant, but making a point of the 20 year trends of the past opens you up a little to the ‘what about 1997-2006!!’ if it is indeed very low [or even negative].

I’ve just found and added RSS to my spreadsheet; I get 0.142, which matches your calculation.

Overall, it’s right on the low end of expectations. NCDC or GISS will be, I think, the proper comparison; though it doesn’t make a lot of difference if you prefer one of the others.

I’m not making great claims for these things, but saying that even a 20yr moving window has its limitations, as I’m sure you know. One obvious example of this is the likely trend of 1997-2006. Barring very dramatic changes, it will have a very low positive trend. I don’t think this is significant, but making a point of the 20 year trends of the past opens you up a little to the ‘what about 1997-2006!!’ if it is indeed very low [or even negative].

Agree on the limits of a 20 year trend; and I’ve said so in my comments a number of times. That window still incorporates a substantial contribution from untrended short term variation; though it is long enough for the main trend to dominate over the main short term variances.

However, 1997-2006 is a 10 year window, not a 20 year window; and for what it is worth, it is a window with strong positive trends in all datasets. Was this a typo? Can you confirm what you meant there?

As I said previously; the HadCRUT3 20 year window hasn’t been below 0.1 since 1977-1996 (0.099); and all 20 year windows including any part of this century are above 0.15… so far. It could dip below 0.15 in coming years, quite possibly. We’ll see; but for GISS or NCDC; that is less likely.

Apologies for the typo [and well spotted that it might be] – I meant 1997-2016.

That window still incorporates a substantial contribution from untrended short term variation; though it is long enough for the main trend to dominate over the main short term variances.

My point was in a way a slight disagreement with your statement above. I would agree with it, but would add ‘usually’; and I was thinking about the 20 year trends starting 1997, 1998, and perhaps 1996.

If the RSS data can show a negative trend over 15 years (which they do now, just) then an unexceptional next 5 years will very likely lead to a low positive trend for a few 20 year periods starting in 4 or 5 years.

The important point for me is just that trying to extract meaning – particularly statistically convincing meaning – from data such as we have is always fraught with danger. My conclusion in a way – which is more philosophical than scientific – is that hard and fast rules about the lengths of trends needed for relevance are themselves not very supportable. Of course we’re desperate to pin down ‘how many years are needed’ etc. I think the nature of the beast is such that the harder and faster the rules we apply, the less likely we are to glean the most useful information from the data. And maybe there is a limit to how useful very complicated statistical analysis can be. We should allow ourselves to shift our perspectives to and fro depending on the circumstance – which would be OK if we were all profoundly objective…

I revert back to my original comment – that mostly I would err on the side of using the longest period available and settle for saying ‘we just don’t know’ more often than not.

But that doesn’t detract from your point that 20 year windows are considerably more useful than decades – which are barely worth studying.

The length of trend you analyse should really depend on what question you’re trying to answer. If people were really serious about disproving the existence of a continuing underlying positive trend, rather than politicking on a blog forum, they should really be calculating what would actually be necessary.

One way to calculate this theoretically would be to produce a time series containing a linear 0.2ºC trend + noise with similar properties to the global surface temperature series, then find out what length of ‘time’ would be needed before you can definitively say that every trend drawn is pretty close to the actual 0.2ºC trend.

I did this a while ago. As I recall I calculated that about 25 years of annual data is needed to get within +/-0.05 of the actual trend 95% of the time, though I think I included a low-amplitude sine wave, representing the 11-year solar cycle, in that particular experiment + random ‘noise’.

We can then state a null hypothesis that there is an underlying trend of 0.2ºC/Decade, and this null hypothesis can be falsified with 95% confidence if any 25-year trend across the period is more than 0.05 away from 0.2ºC/Decade. Of course this wouldn’t necessarily be particularly meaningful since the 0.2 figure is clearly not meant to represent an exact estimate and I haven’t accounted for the possible effects of volcanic eruptions or significant changes in solar activity.
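The experiment described above is easy to reproduce. Here is a minimal sketch; the noise level, solar amplitude and record length are illustrative assumptions, not values from the comment:

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_TREND = 0.02   # degC/year, i.e. 0.2 degC/decade
NOISE_SD = 0.10     # interannual scatter (assumed, roughly surface-record-like)
SOLAR_AMP = 0.05    # low-amplitude 11-year sine, as in the comment
N_YEARS = 500       # long synthetic record so many windows can be sampled

def synthetic_series(n):
    """Linear trend + 11-year sine + white noise, in degC."""
    t = np.arange(n)
    return (TRUE_TREND * t
            + SOLAR_AMP * np.sin(2 * np.pi * t / 11.0)
            + rng.normal(0.0, NOISE_SD, n))

def fraction_within(window, tol=0.005, trials=100):
    """Fraction of fitted `window`-year trends that land within +/-tol
    degC/yr (i.e. +/-0.05 degC/decade) of the true trend."""
    t = np.arange(N_YEARS)
    hits = total = 0
    for _ in range(trials):
        y = synthetic_series(N_YEARS)
        for start in range(0, N_YEARS - window + 1, window):
            seg = slice(start, start + window)
            slope = np.polyfit(t[seg], y[seg], 1)[0]  # OLS trend of the window
            hits += abs(slope - TRUE_TREND) <= tol
            total += 1
    return hits / total

for w in (10, 15, 20, 25, 30):
    print(w, round(fraction_within(w), 2))
```

With these assumptions the recovered fraction climbs steadily with window length; and since real temperature noise is autocorrelated rather than white, a real-world window would need to be somewhat longer still.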

“The IPCC and its proponents are emphatic that the flat, cool trend from mid 1940′s to mid 1950′s is not natural variability, but anthropogenic aerosol forcing.”

In fact, the IPCC statements about this include estimates of both the natural (the AMO, MOC) and human-caused (aerosols, land use) factors that are expected to have led to this cooling, along with estimates of the continued warming trend and anticipation of ongoing research that will continue to add to our understanding.

The IPCC statements do not involve the sort of declarations that ClimateEtc so often asserts and then shadowboxes against. Quite the opposite. It’s interesting that ClimateEtc perceives itself as radical readers and defenders of objectivity but instead engages in some of the most repetitive rhetoric, and believes that bogus arguments about claims that aren’t made are evidence of independent thought.

You are a nit picker. What you do is micro-examine various IPCC statements looking for exact matching wording, and finding no match to a general impression, declare Dr Curry to be a liar.

Clearly the impression one gets from poring through the various documents etc is that the aggregate conclusion is that it’s aerosols, as per what Dr Curry says. Sure, there are weasel-worded “mights” and “coulds” pertaining to natural causes interspersed with the actual conclusions they want you to reach, where any *normal* reader would rightfully conclude that the inclusion of the weasel words is specifically intended to form an inclusive contrast (why yes, we looked at natural stuff and sure, there’s a slight possibility this could be natural or even the result of evil dolphins, but really? No.)

The upshot is that they think it’s “X” and there’s a slight possibility they could be wrong. Martha looks solely at the admission of a vanishingly small possibility that they could be wrong and accuses Dr Curry of reaching an invalid conclusion. In truth the conclusion Dr Curry has is the correct one, and Martha is an apologist for the IPCC relying solely on the inclusion of weasel phrasings as the crux of her argument.

Is there a common latin phrase like *ad hominem* referring to “argument from weasel words”? If not there ought to be.

“Results from climate models driven by estimated radiative forcings for the 20th century (Chapter 9) suggest that there was little change prior to about 1915, and that a substantial fraction of the early 20th-century change was contributed by naturally occurring influences including solar radiation changes, volcanism and natural variability. From about 1940 to 1970 the increasing industrialisation following World War II increased pollution in the Northern Hemisphere, contributing to cooling, and increases in carbon dioxide and other greenhouse gases dominate the observed warming after the mid-1970s.”

Other diagrams show greenhouse forcing only really taking off in the late 1950s (as compared with net forcings from other influences) after which Agung puts a spanner in the works by erupting twice in the early 1960s.

So “emphatic” seems to be a strong word for the mid 1940s to mid-1950s. In fact I personally can’t recall anyone ever focussing on this particular period (not that that means much). Usually the focus is normally on the 1940 peak or the mid 1940s all the way up to the 1970s.

Judith isn’t a liar, she’s just got it wrong on this. I don’t know how, but it’s obvious, and given another recent post on ‘error cascades’ I would have hoped that Judith would have nipped the Leake error in the bud rather than propagate it even further.

Your first quote says “contributed” to cooling. The context of the quote is that natural variability also contributed (my post above includes more of the context from the quote).

Your second quote refers to “other sources” of aerosols but also refers to the 1950s and 1960s. Judith was referring to the mid-1940s to mid-1950s. Now I’m not being picky by pointing this out, because once you get into the 1960s you can add in the effect of the eruption of Agung – something that is made clear in the second FAQ you quote from.

“…aerosols from fossil fuels and other sources cooled the planet. The eruption of Mt. Agung in 1963 also put large quantities of reflective dust into the upper atmosphere.”

So both quotes emphasised the impacts of natural variability as much as the impacts of pollution. “Emphatic” is indefensible. Sorry.

yeah but, yeah but, yeah but…… as far as I’m aware the people at Climate Etc don’t have much/any influence on government policies to “de-carbonise” our economies, whereas the IPCC…………. surely to any dispassionate observer the science should have been settled, or at least not turned into a “cause”, before all these measures which might/might not have an effect were implemented? Cart before horse, anyone?

We know that smoking kills; we’re not so sure about secondary smoke. We know that humans have an influence on the climate; if “we’re” honest, nobody knows by how much and what it could mean. So surely the best thing to do is “lay down our swords” (both sides) and use the best scientific brains about to establish by how much, without agendas (either side), leave the politics at the door and get back to real science. All this wondering about people’s motivations (marxist, socialist, right wing, creationist, fossil fuel funded (I wish!), republican, democrat, labour, conservative) means sweet f.a. in the end. Professor Curry, it seems to me you have some serious people on this blog who, if they worked together, might be able to figure some of this out. Alas, I’m not amongst them! don’t worry be happy

Unfortunately global warming became politicized before it was generally accepted in the scientific community. This tended to polarize the debate before enough evidence could be obtained to either prove or disprove the theory. Now, too many people have staked their careers and/or reputations on either side to be able to backpedal without tripping over their own feet. It may take an entirely new set of scientists, who have no ties to any of the organizations you list, to remedy this situation.

Conclusion: trends are obviously decreasing far below the IPCC forecast and we are heading to a plateau or even a slight cooling which may last until 2030 according to hypothesis II (natural variability, and more especially the PDO, AMO, NAO… cycles).
There is no cherry picking in this!

It’s a good snap shot of long and short term trends at one point in time.

It doesn’t show decreasing trends with time; it shows that shorter trends are more influenced than longer ones by a recent lull.

How is this significant? It’s a simple consequence of what trends do, and no conflict with anything anyone has expected.

Once again, with feeling. The IPCC does not predict trends over short windows, like 10 or 15 years. Such windows are expected to show substantial slow down and speed up, above and below a longer persistent trend.

But you aren’t even LOOKING at how trends change over time here.

Here’s a direct link to a spreadsheet people might like to play with, which I have mentioned before. TemperatureAnomalyTrends.xls. It includes data for UAH, Hadcrut3, GISS, NCDC and BEST; but not RSS. I’ll find and add that shortly.

It allows you to plot the change of trends over time. Look in particular at the way 15 year windows change over time. Here’s an image I supplied previously. (Note the vertical axis is trend, and the horizontal axis is the center time of the window. Hence this is NOT plotting the data itself, but the way the trends change.) Plot of 15 year trends for all 15 year windows.

There is indeed a sharp downturn in trend recently; which corresponds to the end of the window coming past the temperature increase around 1995-1998.

The ONLY one of the windows chosen above that is long enough to make sense in terms of the actual projections being used is the 30 year window. A 20 year window would work as well. A 15 year window is a bit too short.
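For anyone without the spreadsheet, the trend-versus-window-centre plot it produces is straightforward to recreate. A sketch against a synthetic stand-in series (the 0.15 °C/decade trend and noise level here are assumptions, not real HadCRUT values):

```python
import numpy as np

def rolling_trends(years, anomalies, window):
    """OLS slope (degC/yr) for every `window`-year span; returns
    (centre_year, trend) pairs, matching the spreadsheet's plot."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies, dtype=float)
    out = []
    for i in range(len(years) - window + 1):
        ys = years[i:i + window]
        slope = np.polyfit(ys, anomalies[i:i + window], 1)[0]
        out.append((ys.mean(), slope))  # x-axis = centre of the window
    return out

# Illustrative stand-in series (NOT real HadCRUT3 data): a 0.15 degC/decade
# rise plus white noise, 1950-2011.
rng = np.random.default_rng(0)
yrs = np.arange(1950, 2012)
anom = 0.015 * (yrs - 1950) + rng.normal(0, 0.08, yrs.size)

for w in (15, 20, 30):
    trends = [t for _, t in rolling_trends(yrs, anom, w)]
    print(w, round(min(trends), 3), round(max(trends), 3))
```

The spread of rolling trends shrinks as the window lengthens, which is the point being made: 15 year windows wander far more around the same underlying trend than 30 year ones do.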

Dikran marsupial | February 9, 2012 at 9:28 am |
“It isn’t difficult at all. If the observations (within their stated uncertainty) lay outside the stated uncertainty of the projections, then that would falsify the models. Yes the credible interval is broad, but that is because there are large uncertainties involved. However, that is beside the point. Prof. Curry has claimed that the observations are not consistent with the IPCC projections. If you plot the projections and their uncertainty, then it is clearly not the case. I really don’t understand why some can’t accept this.”

If the respected atmospheric scientist can bring up the topic then methinks she should be able to answer the holes poked in her argument… that is what a good blogger/professor would do…

@Joshua Yes, given that it was in the SUMMARY for policymakers it isn’t that surprising that the detail of the credible interval was simplified in that way for policymakers rather than scientists. Had all the details been left in, it wouldn’t be a summary and it wouldn’t have been suitable for policymakers.

O.K. so it may be possible to misconstrue the summary for policymakers, but if you actually go back and look at the science (in this case the AR4 model projections) it is completely clear that the observations are consistent with the models. Prof. Curry’s assertion is unequivocally false; it would be to her advantage to withdraw it.

I think you can stretch that bit about uneven rises beyond its natural breaking point…

It might be more reasonable to interpret it as saying that year by year values will be quite varied, but a) we can give a decadal figure and b) we predict the rise to be 1 degree by 2025.

From that, it doesn’t mean to say we have to wait until 2025 to give our assessments about the prediction. It might be pointless after 5 years, not very meaningful after 10, more significant at 15 and very much coming out of the wash at 20…

This is my reason for suggesting the FAR predictions are worth examining – for the purposes of learning. They were made 22 years ago, rather than 5 for AR4. I’m not suggesting passing final judgement, using terms like falsified or anything similar. The fact is, the predictions after 22 years look very high. That’s all!

I’ll share your observation that some/many will use that to claim proof of a hoax or some such baloney. It shouldn’t hold us back though from being fearlessly honest about what we do and don’t know, and what we do to fill in the gaps [ie use imagination/confirmation bias]

Please bookmark this page (http://bit.ly/zA0a2j) so that we can compare the IPCC’s projection with observation in the coming years. This is the most easily verifiable graph the IPCC ever gave regarding the performance of climate models in the near term.

Why do RealClimate’s error shades look like a diverging tube instead of a cylindrical one?

Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level. Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.
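The three scenarios differ mainly in the shape of the forcing trajectory: compound growth (A), constant annual increments (B), and increments stopping at 2000 (C). A toy sketch of those shapes; the starting level and step sizes are illustrative placeholders, not Hansen’s numbers:

```python
# Sketch of the three forcing trajectory shapes Hansen's scenarios describe.
# The absolute numbers are placeholders chosen only to show the shapes.
years = range(1988, 2012)

def forcing(scenario):
    f = 1.0      # arbitrary starting forcing (placeholder units)
    df = 0.015   # the initial annual increment, held fixed for Scenario B
    out = []
    for y in years:
        out.append(f)
        if scenario == "A":
            f *= 1.015          # ~1.5%/yr compound growth -> exponential
        elif scenario == "B":
            f += df             # constant annual increment -> linear growth
        elif scenario == "C":
            if y < 2000:
                f += df         # grows like B until 2000...
            # ...then held flat thereafter
    return out

a, b, c = forcing("A"), forcing("B"), forcing("C")
print(round(a[-1], 3), round(b[-1], 3), round(c[-1], 3))
```

Even with placeholder numbers, the ordering comes out as the article describes: A ends highest, C lowest, with B in between.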

Why does Realclimate insist the business as usual case in Hansen et al model is scenario B instead of A?

Girma, you write “Why does Realclimate insist the business as usual case in Hansen et al model is scenario B instead of A?”

Simple. If they agree you are right, then the IPCC predictions are clearly garbage; which we know anyway. By pretending that what has happened is Scenario B, then they may be able to keep their scam going for a few more months. But then again, they may not.

Why does Realclimate insist the business as usual case in Hansen et al model is scenario B instead of A?

The answer to that is given in the article. Just read it.

We noted in 2007, that Scenario B was running a little high compared with the forcings growth (by about 10%) using estimated forcings up to 2003 (Scenario A was significantly higher, and Scenario C was lower).

Scenario B is the one that comes closest to what actually occurred, so THAT is the one to use.

We did in fact have a downturn in emission rates, in line with scenario B. This was mainly from the global downturn, not a deliberate policy decision; but be that as it may, that’s beside the point. Of the three possible futures considered, scenario B is the one that turns out closest to actuality.

Check Figure 3 (this is the graph, which I posted earlier – but will post again):

Hansen’s 1988 study stipulated:

Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth rate averages about 1.5% of current emissions, so that the net greenhouse forcing increases exponentially.

Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level.

Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

Based on CDIAC data, the actual CO2 emission growth rate increased from 1.5% in the 1970s and 1980s to 1.7% from 1988 to today, so the actual rate of increase was around 13% greater than that assumed by Hansen for Scenario A. http://cdiac.ornl.gov/ftp/ndp030/global.1751_2008.ems
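The 13% figure is straightforward arithmetic on the growth rates; note, though (a side calculation of mine, not from the comment), that the cumulative difference in emissions implied over 1988-2011 is much smaller:

```python
# Checking the comment's arithmetic: a 1.7%/yr emissions growth rate versus
# the 1.5%/yr Hansen assumed for Scenario A.
assumed, actual = 0.015, 0.017
print(round(actual / assumed - 1, 3))       # relative difference in rates: 0.133

# Cumulative effect on emissions over 23 years (1988-2011) is much smaller:
print(round((1.017 / 1.015) ** 23 - 1, 3))  # ~0.046, i.e. under 5%
```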

Obviously, Scenarios B and C are way off the mark.

The problem is that Hansen’s Scenario A grossly overestimated the GH warming that would result, very likely because he used a climate sensitivity estimate that was high by a factor of 2 or more.

Actual warming turned out to be the same as Hansen’s Scenario C, based on the complete shutdown of GHG emissions in 2000 “such that the greenhouse climate forcing ceases to increase after 2000”. But this did not happen, did it?

You can wiggle and squirm all you want to Chris, but all-in-all it was a forecast that turned out to be grossly exaggerated (like all of Hansen’s “predictions”).

It is difficult for me to tell whether you are simply misinformed or are purposely fabricating a story in defense of Hansen’s failed forecast.

I’m taking available data, doing my own calculations, and showing the results for anyone to check or repeat. I get the same result as is used in various published work looking back at the old 1988 model.

The actual forcings which turn out are a bit less than scenario B and a lot less than scenario A.

Why do you think otherwise? I think it is because you are not even looking at what Hansen et al actually calculate.

They are climate modelers. Their input to the climate model is an atmospheric composition. They don’t try to calculate from emissions; but they propose rates of increase of atmospheric composition based on models of rates of emission increase.

I compared the model input (which is an atmospheric composition in ppm for each year) with the actual composition. That’s the correct way to check which scenario is closest to actual; reality is a bit less than scenario B and a lot less than scenario A.

Your objection appears to be based on criticizing predictions of atmospheric composition based on emissions. That’s a level of indirection you would do well to avoid if you are wanting to check the skill of the climate modeling back then.

But hey. Even if you do use emissions as a guide rather than the actual model input, your description of that data is still just wrong. There was a major reduction in the rate at which emissions increased around the 1990s before emissions took off again this century.

That is also reflected in a dip in the rate of atmospheric increase in the 1990s, as I mentioned before.

I’m not making anything up here or trying to fabricate a story. You’ve cited some emissions data, but have you actually done any calculations with it? Where are you getting your descriptions of trends? It certainly doesn’t match the actual data you are linking.

For the posters here who visit gavin’s Real Climate site. I find it interesting that they would not post my comment.

All I asked was how it made sense to average the results of multiple models when none of them had been demonstrated to accurately predict future conditions, and how it made sense to have faith in models before they are shown to match observed results.

Your comment is #663 at the Bore Hole. I suspect the reason it’s there is not simply because it states a skeptical viewpoint but because it’s very general and can’t be answered briefly. I think if you had made a very specific criticism regarding an element in the current thread, it would have been posted in the thread, but that’s just my guess.

Fred – I suppose it would take a long nonsense explanation of why it makes sense to average the results of models of unknown quality, or to have faith in models that have not been shown to be accurate, but appear to be inaccurate.

Far more general questions supporting gavin’s view are posted. I challenge you to find anyone who understands modeling and is not an AGW advocate who supports implementing policies based on the current GCMs.

I suspect it is also because it is a canard. The models have shown an ability to model observed results (hindcasts), and it is unreasonable to expect models to have been demonstrated to predict future conditions, as we don’t have access to observations from the future. Try reasing the chapter from the IPCC WG1 report that deals with climate models and their evaluation, then go and ask questions about model validation at RealClimate; you are likely to get a better reception.

“I suspect it is also because it is a canard. Try reasing(sic) the chapter from the IPCC WG1 report that deals with climate models and their evaluation,”

Ah come on Dikran marsupial, any opposing view is a blasphemy over on the RC blog. I don’t visit there myself out of the principle of free speech.

I myself prefer to check out whats up myself.

If I had ever bothered to look at one of the convoluted IPCC reports, I’d probably end up being lost amongst the trees, as some of the more faithful have become.

You should give liberty of thought more of a go, and consider what’s up, or not. That way, you can rest assured you’re not just joining some mass groupthink delusion.

There is tons of material outside the IPCC. Works from noted physicists who are probably too busy with real science to be hobnailed into the watchamagigs of the greenhouse theory. Get hold of some; it’s fascinating.

I suspect (unlike Fred) that the real reason your question was censored out by Gavin is that it raised embarrassing questions, which he was unable to either answer or brush aside (my experience on this site).

The problem is that Gavin has been playing around with models so long that he has forgotten that they are simply multi-million dollar extensions of the old slide rule and has started to actually believe them.

Your comment was not posted because it made a boring reference to unjustifiable “faith” in “unproven” climate models. (Yawn!). I got snipped here yesterday for politely suggesting that Latimer had missed the point!

The post had, however, already answered your question in its list of qualifications. The AR4 model archive is an “ensemble of opportunity”, i.e. the models in it are the models submitted by various institutions without formally judging each model’s qualification. I guess the comparison is purely a game to counter the accusation that has been put: that the temperatures are outside the bounds of “what the IPCC has predicted”.

I believe more formality has gone into the choice of projections run for AR5, though I don’t know the details.

Are you sure your comment got snipped here? I’ve been posting since the beginning and I think I can count on the fingers of one hand the number of times people have been so censored (*). And JC notes that she has done so and why.

Disagreeing with me should not be a snipping offence…an indication of some other deep seated malaise perhaps ( :-) ), but not a reason for being snipped. Please repost.

Yes I got snipped along with some comments by someone whose name begins with “A” (can’t remember) who was making similar points to me, equally politely. Some of your replies to our comments got snipped. You must post so much here that you cannot remember what you posted.

Obviously you do not realise you are one of Judith’s Chosen. Use your exalted position wisely! ;)

Because you’ve got no statistical test for significance, and FAR too short a time to look for a long period oscillation like that — ESPECIALLY in the absence of any physical theory that would explain such a thing or give a prior expectation of periods.

We do agree on the data itself. When I give diagnostics, like trends and so on, I use standard mathematical tools and significance tests. You eyeball patterns with no significance test and no physical basis for guiding the choice of model to test against the data.

The idea of a very long term fixed 0.06 C/decade rise is really bizarre. It can’t be long term on the scale of millennia; that would have the Roman Empire living in the mother of all ice ages. There’s no physical factor that has been holding nice and steady like that to drive such a rise over the last 100 years.

You’re doing the statistical equivalent of finding pictures of the Virgin Mary on your breakfast toast.

The AMO is a genuine quasi-periodic cycle of internal climate variability persisting for many centuries, and is related to variability in the oceanic thermohaline circulation (THC).

Chris:
The idea of a very long term fixed 0.06 C/decade rise is really bizarre. It can’t be long term on the scale of millennia

I agree. It cannot be fixed. But for the period since the 1850s this is a fixed straight line, because that is what the data shows. Although the trend line for the period since the 1850s is a straight line, on a longer time scale it is part of a very long curve that contains the Little Ice Age, the Medieval Climatic Optimum, the Holocene Maximum, etc.

Why do “climate scientists” at Realclimate choose the starting year for trend calculation from the 1970s or 1980s?

If I start trend calculation starting from year 1910, here is how my projection would be wrong => http://bit.ly/w5P3c9

That is what might happen to the current projection of “climate scientists” at Realclimate.

They DO look over a much longer time; look at figure 2 of the paper. That’s what needs to be done if you are testing ideas about effects with a long period. Just eyeballing 150 years of instrument record can only give weak support to the hypothesis; the look at longer periods of time is essential.

Note also that their model is “quasi-periodic”. It is not a simple sine wave with a definite period. Rather, it shows a characteristic time scale for changes, but shifts up and down somewhat chaotically at that scale. That’s pretty standard for these kinds of effects.

Finally, although you’ve agreed that the long term underlying linear line is unrealistic, you use it crucially for “predicting” or “falsifying” your supposed model into the future. You need to look at tools for identifying a periodic (or quasi-periodic) signal on top of a base trend that is NOT linear, because there’s a heck of a lot more going on with climate than you can capture on such scales with one line and a sine wave. There ARE such tools, but as I’ve said, you really need a professional statistician to deal with that. It’s not trivial. I just work at the level of basic significance tests for regression lines and so on, which are okay as a ballpark starting point but not really up to a proper hypothesis test.

I doubt any professional statistician would be much interested in how you’ve made your proposal, especially as there is no physical basis whatsoever being proposed which could be the basis of a test of prediction against data.

This looks at ENSO and downwelling short wave radiation – perfect correlation.

Clouds vanish in an El Nino and form in a La Nina – energy dynamics.

The problem is not the lack of a theory about why these things happen, but that too many people in the blogosphere are making it up as they go along. The information is there, and there needs to be an effort to understand it.

Girma is showing you actual physical observations (warts and all) of the globally and annually averaged land and sea surface temperature anomaly over time.

Is this “statistically significant”?

You bet it is.

And it shows multi-decadal cycles of warming and slight cooling of about 30 years each with an amplitude of around +/- 0.25C, like a sine curve on a tilted axis with an overall warming trend of around 0.6C per century.

I calculate the trend to be 0.45 C/century. I don’t think Girma is doing any calculations at all. His 0.6 per century is way off what his graph shows.

The 95% confidence bound on the regression is 0.002 – although that is based on a trend plus noise model. A sensible look at Girma’s “model” would need to consider the significance of the oscillation period, amplitude and phase, as well as the trend. That’s four degrees of freedom.

I’m not doing the calculations here, but straight off the bat I can tell the significance is going to be low. Also, there’s no physical model to back up this bizarre model — and excellent physical reasons to be confident that climate is NOT increasing with a simple linear trend plus pure sine wave over that long a time.
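
For what it’s worth, the kind of fit being argued over (a linear trend plus a pure sine wave, with the period, amplitude and phase as extra degrees of freedom) is easy to sketch. Here is a minimal Python example using scipy; it runs on a synthetic series with made-up illustrative parameters, not on HadCRUT3:

```python
import numpy as np
from scipy.optimize import curve_fit

def trend_plus_sine(t, offset, slope, amp, period, phase):
    """Girma-style model: linear trend plus a pure sine wave."""
    return offset + slope * t + amp * np.sin(2.0 * np.pi * t / period + phase)

# synthetic stand-in for an anomaly series (illustrative values, not real data)
rng = np.random.default_rng(0)
t = np.arange(1850, 2012) - 1850.0
y = trend_plus_sine(t, -0.4, 0.0045, 0.25, 60.0, 0.5) + rng.normal(0.0, 0.1, t.size)

# fit all parameters at once; period, amplitude and phase are the extra
# degrees of freedom beyond the usual trend-plus-noise regression
p0 = [0.0, 0.005, 0.2, 60.0, 0.0]   # starting guesses
popt, pcov = curve_fit(trend_plus_sine, t, y, p0=p0)
perr = np.sqrt(np.diag(pcov))       # 1-sigma uncertainty on each parameter

print(f"trend = {popt[1] * 100:.2f} +/- {perr[1] * 100:.2f} C/century, "
      f"period = {popt[3]:.1f} yr")
```

Even this toy fit makes the point above: the covariance matrix quantifies uncertainty in all five parameters jointly, which is very different from quoting a confidence bound on a trend-only regression.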

What you need is an actual physical model or cause, so you can check a theory against data. That’s what is done in the paper Girma cited earlier by Knight et al… and note that they use a “quasi-periodic” signal. A pure sine wave would be extremely surprising, and even if you had any reason to expect such a thing, showing only one cycle means Girma is proposing steady behaviour on much, much longer scales than anything the IPCC considers.

The real physics of the situation does not give a single consistent linear trend. The forcings change over time. The enhanced greenhouse effect took off in a big way from mid-century — not based on temperature, but based on the physics of the known forcings at work. You test that physical theory against the data. That’s how science works.

Chris, you write “You test that physical theory against the data. That’s how science works.”

Yes and no. We have not got that far yet. All we have, and all Girma shows, is ALL the data plotted on one graph. If there was more data, it would be plotted. What the data shows is that temperatures have been rising linearly since the data started. On top of this linear trend is some sort of sine wave. So far as I am aware no-one has any idea why this is happening. So there is no theory to explain the data. All we know is that there are factors which affect temperature, and which have produced the observed results. What these factors are is unknown. Yes, people have ideas what they might be, but there is no coherent theory to explain the data.

However, what the data clearly shows is that there is no CO2 signature, as hypothesised by the proponents of CAGW. If there was a CO2 signal, then by now the observed temperatures would be outside the +/- 0.25 limits, on the high side. This has not happened; this is Trenberth’s “missing heat”. This is what the CAGW hypothesis completely fails to explain.

Of course this trend has not been going on for millennia. Nor will it last for millennia into the future. There are clearly long term factors which affect temperature, and which for the moment are not having any effect.

For example, Girma shows HadCrut3 from 1850 to 2011 inclusive.
I calculate the trend to be 0.45 C/century. I don’t think Girma is doing any calculations at all. His 0.6 per century is way off what his graph shows.

You cannot arbitrarily pick the beginning and end years in a trend calculation of data that shows oscillation. The start and end years must be at the same stage of the oscillation cycle. We know that the 1880s were a global mean temperature peak. We also know that the 2000s were a global mean temperature peak. As a result, these two periods may be used as start and end points in calculating the global warming trend.

I know that; I’m glad to see you know that. We seem to be on the same page again.

I guess you know that your original mention of 0.6 C/century was with respect to the older graph in which you actually have 0.45 C/century; rather than acknowledge a mathematical slip you just put up a different model to allow you to keep the 0.6 value.

Suggestion. If you make a mathematical mistake, it’s best to acknowledge it. I make them too; it’s no big deal and it’s healthy.

Manaker at least has seemed to perceive a significance to your model which has not been determined and which you and I both appear to agree is not really there. Manaker, would you like to reconsider and agree that the description was “very crude and approximate”?

That’s pretty similar to saying “not particularly significant”, IMO.

Girma and I both agree, I think, that a steady linear trend is not going to work as an underlying basis for long term trends over the last several centuries. We both agree, I think, that there are “quasi-periodic” factors impacting climate.

Where we probably still disagree (Girma, feel free to correct me if I misrepresent you anywhere here) is whether quasi-periodic factors could plausibly explain the warming seen in the instrument record. My position on that is … no; the only hypothesis which has any legs involves a non-periodic factor with a major warming influence in recent decades. As I said previously, there’s a lot more than just looking at temperature records to test that idea.

I’d say you’re comparing apples and oranges again. That IPCC graph is not an attempt to model temperatures; it is simply to show how trend lines over different periods compare.

The proper comparison is with conventional expectations as described in the IPCC reports, as I told you previously: the only hypothesis which has any legs involves a non-periodic factor with a major warming influence in recent decades.

To see where the actual IPCC idea is given, try Figure TS-22. Also go on to Figure TS-23, which works as well.

The models used in climate science are based not on extrapolated linear trends, but on expected consequences of all known physical forcings — which are not periodic.

Then the expectations can be compared with observations, as the figures show. The expectations are not perfect. But they are better than straight lines plus a sine wave.

Suit yourself. As far as I can see, you have yet again quoted material which confirms what I have been telling you, and taken away the complete opposite meaning.

First, what you have quoted confirms, as I told you, that the figure is not an attempt to represent the IPCC model, but to show how trends are changing.

Second, the inference of acceleration in warming from those trend lines is straightforward. I have no idea why you think it is “sad”. The data does show a plain acceleration on the scale of the last and shortest window, which in this case is 25 years.

You might like to note that the precise same technique, of a series of successively shorter trend lines to the same end point, is used above by Eric Ollivet (this comment) in reverse, to show a deceleration on shorter time scales, which is the recent down turn. It’s a fair way to illustrate an acceleration or deceleration of trend. (It’s not the best technical test, but it’s fine as a useful graph to convey the sense of acceleration or deceleration in the rate of change.)

1) Global warming rate for the 150 year period (RED) from 1856 to 2005 was 0.045 deg C per decade.

2) Global warming rate for the 100 year period (PURPLE) from 1906 to 2005 was 0.074 deg C per decade.

3) Global warming rate for the 50 year period (ORANGE) from 1956 to 2005 was 0.128 deg C per decade.

4) Global warming rate for the 25 year period (YELLOW) from 1981 to 2005 was 0.177 deg C per decade.

IPCC then states:
“Note that for shorter recent periods, the slope is greater, indicating accelerated warming.”

Okay, let us apply this “IPCC interpretation of data” procedure to compare the global warming rates in the last 25 years to those in the last 13 years going backward from 2010, as shown in the following plot: http://bit.ly/fMwWl1

This result gives:
1) Global warming rate for the 25 years period (RED) from 1986 to 2010 was 0.17 deg C per decade.

2) Global warming rate for the 13 years period (GREEN) from 1998 to 2010 was 0.00 deg C per decade. (No warming!)

Like the IPCC, I can then state:
“Note that for shorter recent periods, the slope is smaller, indicating decelerated warming.”
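
The successive-window comparison used in both directions above takes only a few lines of numpy. The anomaly series below is a hypothetical trend-plus-oscillation stand-in, not the real HadCRUT3 record:

```python
import numpy as np

def window_trend(years, anom, start, end):
    """OLS slope in deg C per decade over the inclusive span [start, end]."""
    m = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[m], anom[m], 1)[0]

# hypothetical annual anomalies standing in for HadCRUT3 (illustrative only)
rng = np.random.default_rng(1)
years = np.arange(1856, 2006)
anom = 0.005 * (years - 1856) + 0.25 * np.sin(2 * np.pi * (years - 1880) / 60.0) \
       + rng.normal(0.0, 0.1, years.size)

# trends over successively shorter windows, all ending at the same year
for span in (150, 100, 50, 25):
    tr = window_trend(years, anom, 2005 - span + 1, 2005)
    print(f"{span:>3}-year trend ending 2005: {tr:+.3f} C/decade")
```

The point of the exercise is that the numbers depend entirely on which end point and window lengths you choose, which is exactly why the same procedure can be made to show acceleration or deceleration.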

Can you set me straight on something that may be a terminological confusion on my part? Is there a commonly used term for what James Hansen is referring to when he says equilibrium sensitivity [including slow feedbacks] is 6 C for doubled CO2, compared with 3 C for ECS considering only fast feedbacks?

I suppose I’d like to know if this is a widespread distinction or whether Hansen has his own interpretation. Perhaps I’m just not sure what scientists usually mean, and are understood to mean, when they speak of equilibrium climate sensitivity. Do many people make a specific distinction between fast and slow feedbacks as well as between transient and equilibrium sensitivity?

I just did some research on RC’s latest updates on temperature trends vs. models. The updated graphs all seem to me to show a temperature trend of roughly 0.1K or so per decade, even for 1965-2010. Their Hansen graph claims 0.17K per decade for the last 30 years, but UAH lower troposphere looks like about 0.1K per decade. There was some mumbling about how the lower troposphere will show wider swings than the surface, which makes sense. I note that actual temperature is still below Hansen’s Scenario C. There is some discussion of how long it will take before Schmidt might admit that the model trends are too high and how he can “get out in front of it” by coming up with a pre-emptive explanation if things continue to go badly. Anyway, it looks to me that a variety of sources seem to show that multidecadal trends are quite a bit lower than the models. Maybe I missed something.

David,
In 1988, Hansen predicted that CO2 emissions would increase at a rate of 1.5% per year if continued unchecked. Since the actual increase has been slightly higher than that (closer to 2%), even his scenario A could be considered conservative. The scenario A temperature increase from 1984 through 2011 amounts to ~0.33C/decade. The actual rate of increase has been 0.16C/decade (CRU). Some people are hyping Hansen’s Scenario C as being similar to the recent temperature increase, but as you mentioned, it is still high (0.21C/decade) compared to the observed changes. This was his “best case” scenario, whereby CO2 emissions ceased in 2000. We know that has not happened.
My suspicion is that Schmidt, and many others, will never admit that their models are too high. They have invested too much in their being right. You have not missed anything.

Dan H: My suspicion is that Schmidt, and many others, will never admit that their models are too high.

Schmidt: As we stated before, the Hansen et al ‘B’ projection is running warm compared to the real world

Another failed prediction?

Regarding Hansen’s scenario A, the main difference between that scenario and what has actually happened is not the CO2 increase (which, incredibly, is almost exactly on point) but changes in CFCs and methane. Both are way below what was assumed in scenario A.

Really, PaulS? An integral part of Hansen’s projections was the portion of emissions that remained in the atmosphere. His model obviously failed by about a factor of 2 on this critical issue. In any case, the actual data is below Scenario C, which assumed that emissions ceased in 2000. That to me says his predictions are falsified. You can try to parse the obvious to say that part of his model might have been right, but that’s the tactic of the lawyer, not the scientist.

In any case, I note on RC, Gavin’s plot of data vs. AR4 model predictions shows that the data has a trend of roughly 0.1K/decade. This I think supports Judith’s point in this post. This trend is a 30 year trend.

An integral part of Hansen’s projections was the portion of emissions that remained in the atmosphere.

This is true but it’s not the reason for lower observed atmospheric concentration of gases compared to Hansen’s ‘A’ projections.

CFCs represent the main difference between Scenario A and reality, and they were cut by the Montreal Protocol. In terms of CFC storyline Scenario C is the closest match to reality. As you say, a climate projection should be judged on its ability to track the quantities of certain gases and it seems to have done a remarkably good job in this respect.

The tactic of a lawyer is to throw any argument at the other side that they think they can get away with, rather than checking if it genuinely fits the facts. If you really want to get to the truth of why Hansen’s scenarios A and B appear to be overpredicting temperature rise you’ll need to develop a better understanding of the factors involved. If you want to continue making simplistic “It’s wrong!” arguments to try and get a win for your ‘side’ amongst those who don’t know any better, please continue as you are.

Clearly, there is no such thing as ‘climate science’. Science requires a definite, testable prediction to be attached to a hypothesis. As Chris Ho-Stuart has argued consistently on here, no such prediction has ever been made. The IPCC merely supply projections (not predictions) for a range of scenarios from an ensemble of climate model runs.

Yep. My local is the Great Western in Rockhampton. It has an indoor bull ring and pool tables. It is one of 2 places in the world where a man can drink, spit, swear and ride bulls at the same time. The other is in Texas. I love Texas too.

I don’t rightly think we have ever seen a pissant warmist in the Great Western. It would be like a cowboy went up to the bar and ordered a chardonnay. It’s just not right.

Oh yeah. My local is the Miriam Vale Hotel in Miriam Vale. It has an indoor bull ring and pool tables. It is one of 2 places in the world where a man can drink, spit, swear and ride cows at the same time. The other is the Great Western in Rockhampton.

I did not say “no prediction” There ARE predictions a plenty, and they’ve been quoted here by many people.

The source of confusion is not whether there are predictions or not (unless you are genuinely confused on that point!) but whether any of the predictions tell you what to expect over a short period of about a decade. The shortest prediction that has been quoted here is for two decades; and the IPCC states explicitly that this prediction (of a trend of “about” 0.2 C/decade) is largely independent of scenarios. The scenarios only really start to make a discernible difference on longer terms.

I was speaking of the short term prediction/projection for trends over two decades which everyone here has been talking about. It’s explicitly noted as being independent of the scenarios.

It’s still a “projection” in the sense that it is conditional on, for example, no huge volcano showing up and cooling the planet down for a few years.

But in practice this (in my view) does stand as a good falsifiable prediction, without strong preconditions on the scenarios.

The distinction you appear to have made is that projections are something at the level of a pub debate. I think they are a lot more than that, and that they do allow for a subsequent test against observations. They are falsifiable. Wait until the time window is up, identify which scenario applied, and then check the corresponding projection.

A projection is, IMO, a prediction with an associated (unpredicted) precondition (the scenario). The one we are speaking of is largely independent of scenario; hence the distinction doesn’t really matter here.

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.

In actual fact, there was NO warming – zero, zilch, nada – for the first decade of the 21st century (January 2001 through December 2010).

The most recent decade (January 2002 through December 2011) has shown a net cooling of around -0.1°C instead.

In AR4 WG1 Ch.10 (Figure 10.4 and Table 10.5), IPCC shows us how the warming forecasts for the first decades of the century tie into the longer-range forecasts up to the end of the century.

The first part of the forecast did not occur as the models had projected, so why should we believe that the later portions will?

Chris, it is pretty obvious that we should NOT believe that the models will do any better from here on out than they have done until now.

That is the point of this whole discussion.

THE MODELS HAVE DEMONSTRATED THAT THEY ARE UNABLE TO PROJECT FUTURE CLIMATE CHANGE

Why is this, Chris?

I would suggest that the reason is quite simply:

THE MODELS ARE USING A VALUE FOR CLIMATE SENSITIVITY, WHICH IS EXAGGERATED (BY A FACTOR OF 2 TO 3).

[Sorry for putting this into bold caps, but it appears it has been impossible for anyone here to get your attention with normal words.]

Manaker, the term “degrees per decade” is a unit; not a window or timeframe.

The time frame given there is “for the next two decades”.

You are taking it as settled here that the prediction means that each of the next two decades will show a trend of about 0.2 C per decade. You don’t even acknowledge that this has been the point of argument over the whole discussion.

I am saying that the correct meaning of what is written is that the observations over two decades are the window over which a trend with the given magnitude should be observed.

I base this not on quibbling over minutiae of wording, as people keep accusing me of, but on looking at the whole context of the report.

(1) The large variations on 10 year windows is a consistent feature of the available data record as far as we can see.

(2) The models used by the IPCC in AR4 also show the same kind of large variations over 10 year windows. (See Figure TS-23 of AR4 to see individual model runs giving this characteristic variation on that time scale.)

(3) It’s been a consistent part of IPCC reports that these shorter scale variations exist and are an open research question. (See, for example, section 3.2.2.6 of AR4 on “Temporal Variability of Global Temperatures and Recent Warming”.)

It looks to me that it’s the people who are trying to use 10 year windows to test the predictions who are the ones focused on minutiae of wording and not looking at the whole report or the history of the science.

Furthermore, I think that “degrees per decade” is not just narrow technical jargon. The use of units like this is well and truly established in general discourse… as Louise noted with “miles per hour”.

I’m not just inventing some story to protect the IPCC. I’m repeating the same general point of information on the nature of climate that I’ve explained to folks for many years. There’s lots of short term variation.

The trend over 10 year windows within the last 20 years has varied from -0.10 to +0.40 C/decade.

(Using HadCRUT3 monthly data, and all 10 year windows within 1992 to 2011 inclusive.)

The trend over 10 year windows within 1982-2001 has varied from -0.01 to 0.35 C/decade.
The trend over 10 year windows within 1972-1991 has varied from 0.00 to 0.42 C/decade.

The trend over 20 year windows within 1972-2011 has varied from 0.10 to 0.24 C/decade. (The smallest of those trends was seen over 1977-1996; the most recent is running at 0.16.)
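
Window statistics of this kind come from sliding an OLS fit along the series and taking the minimum and maximum slope. Here is a sketch of that calculation on synthetic annual data (the actual figures above used HadCRUT3 monthly data, which is not reproduced here):

```python
import numpy as np

def rolling_trends(years, anom, window):
    """Slope, in C per decade, of an OLS fit over every `window`-year span."""
    slopes = []
    for i in range(len(years) - window + 1):
        slope_per_year = np.polyfit(years[i:i + window], anom[i:i + window], 1)[0]
        slopes.append(10.0 * slope_per_year)
    return np.array(slopes)

# synthetic annual anomalies: trend + cycle + noise (illustrative values only)
rng = np.random.default_rng(2)
years = np.arange(1950, 2012)
anom = 0.006 * (years - 1950) + 0.1 * np.sin(2 * np.pi * (years - 1950) / 60.0) \
       + rng.normal(0.0, 0.1, years.size)

for w in (10, 20):
    tr = rolling_trends(years, anom, w)
    print(f"{w}-year windows: trends range {tr.min():+.2f} to {tr.max():+.2f} C/decade")
```

Even on noise of this modest size, the spread of 10-year trends is visibly wider than the spread of 20-year trends, which is the point being argued.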

Now, you can continue to look at “degrees per decade” and treat that as the window, or complain that they should have worded the sentence better to underline that it was the unit, and that the twenty years mentioned in the same sentence was the time frame.

Or perhaps you can think that the IPCC really did make a prediction that flies in the face of what data was already available to them at the time.

But at least acknowledge that this is the point of dispute! Should you look at any 10 years within those two decades, or should you look at the whole two decades?

You apparently think the prediction can be tested against any decade within the specified 20 years. I say the test should be for the trend over the whole 20 years. Is that a fair statement of what we disagree upon?

If I am planning a long journey and want to estimate how long it will take me, I look up the distance and say at an average of 50mph (assuming most of the journey is motorway or similar) I can work out how long it will probably take me. It doesn’t mean I expect to be travelling 50 miles in each and every hour. Some of my journey I will be stationary and drinking coffee; some of my journey I will be breaking the speed limit and travelling at 80mph.

The problem arises when you are stationary and drinking coffee longer than you expected. After that you continue the journey, but in the opposite direction for some time! At the end you have to turn around and travel at 400 mph to achieve an average of 50 mph.

And you’d be better at estimating your arrival time if you’re going to the next town than if you’re going half way across the country. Here’s where the climate pub debate differs. The claim is that they can’t possibly be expected to estimate the near term, but if you live long enough, they’ll be right in 2100, just you wait and see. Clearly complete nonsense. Oh, and they don’t even make a prediction for that; it’s a projection, because they can’t possibly be expected to make a prediction, because it’s impossible to predict man’s emissions. So climate pub debaters who like warm beer find it harder to predict man’s emissions than to form a climate hypothesis.

Just as the derivative f'(x) and second derivative f''(x) can tell you what’s happening with the principal function f(x), so too we can learn from plots of the gradient of the temperature and the trend of that gradient, as shown in the yellow line in the plot at the foot of my Home page at http://climate-change-theory.com

There we see an apparent long-term trend which is decreasing slightly in its rate of increase. This fits with the concept of a cyclic pattern with a maximum in the 12th century, a minimum in the 17th century, a future maximum in the 22nd century and a minimum in the 27th century, roughly anyway.

The plot also shows a very clear shorter cyclic pattern which has obviously been the cause of all the debate. Just take a look at it all and draw your own conclusions. Remember, the plot is not a plot of temperatures but instead it is a plot of the gradients of moving 30 year trend lines.
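
The f'/f'' analogy can be made concrete with numerical gradients. This is a toy version on a synthetic trend-plus-cycle series (the trend, cycle period and amplitude are illustrative assumptions, not the data behind that plot):

```python
import numpy as np

# synthetic temperature series: slow warming plus a ~60-year cycle (toy values)
years = np.arange(1850.0, 2012.0)
temp = 0.004 * (years - 1850.0) + 0.25 * np.sin(2 * np.pi * (years - 1880.0) / 60.0)

rate = np.gradient(temp, years)    # C/year: the analogue of f'(x)
accel = np.gradient(rate, years)   # C/year^2: the analogue of f''(x)

print(f"mean warming rate: {rate.mean():.4f} C/year")
print(f"warming rate is currently {'rising' if accel[-1] > 0 else 'falling'}")
```

On real, noisy data you would fit moving 30-year trend lines rather than take raw point-to-point gradients, but the idea is the same: the sign of the second quantity tells you whether the warming rate itself is speeding up or slowing down.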

Personally I operate at one level: I “try to read equilibrium sensitivity from a temperature trend”, as you put it. Since I apparently need to learn a lot more before I can even follow the discussions, if you could convince me you had a more reliable approach you would have my full attention!

There are two basic definitions of sensitivity used; “equilibrium sensitivity”, and “transient response”.

The equilibrium sensitivity concerns how much warming will occur in response to a net forcing up until the whole system is back in balance. It will take some time for this response to occur, and the value of equilibrium sensitivity is all about how much warming you get if you wait long enough, not about how quickly the warming occurs. Hence you can’t read it from a trend.

The transient response sensitivity (or “transient climate response”, TCR) on the other hand, concerns how quickly warming occurs in response to a continuously increasing forcing. This is about the rate of warming, and it does in principle show up from a trend.

That’s my attempt at a simple (low level) explanation. It’s not fully correct; I’ve simplified. TCR is not quite as simple as being the trend, but it’s certainly closer to that than the ECS is. You can see the two concepts explained more carefully and with nice diagrams in the third assessment report, chapter 9, section 9.2.1. I don’t think such a careful description is given in the fourth report.

Here’s a link, and see figure 9.1 on that page for a nice graphical illustration of how they are defined. AR3, section 9.2 on “Transient Climate Response”.

Note that the TCR is always less than the equilibrium sensitivity; it is also better constrained. (See AR4, technical summary, section TS.6.4.2, Equilibrium and Transient Climate Sensitivity.)
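
The ECS/TCR distinction can be made concrete with a toy one-box energy balance model. All parameter values below are illustrative assumptions (a feedback chosen so that ECS is 3 C, and a mixed-layer-only heat capacity); note that with no deep ocean the box equilibrates too quickly, so its TCR comes out closer to ECS than fuller models suggest:

```python
# One-box energy balance sketch:  C dT/dt = F(t) - lam * T
lam = 3.7 / 3.0          # feedback parameter, W/m^2/K, chosen so ECS = 3 C
F2x = 3.7                # forcing from doubled CO2, W/m^2
C = 13.0                 # mixed-layer heat capacity, W*yr/m^2/K (~100 m of ocean)

ecs = F2x / lam          # equilibrium sensitivity: warming once balance is restored

# Transient response: CO2 rising 1%/yr doubles in ~70 yr; forcing ramps linearly
dt, T, t = 0.01, 0.0, 0.0
while t < 70.0:
    F = F2x * t / 70.0               # linearly ramping forcing
    T += dt * (F - lam * T) / C      # Euler step of the energy balance
    t += dt
tcr = T                              # warming at the moment of doubling

print(f"ECS = {ecs:.2f} C, TCR = {tcr:.2f} C")
```

Even this crude box shows the key point in the comment above: at the moment of doubling the system is still out of balance, so the transient warming is necessarily less than the equilibrium warming.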

In describing the difference between “equilibrium sensitivity” and “transient response” you get into descriptions which sound very much like religious belief or dogma.

Show me the empirical data, based on real-time physical observations or reproducible experimentation (NOT climate model runs), which support the premise that GH warming requires decades or even centuries to reach “equilibrium”.

Until you can do so, I will have to assume that it takes only a matter of a year or so for this to occur and that all the “hidden in the pipeline” postulations (Hansen et al. plus copies) are simply “balderdash”, “poppycock” (or B.S. = “bad science”).

Manaker, the best empirical evidence for it taking significant time to get to equilibrium is the measurement that the Earth is well out of balance now. There’s been quite a lot of research on that; it’s basically about trying to determine the rate of heat flow into the ocean. It’s not a solved problem; the amount of imbalance is known only imprecisely; but it is indeed well supported empirically that you don’t get to equilibrium quickly.

The physical theory bearing upon this is much MUCH more straightforward than model runs. For the planet to come to equilibrium with a temperature rise of 1 degree, you have to get ocean temperatures up by that amount. The heat capacity involved is massive. It’s pretty elementary physics that this is going to take a significant period of time before equilibrium is reached.
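
That back-of-envelope argument is easy to make quantitative: the e-folding time of a slab ocean is its heat capacity divided by the feedback parameter. The depths and feedback value here are illustrative assumptions, not measured quantities:

```python
# e-folding time of a slab ocean:  tau = C / lam
SECONDS_PER_YEAR = 3.156e7
rho, cp = 1025.0, 3990.0   # seawater density (kg/m^3), specific heat (J/kg/K)
lam = 1.2                  # feedback parameter, W/m^2/K (assumed, mid-range)

taus = []
for depth_m in (100.0, 1000.0):          # mixed layer vs deeper upper ocean
    C = rho * cp * depth_m               # heat capacity per unit area, J/m^2/K
    tau_years = C / lam / SECONDS_PER_YEAR
    taus.append(tau_years)
    print(f"{depth_m:>6.0f} m slab -> e-folding time ~ {tau_years:.0f} years")
```

A mixed layer alone gives a response time of roughly a decade; letting the warming penetrate deeper water pushes it out to a century or more, which is why "in the pipeline" warming is measured in decades rather than a year or so.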

For starters, when you refer to climate sensitivity yourself, do you mean the equilibrium or the transient variety?

And in either case, do you have a one-sentence or one-paragraph definition of what you understand by “climate sensitivity?” In particular is it something that could in principle be measured in our lifetime, or are we stuck with calculating it based on our best understanding of how heat flows between the ocean and the atmosphere?

So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

I think the big issue is that people too often use a knee-jerk approach to modeling thermal transients. They do stuff like force-fitting damped exponentials, whereas the more realistic approach would model a kind of diffusion.

I am working on a much more comprehensive view of what I call dispersed diffusion by applying it to much smaller systems such as oxide growth.

I haven’t actually referred to it much here, I think; apart from a side comment or two on the equilibrium sensitivity when it’s been mentioned by others. The equilibrium sensitivity is the one that is used most often (by far) in blog posts like this one.

There has been a recurring confusion popping up in this thread, which is that the changes in trend could be addressed somehow by thinking that estimates of sensitivity are too high. It doesn’t work like that, as could be seen from the definitions.

For a one sentence definition of sensitivity, I’d propose:

How much temperatures will rise in response to a doubling of CO2 concentration.

This is the usual “equilibrium” definition. I use “doubling of CO2 concentration” because that is indeed the usual reference for forcing, and because it is a physically very well understood forcing. In other units, you could say “to restore an energy imbalance of 3.7 W/m^2”, which is what doubling CO2 gives you.

Note that this is how much rise you get, not how rapidly the rise will occur.

It’s not something to observe directly in our lifetime. It is calculated right now; but to limited accuracy. The calculation needs to give the whole planet response to the forcing; that is — a climate model. It is likely that climate models will continue to improve. I also suspect that within our lifetime (and I’m more than middle aged already) there will be enough data and theory accumulated to validate models sufficiently to have a substantially better hold on this value. At present, 2.0 to 4.5 degrees will have to do.
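
To connect the numbers above: the standard simplified forcing expression for CO2 is F = 5.35 ln(C/C0) W/m^2 (the Myhre et al. form, an approximation), which gives about 3.7 W/m^2 at doubling; equilibrium warming then scales the sensitivity by the forcing ratio. A sketch spanning the 2.0 to 4.5 degree range quoted:

```python
import math

F2X = 5.35 * math.log(2.0)   # ~3.7 W/m^2: forcing from doubled CO2

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 forcing, F = 5.35 ln(C/C0) W/m^2 (Myhre et al. form)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def eq_warming(c_ppm, sensitivity):
    """Equilibrium warming for a given sensitivity (C per doubling of CO2)."""
    return sensitivity * co2_forcing(c_ppm) / F2X

for S in (2.0, 3.0, 4.5):    # the 2.0-4.5 C range mentioned above
    print(f"S = {S} C: equilibrium warming at 560 ppm = {eq_warming(560.0, S):.2f} C")
```

Note that, per the definition being discussed, these are equilibrium numbers: they say nothing about how quickly the warming arrives.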

Furthermore, it’s important to constrain all the other forcings at work rather better because it isn’t just CO2 that’s driving changes.

So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

My perspective is that the problem is not that the definition is inadequate. The problem is that the calculations (the models) are still not good enough to be confident of a precise value. We don’t, for example, have an adequate understanding of the internal variability going on; and some of the other forcings (aerosols being a classic example) are much less well understood than the CO2 forcing.

Can you comment on James Hansen’s differentiation between climate sensitivity taking account of only fast feedbacks and that including slow feedbacks. He gives a ball-park 3 degrees for the former and six degrees for the latter. Is this widely accepted as reasonable?

For a one sentence definition of sensitivity, I’d propose:
How much temperatures will rise in response to a doubling of CO2 concentration. … This is the usual “equilibrium” definition. … My perspective is that the problem is not that the definition is inadequate. The problem is that the calculations (the models) are still not good enough to be confident of a precise value.

I submit that the definition is worse than “inadequate,” it is useless for forecasting the temperature in 2100. The definition assumes a transition between two equilibrium states. But we are nowhere near an equilibrium state now, and the odds of our being any closer to one in 2100 are zip assuming business as usual.

Basing climate sensitivity on equilibrium states is about as useful as determining the cornering ability of a car by weighing it.

Anteros, I’m sorry I missed your question to me about Hansen’s thoughts on long term sensitivity.

The distinction he draws between slow and fast feedbacks is fine and serves as a widely recognizable characterization of what he is describing.

The value of about 3 for the standard “Charney” equilibrium sensitivity is also unexceptional; it’s just background to the discussion, referring to the current best available rough estimate.

The longer term sensitivity he means involves feedbacks that are slower than the ocean’s approach to an approximate thermal equilibrium. They are a proposed gradual response to the new higher temperature reached after the Earth comes into energy balance again.

The value he proposes, of 6 degrees, on the other hand, is not generally accepted. Nor is it easily rejected or falsified, given the time scales. The testing of those ideas needs to be more indirect (which is okay in principle) but will need much better modeling. For my own part, I don’t think a single number makes a lot of sense; since the magnitude of the kinds of response being considered is unlikely to be even approximately linear, I think. But going into my reasons would be another topic; and I’m not really an expert anyway.

Vaughan, the equilibrium sensitivity is not used to project temperatures to 2100; for the reason you mention. We don’t expect 2100 to be in equilibrium.

To see the IPCC projections to 2100, refer to AR4 technical summary section 5. In all cases, the projections are simply based on models.

Some people, mostly non-experts I think who are misusing the concepts, may use sensitivity to project temperature in that way; but in the literature I think the major use of equilibrium sensitivity is as a diagnostic for comparing or characterizing models.

For example, Hansen’s work in 1988, which has been discussed here, is universally recognized, in Hansen’s own retrospectives also, as “running hot”. The model has a sensitivity that is rather higher than the more advanced models in use today. That’s an example of a straightforward use of sensitivity as a model diagnostic.

Vaughan, the equilibrium sensitivity is not used to project temperatures to 2100; for the reason you mention. We don’t expect 2100 to be in equilibrium.

Good, but then why did you pick equilibrium sensitivity as the kind to define in response to my request for a one-sentence definition of it? Your assessment of its utility seems to be quite consistent with my original complaint, which was

So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

Although you seemed at first to be taking objection to this (unless I misunderstood you), I haven’t seen anything from you or anyone else that’s inconsistent with that complaint.

I think the major use of equilibrium sensitivity is as a diagnostic for comparing or characterizing models.

That would seem to make equilibrium sensitivity only as useful as the models they serve.

In terms of having anything useful to say about the likely temperature profile over the coming century, climate models seem to me to have too many weak links. As DR would put it, there are unknown knowns (we know what methane and feedbacks are from a physics standpoint but not their expected contributions to the climate), known unknowns (we understand long-term ocean oscillations quantitatively but can only speculate as to their basis in physics), and unknown unknowns (we understand neither the physics nor the expected future contributions of whatever brought about such climate-relevant phenomena as LIA, MWP, etc.).

The known knowns (the Stefan-Boltzmann law, the HITRAN tables, Beer’s law, and lapse rate) are only giving us a quarter of the picture needed for climate models to tell us what CO2 has been doing to the temperature lately and what it’s likely to do to it in the coming decades.

My bet is on modern climate history, which I believe offers us a much clearer crystal ball into our future than climate modeling. This is because equilibrium is a state while disequilibrium is a transition. Climate modeling focuses primarily on the state and treats the transition as secondary. This is backwards. The 20th and 21st centuries are jointly a transition between equilibrium states, which is what we should be studying if we expect to be able to say anything useful about the likely climate profile of the coming century. (I would further claim it is a smooth transition, but I have great difficulty picturing either side of the climate debate going along with that.)

Paleoclimatology has its transitions too, but those were on such a different time scale as to make their coefficients of disequilibrium essentially irrelevant to the current transition.

Good, but then why did you pick equilibrium sensitivity as the kind to define in response to my request for a one-sentence definition of it?

You asked which definition I used. That’s the one I use.

This is because it’s the one being used and quoted most often in discussions like this. You and everyone else are consistently using ES numbers, and I’m giving the definition to help explain what that means, or why it SHOULDN’T be used for projections or to compare with trends.

This isn’t to say this definition has no use at all; it is useful for several reasons. But as I also said, I’ve NOT been using it in this discussion; I’ve been focused on trying to explain what is projected about trends and why. I’ve only mentioned sensitivity here a couple of times, in contexts where other people had already been taking numbers from the usual ES definition, or in response to direct questions, like yours.

Your assessment of its utility seems to be quite consistent with my original complaint, which was

So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

Although you seemed at first to be taking objection to this (unless I misunderstood you), I haven’t seen anything from you or anyone else that’s inconsistent with that complaint.

I’m simply pointing out that you don’t read that in the literature because that isn’t what it’s used for.

It shouldn’t be used that way here in discussion, either. I’ve jumped in here to help explain the concepts because I saw people trying to use it in ways that don’t make a lot of sense.

The one way you will see it crop up in the literature, in this context, is as a diagnostic of models. If a model has sensitivity wrong, then it’s not likely to be reliable for showing what’s happening now, or over recent or coming decades. But that is as far as you should go; you don’t try to calculate responses on that scale from sensitivity directly; you use a model for that calculation.

I quoted above one example of how it is used in the literature; another relevant one is the paper by Tsonis and Swanson (2009), which I quoted below (this comment). It’s used correctly there also; not as a means to calculate multi-decadal variability, but as a diagnostic of the models that are used to calculate multi-decadal variability.

Another appropriate way to use this sensitivity diagnostic is as a diagnostic of the Earth itself, where it stands as the measure of how much temperatures change between times when the Earth is roughly in equilibrium. Hence, for example, it’s an appropriate definition for looking at the temperature changes involved in the ice ages, or millennial scales within the Holocene. You need an independent estimate of the forcing and the temperature as well, of course.

It’s an appropriate definition for looking to the changes well past 2100 as Earth settles into equilibrium again (by which I mean approximate radiative energy balance between solar input and thermal emission to space).
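To make the definition concrete: with the standard simplified CO2 forcing fit F = 5.35 ln(C/C0) W/m² (Myhre et al. 1998) and an assumed equilibrium sensitivity per doubling, the eventual equilibrium warming for a sustained concentration change is a one-line calculation. A minimal sketch; the 3 C per doubling figure here is an assumption for illustration, not a claim:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified radiative forcing for CO2 (Myhre et al. 1998 fit), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, ecs_per_doubling=3.0, c0_ppm=280.0):
    """Eventual warming once the system re-equilibrates, for an ASSUMED
    equilibrium sensitivity (deg C per CO2 doubling); illustrative only."""
    f2x = co2_forcing(2 * c0_ppm, c0_ppm)   # forcing per doubling, ~3.7 W/m^2
    return ecs_per_doubling * co2_forcing(c_ppm, c0_ppm) / f2x

print(round(co2_forcing(560.0), 2))          # forcing for a doubling
print(round(equilibrium_warming(560.0), 2))  # returns the assumed ECS by construction
```

Note that this says nothing about the path to equilibrium, which is exactly why it shouldn’t be compared against decadal trends.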

Maybe others are but I most certainly am not. Where did I ever say I was using “climate sensitivity” to refer to the equilibrium concept? Except when used in conjunction with a GCM, ES is a meaningless concept for modern climate. To date I’ve been unable to see how either GCMs or ES help us predict the next half century of global land-sea temperature.

I’m giving the definition to help explain what that means, or why it SHOULDN’T be used for projections or to compare with trends.

Good. My original point was that it shouldn’t be used for those things. If I seemed to be making some other point my apology for being unclear.

Since you only defined equilibrium sensitivity and not some version of transient climate response, does this mean the latter is less meaningful or relevant to you than the former?

Another appropriate way to use this sensitivity diagnostic is as a diagnostic of the Earth itself, where it stands as the measure of how much temperatures change between times when the Earth is roughly in equilibrium.

Now that I can agree with. What I don’t see is the relationship between an Earth that is roughly in equilibrium and one that is badly out of it. The former places far fewer demands on our understanding of the Earth’s energy budget than the latter. In my mind equilibrium climate sensitivity has little bearing on the influence of CO2 when Earth is as seriously out of equilibrium as at present.

It’s an appropriate definition for looking to the changes well past 2100 as Earth settles into equilibrium again (by which I mean approximate radiative energy balance between solar input and thermal emission to space).

Agreed. Actually my guess would be that by 2075 new technologies like inertial confinement fusion will have decreased CO2 emissions to the point where nature is drawing down more than we emit, so that CO2 will decrease for a while, starting well before 2100. The Archer-Schmidt view of CO2 hanging around for centuries seems based on a model of residence time having what I see as at least three problems: fallacious appeal to paleoclimate, irrelevance of average residence time per molecule, and neglect of disequilibrium coefficients. Neither Mark Jacobson nor I buy their pessimistic forecast, which Archer’s book The Long Thaw exaggerates to an absurd degree.

Maybe others are but I most certainly am not. Where did I ever say I was using “climate sensitivity” to refer to the equilibrium concept? Except when used in conjunction with a GCM, ES is a meaningless concept for modern climate. To date I’ve been unable to see how either GCMs or ES help us predict the next half century of global land-sea temperature.

My apologies; I think I have not been approaching your question as you had intended. The remark of yours I was working from is the one I quoted above when I started out this subthread.

Personally I operate at one level: I “try to read equilibrium sensitivity from a temperature trend” as you put it. Since I apparently need to learn a lot more before I can even follow discussions, if you could convince me you had a more reliable approach you would have my full attention!

I took that at face value; as a question for me; and I have been trying to explain definitions of the concepts and how they are used — at a comparatively basic “level”. It wasn’t you, however, who wrote anything in this comment stream that was misusing the sensitivity in that way… so I may well have missed a bit of gentle sarcasm in your question. I’m really not trying to defend anything in particular here; so much as give the context in which any published references to sensitivity should be understood — whether one is skeptical of them or not.

Good. My original point was that it shouldn’t be used for those things [projections or comparison with trends]. If I seemed to be making some other point my apology for being unclear.

No problem.

Since you only defined equilibrium sensitivity and not some version of transient climate response, does this mean the latter is less meaningful or relevant to you than the former?

They are both equally meaningful, in the sense of both being carefully defined. Their relevance depends on the topic; and both are used appropriately in the literature where they are defined.

The equilibrium concept has, I think, wider applicability. The transient response is probably more immediately relevant to projections over the coming century; and equilibrium sensitivity remains better as a model diagnostic (I think…). My own main interest is the basic background physics itself; but in discussions at blogs I try to engage making use of the concepts already being used in the discussion. And that is almost always “equilibrium sensitivity”. It’s a well-defined and widely used diagnostic, and when anyone in discussion simply says “sensitivity”; this is almost always what is meant.

For what it is worth, my own view is that the models ARE useful for projections for this century, and that the wide confidence limits on those projections are a reasonable and cautious recognition that models are a long way from perfect.

[snip some points of agreement]

Agreed. Actually my guess would be that by 2075 new technologies like inertial confinement fusion will have decreased CO2 emissions to the point where nature is drawing down more than we emit, so that CO2 will decrease for a while, starting well before 2100. The Archer-Schmidt view of CO2 hanging around for centuries seems based on a model of residence time having what I see as at least three problems: fallacious appeal to paleoclimate, irrelevance of average residence time per molecule, and neglect of disequilibrium coefficients. Neither Mark Jacobson nor I buy their pessimistic forecast, which Archer’s book The Long Thaw exaggerates to an absurd degree.

I haven’t read that one; but I’ll just comment briefly on the time scales for CO2. My understanding is that the conventional view (which shows up in pretty much every carbon-cycle model I’ve seen; the “Bern” model being a good representative) doesn’t use residence times per molecule. In fact, that’s the classic error by people who suggest that the conventionally understood persistence scales are too long: they look at the short residence time per molecule, which is quite a different thing when there are multi-way exchanges and multiple reservoirs.
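The distinction can be made concrete with the multi-exponential fit to the Bern carbon-cycle impulse response, as tabulated in AR4 (Table 2.14 footnote); a sketch, treating the coefficients as illustrative:

```python
import math

# Multi-exponential fit to the Bern carbon-cycle impulse response
# (coefficients as tabulated in IPCC AR4, Table 2.14 footnote).
A = [0.217, 0.259, 0.338, 0.186]      # weights (sum to 1)
TAU = [None, 172.9, 18.51, 1.186]     # e-folding times in years; the first
                                      # term is effectively permanent on these scales

def airborne_fraction(t_years):
    """Fraction of a CO2 pulse still in the atmosphere after t years."""
    f = A[0]
    for a, tau in zip(A[1:], TAU[1:]):
        f += a * math.exp(-t_years / tau)
    return f

for t in (0, 10, 100, 1000):
    print(t, round(airborne_fraction(t), 3))
```

The short per-molecule residence time (a few years) would wrongly suggest the whole pulse disappears within decades; the perturbation decay above instead leaves roughly a fifth of it effectively permanent on these time scales, because exchange with fast reservoirs swaps molecules without removing the excess.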

As it turns out, Isaac Held’s blog has just put up a comment on this. It’s a very good blog for looking at the unsolved problems in climate science; though at a very high technical level; well above my level. Not for the faint of heart. Held is a quietly spoken careful scientist with a well-deserved top level reputation, who tends not to engage the usual denier/alarmist type exchanges. He looks at the problems that are driving conventional research. See: Cumulative emissions. It is dealing with longer terms which are impacted both by equilibrating of the carbon cycle and by changes in emission rates.

Held concludes by recommending more emphasis on transient response rather than equilibrium sensitivity; the reasons are set out in the blog post, and in some of his other blogs. Well worth a read.

“As it turns out, Isaac Held’s blog has just put up a comment on this.”

I appreciate that Held is looking at this, but he has taken a slightly naive approach to determining the response function. A diffusional response in such a situation is never a damped exponential. What you will find is that the response follows a t^(1/2) dependence. So if a forcing function is a unit step, corresponding to a linear cumulative, the response will follow approximately a t^(1/2) profile. And if the forcing is a ramp, corresponding to a parabolic cumulative, the response will be t^(3/2).

I am also puzzled by the fact that he doesn’t apply or explain the convolution operator, which is the recommended way to solve for the response with an arbitrary forcing function.

Coming from a semiconductor processing background, I see the entire planet’s surface as a planar diffusion problem. The only hiccup is that we have a dispersive profile with respect to varying diffusion coefficients and interface depth.
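The claimed exponents (step forcing giving a t^(1/2) response, ramp giving t^(3/2)) are easy to check numerically by convolving each forcing with a diffusive impulse response h(t) ∝ t^(-1/2); a rough sketch, with the singular kernel discretized crudely:

```python
import numpy as np

dt = 0.05
t = np.arange(1, 4001) * dt          # start at dt to avoid the t=0 singularity
h = t ** -0.5                        # diffusive (half-order) impulse response

step = np.ones_like(t)               # unit-step forcing
ramp = t.copy()                      # ramp forcing

# causal convolution: response(t) = integral of forcing(t') * h(t - t') dt'
resp_step = np.convolve(step, h)[: len(t)] * dt
resp_ramp = np.convolve(ramp, h)[: len(t)] * dt

# fit the power-law exponent over the interior of the record
sel = (t > 10) & (t < 150)
p_step = np.polyfit(np.log(t[sel]), np.log(resp_step[sel]), 1)[0]
p_ramp = np.polyfit(np.log(t[sel]), np.log(resp_ramp[sel]), 1)[0]
print(round(p_step, 2), round(p_ramp, 2))
```

The fitted exponents come out close to 0.5 and 1.5, as the analytic convolutions (2·sqrt(t) and (4/3)·t^(3/2)) require.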

Again, I like what Lubos said about these kinds of physics problems recently:

“People including me (and especially the climate skeptics) often like to say how complicated the climate is, and so on. But we shouldn’t forget that in many contexts, the physical problem is rather simple, clean, and doable. I am confident that a proper physicist who studies this physical system – or another system – has to know these simplified situations very well. This is the real knowledge he or she builds upon. Starting with hopelessly complicated situations that can’t be solved simply isn’t the right scientific attitude. The right scientific attitude is to cover the “space of possible situations” by special cases which are solvable and whose physics you largely remember and by calculating the more complex intermediate problems by various detailed numerical and perturbative methods and interpolation.”

Isaac Held has the right attitude in trying to simplify, but it still needs some fine tuning to get something that is representative of the real transient that we should observe. As Lubos said, it does not have to be a “hopelessly complicated” situation, and so if we can apply the right abstraction to the problem domain, we can make some progress.

This comparison shows the observed global mean temperatures (GMT) are less than the model projections for the scenario in which human CO2 emissions were held constant at the 2000 level.

In addition, there has not been any change in the climate, as there has been only a single GMT pattern since records began 160 years ago. This pattern can be clearly observed in the data from NASA and the University of East Anglia, as shown in the following graph.

This pattern has a unique property of a warming trend of only 0.06 deg C per decade and an oscillation of 0.5 deg C every 30 years.

This result shows that, for 160 years, the GMT pattern (the climate) has not been affected by human CO2 emission, volcanoes and aerosols! These variables had no effect because the GMT patterns before and after the mid-20th century were nearly identical.

This result shows that, for 160 years, the GMT pattern (the climate) has not been affected by human CO2 emission, volcanoes and aerosols! These variables had no effect because the GMT patterns before and after the mid-20th century were nearly identical.

Very creative, Girma. Bravo.

You and Arfur Bryant have been this blog’s leading resident experts on methods for hiding the curvature in HADCRUT3. The method you’ve both relied on in the past was to fit a trend line to HADCRUT3 and then point out that the trend line was straight. Very straightforward reasoning, so to speak. Max Manacker uses essentially the same method for the log of total (as opposed to anthropogenic) CO2 to prove that its CAGR is a mere 0.5%, which exploits the fact that the substantial natural component of CO2 is stationary, thereby hiding the 2.2% CAGR of anthropogenic CO2, our contribution.

In the meantime this blog seems to have grown in sophistication during its year or so of existence. Arfur’s response has been to take his argument elsewhere, presumably to less sophisticated blogs. 19th century vendors of home remedies maintained market share (not to mention their liberty) in that way, always keeping one town ahead of the law and angry citizens.

Your response would seem to have been the more professional one of keeping pace with your audience’s sophistication. Putting science ahead of marketing is most commendable.

As can be seen at the Wikipedia article Curvature of a graph, the signed curvature k is given by k = y″/(1 + y′²)^{3/2} where y′ and y″ are the first and second derivatives of the graph. When the graph is scaled vertically by a factor of s this becomes k = sy″/(1 + (sy′)²)^{3/2}. For very large s this simplifies to k = y″/(s²y′³), which tends to zero as s tends to infinity, while for very small s it becomes k = sy″, which again tends to zero, but this time as s tends to zero instead of infinity.

Hence curvature can be hidden by making s either very large or very small. The only way to see curvature in a graph is to keep s in the general neighborhood of 1/y′, as in this graph, which fits one trend line to each of the two equal halves of HADCRUT3.

Your technique of decreasing the scale s by using offsets at ±2 reduces the curvature considerably as can be seen here. But while much less pronounced the curvature is still visible. How to fix that?
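The scale dependence is easy to verify numerically. A sketch for the point x = 1 on y = x², where y′ = y″ = 2, sweeping the vertical scale factor s:

```python
import numpy as np

def signed_curvature(yp, ypp, s=1.0):
    """Signed curvature of a graph whose vertical axis is scaled by s."""
    return s * ypp / (1.0 + (s * yp) ** 2) ** 1.5

yp, ypp = 2.0, 2.0                  # y' and y'' of y = x^2 at x = 1
scales = np.logspace(-3, 3, 200)    # sweep s from 0.001 to 1000
k = signed_curvature(yp, ypp, scales)

s_peak = scales[np.argmax(k)]
print(round(s_peak, 2))                              # of order 1/y'
print(k[0] < k.max() / 100, k[-1] < k.max() / 100)   # tiny at both extremes
```

Curvature peaks for s of order 1/y′ (here 1/(y′·sqrt(2)) ≈ 0.35) and collapses toward zero at both extremes of s, which is the hiding mechanism described above.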

Easy. Don’t draw anything that would give away the curvature. Instead throw in three copies of the old Girma-Bryant trend line in this way. If one is straight, three must be even straighter! Seeing is believing, after all. That’s how Chief Hydrologist argues: “open your eyes” as he puts it.

I would give your argument a 7 on a scale of 1 to 10 for creative ways of proving climate scientists are monkeys.

As I’ve said before, in my judgement Harry Huffman still deserves an 8 for his heavily technical way of proving that CO2 is not the reason why the surface of Venus is so hot. Proving zero curvature by appealing to the straightness of the trend line is, by comparison, just a cheap trick with no serious underlying science, worth no more than a 5.

Don Easterbrook’s creative technique of hiding the entire rise of the 20th century under the right-most vertical grid line of a graph covering hundreds of centuries is a neat bit of sleight-of-hand that seems to work on 98% of visitors to YouTube, justifying a 6.

But because curvature k depends in a more delicate way on scale s, tending to zero for both large and small s, I would say the subtlety of that dependency fully justifies a 7.

Just to help you out – old buddy – and as physicists can’t dance for spit – I recast it in iambic pentameter.

I yield at once, with humbled AAS
I crave indulgence, love our NAS
We all agree you’ve got it wrong,
That’s not the way to wear a thong.
Belief in what you see must stop,
Our truth we get from photoshop.

If you want to be a poet – you need a bottle of bourbon, a pack of cigarettes and a trailer park. You then write maudlin love sonnets about good looking but lonesome cowboys and strong women.

You were ok with my humdrum aabbcc rhyming scheme but for some ridiculous reason (to which however I’ve no desire to be disloyal) you felt obliged to replace my unconventional tpptpp scansion scheme with the even more humdrum tttttt scansion scheme beloved of fifth grade teachers. To your rhythmic da-da da-da da-da da-da I can only say tpptpp.

Your replacement of “throng” by “thong” is more Jake Gyllenhaal than Johnny Depp. Google for
“no desire to be disloyal” “common throng”

Like all these things you need to walk first before you can gambol. BTW – how’s that third law going? It sure is a doozy but I’m certain you can get the hang of it.

I am lying back with laptop balanced and Miles Davis in the headset – and am reminded again of the endless opportunities for rhythmic variation.

In truth however – your subject was mundane and your scansion a mess. I went back to the basics and added a little much needed frisson. Although the latter was a mistake as now I am haunted by nauseating thoughts of Pratt in a thong.

Dikran:
“Is there anybody here who can admit that when the stated uncertainty of the projection is considered Prof. Curry’s assertion is incorrect?”

I think you’ve got your answer. Telling, isn’t it?

Markus Fitzhenry:
“Ah come on Dikran marsupial, any opposing view is a blasphemy over on the RC blog. ”

And that’s just false. On almost any thread there, you can see the RC mods swatting down the same zombie arguments you guys keep putting up over and over. Just ask Dan H. Hint: their replies are in brackets.

Yes, it is rather telling, isn’t it? Plenty of ad-bloginems and bluster, but no attempt to address the substance of the argument, and no admission that Prof. Curry’s assertion is factually incorrect. :-(

– IPCC has been clear in its wording – no “coffee pauses” were postulated, but a clear warming of 0.2C per decade was projected by the models for “the next two decades” in AR4 (and a warming of 0.15C to 0.3C per decade in the previous TAR)
– In AR4 Ch.10, Figure 10.4 and Table 10.5, IPCC show us how the projected warming of the early decades ties into the longer-term forecast for the entire century; IOW the warming of the early decades is an integral part of the “entire postulated journey”.

The projected warming did not occur (subject of this blog).

Ergo: The models failed.

Ergo: There is no reason to assume that they will not fail for the longer-term forecast.

This phenomenon is described in Nassim Taleb’s The Black Swan as one of the rationalizations used by forecasters when their predictions fail to be correct.

The other (too ludicrous to have been mentioned by Taleb) is: “well, we may have missed the 20-year projection due to (add in excuses cited above), but our long term projection still stands, because these unforeseen factors cancel out over the long term.”

Girma, you are on the right track… except that the graph you present shows the prediction as a range from 0.7 to 1.0; and it can be tested at 2025, which is less than twenty years away. That’s the red, green and blue bars on the right.

I’m not moving any goal posts. I’m going by what is written down in the report.

The other prediction, which I am not rewording either, is for about 0.2 C/decade trend over the next two decades. That’s the range 2005 to 2025.

People who accuse me of moving goal posts are not reading my comments very clearly. I’m describing the goal posts expressed in AR4.

On my own behalf, I’m expecting the twenty years from 1997 (extending the 15 years Leake was talking about by another five, up to 2017) to show a trend a bit below the 0.2 value; somewhere from 0.15 to 0.20 most likely.

For the twenty years 2005 to 2025 it could easily be up well over 0.2 again, or still back at 0.15 or so. I’d be surprised to see the 20 year trend go much lower than that.

But yes, there’s nowhere to hide, and no-one is hiding or moving goal posts. You’ve got that graph as a record, and it’s a perfectly good expression of a range of 0.7 to 1.0 by 2025, starting out from about 0.16 in 1990: a rise of anything from 0.55 to 0.85 degrees over 35 years. (That corresponds to a trend from 1990 of something between 0.16 and 0.24 C/decade.)

One point to emphasize, in case anyone mixes up what trend means. The prediction concerns the mean trend; not the particular anomaly value you get on the year 2025. Year to year anomalies vary by 0.2 in a single year. The trend, over 20 years, however, is more stable and that is what is predicted.
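That stability claim can be quantified with a quick Monte Carlo sketch: assume an underlying 0.2 C/decade trend plus independent year-to-year noise of 0.1 C standard deviation (both figures are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(20)          # a 20-year window of annual data
true_trend = 0.02              # 0.2 C/decade, expressed in C/year
n_runs = 2000

trend_estimates = []
final_anomalies = []
for _ in range(n_runs):
    y = true_trend * years + rng.normal(0.0, 0.1, size=years.size)
    trend_estimates.append(np.polyfit(years, y, 1)[0])  # fitted OLS slope
    final_anomalies.append(y[-1])                       # one year's anomaly

# spread of the fitted 20-year trend (per decade) vs a single year's anomaly
print(round(np.std(trend_estimates) * 10, 3))
print(round(np.std(final_anomalies), 3))
```

The spread of the fitted 20-year trend is a few hundredths of a degree per decade, several times tighter than the spread of any single year’s anomaly, which is why the prediction is stated for the trend rather than for the 2025 value.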

There is a need to evaluate data sources. Since 2003, the SORCE project is the best available. TSI is peaking in the Schwabe cycle.

‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’

Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records (Verdon and Franks, 2006)

Manacker, the term “degrees per decade” is a unit, not a window or timeframe.

The time frame given there is “for the next two decades”.

You are taking it as settled here that the prediction means that each of the next two decades will show a trend of about 0.2 C/decade. You don’t even acknowledge that this has been the point of argument over the whole discussion.

Here is an example of how a trend “for a decade” is related to the trend “for two decades”: http://bit.ly/yAP8Yu

For 1990 to 2000 => 0.25 deg C per decade (Based on 10 years)

For 2000 to 2010 => 0.03 deg C per decade (Based on 10 years)

For 1990 to 2010 => 0.16 deg C per decade (Based on 20 years)

This example shows that the trend based on the 20 years of data appears to be approximated by the average of the two trend values based on the 10 years of data.

Therefore, approximately,

Trend for 1990 to 2010 = ((Trend for 1990 to 2000) + (Trend for 2000 to 2010))/2 = (0.25 + 0.03)/2 = 0.14 deg C per decade, which is close to the correct value above of 0.16 deg C per decade.

Now, in order to get IPCC’s 0.2 deg C per decade for the period from 2000 to 2020, we have,

The trend from 1997-2006 inclusive was 0.094 C/decade
The trend from 1987-1996 inclusive was -0.0065 C/decade

The trend from 1987-2006 inclusive was 0.198 C/decade

If this violates your intuition, then your intuition about trend needs a bit of help. You’ve been using “Wood for trees”. Here’s a graphical representation: http://bit.ly/xFJErP

If you look at this, it will probably help you see what’s wrong with your calculation.
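The arithmetic behind this counterexample (two near-zero 10-year trends, one substantial 20-year trend) can be reproduced with a toy step series:

```python
import numpy as np

years = np.arange(20)
temps = np.where(years < 10, 0.0, 0.2)   # flat, a 0.2 C step at year 10, flat again

def decadal_trend(x, y):
    """OLS trend in C/decade for annual data."""
    return np.polyfit(x, y, 1)[0] * 10.0

first  = decadal_trend(years[:10], temps[:10])   # trend of the first half
second = decadal_trend(years[10:], temps[10:])   # trend of the second half
full   = decadal_trend(years, temps)             # trend of the whole window
print(round(first, 3), round(second, 3), round(full, 3))
```

The average of the two half-window slopes is zero, yet the full-window trend is about 0.15 C/decade: the 20-year fit responds to the difference in means between the two halves, which averaging the slopes ignores entirely.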

Let me add… full credit to you for going to the data and trying to analyze it yourself. That’s something to encourage. It’s a good way to learn more about data and how to analyze it. A few mistakes along the way is normal and not being afraid to keep trying is how you’ll keep improving.

Do note that it’s also going to be important to keep up some book study on the methods you are using as well; learning more about regression and how it works from a textbook or tutorial web page. Best of luck with it.

Okay, given 0.03 deg C per decade warming rate for the period 2000 to 2010, what warming rate for the period 2010 to 2020 gives you a warming rate of 0.2 deg C per decade for the period from 2000 to 2020?

It is easy to find mistakes, but hard to find solutions. Please give me your estimate.

I have given the maximum possible warming rate for the period 2010 to 2020 to be about 0.14 deg C per decade.

Girma, as I told you, you can’t give a warming rate for the next 10 years that would let you infer the warming rate for 20 years.

It’s mathematically possible to have a NEGATIVE warming rate for the next ten years, and still get the 20 year trend well over 0.2.

Physically, that’s unlikely; but not nearly as unlikely as you think. What it would take is another big El Nino event (analogous to 1998) that turns over the ocean waters somewhat, followed by another lull, but at a higher temperature. You can see that there’s a kind of “step” change around the huge 1998 El Nino. That’s not a co-incidence, I think. It has to do with intermittent transport of heat into the deep ocean. Heat flow into the ocean is much harder to measure than surface temperature, and its details are much less well understood. It’s one of the sources of quasi-periodic internal variability.

Be that as it may, it would be much too much of a co-incidence to get that showing up at just the right time. As I have already said, my bet is on the 20 year trend from 2000-2020 to be at the low end of projections. Below 0.20; but unlikely to be less than 0.15.

PS. I’ve said repeatedly that 10 year trends are not predictable. So asking me for my 10 year prediction is not really sensible. I am, however, willing to take a stab at the 20 year interval 2000-2020.

This is going to require a short term upswing sometime soon. Not necessarily a 10 year up turn, and not necessarily right away. Short term up and down swings have been going on all through the record. It looks plausible for a short term upswing to show up by 2014, which is not far away now; though we are still in a La Nina at the moment. We’ll see.

Chief Hydrologist; I’m not sure what understanding you are trying to give me here. I can indeed stand to learn more about PDO and so on of course; but I’m already familiar with the kind of summary you give above.

I’ll take you more seriously about being “patronizing” if you can show how to address basic mathematical errors without coming across as knowing more than the person you are correcting. I mean, if my comments on trend calculation are patronizing, aren’t your comments on ENSO to me patronizing also?

I’m not complaining; I think it would be fine to see something more substantive and informative of how ENSO impacts surface temperatures; and I’m not the person to do that well. You might be, and I’ll never take offense at someone who knows more than I do on a particular aspect of science seriously trying to help me improve in a constructive way. That’s what I’m trying to do also.

Chris, the only message is that any trend, to be meaningful, must encompass the decadal variability that we are aware of; thus anything less than 65 years is nonsense. You play with decades, 20 years or less. It is simply not meaningful.

Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

There are a couple of articles here by me – it might contain one or 2 things you are not aware of.

It’s indeed true that I’m not up on the details of quasi-periodic variability.

I’m not just a simple apologist for the IPCC, despite what some people seem to think. I’m fine with them being wrong about things; but what this thread has done mostly is misunderstand and distort what was actually claimed.

The IPCC made a claim for a 20 year window. You are suggesting that they are wrong to look at such a short window. That could well be the case. I did look at the Tsonis et al paper briefly when it came out, and also was helped a lot by the discussions provided by his co-author Kyle Swanson at a guest post in realclimate. Ray Pierrehumbert (who invited the post) also gave very useful comments on it. But that’s about it; I’ve not studied it any further and couldn’t describe it well now.

I don’t think I agree with you about the need for 65 year windows, but that’s another debate. Swanson — if I read him correctly — seems to think you can get by with shorter windows as long as they don’t encompass anomalous jumps. I’m reading this paragraph (from Swanson’s blog post):

Now, anomalous behavior is always in the eye of the beholder. However, the jump in temperature between 1997 and 1998 in this record certainly appears to pass the “smell test” (better than 3 standard deviations of interannual variability) for something out of the ordinary. Nor is this behavior dependent on the underlying time interval chosen, as the same basic picture emerges for any starting time up until the 1980′s, provided you look at locations that have continuous coverage over your interval. Again, as the temperature anomaly associated with this jump dissipates, we hypothesize that the climate system will return to its signal as defined by its pre-1998 behavior in roughly 2020 and resume warming.

If Swanson is right, then the lull has almost ten years to go. That would invalidate the IPCC prediction. It also proposes the appropriate correction: not a change to the underlying trend, but a better understanding of the variability around it… something I know is an open research problem.
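Swanson's "smell test" quoted above – a year-to-year jump exceeding three standard deviations of interannual variability – can be sketched as a simple check. This is only a toy illustration with made-up anomaly numbers, not his actual method or data:

```python
import statistics

def flag_jumps(temps, threshold_sd=3.0):
    """Flag year-to-year jumps larger than threshold_sd standard
    deviations of the remaining interannual variability. The candidate
    jump is left out of the spread estimate so it cannot mask itself."""
    diffs = [b - a for a, b in zip(temps, temps[1:])]
    flagged = []
    for i, d in enumerate(diffs):
        rest = diffs[:i] + diffs[i + 1:]
        if abs(d) > threshold_sd * statistics.stdev(rest):
            flagged.append(i)  # jump occurs between year i and year i+1
    return flagged

# Toy anomaly series (invented numbers, deg C): quiet years with one
# large 1997/1998-style step in the middle.
anoms = [0.00, 0.02, -0.01, 0.01, 0.00, 0.45, 0.44, 0.46, 0.43, 0.45]
jumps = flag_jumps(anoms)  # flags the step at index 4
```

The leave-one-out spread is one plausible reading of "standard deviations of interannual variability"; including the jump itself in the estimate would inflate the baseline and let a large step hide itself.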

Suffice to say, if you are right, that will become apparent as we see longer lulls than is expected according to AR4.

What I’m really trying to do here is not prove the IPCC correct, but address confusions in what they predicted and how to test it. The prediction is out there, and if it fails, we learn something.

Swanson’s RC post is another way of looking at it. The 1976/1977 and 1998/2001 events are extreme events associated with chaotic bifurcation.

‘We develop the concept of “dragon-kings” corresponding to meaningful outliers, which are found to coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings. We present a generic phase diagram to explain the generation of dragon-kings and document their presence in six different examples (distribution of city sizes, distribution of acoustic emissions associated with material failure, distribution of velocity increments in hydrodynamic turbulence, distribution of financial drawdowns, distribution of the energies of epileptic seizures in humans and in model animals, distribution of the earthquake energies). We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king.’ http://arxiv.org/abs/0907.4290

So the idea is of course to exclude these points to arrive at a residual trend. It is again not the 0.17/decade trend commonly discussed.

The Pacific phases seem stable for 20 to 40 years as standing spatio-temporal waves in the Earth’s climate system. They can be seen in hydrology, oceanography, climatology and biology. They have an influence on energy dynamics through clouds.

Swanson is right and we have a decade or three of moderate warming if not cooling.

I understand where you’re coming from with this, but Chris has a point. The period 2010 to 2020 does not have to have a warming rate of 0.37 deg C per decade. It can be anything, including a negative number, and still produce the 0.2 deg C per decade for the whole 20 years…

Of course – and as Chris said – it is very unlikely to be negative, but it has no need to be anywhere near 0.37. It doesn’t work like that!

Let’s say IPCC projected warming of 0.2C per decade for the first two decades of the new century. This is a projection of 0.4°C warming over the first two decades.

The 11-year period 2001-2011 showed cooling of, let’s say 0.1C.

The 9-year period 2012-2020 must now show warming of:

(20 × 0.02 + 0.1)/9 = 0.5/9 ≈ 0.0556°C per year, or 0.556°C per decade, in order for the 20-year projection to have been correct.

Right?

I personally believe this will not occur. It appears that the next 9 years may even show continued cooling, but who knows? I just do not think that it is reasonable to assume that warming will resume at a rate that is three times the peak rate seen in the 1990s.

And that is what will have to happen for the IPCC projection for the first two decades to be correct.
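Max's back-of-envelope arithmetic generalizes to a one-line function. This is just his endpoint-style calculation written out for checking, not anything taken from the IPCC:

```python
def required_rate(projected_per_decade, years_total, years_elapsed, change_so_far):
    """Linear rate (deg C per decade) needed over the remaining years
    for the whole window to average the projected rate (endpoint method)."""
    total_needed = projected_per_decade * years_total / 10.0
    remaining_years = years_total - years_elapsed
    return (total_needed - change_so_far) / remaining_years * 10.0

# Max's numbers: 0.2 C/decade projected over 20 years; 11 years elapsed
# with a net change of -0.1 C; 9 years remain.
rate = required_rate(0.2, 20, 11, -0.1)  # ~0.556 C/decade
```

Note this treats "warming" as net endpoint change, which is the assumption behind the 0.556 figure; a fitted trend gives a somewhat different number, as comes up later in the thread.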

The IPCC projected a value over a window twice that size. Given the nature of short term variability, the ten year window really doesn’t give you a good basis for expecting what the enclosing 20 year window will do.

This IPCC projection for the 2 decades may well be wrong; we’ll see. I won’t shy away in the least from it being wrong if that turns out to be the case. But to look at the projection, you are going to have to look at the 2 decades… because that is explicit in what the projection says.

A normal observer would say, “if they can’t get one decade right, why are we to believe that they can get several decades or even centuries right?”, and this is a very logical question, which is difficult to answer.

The general conclusion is that it points to some real problems with the IPCC assumptions if it continues.

It’s sort of like watching a dog race. The gates are opened and the favorite dog (which you have bet on) starts off by running in the wrong direction.

OK, the race is only one tenth finished, but your dog (called “actual”) is going to have a hard time catching up with the “projection” dog, who is already one-tenth down the track.

Will he do it?

Or will he continue running in the wrong direction – or just stop?

Who knows?

Max

PS I don’t have “a dog in this race” – I’m just watching; but I’ve become skeptical that your dog can win.

‘A normal observer would say, “if they can’t get one decade right, why are we to believe that they can get several decades or even centuries right?”, and this is a very logical question, which is difficult to answer’

And does the fact that nobody even tries to answer it other than with hand-waving lead either of you to conclude anything?

Saying ‘gosh that’s a tough question’ but not answering it does not make it go away. Why are alarm bells not ringing all over modelling land, rather than everyone wallowing in complacency that it’ll all be all right at some distant future time – when those of us who aren’t already dead probably won’t have the mental faculties to understand it anyway?

So – please try to answer. If you can’t forecast for 10 or 12 or 17 or whatever the number du jour is, why should we believe your predictions for 20 or 40 or 100 years out? If a racing tipster fails to pick the winner for ten races on the trot, why should we give any credence to his 11th prediction?

Let me add that the Chief is 100% correct in saying that all this talk about 10 or 20-year trends is actually meaningless, since nothing under 65 years or so really means anything.

I’d say that even 65 years (1946-2011, with linear warming trend of 0.1°C per decade) is probably too short to tell us much, in view of all the natural oscillations.

We have a 161 year record (HadCRUT3), warts and all, that tells us the warming was around 0.7°C over the entire period, or a linear warming rate of between 0.04 and 0.05°C per decade, so this is probably more meaningful than the 65-year “blip” (or the even less meaningful most recent 30-year “blip”, 1976-2005, used by IPCC to demonstrate AGW).

I’m fine with this as a summary (and tell me if you agree with this summary or not!)

(1) We both agree that 10 year windows don’t actually say much about the longer term. I also claim that the IPCC recognizes this as well, and is not predicting 10 year windows; but that’s only my understanding which apparently is still disputed by some folks.

(2) We both agree that the IPCC is expecting 20 year windows to say something useful about the longer term. We both agree that this is a prediction that can be tested against data. The IPCC says “about” 0.2 C/decade; I take that as being 0.15 to 0.25 based on the graph we’ve seen many times now; but I speak for myself there.

(3) Robert I Ellison proposes that 65 year windows are needed to show the underlying trend. I think you agree? My inclination is to think that overstates the case. The length of the window needed to smooth out quasi-periodic internal variability depends not only on the frequency of those changes, but on their amplitude as well. It seems to me that the underlying forcing is strong enough to show up on shorter windows even given a longer-term quasi-periodic variability. So I guess this point 3 is where we differ most?
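The amplitude-and-period point can be made quantitative with a toy calculation: how much can a pure sinusoidal oscillation bias an ordinary least squares trend estimated over a given window? The numbers below (0.1 °C amplitude, 65-year period) are purely illustrative, not fitted to any index:

```python
import math

def ols_slope(y):
    """Ordinary least squares slope of y against 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def max_sinusoid_slope_bias(amplitude, period, window_years):
    """Worst-case OLS slope (per year) contributed by a sinusoid of the
    given amplitude and period over a window, scanning starting phases."""
    worst = 0.0
    for deg in range(0, 360, 5):
        phase = math.radians(deg)
        y = [amplitude * math.sin(2.0 * math.pi * t / period + phase)
             for t in range(window_years)]
        worst = max(worst, abs(ols_slope(y)))
    return worst

# Invented illustration: a 0.1 C amplitude oscillation, 65-year period.
bias20 = max_sinusoid_slope_bias(0.1, 65.0, 20)  # bias on a 20-year window
bias65 = max_sinusoid_slope_bias(0.1, 65.0, 65)  # bias on a 65-year window
```

With these toy numbers the worst-case bias shrinks substantially as the window approaches the oscillation period, which is the sense in which both amplitude and window length matter to whether a forced trend shows through.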

I’ll continue to keep an eye out for developments on that score, and I don’t mind at all if the IPCC turns out to have been wrong about the 20 year window.

1. That 10-year windows do not mean much. [But I interpret the IPCC statement, “For the next two decades, a warming of 0.2°C per decade is projected” to mean exactly what it says, i.e. each of the next two decades is projected to warm at a rate of 0.2°C per decade.]

2. That a 20-year period will be more meaningful than a 10-year period.

3. That a 65-year period, as proposed by the Chief, is even more meaningful.

I would add my personal opinion that a 160-year period is even more meaningful than the ~30-year period starting in 1976 cited by IPCC in AR4 (Ch.3, p.240):

The 1976 divide is the date of a widely acknowledged ‘climate shift’ (e.g. Trenberth, 1990) and seems to mark a time (see Chapter 9) when global mean temperature began a discernable upward trend that has been at least partly attributed to increases in greenhouse gas concentrations in the atmosphere (see the TAR; IPCC 2001).

This, together with the answer to the question “Has global warming stopped for now?”, may be the two points on which you and I cannot find agreement.

I’d say the 30-year period is simply a “blip” (or maybe an “oscillation”) in a longer record, and is meaningless in itself.

I’d also say that the record shows that “global warming has stopped” (but could very well start up again).

Your choice of 1997 as the start and end point for the two periods (http://bit.ly/xFJErP) is a “clever trick” of obfuscation, because it sits at a discontinuity in global mean temperature, where the temperature suddenly rises.

The proper way to show that some rule is wrong is to find and show counterexamples, so I did.

The same thing shows up regularly in the record, or in any auto-correlated random series. Here are a couple more (using the values you should enter in Wood for Trees). The window is from the start of the first year to the start of the second.

This isn’t obfuscation. This is an attempt to help show you why your rule is mathematically incorrect.

You can look for an additional condition which would allow your rule to hold. (You spoke of needing no “flip”; but you’d have to define that before I could tell if the condition is mathematically sufficient.) The point is that this “flip” or whatever it is shows up quite a lot in the record.
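The point that such "flips" appear in any auto-correlated random series is easy to reproduce with a minimal simulation: AR(1) noise around a steady 0.02 °C/yr drift. All parameter values here are invented for illustration, not fitted to any temperature record:

```python
import random

def ols_slope(y):
    """Ordinary least squares slope of y against 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

random.seed(1)
trend, phi, sigma = 0.02, 0.6, 0.1  # invented: C/yr drift, AR(1) memory, noise sd
y, noise = [], 0.0
for t in range(100):
    noise = phi * noise + random.gauss(0.0, sigma)
    y.append(trend * t + noise)

# Trends over every 15- and 20-year window of the simulated record.
trends15 = [ols_slope(y[s:s + 15]) for s in range(len(y) - 15 + 1)]
trends20 = [ols_slope(y[s:s + 20]) for s in range(len(y) - 20 + 1)]
```

The individual window trends scatter widely around the underlying 0.02 °C/yr even though the generating process never changes, which is exactly why a single low window cannot serve as a counterexample-proof rule.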

For IPCC’s projection of 0.2 deg C per decade to be correct, at the end of the second decade of the 21st century, the increase in temperature should be about 0.4 deg C. All this increase should occur in the second decade of the 21st century. As a result, to satisfy the IPCC’s projection, we should see a global warming rate of 0.4 deg C per decade for the period from 2010 to 2020.

Girma, your intuition that the trend for 2000-2020 is unlikely to be as high as 0.2 C/decade is sound. I agree it is unlikely to be that high.

It’s just that giving a formal mathematical basis for that likelihood is not particularly simple.

For myself, I’ve suggested that 2005-2025 is likely to be below 0.2; and specifically somewhere from 0.15 to 0.2, but that’s a quick guess, not a calculation, based on rough consideration of decadal scale variability.

As for extending the 15-year trend Leake was looking at from 1997: I’ve hacked my spreadsheet to look for parallel cases in the past, where a low 15-year trend has developed into a high 20-year trend.

However, one thing I think — and on this I certainly agree with Chief Hydrologist — just looking at trends alone isn’t all that useful. Knowing the physical causes for things is a better basis for looking at what may happen.

Chief cites interesting work by Swanson and Tsonis, which proposes a physical basis for a lull extending out to roughly 2020. Furthermore, their proposal also highlights the two “jumps” I list above. This all constitutes a physical reason for suspecting the 20 year trend won’t be up to 0.2. They do predict the same underlying non-periodic warming trend will continue to be the main factor through this century; their model is that the internal variation is greater than the IPCC projection would suggest. This also implies GREATER climate sensitivity, not smaller. You need more sensitivity to get a big response to these internal factors. They make this point explicitly in their work.

Be that as it may, their proposal, if it pans out, would almost certainly result in the IPCC prediction failing over the immediate future out to 2025.

JCH, I am inclined to agree that HadCrut3 is not quite as good; but you don’t get a whole heck of a lot of difference with GISS. My spreadsheet uses both, plus the NCDC series, plus the two satellite series for the lower troposphere — UAH and RSS.

I steer clear of the lower troposphere data (RSS and UAH) as that really is measuring something different, and because it has greater associated uncertainties. But it’s a handy second comparison.

For the surface record, any of GISS, HadCrut3 and NCDC would be okay by me; so I just use what other people have been using already.

HadCrut4 is due soon. I expect that to bring HadCrut more into line with measurement of the full globe, as GISS is doing.

IPCC projected warming of 0.2C for each of the “next two decades”. As the lead article points out, the first of these two decades has already passed without such warming, so the IPCC projection was clearly wrong for the first of the two decades.

Our discussion has now shifted to whether or not IPCC’s projection will be wrong for both decades.

Agree with your statement that for the IPCC projection of 0.2C warming for each of the first two decades to have been correct over both decades, this would require warming of 0.4C per decade for the second decade.

One year of the second decade has already expired, with again no warming (in fact slight cooling), so it will take a rate of a bit more than 0.4C per decade over the next 9 years to arrive at the forecast level averaged over both decades.

You predict that this will probably not occur.

I agree that it is highly unlikely to occur.

Let’s see what happens.

As to JCH’s quibble about whether to use GISS or HadCRUT, it appears to me that IPCC has used HadCRUT throughout its reports for many of the claims of past warming, so I think it is logical to stick with the indicator used by IPCC, rather than switching to UAH, GISS, RSS, NCDC, BEST or any other indicator when discussing IPCC statements.

However, I agree that your general use of a combined indicator may not be a bad idea, since these appear to show different trends.

I may be a little late to the party here, but I agree that warming over the next 9 years would need to be quite high in order to satisfy the IPCC projections. I am really curious as to what will happen if we get to 17 years with little or no warming.

IPCC reports are overviews of existing publications. All conclusions that are not directly supported by the published research are either sloppy writing or, more probably, simplifications formulated to make the message clearer – and, unfortunately and unavoidably, thereby also more questionable. This applies obviously to the warming trend discussed in this thread.

The IPCC statements are based on average model results. Thus they represent a background trend that is modified by the natural variability. Most climate models show strong variability lasting commonly 10-20 years and in some cases up to 30 years. Thus deviations of this length are to be expected, while the models cannot make any more specific forecasts about their phase. The flat period of 15 years fits within the expected range of variability, but its likelihood is certainly rather low, perhaps of the order of 5%.

What we have seen is not in contradiction with the scientific understanding that’s represented in the IPCC reports, but it does certainly give some support for the lower estimates for the strength of the trend or equivalently for the transient climate sensitivity.

In this situation there are two kinds of error ranges. The first type, shown in the IPCC graphs, concerns the uncertainties in the background trend. The other part, discussed much less visibly in the IPCC reports, corresponds to the extent of natural variability around this background trend. Leaving this second type of uncertainty out of most discussion may have appeared a reasonable choice when the text was formulated, but now we can see that the decision has caused quite a lot of confusion and also offered unnecessary opportunities to the critics of the IPCC.
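A rough "perhaps of the order of 5%" figure for a flat 15-year stretch is the kind of number one can estimate by Monte Carlo. Here is a sketch under invented assumptions (AR(1) variability around a fixed background trend; none of these parameter values come from actual climate models):

```python
import random

def ols_slope(y):
    """Ordinary least squares slope of y against 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def prob_flat_15yr(trend=0.02, phi=0.8, sigma=0.08, n_sims=2000, seed=42):
    """Monte Carlo estimate of how often a 15-year OLS trend comes out
    <= 0 despite a steady background trend, given AR(1) year-to-year
    variability. All parameter values are illustrative only."""
    rng = random.Random(seed)
    flat = 0
    for _ in range(n_sims):
        noise, y = 0.0, []
        for t in range(15):
            noise = phi * noise + rng.gauss(0.0, sigma)
            y.append(trend * t + noise)
        if ols_slope(y) <= 0.0:
            flat += 1
    return flat / n_sims

p = prob_flat_15yr()
```

The answer is quite sensitive to the assumed persistence (`phi`) and noise amplitude (`sigma`), which is precisely why the second type of uncertainty – variability around the background trend – matters to how such flat periods should be read.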

IPCC projected warming of 0.2C for each of the “next two decades”. As the lead article points out, the first of these two decades has already passed without such warming, so the IPCC projection was clearly wrong for the first of the two decades.

Except that this is, as I said very clearly in my summary, NOT what the IPCC says.

If we are going to summarize please for the love of Harry let’s not just pretend agreement over things we consistently state as different.

In my summary, I said this:

(1) We both agree that 10 year windows don’t actually say much about the longer term. I also claim that the IPCC recognizes this as well, and is not predicting 10 year windows; but that’s only my understanding which apparently is still disputed by some folks.

How did you miss this?

The IPCC did not, and never has, predicted trends over a single decade; and in my summary I was careful not to attribute this understanding of the IPCC to you. And indeed it seems you continue to disagree with me on what the prediction means. Let’s acknowledge it.

The only basis given for your interpretation is the use of the unit “degrees per decade”; the same unit used throughout the report for every scale. Some people – even you, it seems – take this as “clearly” meaning the IPCC prediction was for “each” of the next two decades. A word which is certainly not used by the IPCC!

It’s just weird to think they’d do this, when the whole available record shows such large decadal variation. We DO NOT agree that this is the IPCC prediction.

We agree that they have a prediction for a 20 year window. That’s it.

Now if you continue to disagree with me, and claim that the IPCC meant “each” of the next two decades — despite all the context of the rest of the report I might add — fine. Suit yourself.

But PLEASE. At least acknowledge that we DO NOT AGREE on this point of 10 year prediction. Read my comments for heavens sake. I tried to give a clear summary and you have turned it on its head.

I also asked you to comment on my summary if I had it wrong anywhere. You seem to have skipped it entirely, and just invented some agreement on this point which has never been agreed.

What’s so silly about this is that you have a good chance of the actual prediction failing. That prediction is one we do all agree on; the prediction of the trend over the 20 year window from 2005-2025.

A failure of 10 year windows to match long term trend is really uninteresting. Those windows are all over the place for the entire record. It would be nothing new for them to continue to vary a lot from trend.

A failure of the 20 year window to be an approximate match to the trend would be a lot more interesting because it really would be something different from the immediate past; and a genuine indication of something different physically from what is used in the conventional models.

Not enormously different, and we can go into statistics if you like, but given potential and testable physical explanations being proposed (Tsonis et al), it matters.

to satisfy the IPCC’s projection, we should see a global warming rate of 0.4 deg C per decade for the period from 2010 to 2020.

Do you agree?

Yes. I agree.

Since we already have data for the full year of 2011, I have calculated the warming trend required for the next 9 years to reach 0.2 degC per decade over the entire 20-year period (and that is a linear warming rate of around 0.556 degC per decade, or a linear warming of 0.5 degC over the 9-year period that is still left).

Let us assume that the first “decade” of the new century has passed (2001-2010) with slight cooling instead of the projected warming of 0.2 degC, based on the surface temperature record of HadCRUT3, used repeatedly by IPCC.

And let’s also accept the HadCRUT3 temperature for the 11th year, 2011. We then have a linear trend over the 11-year period of around -0.1 degC, or a net cooling over the 11 years of close enough to -0.1 degC.

So for IPCC’s projection of 0.2 degC per decade to be valid over both decades, we can calculate the warming required over the next 9 years.

Max, since year to year variation is very large, up to 0.2 degrees and more between two successive years, people use trend lines over windows to estimate rate of increase. There isn’t a stable end point value to support the calculations you are doing.

Interestingly, if you do the proper regression calculations with a pure linear rise extending the current series, you need a lot more than 0.556 C/decade linear rise over the next 9 years. Or perhaps not that interesting, as pure linear rises aren’t what happens.

But just for the heck of it, appending a pure linear rise to the end of HadCRUT3 gives this for trends (starting at 2001 as you have done)

Appending linear rises like this isn’t a very useful thing to do; but I’m giving it here as another illustration that your intuitions about mathematics of trend lines are letting you both down.

What it would take to get to 0.2 (which we both agree is unlikely) is more like having the next 9 years with data around about 0.4 above the present and not a lot of trend. A kind of step change, if you like. That kind of variation can be seen occurring in the record, but not often enough to give more than an outside chance that it could occur just now, in time to push 2001-2021 up to 0.2 C/decade.
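The point that appending a pure linear rise and demanding a target fitted trend gives a different answer from endpoint arithmetic can be checked directly: the OLS slope of the extended series is linear in the appended rate, so the required rate can be solved exactly from two evaluations. The flat 11-year stand-in series below is invented for illustration, not HadCRUT3 data:

```python
def ols_slope(y):
    """Ordinary least squares slope of y against 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def trend_with_appended_rise(observed, rise_per_year, extra_years):
    """OLS trend (per year) of the observed annual values extended by a
    pure linear rise of rise_per_year for extra_years more years."""
    last = observed[-1]
    extended = observed + [last + rise_per_year * (k + 1)
                           for k in range(extra_years)]
    return ols_slope(extended)

def required_linear_rise(observed, total_years, target_slope):
    """Per-year rise to append so the full-window OLS slope hits the
    target. The fitted slope is linear in the rise, so two evaluations
    pin it down exactly."""
    extra = total_years - len(observed)
    s0 = trend_with_appended_rise(observed, 0.0, extra)
    s1 = trend_with_appended_rise(observed, 1.0, extra)
    return (target_slope - s0) / (s1 - s0)

# Invented stand-in for the 2001-2011 annual means: a flat decade.
flat_11yr = [0.0] * 11
r_needed = required_linear_rise(flat_11yr, 20, 0.02)  # per-year rise for 0.02 C/yr
```

How the regression answer compares to the endpoint answer depends on the shape of the observed series; the takeaway is only that "rate needed" means different things under the two calculations.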

Chris – Of interest regarding what interval(s) are specified, the AR4 Synthesis Report was published in 2008, with an SPM in November 2007. From the Report: “For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios. {WGI 10.3, 10.7, SPM}”. This implies that “the next two decades” could mean a 2007-2027 interval or even a 2008-2028 interval, or alternatively, it might simply mean an interval estimated from model projections in 2005, as you suggest. It does not mean the first two decades of the 21st century, for which the IPCC made no prediction.

My understanding of the IPCC work is that they expect trends over 20 years to be comparatively stable, and on that basis gave a fairly strong statement for 2005-2025. From the graph, I take their “about 0.2” to be something in the range 0.15 to 0.25.

I personally think 2001-2021 is equally as good as 2005-2025 for considering what 20 year windows are doing; the IPCC was not making any suggestion of significance of one 20 years over another; so although I agree that 2005-2025 is the stated range, I won’t quibble with looking at other 20 year windows. Indeed, I think looking at all the 20 year windows is the best way to avoid getting lucky, or unlucky, with an outlier.

As it happens, starting in 2001 is quite likely to become a new standard for “skeptics”, because it excludes a little bit of strong rise beforehand.

Had Max decided on starting in the year 2000 (which is more likely to become used conventionally because it’s a nice round number, regardless of the precise definition for the start of the millennium) then the trend to the present is +0.01 rather than the -0.065 obtained starting in 2001. Statistically, this isn’t particularly significant; but it does mean starting in 2001 is going to show smaller trend numbers; so we can expect that to be preferred by people who are motivated to find low trends. I’m willing to assume that’s a coincidence and Max was just picking the first year of the new millennium.

I’m going to continue to look at all the windows, and won’t quibble about Max proposing the one starting in 2001.

Chris – Your comment makes perfect sense, and my own comment was meant merely to correct apparent misrepresentations of what the IPCC predicted.

Twenty year windows, wherever one starts, are better than 10 year windows, but there remains a serious misunderstanding of how to interpret even these. If the interval is long enough (where “enough” could be variously defined, but could be cited, for example, as 17 years), one can begin to get a good idea, statistically, as to whether we are seeing a rising, falling, or flat trend for that interval. The misunderstanding lies in interpreting such a trend as controverting a longer term effect from some particular climate dynamic such as CO2-mediated forcing. It is possible for the “true trend”, as estimated statistically, to be flat because of the composite phenomena operating over the specified interval, even though the longer trend is positive (for the last century) or negative (during certain paleoclimatologic intervals). In fact, the 25-30 years from mid-century to about 1976 were indeed quite flat in reality, not just statistically, and the reasons are fairly well understood from the work of Martin Wild and others to reflect mainly aerosol cooling as a counteracting influence on GHG-mediated warming; whether there were other phenomena also operating is possible but more speculative.

The bottom line is that true bumps, dips, and flat times punctuate the climate record, and need not be spurious in order to understand them to be fluctuations around a longer term trend, which for the past 100 years has been upward, with the years since 1950 well explained mainly by GHG-mediated forcing, plus a smaller contribution from other factors.

In which case the IPCC prediction might be seen as an inappropriate expectation about the nature of short-term (20 years being short term) projections.

That seems quite plausible to me, but I’m still willing to wait and see what pans out in coming decades. I expect the nature of quasi-periodic variation to become better understood; it’s an active research issue with a number of proposals that scientists are making and testing.