To Tell the Truth: Will the Real Global Average Temperature Trend Please Rise? Part I


A guest post by Basil Copeland

[NOTE: After seeing some other analyses posted in comments by Basil, I’ve invited him to post his work here. I hope you will enjoy it as much as I have so far – Anthony]

Everybody talks about the weather, but rarely has a scientific debate engaged the public as have concerns about climate change and anthropogenic global warming. It is a debate that everyone can have an “informed” opinion about just by going outside, or by thinking about how the climate has changed in their lifetime. If they cannot understand the physics of GCMs (global climate models), they can read a thermometer and opine whether it is getting colder or warmer “than it used to be.” Few scientific issues or debates are as reducible to an everyday metric — a thermometer reading — as the debate over global warming.

The experts merely fan the fires when they issue press releases about how this year or that is the warmest since whenever, or that the earth’s temperature is rising at X degrees per decade and is likely to continue to rise Y to Z degrees for the rest of the century. The truth is that taking the earth’s temperature is no easy task. Some would argue that it is not even possible to speak of a global temperature as such, e.g. that climate is regional, not global. Others, such as the host of this blog, have drawn attention to serious questions about the accuracy of the station records on which estimates of global average temperatures are frequently based. Then there are the stat geeks, like myself, who understand how hard it is to accurately or meaningfully measure the “average” of anything! It brings to mind the old saw about a statistician being someone who can stand around with one foot in a bucket of boiling water, and the other foot in a bucket of ice water, and say that “on the average” they feel fine.

But despite all the legitimate reasons to question the usefulness of global average temperature metrics as measures of climate change or global warming, we’re not likely to stop using them any time soon. So we should at least use them the best we can, especially when it comes to divining trends in the data, and even more so when it comes to extrapolating such trends. In a series of recent posts, our host has drawn attention to the dramatic drop in global average temperature from January 2007 to January 2008, and more recently to what appear to be essentially flat trends in global average temperature metrics over the past decade. Not surprisingly, a vigorous discussion has ensued about how reliable or meaningful it is to base inferences on a period as short as ten years, not to mention a one-year drop like we saw from January 2007 to January 2008. While there are legitimate questions one might raise regarding the choice of any period to try to discern a trend in global average temperature, there is no a priori reason why a period of 10 years could not yield meaningful insights. It all depends on the “skill” with which we look at the data.

I’m going to suggest that we begin by looking at an even shorter period of time: 2002:01 through 2008:01. Before I explain why, I need to explain how we will be looking at the data. Rather than the familiar plot of monthly temperature anomalies, I want to call attention to the seasonal difference in monthly anomalies. That, in a sense, is how this all started, when our host called attention to the sharp drop from January 2007 to January 2008. That 12 month difference is a “seasonal difference,” when looking at monthly data. The average of 12 monthly seasonal differences is an estimate of the annual “trend” in the data. To illustrate, consider the following series of monthly seasonal differences:

0.077, 0.056, 0.116, 0.036, -0.067, -0.03, -0.119, -0.007, -0.121, -0.176, -0.334, -0.595

These are the 12 monthly seasonal differences for the HadCRUT anomalies from February 2007 through January 2008. During that 12 month span of time, the average monthly seasonal difference was -0.097, and this is an estimate of the annual “trend” in the anomaly for this 12 month period.
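As a quick check, the arithmetic can be reproduced in a few lines of Python (the twelve values below are the HadCRUT seasonal differences the author quotes again in his reply to Bob North further down the thread):

```python
# 12 monthly seasonal differences for HadCRUT, Feb 2007 - Jan 2008
# (each value is that month's anomaly minus the same month one year earlier)
sd = [0.077, 0.056, 0.116, 0.036, -0.067, -0.03,
      -0.119, -0.007, -0.121, -0.176, -0.334, -0.595]

# the average seasonal difference is the estimated annual "trend"
annual_trend = sum(sd) / len(sd)
print(round(annual_trend, 3))  # -0.097, matching the figure quoted above
```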

With that by way of introduction, take a look now at Figure 1. This figure plots cumulative seasonal differences going back in time from the most recent month, January 2008, for each of the four global average temperature metrics under consideration.

Figure 1

While they vary in the details, they all turn negative around the end of 2001 or the beginning of 2002. At the point where the series cross the x-axis, the cumulative seasonal difference from that point until January 2008 is zero. Since the “trend” over any period of time is simply the sum of the seasonal differences divided by the number of seasonal differences, that’s just another way of saying that since near the end of 2001, there has been no “net” global warming or cooling, i.e. the “trend” has been basically flat, or near zero. Yet another way to put it is that over that period of time, negative and positive seasonal differences have worked to cancel each other out, resulting in little or no change in global average temperature.

But Figure 1 tells us more than just that. Whenever the cumulative monthly seasonal difference is below zero, the average monthly seasonal difference over that time frame is negative, and the annual trend is negative also. For most of the time since 2001, the cumulative seasonal difference has been negative, indicating that the average seasonal difference, and hence “trend,” has been negative.

This is shown, in somewhat different fashion, in Figure 2. In the most recent 12 months, the trends vary from -5.04% to -9.70%. They diminish as we go back in time toward 2001, but are mostly negative until then, with the exception of positive trends at 36 months for GISS and UAH_MSU.

Figure 2

Finally, in Figure 3, we have the more familiar anomalies plotted, but just for the period 2001:01 through 2008:01. The basic picture is the same. At the end of the period the anomalies are below where they were at the beginning of the period, indicating an overall decline in the anomalies over this period of time. Interestingly, the UAH_MSU series dips below the x-axis four times during this period. When we consider that the metrics have all been normalized to a zero anomaly around their 1979:01 to 2008:01 means, that indicates that within the last six years, the UAH_MSU series has returned to, and dipped below, the 1979:01 to 2008:01 mean anomaly four times. All of the metrics have dipped below their 29 year mean twice in the last six years, and are well below the mean at the end, in January 2008.

Figure 3 – click for larger image

However you look at the data, since 2001 the “trend” in all four metrics has been either flat, or negative. There has been no “global warming” since 2001, and if anything, there has been “global cooling.” But is it “statistically significant?” I imagine that one could fit some simple trend lines through the data in Figure 3 and show that the trend is negative. I would also imagine that given the variability in the data, the trends might not be “statistically significant.” But since statistical significance is often measured by reference to zero, that would be just another way of saying that there has been no statistically significant warming since 2001.

But that may not be the most insightful way to look at the data, or frame the issue. Prior to 2001 we have a much longer series of data in which there has likely been a positive trend, or “global warming.” What can we say, if anything, about how the period since 2001 compares to the period before it? Rather than test whether the trends since 2001 are significantly different than zero, why not test whether the trends since 2001 are significantly different than the trends in the 23 years that preceded 2002? We will look at that intriguing possibility in Part II.

We always speak of tropospheric temperature (and of course of surface temperature too).
But look at stratospheric temperature and you see no cooling since about 1994. So this is 14 years of no trend, where we would expect to see a strong cooling trend according to GHG theory. http://www.remss.com/msu/msu_data_description.html (scroll down to the end of the page)
I think 14 years is long enough to call it a trend!

Ok. My degree was in geology, but I’ve always been a science-monger and I read extensively. Having said that little bit: I find these charts very interesting! And a question:
If I recall rightly, there was some sort of solar ‘hiccup’ in 2005. The anomaly approaches zero at that point in many of the comparisons. Could they be tied together?
I really must go back and read some of the earlier articles here and elsewhere. REPLY: See this: http://wattsupwiththat.wordpress.com/2008/02/13/where-have-all-the-sunspots-gone/

Basil,
I don’t think I can do math notation here. But as a matter of algebra, is your cumulative seasonal difference not exactly the same as the 12-month moving average, just multiplied by 12, inverted, and shifted in the y-direction so that the current value is zero?
In other words, just a smoothed, inverted version of the original monthly plot.
Check it out. The big dip at 1999 is just the 1998 El Nino peak – the moving average introduces a 6-month lag. The 2005 peak pops up at the start of 2006, etc.

Can I ask – because it’s never addressed in the post – what physical reality is this measure of “seasonal difference” supposed to describe?
This so-called measure is bogus. You’ll recall no doubt that the earth has two hemispheres and that January is summer in the south but winter in the north, so the term “seasonal” here is very misapplied.
You might reasonably say that annual differences (Jan-Jan, Feb-Feb, etc.) have something to do with the earth’s orbit around the sun. The orbit is elliptical with the sun at one focus (and nothing at the other), which means that the earth is closer to the sun in January than in July. That’s important because the variation in solar flux on the earth from January (near the sun) to July (near the vacant focus) is about 7%. That’s why Australian summers are hotter than those in the Med despite similar latitudes.
A measure of “annual differences” would say something about the earth’s reaction to the annual cycle in the solar driver of the earth’s climate, but it sure as %^*& says nothing about AGW.
So since you haven’t controlled for the variation in energy input into the earth’s climate, what you’re looking at here is just statistical noise. There might be something of interest there, but it isn’t visible using your dimensions of “seasonal difference”.
And what in heavens name is the purpose in looking at “cumulative” change when you’re looking at the turn of the seasons – a known cyclical effect?

Let me be sure I have this right. The way to calculate the average “monthly seasonal difference” for a year is to take the temp anomaly of a given month of year A+1 and subtract the same month’s anomaly for year A, then, after doing the same for all of the other 11 months, add them all together and divide by 12. If so, then we can get a variance and confidence intervals on the average that will tell us something more. What happens if we do a running average month-to-month?
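Gary’s suggestion is easy to carry out with the Python standard library. A minimal sketch, using the twelve HadCRUT seasonal differences quoted in the post; the critical value t = 2.201 (two-sided 95%, 11 degrees of freedom) is hard-coded here as an assumption rather than computed:

```python
import math
import statistics

# the 12 HadCRUT seasonal differences, Feb 2007 - Jan 2008
sd = [0.077, 0.056, 0.116, 0.036, -0.067, -0.03,
      -0.119, -0.007, -0.121, -0.176, -0.334, -0.595]

mean = statistics.mean(sd)
sem = statistics.stdev(sd) / math.sqrt(len(sd))  # standard error of the mean
t = 2.201  # assumed t critical value for 95% CI, df = 11
lo, hi = mean - t * sem, mean + t * sem
print(round(mean, 3), round(lo, 3), round(hi, 3))
```

Whether zero falls inside (lo, hi) is then the usual test of whether the average seasonal difference, i.e. the annual “trend,” differs significantly from no change.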

An observation in light of Professor Lindzen’s note to Anthony, for those who are interested: if you look at the charts of the cumulative seasonal differences, they also revert back across the x-axis around 1998. So we could even go back that far and conclude that there has been no “net” warming or cooling — that positive monthly seasonal differences have been offset by negative monthly seasonal differences. I believe that if one were to run the cumulative seasonal differences back to the beginning of the satellite period, 1979, prior to 1979 the cumulative seasonal differences are always positive, and they never revert back down to the x-axis.
I’ve chosen to highlight the “break” at 2001, rather than 1998, because the negative trend at that point has proven to be more persistent. In the results I’ll present in Part II, I do control for the effect of 1998, however. All the cherries get plucked, and none are ignored.

Re: stratosphere — the RSS “TLS” channel is only sensitive to the lower stratosphere (peak sensitivity is 15-20 km). The biggest impact of CO2 on cooling in the stratosphere is much higher, approx. 50 km (or ~0.5-1 hPa).
See, e.g., figure 2 here: http://www.gfdl.noaa.gov/aboutus/milestones/ozone.html
And figure 4 here: http://www.atmosphere.mpg.de/enid/20c.html
In the latter case, note the radical difference between the trend at 22 km and the trend at 50 km. That figure (from Ramaswamy et al. 2001) is a bit out of date and doesn’t go past the mid-90s, but you can see how different the trends are (and the difference in impact of volcanic eruptions … Pinatubo had a big warming effect at 22 km but not at 50 km).
To look at CO2 induced cooling of the stratosphere, you really need to use something that’s sensitive to higher altitudes/lower pressure ranges than MSU. This is a point that few people seem to appreciate.

Following up on the previous comment, there was a paper by Shine et al in GRL (2007) that included corrections to weighting functions for SSU channels. They show trends of ~2K/decade cooling (!) at 1 hPa, but only 0.5 K/decade in the 10-100 hPa range. MSU TLS channel is in the latter range.

Well done, Basil. Very interesting.
I wonder what constitutes a significant difference over a 100-year period, considering all the adjustments.
(I am assuming GISS & HadCRUT are land-sea measurements and UAH & RSS are lower troposphere?)

Well, pretty much all temperature data that exists is a “cherry pick,” since the thermometer was invented just as we started coming out of the Little Ice Age. Pretty much all the temperature data available measure the recovery from the LIA. One would expect considerable warming, with much of it happening in the pre-industrial period, which is what we see.
In the last 10 years we have a period of massive industrial development in Asia and yet we have no warming. Global fossil fuel consumption is exploding as China and India continue to develop (those two countries accounting for 50% of the population of Earth).
You will notice that the only number bandied about recently by the Church of AGW is the surface record. And a look at the composition of the surface networks seems to show a removal of a lot of “cool” stations, leaving the remaining “warm” stations to have a larger impact on the average. If you remove a rural station surrounded by meadow but leave an urban station positioned on a rooftop in the network, your results are going to be skewed.
Instead of cherry picking results, it appears that one can cherry pick which inputs are used to generate those results when the output begins to disappoint the producers of it.

Hi,
J “If you look at the overall pattern, extending from 1900, there is clearly about a 30 year cycle of warming, 30 years of cooling, 30 years of warming, and now, after peaking in 1998, temps are starting to drop. Since CO2 has been increasing at a steady rate that entire time, there IS NO CORRELATION BETWEEN CO2 AND TEMPS!”
Here we call that AMO and PDO LOL

Dell: (06:16:39) : “there is NO CORRELATION BETWEEN CO2 AND TEMPS!”
Prove it, don’t just assert it. I don’t see no correlation coefficients in your post.
C’mon boy, I’m sure you’re big enough to do that. You wanna rant? Back it up.

Hi J
thanks for the information about stratospheric cooling. But your graphs all end in 1994, the time when stratospheric cooling in the RSS channel stops. So before we accept further stratospheric cooling we would need updated graphs of the different levels.

A couple points.
From a purely observational viewpoint one can pick any damn time period one chooses to pick. Just don’t draw a CONCLUSION about the future. Over the past 60 seconds there has been no warming in the seat of my chair. I’m not cherry picking, I’m just observing. So, Basil or anybody who wants to can pick any damn period of time they want to and report the numbers.
That’s not cherry picking. Why do warmists cherry pick this last century?
On the other hand, I was reading Atmoz and he had some interesting things to say about the “right” time scale to look at things.
Question: how long does a weather ‘pattern’ last, he put ENSO at 3-7 years.
I would say then I would need records of about 30 ENSO events to characterize that element of weather variability..hmm

Interesting that the Earth goes negative the same year that the sun turns off and sunspot activity starts to decline rapidly. Now, what I’d like to see is the same graph over the period for each of the last several solar minima to see if this is normal behaviour for a minimum period or we are seeing something more than that. Nice work- thanks!

Basil – If you could please clarify what you mean by “cumulative seasonal differences”, it would be much appreciated. If I understand you correctly, you define a monthly seasonal difference as the year to year change in the temp anomaly for a given month (e.g., Jan08-Jan07). Is the cumulative seasonal difference then the sum of the monthly seasonal differences for the preceding 12 months? For example, is the value plotted for Jan 08 equal to the sum{(Jan08-Jan07)+(Dec07-Dec06)+(Nov07-Nov06)….+(Feb07-Feb06)}? It would be helpful if this is clearly spelled out.
Thanks,
Bob North

Bob North – and Basil
I’ll try to set out the proof (see comment (4:37;43) above) that these “cumulative seasonal difference” (CSD) plots are just rescaled 12-month moving averages of temperatures.
Suppose we have 100 months of temp, and the average for month 100 is T100, etc.
Then the seasonal diff SD100 = T100-T88
And the moving average MAV100 = (T100 + T99 +…+T89)/12
So SD100 = (MAV100 – MAV99)*12
And SD99 = (MAV99 – MAV98)*12
so SD100+SD99 = (MAV100 – MAV98)*12
so SD100+SD99+SD98 = (MAV100 – MAV97)*12
and so on. The plots are just (MAV100 – MAVn)*12
Because it’s all differences, it doesn’t matter whether T is a temperature in C, or an anomaly.
So when you see a low point on the CSD plot, it doesn’t say anything about the trend; it just says that MAVn, the moving average, was high. Which means temperatures in the year preceding month n were high.
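Nick’s telescoping identity is easy to verify numerically. A sketch on synthetic monthly data (any series will do, since the identity is pure algebra, so the random “anomalies” below are purely illustrative):

```python
import random

random.seed(0)
T = [random.gauss(0.0, 0.3) for _ in range(100)]  # 100 months of synthetic anomalies

mav = lambda m: sum(T[m - 11:m + 1]) / 12.0  # 12-month moving average ending at month m
sd = lambda m: T[m] - T[m - 12]              # seasonal (12-month) difference at month m

last = 99
for n in range(98, 80, -1):
    # cumulative seasonal difference from month n+1 back up to the latest month
    csd = sum(sd(m) for m in range(n + 1, last + 1))
    # Nick's identity: the CSD is just a rescaled difference of moving averages
    assert abs(csd - (mav(last) - mav(n)) * 12.0) < 1e-9
print("identity holds")
```

The assertion passes for every n because each SD term telescopes: SD(m) = 12·(MAV(m) − MAV(m−1)), so the sum collapses to 12·(MAV(last) − MAV(n)).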

I downloaded a WMV from Anthony’s site which shows the distribution of historical temperature stations across the globe since 1880. File name is stationhistory_v10.
It distinctly shows a majority of stations, including all of China, most of Russia and Australia, a lot of South America, the majority of the Philippines, Japan, Indonesia, and Canada, dropping out of the network.
Most of the equatorial stations dropped out in the latter half of the 80’s, when Tamino’s graph shows a negative step drop in the SST.
The rest (the Arctic, Australia, China) dropped out in 1991, when Tamino’s graph shows another negative step drop, followed by a double positive step up.
I think we can argue that all of the perceived warming of that 23 year period is an artifact of temperature station data handling. REPLY: I’d agree that there is some correlation, but I wouldn’t go so far as to say “all”. We really have no idea of the total magnitude of the effect of station loss. In some cases, the stations are still reporting, but NASA GISS hasn’t updated their database, sometimes for years. This is one of the reasons I have concerns about the representativeness of the GISS database.

I cannot agree with the SteveMc statements, although I will concede that he snips off-topic comments, impolite comments, and replies to incorrect comments. One thing is absolutely certain, 100%: SteveMc is a fully qualified statistician and has immense experience in analysing ‘wayward’ statistical claims. His CV qualifies him way beyond Tammy’s ability to criticise his work.

Lee,
I think I sort of understand your point about hinges (statistics always makes my head hurt and I am most certainly NOT a professional statistician). However, I do not see the relevance if you’re saying this indicates that temps haven’t been flat or nearly so over the last 8-10 years. It may say that the last 8-10 years still falls within the trend over the last thirty, but again, it doesn’t seem to me to negate the idea that temps are flat or possibly starting a downward trend.
Just on the surface I would think a hinge point, if I understand the term correctly, would naturally tend to fall into a period with a sharp change in direction of temperatures, which the period 75-79 certainly was. I remember that time, being an older type.

Hi,
Here you go, from the horse’s mouth: http://tamino.files.wordpress.com/2008/01/bet2.jpg
“If the “continued warming” hypothesis is correct, future values should fall between the dashed red lines. If the “no more warming” hypothesis is correct, future values should fall between the dashed blue lines. If the earth has actually started cooling, future values will eventually dip below the blue lines.” – Tamino “You Bet”
As for my 1880 to 2008 line, it is pretty accurate. A bit simplistic, drawing a line from 1880 to 2008, but I like start and end points: http://tamino.files.wordpress.com/2008/01/44s24s.jpg
Gotta love Tammy Town

Hi,
Forgot to add the kicker;
“Finally, I’ll add one last condition. It’s unlikely but possible that a value can fall outside either range just because of noise. So, my “bet” is that as soon as there are two years (not necessarily consecutive) which are in either decisive region, the side with two decisive years is declared the winner.” Tamino – “You Bet”
Well we are well on the way so I guess we will know in about two years. ..LOL

Lee–
1) Could you link to the post in which Tamino determined the hinge point, so those of us who know a little statistics can read it and learn what, precisely, he did?
2) Of course 30 data points fall within the 95% confidence bands of a linear regression. At worst, we’d expect 2 to fall outside those bands. That’s the way linear regression lines work. Period. It tells us nothing about the confidence in the slope. To get the confidence in the trend, you calculate that using a standard method.
Atmoz–
With all due respect, that graph you showed doesn’t prove anything other than (T1+T2+T3 + ….+T30)/30 isn’t much different from (T1+T2+T3+ T29)/29, and that you don’t get much variability when you add 1 year to a large set.
To show what you want to show, you need to calculate the standard error in the trend (using the standard method) and then show that it drops with the number of years. I actually sort of did that when I discussed how one might falsify IPCC projections using annual average data. I discuss the math here: http://rankexploits.com/musings/2008/can-ipcc-projections-be-falsified-sample-calculation/
And the results here: http://rankexploits.com/musings/2008/what-weather-would-falsify-the-current-consensus-on-climate-change/
I didn’t actually plot the standard uncertainty in the slope as a function of year, or how the “t’s” vary when doing a hypothesis test, but that should be straightforward based on the formulas.
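The “standard method” lucia refers to is just the OLS formula for the standard error of the slope. A sketch (pure Python, synthetic white noise so the true trend is zero) showing how that standard error shrinks as the record lengthens:

```python
import math
import random

def trend_se(y):
    """OLS slope and its standard error, for x = 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    b = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx
    a = ybar - b * xbar
    ssr = sum((yi - (a + b * i)) ** 2 for i, yi in enumerate(y))
    # SE(slope) = s / sqrt(Sxx), with s^2 = SSR / (n - 2)
    return b, math.sqrt(ssr / (n - 2)) / math.sqrt(sxx)

random.seed(1)
noise = [random.gauss(0.0, 0.1) for _ in range(360)]  # 30 years of monthly "data"
_, se10 = trend_se(noise[:120])  # standard error from 10 years
_, se30 = trend_se(noise)        # standard error from 30 years
print(se10, se30)                # the 30-year standard error is much smaller
```

Because Sxx grows roughly as n³, the uncertainty in the fitted trend falls off quickly with record length, which is the quantitative version of “short periods give noisy trends.”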

Lee – why not pick 1946 as your “hinge” as this correlates to when CO2 levels began to show their increase. If CO2 and global temps correlate, then we should see your nice upward line with a correlation of >0.50 between the two. I’ll answer for you since I already know the answer from my own dive into surface records. The trend is nearly flat with virtually no correlation to CO2.

With all due respect, that graph you showed doesn’t prove anything other than (T1+T2+T3 + ….+T30)/30 isn’t much different from (T1+T2+T3+ T29)/29, and that you don’t get much variability when you add 1 year to a large set.

This is of course one reason why there isn’t much variability on the right side of the graph. Although I used monthly data and not yearly, so the formula would need to be modified a bit. I’m also a bit confused as to what you thought I was trying to show. I’ve included below my conclusions, which I think are still valid (the last one being specific to his earlier post here of which my post was a response).
My analysis was the same as his; find the temperature trend over the last X years using simple linear regression. He chose X to be 10 years. I chose lots of other Xs, and showed that a different choice of X would have led to different conclusions. If you have problems with the basic methodology, you’re pointing the finger at the wrong person. If there is a better way of presenting the results, I’m open to suggestions. (I probably shouldn’t have included that last graph on that page, but I already made it and had it uploaded to the server.)

From the above plots, it should be clear that choosing the timescale over which to calculate temperature trends should not be done capriciously and arbitrarily. Changing the period of interest from 10 years to 9 years results in drastically different conclusions. This is because of the effect of the strong ENSO event in 1998 which caused a minor divergence in the datasets. Since the ENSO signal is noise, much longer time intervals are needed over which to calculate the trend in order to minimize its effects.
By choosing the start of the time series at the height of the positive ENSO event and the end of the time series during a negative ENSO event, the calculated trend will be much smaller than reality.
Watts’ concern that the GISS data are contaminated is not apparent in this analysis. All four of the global temperature metrics show widespread agreement when more than 15 years are used in calculating the temperature trend. Previous work has shown that 15 years is about the timescale when the trends start to become important.

Lee said
“The slope of that line is not the direct point- it is that all the data is consistent with that slope -whatever it is, being unchanged in recent years. The analysis with 95% confidence intervals shows that the data is ALL consistent with that slope”.
Well it depends also on what range of years since 1975 you calculate it over.
Tamino set a test by calculating the trend and +/- 2 sigma trend interval around this (95% confidence) for the period 1997-2007. He then speculated that:-
1. if for 2 years (not necessarily consecutive) the temperature falls above the 95% confidence interval, continuation of the warming trend is proven.
2. if for 2 years (not necessarily consecutive) the temperature falls below the 95% confidence interval, then the warming trend has stopped.
Why wait, why not backtest?
Take HADCRUT3, calculate the 1975-1998 trend and +/- 2 sigma trend interval around this (95% confidence). Then plot each further year, 1999-2007 on this and see if any years have fallen below the bottom 95% confidence interval.
I haven’t done this yet but it would be an interesting test of the hypothesis “warming has stopped since 1998”.
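The backtest described above is straightforward to sketch; here it is on synthetic annual anomalies (the 0.02 degrees/year trend and noise level are made-up illustration values, and with real HadCRUT3 numbers you would simply replace the synthetic series):

```python
import math
import random

random.seed(2)
# synthetic annual anomalies, 1975-1998: a modest trend plus noise
years = list(range(1975, 1999))
y = [0.02 * (yr - 1975) + random.gauss(0.0, 0.1) for yr in years]

# fit the 1975-1998 trend by ordinary least squares
n = len(y)
xbar = sum(years) / n
ybar = sum(y) / n
sxx = sum((x - xbar) ** 2 for x in years)
b = sum((x - xbar) * (yi - ybar) for x, yi in zip(years, y)) / sxx
a = ybar - b * xbar
resid_sd = math.sqrt(sum((yi - (a + b * x)) ** 2 for x, yi in zip(years, y)) / (n - 2))

def outside_band(year, anomaly):
    """Does a later year's anomaly fall outside the trend line +/- 2 sigma?"""
    fit = a + b * year
    return anomaly < fit - 2 * resid_sd or anomaly > fit + 2 * resid_sd
```

Feeding in the actual 1999-2007 anomalies and counting how many years make outside_band() true on the low side would complete the “warming has stopped since 1998” test.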

Taking a break from Part II, to post a couple of quick replies.
Nick Stokes,
I don’t think it is exactly the same, and I’m not surprised that you can see the 98 El Nino in the data. If you can run a transformation on the data and show that it produces the same chart, I’ll believe it. But I don’t know what that would add to the point I’m taking away from the data.
JM,
I’m looking at the data from a purely technical point of view, without saying what it means about the underlying physical processes. Seasonal differences are encountered in all kinds of data series, and the meaning from a technical point of view is the same regardless of what produces them.
Gary,
Running the seasonal difference from month to month is hardly unheard of in time series analysis. If you fit a simple ARMA model to monthly seasonal differences with a constant, the constant is the annual “trend” in the series, and the AR and MA parameters model the variation of the series around the “trend.”
Robert Cote,
The pun was intended. 🙂
Evan Jones,
I’ll be addressing a “significant difference” in Part II.
Everybody,
I think Stephen Mosher gets it better than some of you do. I’m not (yet?) weighing in on what any of this means. I’m just trying to shed a little light on what is going on.
Bob North,
Look back at the series I posted for the last year for HadCRUT:
0.077, 0.056, 0.116, 0.036, -0.067, -0.03, -0.119, -0.007, -0.121, -0.176, -0.334, -0.595
Moving back in time, the first observation is -0.595, the second is -0.929 (-0.595 - 0.334), the third is -1.105 (-0.929 - 0.176), and so on. At the end of the 12 months, the cumulative seasonal difference is -1.164. The series continues to accumulate (or shrink) based on each prior month’s value.
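The walk-through above maps directly onto itertools.accumulate once the series is reversed so the most recent month comes first; a minimal sketch:

```python
from itertools import accumulate

# the 12 HadCRUT seasonal differences, Feb 2007 - Jan 2008, oldest first
sd = [0.077, 0.056, 0.116, 0.036, -0.067, -0.03,
      -0.119, -0.007, -0.121, -0.176, -0.334, -0.595]

# accumulate backwards in time, starting from the most recent month
csd = [round(v, 3) for v in accumulate(reversed(sd))]
print(csd[:3], csd[-1])  # [-0.595, -0.929, -1.105] ... -1.164
```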
Nick Stokes,
Back to you. I’m not sure of your point. My point is simply that the chart provides a way of seeing whether the positive and negative seasonal differences cancel out over any given period of time. If they do, the “net” change in the anomaly is zero.
More later, probably after I post Part II.

Basil,
An interesting review, no doubt. I encourage you to extend the analysis back over the last 30 years or so. I believe you’ll find that the trends are also relatively flat before 1993, rise steeply to 2001, and then flatten again. Which suggests to me that much of this controversy is based on a relatively short 8 year rise.

My analysis was the same as his; find the temperature trend over the last X years using simple linear regression. He chose X to be 10 years. I chose lots of other Xs, and showed that a different choice of X would have led to different conclusions.

Sure. If you don’t add uncertainty bands, yes. (And even if you show uncertainty bands, yes! )
But I guess I don’t know what the purpose is of then showing how the recent trend varies as a function of how far back you average.
But, I think I need to actually plot what I mean to show you because, yes, there is a better way to show what I now think you are trying to show. I can give that a shot tomorrow, because I think it may clarify what I mean. (Of course, this may turn out to be based on a misunderstanding of what you are trying to say!)
For clarification: Are you trying to show that the uncertainty intervals vary with the number of years in the average? And then, do you want to show the trend calculated at each time period, inside the uncertainty intervals? Because that can be shown, and it is worth showing. (Basically, we can easily show uncertainty bands around some “hypothesis” and/or around the trend.)

Lee, what is it that makes you have such respect for Tamino? For the record, Steve gets rid of comments that “waste” bandwidth sometimes. It’s happened to me. I try not to get my feelings hurt. Usually I realise how pointless my comments were later, and often I’m actually grateful they are gone. Tamino, on the other hand… I’m told he has a habit of heavy-handed moderation, and makes no bones about it. Using all the data can tell you one thing, and that evidently is what you want to hear. There is nothing “wrong” with looking at a short period, so long as you don’t assign some great cosmic significance to it, but recognize the limitations of your limited inquiry. Incidentally, wasn’t it some “warmer” who said that it was acceptable to cherry pick because “that’s how you make cherry pie”?

Basil,
However I tried, I couldn’t get to reproduce your graphs from HadCRUT3 – then I realised I was adding the 12 months forward in time from the beginning of the time series (1850), instead of backwards from the latest month in your recipe. Its easy using copy down with a formula in Excel, difficult to copy up! Done that way, everything reverses, so that 1998 for example is a peak, rather than a trough, and the beginning of 2008 is heading down a trough corresponding to the current La Nina. However, it puts the x axis firmly between the peaks, instead of skewed above.
Seems more intuitive somehow.
I cannot comment why the peaks and troughs since 2002 have been so regular and small compares to the 20th century peaks, except perhaps it’s our clean stratosphere losing as much heat as we receive – a balance for the time being, until something mucks it up, perhaps.
However, it is interesting if taken back much earlier in time, starting arbitrarily in 1958. It shows how the Pinatubo/Cerro Hudson signal was overridden by the strong El Niño in 1992, but the signals from Agung and El Chichón broke through the regular cyclical oscillation (the QBO, perhaps?). A funny little signal in 1975, due to the solar minimum perhaps; and 1962/63, the Cuban Missile Crisis maybe, with all the tests going on near the Arctic and Johnston Island?
Thanks for the method
Chris

Dell (11:55:56), your referenced article is analysing US land temperature.
The US is not the world.
A local analysis like that is flat-out irrelevant to your argument. Come up with something better.
Basil: “I’m looking at the data from a purely technical point of view, without saying what it means about the underlying physical processes. ”
Don’t be disingenuous. You most certainly are trying to say something that is physically meaningful. Don’t back away when you get caught throwing out noise and waving your arms.

That cracks me up how tamino has a secret identity.
Sure, my using a secret identity makes sense. It’s useful to CMA when I say something stupid like “always” or “never”.
But for Tamino it doesn’t make much sense, unless he works for an oil company.
heh

Lucia,
Just to make sure we’re talking about the same thing: we’re discussing this figure on this page, correct?

Are you trying to show that the uncertainty intervals vary with number of years in the average? And then, do you want to show the trend calculated at each time period, inside the uncertainty intervals?

No and no. My post was a response to this post. Several of the things said just didn’t seem true:
1. “3 of 4 global metrics show nearly flat temperature anomaly in the last decade”
2. “Given some of the recent issues Steve McIntyre has brought up with missing data at NASA GISS, it also makes me wonder if the GISS dataset is as globally representative as the other three.”
3. “By treating the NASA GISS data as being an outlier due to that data confidence difference, and by taking a “3 out of 4 approach” in looking at the plotted trends, one could conclude that there has not been much of a trend in global temperature anomalies in the past ten years.”
The entire point of the post was to show that these three statements were the result of the arbitrary choice to use 10-year trends. Nothing more. But based on your last comment, I think we may be talking about two different things.

(The slope of the most recent 10 year period does NOT tell us anything about whether warming is slowing or accelerating or staying steady.)
It tells us that it hasn’t gotten warmer in 10 years.
The AGW theory rests on the idea that rising CO2 will cause temps to rise, and it will do so by an avg of 0.2 C per decade. You can defend this as you want, and you can claim that longer term trends show warming. You want long? OK, long term it is, then —http://www.worldclimatereport.com/index.php/2008/02/04/1500-years-of-cooling-in-the-arctic/
There are a lot of these. I have more. But hopefully you get the idea. The real long terms don’t show warming. (Unless of course you’re Mann or a follower thereof, who seems to have invented the only long-term reconstruction that works on Sesame Street, “one of these reconstructions is NOT like the others”, and is therefore dismissed with prejudice.)
Meanwhile, looking at the recent trend…
It still tells us that it hasn’t gotten warmer in 10 years.
As I read this, the short term trend and the long term trend tell me the same thing — the stuff you posted is just one more cherrypicked example that tries to use a *specific* trend within the historical framework to argue a point.
I’m going to come back to CO2, because the Tamino graph is all about CO2, as is your ultimate point and the reason for your cherry-pick. You can’t say that CO2 is driving all of this. No. Obviously solar activity matters a great deal more than Tamino/RC et al. will ever admit, as does ENSO activity. The reason the short-term trend matters (and matters a great deal) is that it highlights both of these influences, and shows they are far stronger than CO2.
If CO2 were the culprit, solar activity (which is dismissed as playing a minor role since supposedly TSI is constant enough) would not stop the warming. And surely the model builders who understand deep things about nature and invent various forcings to demonstrate this understanding aren’t going to let themselves get clobbered by well understood cycles like ENSO and PDO. No way. Those are factored in. (They flipping well better be.) No. The models say that there’s going to be warming.
As in this one from The Reference Frame —
“Entertainingly enough, January 2008 was also 0.27 °C (anomaly-wise) colder than June 1988 when Hansen gave his infamous testimony before the U.S. Congress, predicting a dangerous warming in the following 20 years. No, I am not comparing apples and oranges here. January 2008 was also 0.39 °C colder than January 1988.”
(See http://motls.blogspot.com/2008/02/giss-january-2008-was-coldest-month.html)
So. Bottom line.
It tells us that it hasn’t gotten warmer in 10 years, and this is significant enough to ask why.

Just a point referring back to the very first post in this discussion in which Lee posts links to two graphs. It is the second one that concerns me because we are told that ‘Tamino’ is an expert on statistics.
The graph purports to show a trend line with 95% confidence intervals. The lines delineating these confidence intervals run parallel to the trend line.
Now, I’m not a statistician, but I do know that the confidence-interval envelope is not defined by parallel lines. The envelope should get wider the further one moves from the centroid of the data.
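The commenter’s point is easy to demonstrate. Below is a minimal sketch using synthetic anomalies (stand-in numbers, not the series Tamino actually plotted) and the standard OLS formula for the confidence interval of the fitted line, with the t-value approximated as 2:

```python
import numpy as np

# Synthetic yearly anomalies: a modest linear trend plus noise
# (stand-in data; not the series from the linked graph).
rng = np.random.default_rng(0)
x = np.arange(1975.0, 2008.0)                 # 33 "years"
y = 0.018 * (x - x.mean()) + rng.normal(0.0, 0.1, x.size)

n = x.size
xbar = x.mean()
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
sxx = ((x - xbar) ** 2).sum()

# Half-width of the ~95% CI for the fitted line at each x
# (t-value approximated as 2 for ~30 degrees of freedom).
half = 2.0 * s * np.sqrt(1.0 / n + (x - xbar) ** 2 / sxx)

# The (x - xbar)^2 term makes the band narrowest at the centroid
# of the data and progressively wider toward the endpoints.
print(bool(half[0] > half[n // 2]))           # True
```

The (x − x̄)² term in the half-width is exactly what makes the envelope flare out away from the centroid; parallel confidence lines would only be correct if that term were dropped.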

Temperature is difficult to measure accurately, especially since so many of the US stations are outside the NOAA siting parameters. We have also lost many stations since the demise of the USSR. Both problems give rise to upward-biased readings. I also feel that we are measuring the wrong thing. Weather is driven by heat, and temperature is only a proxy for heat. If we measured atmospheric heat content instead of temperature we might get nearer to the truth.

Thank you Nick, and Basil
All I ended up doing was:
For each row n:
column A: date
column B: raw HadCRUT anomaly
column C: =Bn - B(n-12) (the 12-month difference)
column D: =AVERAGE(C(n-11):Cn) (the trailing 12-month average of C)
Plot A vs D as a scatter plot, and change chart type to an area plot. Lots of regular oscillations after 1882 – much noisier before.
You lose the first two years of data due to the method of calculation (n has to be >= 24), so the first plotted data point is December 1851. Then the timeline needs to be adjusted so that events line up with their dates, rather than the 12/2 = 6-month displacement this method introduces.
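For anyone without Excel handy, the same recipe might be sketched in Python. The `anoms` argument here is just a stand-in for the monthly HadCRUT3 anomaly column, not the real data:

```python
import numpy as np

def smoothed_yearly_change(anoms):
    """Chris's spreadsheet recipe: column C is the 12-month
    difference C_n = B_n - B_(n-12); column D is the trailing
    12-month average of C. The first valid value needs n >= 24."""
    b = np.asarray(anoms, dtype=float)
    c = b[12:] - b[:-12]                        # year-over-year change
    # trailing 12-month moving average of the differences
    d = np.convolve(c, np.ones(12) / 12.0, mode="valid")
    return d                                    # one value per month from month 24 on

# Sanity check: a pure linear warming of 0.01 C/month should give a
# constant smoothed yearly change of 0.12 C.
fake = [0.01 * i for i in range(60)]
out = smoothed_yearly_change(fake)
print(round(float(out[0]), 4))                  # 0.12
```

As the sanity check shows, a steady trend comes out as a flat line; the peaks and troughs in the real series are the departures from steady warming.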
So what does it really tell us apart from a smoothed down version of the anomalies?
They go up and down and sweep out areas, which could be measured by integration, but it seems a little OTT.
I look forward to Basil’s Part II.

Atmoz…. I got confused. I read the first part in detail, and then must have skimmed over the later bits. That’s a problem with me in blog comments! (I should learn to never comment on a post on “blog B” at “blog A”, relying on my memory.)
I see what you intended to show now. I posted comments over there about a few caveats.

(10 years is too short to capture the underlying climate trend accurately if you have weather events that can last 3-7 years.)
I believe you.
You’re stating it’s a Nyquist problem: you need N times the frequency (3-7 years) to see a worthwhile signal. The sample interval needs to be N times that of the highest frequency you intend to detect, where N should be at least two.
So let’s assume you’re correct.
This is exactly why the Tamino chart Lee shows is worthless; if you look at ENSO and PDO and solar cycles then you have the same underlying Nyquist problem. Anything less than say 120 years of data is suspect; you can’t see the PDO/solar/ENSO signal effect otherwise.

I knew somebody would bring up Nyquist. I was hesitant to. I’m thinking that Atmoz should do some artificial temp series, with some random ENSO-type events, and see what pops out.
It seems intuitive that if you are looking for a DC bias of sorts, you can’t find it by sampling over a period that is dominated by “natural” weather frequencies.
Conceptually, if you have events that last 7 years, then you’re not going to get a good look at the underlying bias unless you sample out beyond that period (2x being a good guess, hat tip to Nyquist).
Maybe LUCIA will look at this.
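One possible version of that experiment, with entirely made-up numbers: an assumed 0.15 C/decade underlying trend plus piecewise-constant “ENSO-like” episodes lasting 3-7 years (a crude stand-in for real ENSO dynamics), then the OLS slope of every overlapping 10-year window:

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 12 * 50                       # 50 years of synthetic monthly data
true_trend = 0.15 / 120.0                # 0.15 C/decade, expressed per month

# Piecewise-constant "ENSO-like" episodes: each lasts 3-7 years and
# shifts the baseline up or down by a random amount.
enso = np.empty(n_months)
t = 0
while t < n_months:
    length = int(rng.integers(36, 85))   # 36-84 months = 3-7 years
    enso[t:t + length] = rng.normal(0.0, 0.15)
    t += length

series = (true_trend * np.arange(n_months)
          + enso
          + rng.normal(0.0, 0.1, n_months))   # monthly weather noise

# OLS slope of every overlapping 10-year window, in C/decade
window = 120
slopes = np.array([
    np.polyfit(np.arange(window), series[s:s + window], 1)[0] * window
    for s in range(n_months - window)
])

print("true trend: 0.15 C/decade")
print("10-yr window trends: mean", round(float(slopes.mean()), 2),
      "std", round(float(slopes.std()), 2))
```

Depending on the seed, individual 10-year windows can differ from the true 0.15 C/decade by several tenths, and some can even go negative, which is the point above: short windows are dominated by the episode structure, not the underlying bias.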

JM (17:45:15) :
“Dell (11:55:56), your referenced article is analysing US land temperature.”
“The US is not the world.”
Yes, it is based upon temps in the US, not the world, but the US temp data set is the most widespread, complete, long-term land data set in the world. Do we really know what actual temps were in the first half of the 20th century in third-world countries? Much of the pre-1950 world temp record is based on estimates, not real temps.
If you can show me a global temp data base that is based entirely on actual temps, and not guesstimates for pre-1950, then we can talk century long Global temps, and solar trends.
But instead, we have the very persons who are promoting global warming alarmism estimating much of the pre-record global temps. There is so much discrepancy between the patterns of the actual US data and the estimated global data pre-1950, yet post-1960 we see almost identical patterns, which makes the pre-1950 global temp estimates highly suspect.

Hi,
Bob B and randomengineer, the frequency of events varies: ENSO runs 3 to 7 years and the PDO 25 to 30 years. I would think for something like the PDO you would need at least 3 cycles, maybe 4, to see the net effect on climate, and we won’t go into solar cycles of 200 years, 400 years, and so forth.
Lee, yes, we are still waiting for the formula and data for the hinge points.

My understanding of the Nyquist frequency is that it’s used to determine the minimum sampling rate needed to detect the high-frequency (i.e. fast) variations. When discussing the trend in AGW, we don’t care about those so much; they are already averaged out in the monthly or annual averages.
Here, we are discussing problems detecting low frequency oscillations. Detecting the lowest frequencies is also difficult.
And the arguments over whether or not they exist are impossible to fully resolve without long data sets.
On the low frequency side: we know there are 11-year solar cycles; that means if you don’t know when they start or end, you would need to sample for at least 11 years to average over a cycle. Suppose someone thinks this matters a lot. To disprove that, and convince them, you need to collect data over several 11-year periods and show it to them. (Others might be convinced by order-of-magnitude estimates, but in some cases in AGW discussions these order-of-magnitude estimates presuppose that we can estimate the effects we are arguing over.)
How much solar variability matters to the final computation of the underlying trend in C/century depends on the magnitude of the solar variability relative to other forcings.
AGW theory says the solar variability is now piddly compared to the underlying trend due to GHG forcing. But to test that claim empirically, we need to collect data over at least one or two solar cycles. If we can detect the effect of solar variations, that’s a problem for AGW theory. If we can’t detect the effect of solar variations, that tends to confirm AGW. (Of course, due to volcanism and other factors, we might need many solar cycles!)
Similarly, if the PDO is thought (by anyone) to affect GMST significantly, and we wanted to convince them they were wrong, we would need to measure GMST over at least one or two PDOs and see whether or not the PDO influence could be detected, and to show its effect is less than the over all trend.
Once again, the only empirical way to test whether the PDO matters enough to affect any underlying trend due to AGW is to get data over several PDO cycles.
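A toy illustration of the point, using an assumed 0.018 C/yr trend and an invented 11-year sinusoidal wiggle (illustrative numbers only, not a solar reconstruction): fitting a trend to less than half a cycle visibly biases the recovered slope, while fitting over four full cycles recovers it closely.

```python
import numpy as np

true_slope = 0.018                            # assumed trend, C per year
years = np.arange(0, 44, 1.0 / 12.0)          # 44 years of monthly samples
cycle = 0.15 * np.sin(2 * np.pi * years / 11.0)   # invented 11-yr "solar" wiggle
temp = true_slope * years + cycle             # no noise, to isolate the cycle's effect

def fitted_slope(span_years):
    """OLS slope (C/yr) fitted to the first span_years of the record."""
    m = years < span_years
    return float(np.polyfit(years[m], temp[m], 1)[0])

short = fitted_slope(5.0)    # less than half a solar cycle
long_ = fitted_slope(44.0)   # four full cycles

print("5-yr fit:", round(short, 4), " 44-yr fit:", round(long_, 4))
```

The short fit folds part of the cycle’s upswing into the “trend”; only when the record spans several cycles does the oscillation average out of the slope estimate.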

Bruce,
I was actually being a little facetious. I assumed TMP meant temperature and the “National” in “National Summary” probably wasn’t Guatemala. It’s just that it looks so odd without the hockey stick on the right end, heh.

Lucia — (My understanding of the Nyquist frequency is that it’s used to determine the minimum sampling rate to detect the high frequency –i.e. fast — variations.)
Sort of… think of this more like the number of cycles you have to have before you can detect them. It works for any frequency if you think of it that way. You are right that you need 2x the PDO period to detect any signal, 2x of the 11-year solar cycle, etc.; thus Nyquist still applies.
And you’re right in that Nyquist is certainly more associated with frequency-domain stuff, like audio CDs sampling at 44.1 kHz to be able to reliably capture 20 kHz audio.

With this kind of multi-year periodicity in the data, you can’t tell anything from a 10-year run. It’s amazing that the same guys (“my side”) who complain about the reduction in degrees of freedom from autocorrelation would advocate an analysis like this with too few data.
The stuff about seasons and such is poorly explained and I don’t see how it adds more degrees of freedom to deal with the short sample period (given ENSO).
In addition, the agnosticism on Watts’s earlier one year analysis is troubling. It’s like he doesn’t want to call out “our side”.
Well guys…our side should be truth.
