I was disappointed that our friend Jeff at The Air Vent didn’t supply his insight on the matter. His lack of political correctness combined with interesting statistical analysis makes for some good reading. He was the first one to bring the GISS flap to my attention.

OK, so after you read through all those posts, you’ll know the summary. But for those who want the nickel summation, here it is: the GISS data was originally released with the stunning conclusion that October 2008 was the warmest October in recorded history. An interesting claim, given the anecdotes around the globe of early snows and freezing temperatures, not to mention that the satellite data showed no such thing. Well, some of our geeky friends on the internet decided to actually take a look at some of the data, especially when the Russian map showed temperatures that averaged over 10 degrees Celsius above normal.

After what really was a pretty simple analysis – comparing October temperatures against September temps – we came to find out that the Russian temperatures were screwed up: September readings had been duplicated into October. NASA had to pull the data and recalculate, and the anomaly dropped from 79 to 58. But then some further digging turned up more problems, they had to re-re-release it, and the anomaly now stands at 55.
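The kind of check that exposed the error is simple enough to sketch. Here is a minimal, hypothetical illustration in Python – the station names and temperatures below are made up for the example, not the actual NOAA records, which carry station IDs and many more fields:

```python
# Sketch: flag stations whose October reading exactly duplicates September's.
# Station names and values are invented for illustration only.

def find_duplicated_months(sep, oct_):
    """Return stations whose October value equals September's exactly.

    A single identical monthly mean can happen by chance, but a large
    batch of exact matches is a strong sign of copied records.
    """
    return sorted(s for s in sep if s in oct_ and sep[s] == oct_[s])

september = {"Station A": 8.1, "Station B": 3.4, "Station C": -2.0}
october   = {"Station A": 8.1, "Station B": 3.4, "Station C": -9.5}

suspects = find_duplicated_months(september, october)
print(suspects)  # → ['Station A', 'Station B']
```

Even a reasonableness check this crude, run before release, would have flagged a whole block of Russian stations repeating verbatim.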

The source of the error was actually NOAA, who collects the data and does some initial adjustments to it. GISS then picks it up and does additional adjustments before releasing the final anomaly set.

There are a few questions that can be asked here: (1) Was there anything intentional here, to artificially elevate temperature readings; (2) who was at fault; (3) is there a good excuse for this occurring; and (4) does this call into question the entire data set? There are numerous other questions, but I’ll address these four.

First, as much as I appreciate the skepticism surrounding Hansen and GISS, let’s be fair here with regard to intent to deceive. I don’t consider these guys above spinning the data in a way that is deceiving, nor do I think they are above incorporating “adjustments” that work in favor of their preconceived hypothesis of warming temperatures. But spin is spin, and the adjustments are available for public scrutiny – and in fact have been scrutinized quite fairly. We can argue about the validity of these, and different people will differ in their opinions. But outright purposeful introduction of incorrect data is quite a different animal, and I don’t believe at all that this was intentional. First of all, it was so plainly and obviously wrong that only a complete idiot would attempt to manipulate data so stupidly. One really has to believe that this was an unintentional oversight.

As for fault, it would appear that the initial and primary fault lies with NOAA. However, Hansen and team cannot pass along all the blame. I find it remarkable that there is such a paucity of testing of the data that such a simple error could slip past both organizations. Particularly when the anomaly maps they produced were so extreme, wouldn’t it seem reasonable to just check the numbers for reasonableness? If I were to overlook such a glaring data error – whether I was the source of the data or not – my boss would have my head. This was negligence on both ends. It is all the more disconcerting that data quality control is lacking for such a widely used data set. To answer the third question, there is NO good excuse for this having occurred. It is an embarrassment to both organizations.

Finally, it certainly does call into question the data. But this is not new. The adjustment process that continues to change historical temperatures going back a century has been questioned many times. The lack of surface stations in some areas, requiring extrapolation and smoothing of temperatures over thousands of miles in some cases, reduces the data quality. But this latest debacle adds the element of data quality and control. Clearly, that has been lacking. If there is insufficient testing to catch a wholesale duplication of records from one month to the next, then one can only surmise what other, far less detectable errors are in the data.

In the end, though, whether we like it or not, this data is used widely. As such, I produce the temperature charts associated with it.

I’ll keep it a little shorter on this post since the most interesting part of this month’s GISS data is the story above. But here are the stats:

Anomaly = 55
Rank, overall = 49th warmest anomaly of 1,546 monthly readings; 5th warmest October of 129.
The anomaly is basically in line with September (50) and the previous October (54).

The 12-month average increased a blip from 41.0 to 41.1. Because past anomalies continue to change, the previous average in this set differs from the average I stated last month. It’s all kind of maddening, actually, but it is what it is.
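For what it’s worth, the 12-month average above is just a trailing mean of the monthly anomalies. A toy sketch, using GISS-style units of 0.01 °C (so 41.1 means 0.41 °C) and made-up monthly values, not the actual GISS series:

```python
# Sketch: a trailing 12-month average of monthly anomalies, in the same
# 0.01 °C units GISS uses. The series below is illustrative only, chosen
# to end at October's 55.

def trailing_average(anomalies, window=12):
    """Mean of the most recent `window` anomalies."""
    recent = anomalies[-window:]
    return sum(recent) / len(recent)

series = [40, 42, 38, 41, 39, 43, 37, 36, 34, 38, 50, 55]
print(round(trailing_average(series), 1))  # → 41.1
```

This is also why the averages drift from month to month: since past anomalies keep being restated, re-running the same computation on the revised series gives a slightly different number.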

Interestingly, even though an anomaly of 55 is historically high, it still keeps current trends flat. In fact, a negative trend line can now be extended back to December 2000. So even the GISS data set is approaching an 8-year period over which a negative trend line can be fit. This extension backward occurred even though we are now in a neutral ENSO state.

Quick hits on the various trend lines: The 60-month trend is negative, of course, though its magnitude diminished with this anomaly. The 120-month trend line is still positive but declined; it should jump around a bit in coming months. The 180-month trend is positive as well, but also declined to its lowest level since the period ending February 2003. The 240-month trend continues to stay in the slope range it’s been in since early 2007, and the 300-month trend is in the same range it’s been in throughout 2008. The 360-month trend line, though, continues to decline and is at its lowest level since the period ending August 2002.
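Each of these trend lines is just a least-squares slope fit over a trailing window of monthly anomalies. A bare-bones sketch with illustrative data (not the actual GISS numbers):

```python
# Sketch: ordinary least-squares slope of the most recent `months`
# anomalies -- the kind of trailing trend line discussed above.

def trend_slope(anomalies, months):
    """OLS slope in anomaly units per month over the trailing window."""
    y = anomalies[-months:]
    n = len(y)
    mean_x = (n - 1) / 2          # x values are just 0, 1, ..., n-1
    mean_y = sum(y) / n
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in enumerate(y))
    den = sum((xi - mean_x) ** 2 for xi in range(n))
    return num / den

# A toy flat-then-declining series: the slope over the last 6 points
# comes out negative, analogous to the 60-month trend above.
data = [50, 52, 51, 49, 47, 45, 44, 42]
print(trend_slope(data, 6) < 0)  # → True
```

Whether a window’s slope is positive or negative depends heavily on the window length, which is why the 60-month and 360-month trends can point in different directions over the same data.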

Time has somewhat gotten away from me. The orange light is flashing on my laptop and I am ready to get some shut-eye. I do have the charts updated, but I will upload them tomorrow in a separate post.

I’m just getting back into the swing of things; the economy is creating some trouble for my company, so I am a bit busy now. Thanks for the summary.

I agree it couldn’t have been intentional, simply because of the obviousness, but it certainly was embarrassing. How many people does this result need to pass by before it is approved? I wonder if this would have slipped by the scientists if the temperature had jumped downward by the same magnitude :)

Dave said:

I am getting really worried that we are getting some bad data that is going to cause our leaders to make bad decisions.

In the UK we had a terrible summer, and the Met Office said our August was warmer than average!

October was cold, and November colder. Sea ice in the Arctic is expanding rapidly, and we hear of record snowfalls in Europe and the US.

And yet there doesn’t seem to be anything that reflects this in the data. Am I alone in thinking that there is a credibility gap between what we are told is happening and what we can feel is happening?

REPLY: You are not alone. Particularly when it comes to GISS, you have a host of cohorts who question the data. You do, of course, need to consider that your examples are regional, and that temperature elevations elsewhere may be offsetting them. However, there is an opposing issue. When the higher-tech, populated areas can see, feel, and measure one thing, and the temperature results end up being driven by Russia – where the measurement instruments are decidedly lacking and much adjustment is required – it makes one question the data.

As for the Met Office, nobody but them even knows how they adjust the data they receive to get to their final answer. Interestingly, though, their data shows no warming since 1997, just as the satellite data does.

Climate Audit and The Air Vent (both links to the side) both spend a lot of time getting into the problematic issues with the data, research, and politics of the AGW issue.