Watch the Ball

NASA spokesman Gavin Schmidt announced at realclimate that Hansen et al. had fixed the Russian (and other) data, following corrections made by their supplier (NOAA GHCN). Even though errors of over 10 deg C had occurred over the world’s largest land mass, the fix only reduced the GISS October temperature by 0.21 deg C (from 0.86 deg C to 0.65 deg C) and still left a large “hot spot” over Siberia. Schmidt reported at realclimate (which increasingly seems to be NASA’s method of communicating with the public):

Actual temperatures in much of the lurid “hot spot” will average a balmy minus 40 deg C and lower over most of the next few months. Olenek or Verhojansk sound like ideal venues for large-scale gatherings of climate scientists.

The GISS website states that changes were made to incorporate corrected GHCN files (the new file timestamped 12.58 pm today):

2008-11-12: It seems that one of the sources sent September data rather than October data. Corrected GHCN files were created by NOAA. Due to network maintenance, we were only able to download our basic file late today. We redid the analysis – thanks to the many people who noticed and informed us of that problem.

Now look closely at the two figures below and see what else you notice.

Left: NASA GISS as of Nov 10, 2008; right – as of Nov 12, 2008.

All of a sudden, a “hot spot” has developed over the Canadian Arctic Islands and the Arctic Ocean north of North America, that wasn’t there on Monday (it was gray on Monday). A smaller hot spot also developed over Australia.

I had downloaded the GHCN file on Monday (and saved it). I downloaded the GHCN file once again and checked for stations that had October values today, but not on Monday. All but two were in Australia with the other two also in the SH.
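A comparison like that can be scripted in a few lines. A minimal sketch, assuming the GHCN v2.mean fixed-width layout (a 12-character station/duplicate ID, a 4-character year, then twelve 5-character monthly values in tenths of a degree C, with -9999 for missing); the sample records below are toy data built around two of the station IDs mentioned in this post, not actual GHCN lines:

```python
MISSING = -9999

def october_values(lines, year=2008):
    """Map station ID -> October value (tenths of deg C) for stations
    that report October of the given year; -9999 entries are skipped."""
    out = {}
    for line in lines:
        sid, yr = line[:12], line[12:16]
        if yr != str(year):
            continue
        field = line[16 + 9 * 5:16 + 10 * 5]  # 10th of twelve 5-char fields
        value = int(field)
        if value != MISSING:
            out[sid] = value
    return out

def make_line(sid, year, monthly):
    """Build one toy v2.mean-style record."""
    return sid + str(year) + "".join("%5d" % v for v in monthly)

# "Monday" download: only the first station has an October value.
monday = [
    make_line("403719240005", 2008, [10] * 9 + [-120, MISSING, MISSING]),
    make_line("501943260000", 2008, [10] * 9 + [MISSING] * 3),
]
# "Wednesday" download: the second station now reports October too.
wednesday = [
    make_line("403719240005", 2008, [10] * 9 + [-120, MISSING, MISSING]),
    make_line("501943260000", 2008, [10] * 9 + [155, MISSING, MISSING]),
]

added = sorted(set(october_values(wednesday)) - set(october_values(monday)))
print(added)  # stations whose October data appeared between the downloads
```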

I haven’t crosschecked the Australian data, but at least there is some new data to support that part of the change. There was no new information from GHCN on the Canadian Arctic Islands. So what accounted for the sudden hot spot there?

Why can Hansen obtain values for October in the Canadian Arctic Islands today when he couldn’t on Monday?

However, between last friday (when GISTEMP downloaded the first GHCN data) and today (thursday), stations in Australia and northern Canada were reported. People claiming on other websites that Oct data for Resolute, Cambridge Bay and Eureka NWT are not in the latest download, should really check their files again. (To make it easy the station numbers are 403719240005, 403719250005 and 403719170006). Why anyone would automatically assume something nefarious was going on without even looking at the numbers is a mystery to me. None of these people have any biases of their own of course.

If data for the three Canadian sites were added between Friday, Nov 7 and Monday, that would explain matters. Gavin’s accusation that the above question was asked “without even looking at the numbers” is another fabrication by a NASA employee. I reported that I had compared stations in the GHCN download before the issue was public and after it was public; the Canadian data was in both data sets. According to NASA spokesman Schmidt, it appears that NASA used an even earlier version of the GHCN data. That answers the question.

It is also reasonable to inquire as to whether changes in methodology had occurred. In September 2007, after the Y2K problem, GISS changed its methodology without any notice or announcement, with the effect of once again reversing the order of 1934 and 1998. Determining that they had changed data sources from SHAP to FILNET wasn’t easy, and a change in sources or method could hardly be precluded in this instance, though NASA has now said that this was not the case.

113 Comments

Anthony has a post on this here, which I read after finishing the above post. Anthony noticed that the color scale also changed between the two graphs. Posters at WU also noticed both the changes in Canada and Australia.

In my post, I’ve carried out the additional precaution of comparing Monday’s GHCN data to today’s GHCN data and confirmed that no relevant new northern Canadian data had entered GHCN.

1. Siberia, the Arctic, and most of Asia are still considerably warmer than the satellites indicate.

2. If huge portions of the Arctic were running 4-8C above normal, there is simply no way the Arctic ice could have recovered as fast as it did in October – a record amount of ice growth for that month.

I also think that most of the remaining month-on-month GISS warming (contrasting with a slight satellite cooling) is bogus, too.

The newer picture has about as much orange/positive anomaly as blue/negative. Yet the “newly learned” temperatures (in Australia and northern middle Canada) happen to be positive. Clearly, there must exist some freedom in whether individual regions are shown as colored or gray. And it seems that the choice is always made in such a way as to increase the anomaly for the newest months. Maybe they are returned to gray, or to cooler readings, for months in the past.

The September=October bug was easy to find and too hard to handwave away. However, there must obviously exist many other errors that are less manifest; not all errors have “signatures” of this kind. And note that we are talking about errors in 2008, which should be much less likely given advanced technology and experience. What about similar (and dissimilar) errors in 1908? Because these errors generate up to 0.3 °C of fake warming per month, I find it perfectly sensible to imagine that the 0.7 °C warming per century could be fake, too. At most, it is a “two sigma” signal that can easily be a result of mistakes and coincidences.
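A screen for that particular signature is mechanical. A hedged sketch (the station records here are toy month-to-value dicts; the point is only that an implausibly high fraction of identical September/October pairs is detectable, not that this is GHCN’s actual QC):

```python
def duplicated_month_fraction(stations, m1="SEP", m2="OCT"):
    """Fraction of stations reporting both months whose two values
    are exactly identical -- the signature of carried-over data."""
    pairs = [(rec[m1], rec[m2]) for rec in stations.values()
             if m1 in rec and m2 in rec]
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a == b) / len(pairs)

# Toy records in tenths of deg C: two of four stations repeat September.
stations = {
    "irkutsk":    {"SEP": 99, "OCT": 99},
    "bratsk":     {"SEP": 81, "OCT": 81},
    "moscow":     {"SEP": 102, "OCT": 41},
    "verhojansk": {"SEP": -51, "OCT": -148},
}

frac = duplicated_month_fraction(stations)
print(frac)  # 0.5 -- identical monthly means this often are rare by chance
```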

This is about doubts over the very existence of global warming. What about its origins, and predictions that are much harder to measure? There is no thermometer that measures man-made warming only, and no thermometer that measures the weather of 2058 today. Why would a sensible politician trust people who can’t look at a thermometer and send the number to another station (and surely can’t do so repeatedly) with their predictions of the weather in 2058?

Australia has been teleconnected to Siberia, what a total farce! Quote Wikipedia
Farce “aims to entertain the audience by means of unlikely, extravagant, and improbable situations, disguise and mistaken identity”
No doubt, this farce should be on the stage, science it is not.

Re: DJA (#9),
The teleconnection sucks the heat from England (around the Hadley Center) and releases it into the Australian bush.
The British Isles have been relieved of a big warming, thanks to GISS.

A funny “analysis” by the NSIDC about the October Arctic sea ice (emphasis mine), which forgot to check the news at CA:

Near-surface air temperatures in the Beaufort Sea north of Alaska were more than 7 degrees Celsius (13 degrees Fahrenheit) above normal and the warming extended well into higher levels of the atmosphere. These warm conditions are consistent with rapid ice growth.

They should have said “these warm conditions are consistent with a blunder in QC at the GISS/NOAA which inflates the Beaufort Sea temperatures”.

This is a scary explanation, since we know the same error appeared (at least) in the Finnish data in the September revision (July=August). Was that also due to the same source sending wrong data? How many other times have “sources” sent in duplicate data that has gone unnoticed? How do we know that GHCN data is not seriously corrupt?

Just a guess, but the Canadian “hot spot” that developed between Monday and Wednesday might be related to this (bold mine):

April 2006: HadISST ocean temperatures are now used only for regions that are identified as ice-free in both the NOAA and HadISST records. This change effects a small number of gridboxes in which HadISST has sea ice while NOAA has open water. The prior approach damped temperature change at these gridboxes because of specification of a fixed temperature in sea ice regions. The new approach still yields a conservative estimate of surface air temperature change, as surface air temperature usually changes markedly when sea ice is replaced by open water or vice versa.

Maybe one or the other was not available for the Canadian Arctic (hence gray) on Monday.
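The rule in the quoted change note is easy to state as a predicate. A sketch (the function and return labels are illustrative, not GISS code): a gridbox gets an ocean temperature only when both records call it ice-free, so a cell would stay gray whenever either input is missing or the records disagree:

```python
def gridbox_source(noaa_ice_free, hadisst_ice_free, sst_available=True):
    """Apply the rule from the April 2006 change note: ocean (HadISST)
    temperatures only where BOTH NOAA and HadISST are ice-free;
    everything else falls back to land stations or gray (no data)."""
    if noaa_ice_free and hadisst_ice_free and sst_available:
        return "ocean"
    return "land-or-gray"

print(gridbox_source(True, True))    # ocean
print(gridbox_source(True, False))   # land-or-gray: the records disagree
```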

Look here to see the startling visual difference possible between global temp anomaly maps, according to what message you want to convey. Humlum compares to a 1998-2006 baseline, NASA to 1951-1980. Thus NASA says IT’S HOTTER FOLKS without any outright porkies. Didn’t Oz have a cool year this year? Oh, perhaps not in comparison with 1951-1980? The NASA pics here have no decent SST – all grey – and we know SST has been dropping overall recently.

The heat has been increasing, not in the troposphere but in the noosphere, as Crichton described in his classic speech Aliens Cause Global Warming where he says those memorable words “Consensus is the first refuge of scoundrels”. For decades, environmental science has got away with serious misrepresentation of data, and Climate Science is only the latest arena for a growing malfeasance.

I hope episodes like this – even if they are of themselves “innocent” – help flush the generic issues of deception and suppression in Science out into the open.

People claiming on other websites that Oct data for Resolute, Cambridge Bay and Eureka NWT are not in the latest download, should really check their files again. (To make it easy the station numbers are 403719240005, 403719250005 and 403719170006).

Who claims something isn’t somewhere? I thought the question was rather why the plot was incomplete on Monday, if the data from stations in Canada hadn’t changed.

People claiming on other websites that Oct data for Resolute, Cambridge Bay and Eureka NWT are not in the latest download, should really check their files again. (To make it easy the station numbers are 403719240005, 403719250005 and 403719170006).

Geeezzz…. Hi Gavin, why don’t you name those websites? How are we supposed to believe what you are saying if you hide their names from us? It looks like you are fond of hiding data.

Demesure wrote:
…. Near-surface air temperatures in the Beaufort Sea north of Alaska were more than 7 degrees Celsius (13 degrees Fahrenheit) above normal and the warming extended well into higher levels of the atmosphere. These warm conditions are consistent with rapid ice growth.

In the Finnish long-term temperature data from Helsinki, a sudden rise in temperature is visible in December, exactly when the sea freezes outside Helsinki. Maybe NSIDC has the same phenomenon in mind for Alaska.

What are you talking about? The sea in front of Helsinki may freeze in December, January, or February, or not freeze at all, as has happened several times during recent winters. And whether the sea has frozen or not, there are often noticeable changes in temperature, like from -20 to 0 C overnight when the wind turns from the north to S or SW, bringing warm and humid Atlantic air to Scandinavia.

Looking at the gistemp map web pages, the choices available for projection are rather odd.

If someone were selling cat food or financial services, it would be understood that you pick and choose the axes of your graph to make your product look better than the competition. If you are a government-funded organisation, then, really, you should choose graphics that best represent the data, so non-numerate people won’t be misled.

Both projections offered overstate, at least visually, the temperature anomalies in the polar regions. Given that this data is averaged to produce a single global temperature, it would be much better to use an equal area projection. Not providing that, at least as an option, is deceitful. Perhaps not deliberately deceitful.

“People’s ideas of geography are not founded on actual facts but on Mercator’s map.” (Monmonier, 21) In 1947, a U.S. State Department geographer wrote in Scientific Monthly that the “use of the Mercator projection for world maps should be abjured [renounced] by authors and publishers for all purposes.”

The warm spot over Australia is probably right and probably appeared due to more data points becoming available.
According to the Australian Bureau anomaly charts, October had a positive anomaly of 1.5C, September 1.38C, and August was negative at -0.94C.

Just to be clear – is 908 the number of stations in a single corrupt data file, which is one of several such files? Or is 908 the total number of GHCN stations used in GISTEMP? Seems surprisingly low, if the latter.

JP

[Response: The rate at which stations report varies. This month 908 stations were reported by Nov 10 for October. The number of stations that will report eventually is about 2000. Of those 908 stations, 90 had this oddity – which is a significantly higher percentage than one would expect. – gavin]

All this fuss over only 10% of the stations to date reporting false data! What error rate does Gavin’s “significantly higher than one would expect” imply?

Apparently the fun has just begun. What will the rest of the stations show when they report in? Or are the errors not uniformly distributed?
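One way to make the “significantly higher than one would expect” remark concrete is to compare the observed 90-of-908 proportion against an assumed base rate. The 2% background rate below is purely an illustrative assumption, not a figure from NOAA or GISS:

```python
import math

def z_score(k, n, p0):
    """Normal-approximation z statistic for k 'bad' records out of n
    against an assumed base error rate p0."""
    return (k / n - p0) / math.sqrt(p0 * (1 - p0) / n)

observed = 90 / 908               # ~9.9% of early-reporting stations
z = z_score(90, 908, 0.02)        # vs a hypothetical 2% background rate
print(round(observed, 3), round(z, 1))
```

Even against a generous assumed background rate, the oddity rate is many standard errors above expectation, which is presumably all Gavin’s phrase was meant to convey.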

Re: Mike Bryant (#34),
I have just done the same for the suspect area of Siberia.
Bratsk: GISS 8.1, Weather Underground 1.
Irkutsk: GISS 9.9, WU 1.
That’s a difference of 7 to 9 degrees C.
Combined with #38, this is highly suggestive that there is still something wrong with the GISS data for Siberia.

Re: Mike Bryant (#35),
Mike, apologies for not labeling that more clearly. Those are the Soviet Gulag camps – a thematic play on the data quality.
I was then guessing population distribution might be similar.

Here is the corresponding image from RSS TLT for October 2008 from their FTP site.
The comparison is quite interesting. There is some consistency – both show an anomaly of around +2 in Eastern Canada and central Australia, and cooler spots in Western Canada and around Iceland. But RSS has an anomaly of around +2 in Siberia, which is inconsistent with the GISS claim of up to 8 degrees.

That analysis has now been pulled (in under 24 hours) while they await a correction of input data from NOAA. Yes only after Steve Mac informed you [edit]

They pulled the data AFTER I (Steve Mc) sent them an email notifying them of the error (which had been pointed out to me by a CA reader and which I had confirmed). They did not identify the error on their own. [edit]

[Response: You and McIntyre are mistaken. The first intimation of a problem was posted on Watt’s blog in the comments by ‘Chris’ at around 4pm EST. By 7pm in those comments John Goetz had confirmed that the NOAA file was the problem. Notifications to the GISTEMP team of a problem started arriving very shortly after that, and I personally emailed them that evening. However, no action was taken until the next morning because people had left work already. They had decided to take the analysis down before 8.14am (email time stamp to me) since the overnight update to the NOAA file (uploaded 4.30am) had not fixed the problem. McIntyre’s intervention sometime that morning is neither here nor there. Possibly he should consider that he is not the only person in the world with email, nor is he the only person that can read. The credit for first spotting this goes to the commentators on WUWT, and the first notification to GISTEMP was that evening. – gavin]

Paulm #40
WU reports a mean temperature for Bratsk for October 2008 of 1.0, as does tutiempo.net. However, it is harder to find the long-term average for the month. Climate-charts.com reports the October average (no length of record stated) to be -0.4. This gives an anomaly of +1.4. Does the 8.1 from GISS represent the anomaly or the actual temperature? Either would seem to be incorrect.

For previous years we see just the usual little GISS exaggeration, but the 08 figure is way off.

As a final check you can get the data from the russian site http://meteo.infospace.ru.
Using their numbers I get an October 08 mean of 0.85 for Irkutsk, 0.92 for Bratsk, in agreement with the WU numbers but not the GISS ones. These temperatures are normal for this area (worldclimate gives an average of 0.6 for October in Irkutsk).

Those two cities both had 999 for Sept and figures of 9.9 and 8.1 for Oct in the ‘corrected’ GISS data.
In fact these are the September numbers!
By an amazing coincidence, Gavin seems to have noticed this shortly after my post here.
What an astonishing comedy of errors.
I am just amazed (sorry, running out of suitable words that won’t get snipped) that GISS can put up some incorrect data, have the error pointed out to them, and then put up ‘corrected’ data without performing even the most elementary checks on the ‘corrected’ data.
It was perfectly obvious that the ‘corrected’ data still looked suspicious, and simple to confirm that it was wrong.

Bizarre! Now the Regular projection has reverted to the first discredited map; but the Polar projection still uses the second lot of partially corrected data – as of 16:03 GMT. Why don’t they pull the figures until they have checked properly?

#22. It’s all too typical of the Team’s tactics – Gavin fabricated a claim that no one had made and then self-righteously scourged the straw man. As you say, the issue was that there were no relevant changes in the available Canadian data between Monday and Wednesday.

John S sent me an email mentioning the Irkutsk problem at 3 am Eastern, so a number of people noticed the problem. I’ve checked Irkutsk and Bratsk and agree that both these sites still have Sept data. Also Erbogacen and Kirensk, in case these haven’t been noticed.

I checked these sites against GHCN-Daily data since the transfer from GHCN-Daily to GHCN was said by some to be the origin of the problem. The GHCN-D data looked OK for Erbogacen and Kirensk, so it’s hard to see how they could have uniquely screwed up some sites and not others in a program.

But something to add to the puzzle. GHCN-Daily has no data for Irkutsk or Bratsk later than 2000. So what’s the connection between GHCN-Daily and GHCN? Where does GHCN get their data from? And what exactly did their most recent patch consist of?
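The cross-check against GHCN-Daily amounts to rebuilding a monthly mean from the daily file. A sketch, assuming the common (TMAX+TMIN)/2 daily-mean convention and toy values; GHCN’s actual monthly processing is more involved than this:

```python
def monthly_mean_from_daily(days):
    """Estimate a monthly mean (deg C) from (tmax, tmin) pairs using
    the simple (tmax + tmin) / 2 daily-mean convention."""
    daily = [(tmax + tmin) / 2 for tmax, tmin in days]
    return sum(daily) / len(daily)

# Toy October extremes for one station (deg C)
october = [(5.0, -3.0), (2.0, -6.0), (0.0, -8.0), (-1.0, -9.0)]
rebuilt = monthly_mean_from_daily(october)
print(rebuilt)  # compare this rebuilt mean against the v2.mean value
```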

I noticed the same thing with Bor. The .dly ends at Dec 1999 but the v2.mean file has data to present, which is picked up by GISS. The timestamp on Bor’s .dly gets updated daily as well. That’s curious.

So what is the input to V2.mean? What is the significance of the .dly files?

Re: brendy (#60), When the Arctic ice freezes over, the ice acts as a blanket that stops heat loss to the air. When wind breaks up the ice a little, heat is able to escape. GCMs that treat the Arctic ice as a uniform sheet end up with the air much too cold. There is no way that the air gets warmer to accompany the freezing over of the ocean; that is backwards.

I still do not understand all the differences between the two charts. So extra data was apparently added to Canada and Australia at the same time as correcting the Russian data (Is it normal practice to publish these charts incomplete and then modify later?) but why has the UK gone from a positive to negative anomaly, and the hot spot over the central Med and Libya cooled? I can see why adding data could fill in the grey bits, but why has the Alaskan cold patch extended into parts of Northwest Canada, that was formerly coloured orange? Does anything NASA has said explain any of this?

There surely seems to be something wrong with the surface station data in Siberia or NE Asia. Here’s an image in which I use a base period of 1979-1998 for the GISTemp map (with 250 km smoothing) in order to compare it with the RSS/MSU global TLT image for October:

There’s nothing on the RSS image to compare to the “brown” on the GISTemp image. What is the possibility that wherever you see “brown” on the GISTemp map, there are some serious data quality issues? Then, you take that, and 1200 km smoothing, and you drag those quality issues all over most of NE Asia.

BTW, I don’t have an answer to Steve’s questions about why all the red in the new image for Canada was not there the day before, but I’m sure that what we see in the newer image is just smoothing a few northern Greenland stations all over the place. And interestingly, the mass of Greenland is blue in the RSS image, and missing (gray) in the GISTemp image.

The Oct-Nov-Dec GHCN-ERSST numbers. Will they be more like 2007 at .53 .48 and .40 or more like 1990 at .40 .41 and .37? Perhaps 2000 at .19 .26 and .20? Oh, those crazy monthly anomalies, what will they do.

Regarding the “warming”: a .3 rise in the anomaly is well under the 1-degree resolution of the min and max measurements recorded at any given station. And it’s more like a .7 rise in the anomaly trend over 130 years, or .54 per century – but that’s also within the 1-degree measurement resolution.
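The per-century conversion in that comment checks out with one line of arithmetic:

```python
rise_deg_c = 0.7      # anomaly-trend rise quoted over the record
record_years = 130
per_century = rise_deg_c / record_years * 100
print(round(per_century, 2))  # 0.54 deg C per century, as stated
```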

Gavin Schmidt: [Response: There are still (at least) four stations that have Oct data in place of september data but that didn’t report september data (Kirensk, Irkutsk, Bratsk, Erbogacen). I expect that the SEP=OCT check that NOAA did, just didn’t catch these. Still, this is embarassing – but will be fixed today. Nobody is ‘indifferent’. – gavin]

Is there a NASA policy prohibiting NASA employees from acknowledging CA as a source? These are the 4 stations listed in my post.

And BTW, why does he say that these stations didn’t “report” September data? Did he think of checking the GHCN-Daily files, where daily max and min data from Erbogacen and Kirensk are readily available?

This is all about the need for climate science, in general, to appear infallible. They are projecting catastrophe and using the catastrophe to require fundamental changes to society. If this were recommended by a bunch of guys working in obscure departments at obscure universities and government agencies, nobody would pay attention.

However, these are the people who hold the knowledge of arcane computer code and mathematical techniques. Their opinions are not mere opinions but the findings of infallible science. Anything that would cast doubt on this desired perception cannot be tolerated; it must be cast out and ignored.

Come on, RC acknowledges you all the time, using code. Sometimes you are “many”, sometimes you are “some”. Today you are “People claiming on other websites”.

Gavin is pulling out all his classic Gavinisms. Here’s one:

Why anyone would automatically assume something nefarious was going on without even looking at the numbers is a mystery to me

So… Uhmm… Who is anyone? How does gavin know this ambiguous entity assumed something nefarious was going on? Why does he think that assumption is automatic?

Does Gavin own a special Ouija board that informs him what this mysterious “anyone” person assumes? If so, maybe his all revealing Ouija board can also explain why this elusive “anyone” person assumes whatever it is Gavin thinks they assumed.

To be serious: Gavin has developed an unfortunate habit of criticizing unnamed people for things no one anywhere appears to have done. The criticism is unrebuttable, because there may be someone somewhere on this planet (or living inside an AOGCM) who has done whatever it is Gavin is currently whining about. Who knows? If so, we can all agree this “anyone” entity should not automatically assume anything nefarious is going on.

Gavin is incorrect about GISS still waiting for another 1000+ stations to report.

For 2007, GISS only used about 1,051 stations in their analysis. In late January 2008 I made up a spreadsheet of the GISS station inventory, indicating the first and last years of annual data for all 6,200+ stations in their inventory.
You can find it here. http://spreadsheets.google.com/pub?key=pow204GazN9gv5FwsHw6zVA

In 2006 there were more than 2,100 stations with annual data. About a third of the way through 2007, they dropped more than 1,000 mostly US stations by inserting 999.9 for monthly values. I suspect most of the stations still report; they just no longer update the data.

Here is a kml file you can download and save on your machine for use in Google Earth: http://BobK07.googlepages.com/gissinfo
You’ll have to append a .kml extension to the file. Clicking on it will load it into Google Earth. The pins will show the location of each station that had annual data in either 2006 or 2007. Under Temporary Places/GISS Stations you can uncheck the inactive folder to show only those still used in 2007. Pin colors are green for those stations designated rural by GISS and red for all others.
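A pin file of that sort takes only a few lines to generate. A sketch with a toy station list (the coordinates and rural flags below are illustrative, not taken from the GISS inventory; KML colors are alpha-blue-green-red hex):

```python
def station_kml(stations):
    """Build a minimal KML document: green pins for rural stations,
    red pins for all others. stations = [(name, lat, lon, rural?), ...]"""
    placemarks = []
    for name, lat, lon, rural in stations:
        color = "ff00ff00" if rural else "ff0000ff"   # KML color is aabbggrr
        placemarks.append(
            "<Placemark><name>%s</name>"
            "<Style><IconStyle><color>%s</color></IconStyle></Style>"
            "<Point><coordinates>%f,%f,0</coordinates></Point>"
            "</Placemark>" % (name, color, lon, lat)
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

# Toy inventory: (name, lat, lon, rural?)
doc = station_kml([("Resolute", 74.72, -94.98, True),
                   ("Eureka",   79.98, -85.93, False)])
print(doc.count("<Placemark>"))  # 2
```

Saving the returned string with a .kml extension gives a file Google Earth can open directly.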

Lucia, a student at Columbia University is not permitted to “appropriate” results without providing “appropriate credit”. Gavin is doing this all the time. Maybe you understand the “code” but not everyone does. So explain to me exactly why Gavin is exempt from academic code of conduct obligations.

Obviously, Gavin doesn’t give credit. This habit of his causes “many” to distrust him. He has other habits that result in additional distrust.

When I watched his debate with Crichton, Lindzen, etc., I was tempted to make a list of the Gavinisms that result in Gavin sounding like a used-car salesman rather than a scientist. But… I didn’t want to watch that whole thing multiple times. Not specifically mentioning who he is criticizing, then creating strawmen and debating them, is another.

He may be unaware of some of these habits – but I suspect others are intentional.

It’s hard to keep track of all of this. After Schmidt demanded an apology for some supposed offence in this post, in today’s edition, the northern Canada stuff has disappeared and is now back to what it was when all of this started. WTF??

Re: Steve McIntyre (#85),
There is a new GHCN timestamp 11/13/2008 01:06:00 PM (compare #57). Has there been any statement from NOAA NCDC about the issue? Even a change log file? I think it is quite ridiculous if the only information available is some hearsay blog comments by a NASA employee.

Both historical and near-real-time GHCN data undergo rigorous quality assurance reviews. These reviews include preprocessing checks on source data, time series checks that identify spurious changes in the mean and variance, spatial comparisons that verify the accuracy of the climatological mean and the seasonal cycle, and neighbor checks that identify outliers from both a serial and a spatial perspective.

The darkest color on the anomaly map has been bulldozed off the map in the West. In the East it has been pushed North once, and then shoved again further North, almost entirely into the Arctic Ocean. Is there enough fuel left to topple it over the edge of the map? Stay tuned for the exciting conclusion of:

These are the current polar views for October from Gistemp (I cleared my cache to be sure).

At left 1200km Smooth; at right 250km smooth.

Obviously the graphics bug is still not sorted out.

Also on this page:
What’s New
Nov. 13, 2008: NOAA corrected GHCN data for October, 2008 again (second correction).
Monthly figures have been corrected (third version).
Nov. 12, 2008: Monthly graphs and maps were created with corrected NOAA/GHCN data.
Nov. 11, 2008: The monthly graphs and maps with yesterday’s October data were removed.
Nov. 10, 2008: Monthly graphs and maps were updated with NOAA/GHCN October data, which had some problems.

When it comes to grid cells and weather stations I consider the definition to be the distance from the center of the cell to the stations around it. Sort of like the earth’s orbital radius around the sun. You measure center to center. Here is the GISS definition from their maps page. “Smoothing radius: Distance over which a station influences regional temperature, either 250 km or 1200 km (standard case = 1200 km).” Reads to me somewhat like my definition. Evidently they mean something else since it doesn’t work out using center to center.
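The center-to-center reading can be tested directly with great-circle distances. A sketch, assuming the straightforward interpretation of the GISS definition (a station influences a cell if the great-circle distance from the cell center is within the smoothing radius); the coordinates below are approximate city locations used only for illustration:

```python
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def influences(cell_lat, cell_lon, st_lat, st_lon, radius_km=1200.0):
    """Does a station fall within the smoothing radius of a cell center?"""
    return great_circle_km(cell_lat, cell_lon, st_lat, st_lon) <= radius_km

# A St. Louis cell center vs. an Atlanta station: inside 1200 km?
print(influences(38.6, -90.2, 33.7, -84.4))
# The same cell vs. a Los Angeles station: well outside it.
print(influences(38.6, -90.2, 34.05, -118.24))
```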

I recently commented about Australia having only a handful of 43 stations reporting annual temperature in 2007, versus the opposite in 2006, when only a handful did not report. I compared maps of the two years using a 250km smoothing radius and didn’t notice as big a difference in coverage as I expected. That called for further investigation, so I isolated down to a map of winter DJF 2007 using 250km smoothing. I figured I could find a station with no other station co-mingling its data. Darwin, Australia (lower right) was the station I selected to look at in the data file created by the mapping process.

I don’t know what method GISS is using to generate all these cells, but it sure doesn’t look right to me. I assume the same situation exists at the 1200km radius. With numerous stations overlapping the same cells to varying degrees, there must be quite a number of excess cells being generated around the edges which should contain no data. In this example each area had only one station contributing data.

Re: Bob Koss (#93), Bob, I made a post here a few months back about how GISS puts together their grid cells. I couldn’t find the exact thread, but here is a summation that I posted elsewhere, as well as a link to GISS’ webpage where they explain how they put the data together. Your one station in Darwin probably has almost every other station in Australia contributing to its temperature reading.

“I went and checked GISTEMP, and they do explain how they divide up the Earth for their statistical averaging on this page. But one thing snagged at my mind a bit. Step three of the process involves dividing the planet into 8000 grid boxes. This would create boxes of roughly 63,759 square kilometers within which an average temperature anomaly is computed and then supplied to the global computation. It is how the average temperature is computed that bothers me.

That average temperature anomaly is computed from the temperature stations within that grid box, and also any within 1200 kilometers. To give a graphic perspective so people can visualize this, let’s say my grid box is centered in St. Louis, Missouri. My local grid average is determined by not only the stations within my grid (roughly within 375 kilometers of me), but also from stations in Pittsburgh, Atlanta, Dallas, and Minneapolis, to name just a few. Basically, the radius of effect means that my one grid box is not determined from the 63,759 square kilometers within it, but from the 4.52 million square kilometers around it, an area 71 times as large.

Based upon this computation design, the United States should be represented by roughly two grid boxes if you are looking at total area involved in determining the average temperature anomaly compared to total area of the US (9,826,630 square kilometers total area of the United States divided by the 4,521,600 square kilometers derived from the 1200 kilometer radius of effect). But the US grid sample is still based upon the 63,759 square kilometer grid box determination, so the total US grid boxes are 154. That would seem to me to heavily weight the US sample.

Why the 1200 kilometer area of effect? Doesn’t this mean that certain stations get counted multiple times? Based upon my math above, it would suggest many stations in the United States get counted over 77 times. Does the temperature in St. Louis really effect the temperature in Atlanta, and vice versa? Surely, we could just use the temperatures provided by the stations within the grid boxes to determine that grid box’s average anomaly and work out a good global average from that.”

By the way, don’t shoot the messenger, I don’t buy it, I am just reporting it.
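The areas in the quoted passage can be reproduced in a few lines; this only checks the arithmetic, and says nothing about how GISS actually weights the stations:

```python
import math

EARTH_SURFACE_KM2 = 4 * math.pi * 6371.0 ** 2   # ~510 million km^2

box_area = EARTH_SURFACE_KM2 / 8000        # one of 8000 equal-area boxes
influence = math.pi * 1200.0 ** 2          # flat-disc area of the 1200 km radius
ratio = influence / box_area

# ~63,758 km^2 per box, ~4.52 million km^2 of influence, ~71x the box
print(round(box_area), round(influence), round(ratio))
```

The ~63,758 km² per box and ~4.52 million km² radius of influence reproduce the quoted figures, with the area ratio coming out at about 71.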

“An expected paradox: Autumn warmth and ice growth”

“Higher-than-average air temperatures”

Over much of the Arctic, especially over the Arctic Ocean, air temperatures were unusually high. Near-surface air temperatures in the Beaufort Sea north of Alaska were more than 7 degrees Celsius (13 degrees Fahrenheit) above normal and the warming extended well into higher levels of the atmosphere. These warm conditions are consistent with rapid ice growth.

The freezing temperature of saline water is slightly lower than it is for fresh water, about –2 degrees Celsius (28 degrees Fahrenheit). While surface air temperatures in the Beaufort Sea region are well below freezing by late September, before sea ice can start to grow, the ocean must lose the heat it gained during the summer. One way the ocean does this is by transferring its heat to the atmosphere. This heat transfer is largely responsible for the anomalously high (but still below freezing) air temperatures over the Arctic Ocean seen in Figure 3. Only after the ocean loses its heat and cools to the freezing point, can ice begin to form. The process of ice formation also releases heat to the atmosphere. Part of the anomalous temperature pattern seen in Figure 3 is an expression of this process, which is generally called the latent heat of fusion.

In the past five years, the Arctic has shown a pattern of strong low-level atmospheric warming over the Arctic Ocean in autumn because of heat loss from the ocean back to the atmosphere. Climate models project that this atmospheric warming, known as Arctic amplification, will become more prominent in coming decades and extend into the winter season. As larger expanses of open water are left at the end of each melt season, the ocean will continue to hand off heat to the atmosphere.

Near-record October growth rates

As mentioned above, unusually high air temperatures go hand-in-hand with the rapid increase in ice extent seen through most of the month of October.”
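To get a feel for the magnitudes in the NSIDC passage, here is a back-of-envelope sketch using textbook constants rather than anything from the article (latent heat of fusion ≈ 334 kJ/kg and density ≈ 917 kg/m³ for fresh-water ice; sea-ice values are somewhat lower, so treat the result as an upper bound):

```python
L_FUSION_J_PER_KG = 334_000  # latent heat of fusion, fresh-water ice (assumed)
RHO_ICE_KG_PER_M3 = 917      # density of ice (assumed)

# Heat released to the atmosphere when 10 cm of ice forms over one square meter
thickness_m = 0.10
heat_j = L_FUSION_J_PER_KG * RHO_ICE_KG_PER_M3 * thickness_m

print(round(heat_j / 1e6, 1))  # ~30.6 MJ per square meter
```

Spread over a day, roughly 30 MJ/m² is an average flux of about 350 W/m², which gives a sense of why freeze-up can support anomalously warm (but still below freezing) near-surface air.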

I’ll put it up shortly. Doing a little rewrite. If the image doesn’t display properly I’ll email you a copy along with the image. Feel free to edit as I’m quite often clear as mud when explaining things.

When it comes to grid cells and weather stations I consider the definition to be the distance from the center of the cell to the stations around it. Sort of like the earth’s orbital radius around the sun. You measure center to center. Here is the GISS definition from their maps page.

Reads somewhat like my definition, but fuzzier. Evidently they mean something else, since the distances don’t work out using center to center.

I recently commented about only a handful of Australia’s 43 stations reporting annual temperature in 2007, versus the opposite in 2006, when only a handful did not report. I compared maps of annual data for the two years using a 250km smoothing radius and didn’t notice as big a difference in coverage as I expected. That called for further investigation, so I isolated down to a map of Winter (Dec-Feb) 2007 using 250km smoothing. I figured I could find a station with no other station commingling its data. Darwin, Australia (lower right) was the first station I looked at in the data file created by their mapping process. There are no other stations close enough to confound the data.

95.00 -9.00 .2736 407km
97.00 -9.00 .2736 356km
99.00 -9.00 .2736 429km
Nine cells for Cocos and seven for Darwin. That leads to a question: why not an equal number of cells for each station? Cocos is showing a larger grid footprint than Darwin.

What they seem to be using as a qualifier is any cell where the station is less than 250km from any corner. They do something akin to scribing a circle with a radius of 250km around the station, with any cell boundary that intersects even a little getting a value. Boy, that sure can’t be right unless the goal is to maximize cell coverage. Scribing a circle around the cell center and only using stations falling within the scribed area seems a reasonable way to go.

Below is an example of what I think is the GISS method, applied to the first of the last three cells above, cells that don’t qualify when measured cell center to station. As can be seen, each cell has four opportunities to qualify.
95.00 -9.00 cell center
cell corners
94.00 -8.00 559km
94.00 -12.00 305km
96.00 -8.00 475km
96.00 -12.00 90km *
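The corner-versus-center test above is easy to reproduce with a haversine distance. The station position is my assumption for Cocos Islands (about 12.19 S, 96.83 E, not a figure from the comment), with the usual 6371 km Earth radius; on those assumptions the cell-center distance comes out near 407 km and only the 96.00/-12.00 corner falls inside 250 km, matching the figures above to within a few kilometres:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_km * math.asin(math.sqrt(a))

station = (-12.19, 96.83)  # assumed Cocos Islands position
center = (-9.0, 95.0)      # cell center from the list above
corners = [(-8.0, 94.0), (-12.0, 94.0), (-8.0, 96.0), (-12.0, 96.0)]

print(round(haversine_km(*station, *center)))  # ~407 km: fails a center-based test
for lat, lon in corners:
    d = haversine_km(*station, lat, lon)
    # only the (-12, 96) corner is within the 250 km radius
    print((lat, lon), round(d), d <= 250)
```

So under a corner-based rule this cell qualifies through its one close corner, even though its center is some 407 km from the station.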

I assume the same situation exists at the 1200km radius. With numerous stations overlapping the same cells to varying degrees, there must be quite a number of excess cells being generated around the edges which should contain no data. This would seem to magnify the value of some stations over others when creating a grid.

Thought I had a handle on the GISS method, but I don’t. Generating all those cells doesn’t look right to me. Nine cells for Cocos versus seven for Darwin; maybe there could be a difference of one, but two seems unlikely.

I wonder if they could be confusing miles and kilometers. NASA did that with one of their Mars missions a few years ago. All the distances above would come in under 250 if read as nautical miles.
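The nautical-mile hunch is at least arithmetically consistent, as a quick check shows (this only tests the commenter’s speculation against the three Cocos cell-center distances quoted earlier; it says nothing about what GISS actually does):

```python
NM_TO_KM = 1.852               # one nautical mile in kilometres
limit_km = 250 * NM_TO_KM      # a 250 nmi radius expressed in km

center_distances_km = [407, 356, 429]  # Cocos cell-center distances from above

print(round(limit_km))  # 463
print(all(d < limit_km for d in center_distances_km))  # True
```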

Concerning Arctic waters freezing and thus releasing heat that warms the adjacent atmosphere, I think the mechanism of heat transfer involved here has to be examined. If the mechanism was a convective one, then indeed Arctic surface stations would measure and show warmer than normal surface temperatures. But convection is not the heat transfer mechanism here. It’s almost completely thermal radiation into space, and so I don’t think surface stations would measure warmer than normal temps – in my view.
The logic of the freezing water warming the adjacent atmosphere near the surface boundary makes little sense to me. Water freezes more quickly than normal because it is colder than normal. If the surrounding atmosphere were warmer than normal, then the water would not have frozen as quickly as it did in October. Maybe the heat transfer is much more complex than I’m imagining it to be.

Hot water can in fact freeze faster than cold water for a wide range of experimental conditions. This phenomenon is extremely counterintuitive, and surprising even to most scientists, but it is in fact real. It has been seen and studied in numerous experiments. While this phenomenon has been known for centuries, and was described by Aristotle, Bacon, and Descartes [1-3], it was not introduced to the modern scientific community until 1969.

Earlier, Dr Osborne, a professor of physics, had visited Mpemba’s high school. Mpemba had asked him to explain why hot water would freeze before cold water. Dr Osborne said that he could not think of any explanation, but would try the experiment later. Back in his laboratory, he asked a young technician to test Mpemba’s claim. The technician later reported that the hot water froze first, and said “But we’ll keep on repeating the experiment until we get the right result.” However, repeated tests gave the same result, and in 1969 Mpemba and Osborne wrote up their results.

Repeating the experiment until one gets the “right” answer: this is similar to parameterizing a GCM until one gets a correct hindcast. One NASA student identified this as the means by which GCMs were verified and validated. After all, they give the right answers, so they must be correct. This V&V technique has a long history.

Re: Stan Palmer (#108)
No doubt. I posted the link because the concept of warmer air during a freeze up was related. However, I do not think that the arctic ocean operates like water in a cup. If anything, I suspect the people making the claim are grasping at anything that could explain the observations without undermining their preferred hypothesis.

Re: Buddenbrook (#112), I hope you would consider a counter-argument. Perhaps this is the best time to depoliticize the conversation: both sides should come to the agreement that a mistake was made, and that GISS will most probably institute some QA into their procedure. Whether the different teams play together better is a somewhat moot point. I think both sides should ask, “Did the science get better?” I can understand Gavin’s reaction that this is a small issue. I can also understand those who think there are other issues with GISS, which this last problem and the Y2K problem indicate, and that these should be followed up on. I would agree with both. But I think it likely that when, and if, that is done, the mistakes found in GISS, if any, will not be overwhelming. The reason I think this is that the four most used metrics are in close agreement. I would hope that both sides would look at continuing to improve, if possible. And that of course is the real question: can improvements be made?

John F. Pittman, I would like to believe so, but I’m quite certain it is an impossible wish. As the doubts start to grow, politics and psychology will mix with science in increasing amounts. Scientists are not immune to the human factor. History knows several examples.

The mauve arrow points to an oddity – a blazing hot spot around Greenwood, MS, with a peak anomaly approaching +8F. Greenwood’s Preliminary November Climate Data form (“CF6” report, available here) hints at the cause:

For some reason Greenwood reported only the first seven days of the month, a warm spell in which the temperature anomaly was +7F. The report missed the rest of the month, which was anomalously cool across the region. Despite the abbreviated and unrepresentative nature of the Greenwood data, it was apparently used in the map construction. Thus the hot spot.

No doubt this will be corrected later but meanwhile an oddly flawed map has been issued to, and perhaps used by, the public.

By the way, Peter Stott of the Met Office Hadley Centre has a new article in press at GRL titled “Variability of high latitude amplification of anthropogenic warming”. Abstract:

Climate models have long predicted that the high latitude response of near-surface air temperatures should be greater than at lower latitudes as a result of snow/ice feedbacks. Here we show that a regression analysis of observed global surface temperatures and anthropogenic and natural forcings could misleadingly suggest that climate models fail to capture the observed zonal mean pattern of response to anthropogenic forcings. A better approach to detecting changes and to determine consistency of climate models and observations is to use multiple features of the response pattern derived from physically-based climate models, as has been done in optimal detection studies. We show that multi-variable fingerprints can more easily detect anthropogenic changes thereby offering the potential to more robustly quantify anthropogenic influence on aspects of the climate system such as the cryosphere and the hydrological cycle.

5 Trackbacks

[…] It was thought that the issue with the GISS temperature anomaly for October related to the northern hemisphere, in particular parts of Siberia. But the corrected map from the Hansen team, to accompany the corrected anomaly, now shows changes to Australia and the Canadian Arctic Islands, in particular warming for October, since Monday, November 10. Read more here. […]