Heh.

[Tom Karl, Director of the National Climatic Data Center] We are getting blogged all over for a cover-up of poor global station and US stations we use. They claim NCDC is in a scandal by not providing observer’s addresses. In any case Anthony Watts has photographed about 350 stations and finds using our criteria that about 15% are acceptable. I am trying to get some our folks to develop a method to switchover to using the CRN sites, at least in the USA.

Hat tip: AJ

===============================================================

Note this email, because it will be something I reference in the future. – Anthony

152 thoughts on “Heh.”

Someone correct me if my recollection was wrong, but wasn’t the actual scandal at the time the fact that the NOAA and the “climatology mafia” didn’t check this *themselves*, and that their reaction in the moment was to argue and minimize instead of revisiting the data?

At this date we have surveyed well over a thousand stations (I, myself, have over 200 kills: a dozen f-t-f, the rest “virtual” and/or by direct interview). Most of the remaining USHCN1 stations are long closed, and some sites are known only after recent station relocations.

The recent switchover to USHCN2, substituting ~50 stations, does not, to my recollection, show a switchover to CRN stations — but I will give it a look-see and report back.

Also, USHCN1 showed a +0.6°C/century trend, while USHCN2 shows +0.72°C/century. But that’s adjusted data, of course. As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.

Brilliant! I bet reading that for the first time felt a bit like pay day!

Off topic, but here in the UK we’re experiencing heavy snowfall up and down the country.
Here, from the Press Association: “Forecaster Paul Mott, of Meteo Group, the weather division of the Press Association, said the deep freeze was likely to continue into next week, meaning the snow is likely to settle and much of Britain will remain carpeted in white.”

Until just a few hours ago, the Met (as reported by the BBC) was predicting “light snow” for tonight. Double “Heh!”

Is NOAA trying to reduce the number of observation stations? When I worked in the 1970s with air pollution monitoring stations, EPA decided to cut the number of reporting sites. Much of this was supposed to cut back EPA’s costs. But the air monitoring sites I worked with were financed by our local county agency, not EPA; I was working for a local county government environmental agency. It would not surprise me if they were to cut the number of meteorological stations too. One needs more stations and data, not fewer, for more accurate results.

On a related theme (good vs bad measurement locations), are there no sites that could be considered “pristine” and long-term? If any such sites exist, would it not be better to use the trends from a few good data points rather than attempt to adjust hundreds of not-so-good ones?

I’m thinking that National Parks would be good candidates, as real estate development, or land use changes, are generally not found in them. Perhaps this has been discussed already?

Anthony’s data has been a black eye on the ruling regime for years. Just cannot wait to see the spin on this one from the Warmists. It will be based on magical statistics that can “disappear” any and all offending empirical observations. If they were genuine scientists they would learn.

1.) The new USHCN2 sites are all COOP, not a CRN site among them. That leads to the question of what the raw CRN data is (gridded and ungridded) and why the suggestion to convert to CRN readings was not implemented.

2.) After the substitution, there are 1218 USHCN2 sites as compared with 1221 USHCN1 sites. By my count, 50 have been added and 53 discontinued. This has had the effect of somewhat increasing the adjusted historical trend, by ~0.12°C/century. This increase may be due to the change in stations, a change in adjustment, both, or perhaps some other factor entirely.
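
[For what it’s worth, the ~0.12°C/century figure is just the difference between the two published trends (0.72 minus 0.60). For anyone who wants to check a trend claim against a temperature series directly, the calculation involved is an ordinary least-squares slope. A minimal Python sketch, using made-up annual anomalies in place of the real USHCN data:

```python
import numpy as np

# Hypothetical annual mean temperature anomalies (deg C), one per year.
# A real comparison would load the actual USHCN1/USHCN2 series instead.
years = np.arange(1900, 2000)
rng = np.random.default_rng(0)
anoms = 0.006 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

# Ordinary least-squares slope in deg C per year, scaled to per century.
slope_per_year, _intercept = np.polyfit(years, anoms, 1)
trend_per_century = 100.0 * slope_per_year
print(f"trend: {trend_per_century:+.2f} C/century")
```

The point of such a check is not the trend of the synthetic data, of course, but that the slope calculation itself is trivial; the contested part is the adjustments applied before it. – ed.]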

Siting is only part of the problem. The type and style of instrumentation changed several times over the period of interest, and corrections are then made because the data produced are discontinuous. In principle this should not be necessary, but it is, and typically the corrections are poorly applied. Anthony has made several postings comparing various generations of instruments and the differences between them.

Hmm… but where are the defenders of the sacred ‘data’ – the warmista based trolls? Surely, one of them must be along soon to post some c*ck and bull story about how the data was accidentally fecked up but suddenly became ‘good’ again, once they had found it hiding under their discarded grant funding and pay slips!

Thanks to Anthony and Climategate 1 & 2, the intransigent people (warmers) have sat up and noticed. They circled the wagons to no avail; in military terms they have been fighting a classic rearguard action with a steady but quickening retreat. These emails and the poor ground station sitings are nothing new to readers at WUWT, but they are a sad commentary on the sordid state of so-called climate science and the political crass class. How any disciple of the church of global warming can defend the indefensible is a mystery to me!
Thanks again, Anthony, for your tireless battle to expose these charlatans.

I have surveyed (and submitted) one hard-to-reach weather station. I would not have known what to look for without the information in this blog over the past years. The problems I saw that can influence the measurements from just this one site made concrete what has been said here. How can anyone grounded in the scientific method believe that the world’s temperature has increased by 0.1 C (or whatever) due to readings from these places? Thanks Anthony and Evan!

1.) The new USHCN2 sites are all COOP, not a CRN site among them. That leads to the question of what the raw CRN data is (gridded and ungridded) and why the suggestion to convert to CRN readings was not implemented.

Perhaps people thought about what a data break it would be in the USHCN data and that there would be no long continuous records in the years after the break.

We’ve heard very little about the CRN site data. Perhaps people are waiting for some sizable fraction of the blessed 30 year climate period before trying to embrace the CRN data.

Or perhaps they’ve found the CRN data isn’t tracking the airport station data very well.

It would be a good amateur project to collect monthly CRN averages, post summaries and graphs à la GISS, UAH, etc., and produce data suitable for inclusion at Wood-for-Trees.
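
[The averaging step of such a project is simple enough to sketch in pure Python. The daily values below are invented for illustration; a real project would parse NOAA’s published CRN daily files instead. – ed.]

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical daily mean temperatures for one station, as (date, deg C).
daily = [
    (date(2011, 1, 1), -2.0), (date(2011, 1, 2), -1.0),
    (date(2011, 1, 3), -3.0), (date(2011, 2, 1), 1.5),
    (date(2011, 2, 2), 2.5),
]

def monthly_means(records):
    """Average daily values into a (year, month) -> mean deg C mapping."""
    buckets = defaultdict(list)
    for d, t in records:
        buckets[(d.year, d.month)].append(t)
    return {ym: mean(vals) for ym, vals in sorted(buckets.items())}

print(monthly_means(daily))  # {(2011, 1): -2.0, (2011, 2): 2.0}
```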

I’d be interested if I didn’t have this pesky job that keeps me busy. And fed. Fed is good.

Theo, if I remember right, Anthony’s project and data came under attack not only by the alarmists but also by many luke-warmers, and I could never understand why. What the luke-warmers hoped to gain by downgrading Anthony’s project beats me.

Bravo for your hard work Anthony and also Tim for reading each and every one of the ClimateGate2 emails

Email600 also includes:
‘.. IDAG is meeting Jan 28-30 in Boulder. You couldn’t make the
last one at Duke. Have told Ferris about IDAG, as I thought DAARWG
might be meeting in Boulder. Jan 31-Feb1 would be very convenient
for me – one transatlantic flight, I would feel good about my carbon
bootprint and I would save the planet!
Cheers Phil’
(bold inserted)

The acronyms are enough to bamboozle anyone. Or at least keep them on the outer.

evanmjones says
As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.)
———-
I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.

So doesn’t that make access to the NOAA code kind of irrelevant since the actual principles involved are well known.

Roger Sowell says: February 4, 2012 at 12:53 pm
“On a related theme (good vs bad measurement locations), are there no sites that could be considered “pristine” and long-term?”

Roger: In his article “What’s Wrong With the Surface record” John L Daly listed the following U.S sites as some in that category.
Ashton, Idaho; Basin, Wyoming; Cedar Lake, WA; Cold Bay, Alaska; Davenport, WA; Eagle Pass, Texas; Lamar, Colorado; Lander, Wyoming; Lampasas, Texas; Nome, Alaska; Spickard, Missouri; Tombstone, Arizona; Yellowstone National Park; Yosemite National Park HQ, California. http://www.john-daly.com/ges/surftmp/surftemp.htm
It would be interesting for those with the expertise to do a comparison with the graphs John lists and those now listed at Hansen’s Gistemp.
I do note that even using Hansen’s data “after removing suspicious records”, in almost all cases the places above not only still showed no “unprecedented warming”, but 1934 was still clearly the hottest year in the USA in the time frame covered.

Though DAARWG agendas/publications are not listed or easy to find, the Joint Office of Science Support says: ‘JOSS works closely with scientists and research managers to plan, organize and conduct scientific programs in the most productive, efficient and cost-effective ways.

JOSS works on many levels, from consulting with individual investigators to working with research managers and funding agency officials who are planning large-scale geophysical field experiments and monitoring projects….’ http://www.joss.ucar.edu/index.html

At the request of the scientific community and in collaboration with several Federal agencies, JOSS provides staff to manage major scientific programs, including:
United States Global Change Research Program (USGCRP)
USGCRP International and International Group of Funding Agencies for Global Change Research (IGFA),
Intergovernmental Panel on Climate Change (IPCC) Working Group II Technical Support Unit.
Climate Variability and Predictability (CLIVAR);
Ocean Technology and Interdisciplinary Coordination (OTIC); and
Carbon Cycle Science Program (CCSP).’ http://www.joss.ucar.edu/scientific_collaboration.html#education

Agendas and background information are listed on the JOSS website for the DAARWG dates; cf. email600 (11 Sept 2007).

The current director of the center is Tom Karl, a lead author on three Intergovernmental Panel on Climate Change science assessments.

He has played a key role in reports developed by the USGCRP and the Intergovernmental Panel on Climate Change (IPCC). He served as Co-Chair of the USGCRP’s US National Assessment and its recent Global Climate Change Impacts in the United States report. Additionally, Karl was the convening lead author for the Observations Chapter for the IPCC’s Third Assessment Report and was the review editor for the chapter on Observations for its Fourth Assessment Report. He has been the convening and lead author and review editor of all the major IPCC assessments since 1990.

Juan has directly observed many more stations than I have (problem is I don’t drive). Others have also done a lot more direct surveys than I have (Anthony’s done around a hundred or so, maybe more). It has been a true team effort.

Most of my “contribution” has been “virtual” observation. That is, locating stations using satellite resources and/or employing hints to run down the curators, call/contact them, and get exact descriptions of the locations so I can place them on a zoomed-in Google Earth map image and make the necessary measurements.

Sometimes I’ve made a direct spot using Google’s “Street Level” view. And sometimes “Birdseye Views” from (what is now) Bing maps would turn up a station.

All in all, I’ve clocked in somewhere between 200 and 250 (a number of which have been confirmed by later direct surveys, which we strongly encourage).

NOAA’s allegation that we would harass the volunteer curators is entirely unfounded. They were all helpful and highly cooperative — and were happy to be recognized and thanked for their steadfast public service and civic-mindedness.

Additionally, I’ve put up hundreds of “measurement views” to supplement surveys others have made. Google Maps’ “ruler” feature is invaluable in this regard and allows us to evaluate the station and assign a rating.
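
[The ruler-style measurement behind such a rating can be sketched in a few lines. The haversine formula below is standard; the class thresholds are an approximation of the Leroy-style siting classes the project uses, based on distance to the nearest artificial heat source — treat them as illustrative, not the official definitions, and the coordinates are made up. – ed.]

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def siting_class(dist_to_heat_source_m):
    """Illustrative Leroy-style class from distance to the nearest
    artificial heating surface (approximate thresholds)."""
    if dist_to_heat_source_m >= 100: return 1
    if dist_to_heat_source_m >= 30:  return 2
    if dist_to_heat_source_m >= 10:  return 3
    return 4  # (class 5 would be a sensor essentially on the source)

# A station vs. a nearby parking lot, about 0.0004 deg of latitude away.
d = haversine_m(40.0000, -105.0000, 40.0004, -105.0000)
print(round(d), "m ->", "class", siting_class(d))
```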

It has used up a lot of elbow grease but has been enormous fun. And, without going into any details, there will be more fun to come.

Jesse says
In regard to Prof Jones’ (?sarcastic) comment in email600 on feeling good about his carbon bootprint…. save the planet
———
I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.

I couldn’t find in Daly’s article where he pulled these stations out for commendation, but I’ll assume it’s in there somewhere. Tombstone can now be pulled from the list; they just moved the station to town. It now sits maybe 20 feet from a paved street.

Basic science: ensure your measuring devices are calibrated, that any issues are known, and that they’re used correctly. It’s what they teach in any good school, let alone university. But it’s not a standard that ‘climate science’ can achieve, and they wonder why people are skeptical of the ‘grand claims of certainty’.

Lazy Teenager: “I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.”

Sure, it’s easy enough to throw an offset on a reading. Doesn’t mean it’s correct, or that you’re not cooking the offsets to get the results you want.

Yo, go Anthony! And kudos to all the volunteers and of course the indefatigable Tom!

Meanwhile the Met Office-forecasted ‘light snow’ here in the UK has cut off my satellite TV signal and my young cats won’t go out. Time once again to cite Dr David Viner: “Cats just aren’t going to know what snow is!”

Tom: your video won’t play where I am (EMI copyright), so here’s a classic substitute:

evanmjones says:
February 4, 2012 at 3:55 pm
…
NOAA’s allegations that we would harass the volunteer curators is entirely unfounded. They were all helpful and highly cooperative — and were happy to be recognized and thanked for their steadfast public service and civicmindedness.

Indeed; by all indications, NOAA has been ignoring them and taking them and their equipment etc. for granted.

In March 2009 Phil Jones responded to an email from S Clegg, University of East Anglia (UEA). Clegg’s email thanked recipients for responding to requests for information for their Annual Report and another document placed in pigeon-holes. The new request (5/3/2009) was for information for a new publication titled ‘Leadership, Innovation and Collaboration’. Clegg provided the email recipients a few examples of the type and style of information required, e.g. Mike Hulme appointed Editor-in-Chief of “Wiley Interdisciplinary Reviews: Climate Change”, etc.

Jones replies:
1) Recent release of a paper in Nature (Geosciences): ‘…observed changes in Arctic and Antarctic
temperatures are not consistent with natural climate variability, but instead are directly attributable to human influence on the climate system as a result of the build-up of greenhouse gases in the atmosphere….The results show that these human activities have already caused significant warming in both polar regions, with likely impacts on polar biology, indigenous communities, ice-sheet mass balance and global sea level.’

2) ’ We would have highlighted the new set of UK Climate Scenarios (UKCP09), but they will not be out for several months. They should be a must for 2 years time!’ [bold added]

3) Jones is now a member of the NOAA Working Group called ‘Data Access and Archiving Working Group’ (DAARWG). ’This reports to the NOAA Scientific Advisory Board (SAB). I’ve attached its latest recommendations. There is a strong likelihood they will get acted upon now NOAA has a National Climate Service. … which reports to the Scientific Advisory Board (SAB) of NOAA. The aims of DAARWG are to provide the SAB with guidance on how best to archive the increasing amounts of observational and climate model data that NOAA is obliged to keep by Federal Laws. The group has advised on the development of guidelines to decide which datasets need to be archived and also addressed issues of access. DAARWG oversees all three of NOAA’s principal Data Centres as well as the 30+ centres of data that NOAA runs’ [bold added]

4) That he [Jones] had just ‘rotated off the Hadley Centre Science Review Committee.’

LazyTeenager says:
February 4, 2012 at 3:57 pm
‘I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.’

Your usual projection, with that added pinch of cognitive dissonance thrown in for seasoning. Such an inadvertent comedian, and entirely without any self-awareness, as befits your moniker.

I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.

Your impression is absolutely incorrect. Furthermore, we are not interested in anyone else’s adjustments. We are interested in how NOAA makes the adjustments. In order to find out, we require the – exact – procedures/algorithm, working code (and operating manuals) so that NOAA’s adjustments can be replicated.

Unless the adjustments can be replicated using NOAA’s exact procedures, they cannot be independently reviewed.

You can’t just show up with adjusted data and say, “I got it from the boys.” Not and expect to be treated seriously.

So doesn’t that make access to the NOAA code kind of irrelevant since the actual principles involved are well known.

This Liberal New Yorker hails from Show Me, Missouri, on that one.

The code, Jack. That’s what we need. A nice cup of tea and the code . . .

The new improved HADCRUT4 lowers earlier recorded temps to make current actual temps look higher. It’s been found already, it’s old news. There was an Iceland story here on WUWT a week or two back on exactly that.

Two hundred years ago, people took to the streets. Now we take to the blogs. The military-industrial complex must be pissing their pants.

Pretend you have only two thermometers, their accuracy is ±0.1C. They’re “CRN1”, meaning there aren’t any barbeques, overhanging trees, nor jet exhaust (a criteria that 85% of the existing stations fail). Assume they’re auto-adjusted for humidity and elevation. (Some of the adjustments make perfect sense.) Put them anywhere you like so long as they’re fifty miles apart.

Now: Pretend the two thermometers are two corners of a rectangular area (projected on a sphere) and provide the average temperature of the -entire- box to an accuracy of 0.01C under all weather conditions.

Making one’s own code to do exactly that is non-trivial.
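
[A back-of-the-envelope check on this scenario: if the ±0.1C is treated as an independent 1-sigma random error on each thermometer, the spread of the two-station mean can be simulated directly. This is a sketch under that assumption, not a claim about how any real index is built. – ed.]

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA = 0.1          # assumed 1-sigma instrument error, deg C
N_TRIALS = 200_000

# Two thermometers reading the same true temperature with independent errors.
true_t = 15.0
readings = true_t + rng.normal(0.0, SIGMA, size=(N_TRIALS, 2))
box_estimate = readings.mean(axis=1)

spread = box_estimate.std()
print(f"std of the two-station mean: {spread:.3f} C")
```

[The spread comes out near 0.1/√2 ≈ 0.07C: averaging two readings buys a factor of √2, nowhere near the factor of 10 a 0.01C target would require — and that is before asking whether two points represent the whole box. – ed.]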

As one can see from the graphs at the bottom of the summary, the linear slope of the movement in Tmax and Tmin varies enormously and does not correlate with any extraneous variable I could identify. (The linear fit is for decoration, not math.) It is therefore somewhat pointless to try to set a baseline temperature when there is no consistency among pristine sites, whose trends here varied from −2.5 to +4.8 degrees per century equivalent over some 50 sites.

Going back earlier, when data collection often relied on individuals in remote locations, Dr Simon Torok wrote his doctoral thesis in 1996 on the more accurate compilation of weather data. Some of his compilations are on my web site, with thanks and acknowledgements to him, at http://www.geoffstuff.com/Torok%20thesis%20excusesW2003.doc

While I can see how a large network of volunteer run stations could have error over time, I can’t see how the “corrections” aren’t fully documented. If only due to bureaucratic inertia and not data integrity.

philincalifornia says: February 4, 2012 at 6:56 pm
Lazy Teenager’s view of the world is that his honest opinions are honest, but anyone who has honest opinions that differ from his are being dishonest.

20/7/2009 Army Contracts to Study Its ‘Carbon Bootprint’ WASHINGTON — As the federal government prepares to regulate greenhouse gases, the U.S. Army has contracted a firm to evaluate the military’s “carbon bootprint,” a balance sheet of its emissions.

29/9/2009 RECOVERY (ARRA) – Lighting Feasibility Studies
RECOVERY ACT PROCUREMENT. THIS NOTICE IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY.
The project will be funded through the American Recovery and Reinvestment Act of 2009 (ARRA). The proposed procurement is being made under an Architect-Engineer (A-E) Indefinite Delivery Indefinite Quantity (IDIQ) Multiple Award Supplemental Contract for the primary geographic area of GSA Region 4 (GS-04P-06-EX-D-0027).
The scope of this work is for lighting feasibility studies using relighting best practices to the Veach-Bailey Federal Building, 151 Patton Ave., Asheville, NC 28801.

21 /5/2010 GSA Goes Green
The General Services Administration is Using ARRA Funds for Southeast Green Projects. ‘The U.S. General Services Administration received nearly $5.6 billion in American Recovery and Reinvestment Act funding to modernize federal facilities and convert them into high-performance green buildings. Those dollars are starting to flow into communities in the Southeast as projects ramp up… “The government spends a lot of money on energy use in buildings, and anything we can do to make that better and reduce our carbon footprint is a good thing…..”
• $4.4 million upgrade to the Veach-Baley Federal Complex in Asheville, N.C.
……. that offered energy conservation and renewable energy generation, could start within 120 days and had limited risk of failure. It also considered the facility’s condition, the project’s ability to improve asset utilization, return on investment, the opportunity to avoid lease costs and historic significance.’ (bold added)

Nov-Dec 2011 The Economic Bootprint of Defense Spending in Indiana: ‘Since 2001, the value of defense contracts awarded to Indiana has more than doubled, the annual number of unique contracts awarded has increased nearly five-fold, and the number of Indiana defense contractors has grown significantly (see Table 1). The 2010 value of Indiana’s defense-related contracts ranked 23rd among states… the estimated average compensation for direct defense-supported jobs was nearly $20,000 greater than Indiana’s average compensation per worker for all jobs.’ http://www.incontext.indiana.edu/2011/nov-dec/article1.asp

Also mentioned in email600 was the need for information for the upcoming publication ‘Leadership, Innovation….’. An example given was epidemiology. Tony McMichael (Australia) contributed to IPCC reports and is mentioned in The Delinquent Teenager; he practices and advises on both epidemiology and climate. Sir Michael Marmot, also an epidemiologist, has an interest in architecture and town planning (http://globetrotter.berkeley.edu/people2/Marmot/marmot-con1.html). Perhaps later in green star ratings of civil buildings? Or was that the longitudinal studies (Whitehall etc.) on civil servants? Will need to check.

@ juanslayton February 4, 2012 at 4:00 pm
“I couldn’t find in Daly’s article where he pulled these stations out for commendation, ”

Along with many other stations from around the world that John considered met the requirements to be classed as “greenfields” sites, they are listed in a “clickable” Appendix – Station Records to 1998 or 1999 – at the end of his article.

That list contains what John considered to be one of the finest “greenfield” sites in the world because of its history and ideal location: Valentia Observatory, Ireland. Read what John said about it, click on the graph, and see why the warmists and the whole CAGW crowd hate it.
And to anticipate any comment from LT about cherry picking: if ever there was a “bellwether” surface station to monitor any climate change, Valentia would be the one!

evanmjones says on February 4, 2012 at 12:31 pm: “As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data,…”

Evan, Anthony, etc.:
Is there correspondence documenting this? Is there a public statement as to why they refuse to reveal their methodology? Last time I checked, NOAA was a publicly funded operation, U.S. tax dollar$, etc…

Is Senator Jim Inhofe from Oklahoma aware of this? Has he addressed it with NOAA? If he is not aware of it, maybe it should be brought to his attention; he is a strong ally in rationally reviewing the data and the claims of AGW. He is the ranking minority member of the U.S. Senate Committee on Environment and Public Works. Maybe during one of their public meetings he can request NOAA to answer a few questions? He seems to be a person who can get things done…. Maybe some of his constituents, and others, could write him to see if there is an answer to this?

As to not releasing the data: I remember watching a documentary a few years ago on the mapping of the human genome. There was a government lab crunching away at mapping it out, but it was going to take years.

Then a private company started doing the same thing, from the other end of the genetic map; they didn’t want to duplicate the results the government was producing. Well, since the government lab was publicly funded, its results were public information, and it released its findings regularly. The private company made an early announcement that they had 90% of the human genome mapped, much sooner than expected. They were legally able to include the government data with their own. So even if each of them had only mapped 45%, the government was not privy to the results from the private company, but the private company could legally use the government’s data. Whether someone views this as ‘right’ or ‘wrong’ is up to the individual, but it was legal. I wonder if NOAA is viewing this the same way: if they release all of their data and math, someone else will pick up the ball, leaving NOAA in the dust…

Good work, Anthony! Once again, we are reminded about what went wrong at NOAA and NCDC and how you pointed that out. You should remember this – and remind your critics of it over, and over, and over again.

Just as big a problem is the reduction of reporting stations from over 6000 to around 2300 in 1990. Most of the removed stations were in colder regions. The global average temperature in 1990 leapt up, and that was claimed as another doom scenario.

” IDAG is meeting Jan 28-30 in Boulder. You couldn’t make the
last one at Duke. Have told Ferris about IDAG, as I thought DAARWG
might be meeting in Boulder. Jan 31-Feb1 would be very convenient
for me – one transatlantic flight, I would feel good about my carbon
bootprint and I would save the planet!”

That final exclamation mark suggests to me that he knows it is all a load of crap.

LazyTeenager says:
February 4, 2012 at 3:57 pm
“I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.”

LT, from what I see you are not lazy, posting post after post, and not a teen, judging from your posts. This is not a teen speaking.
On the web many people like to write under a pseudonym, which I respect, even if we all know anonymity is just a fable; but I wonder why you would like to give the impression you are just an anonymous teen?

> Considering you spelled Anthony’s name wrong, I’d say that’s more than good enough!

Yeah, I noticed that right after I posted. In my foggy decision process, I debated posting an Oops or just hope no one would notice. However, nothing escapes WUWT nation these days. At the very least it’s supporting evidence of the state of the Drambuie bottle.

Do you consult your blogger about your surface stations? In science, as in any area, reputations are based on knowledge and expertise in a field and on published, peer-reviewed work. If you need surgery, you want a highly experienced expert in the field who has done a large number of the proposed operations.

Don’t worry – he’s off coding up his own adjustments, since everyone knows how to do that…even NOAA…heh! /sarc.

BTW – I do wonder why NOAA refuses to release their adjustment code. Even GISS (after a lot of prodding and public embarrassment) released their code. Of course, we realized WHY GISS were so ashamed of it after they released it…

Let me get this straight: Surface station monitors on the about 30% of the earths surface that is land, but are not equally representing the entire 30%, and are not sited or functioning properly, need to have their data adjusted or manipulated in some manner in order to provide what can never be considered a “global” temperature since they do not monitor the other 70% of the planet. Can anything be built upon this “rock”?

LazyTeenager wrote: “I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.”

evanmjones answered: “Your impression is absolutely incorrect. Furthermore, we are not interested in anyone else’s adjustments. We are interested in how NOAA makes the adjustments. In order to find out, we require the – exact – procedures/algorithm, working code (and operating manuals) so that NOAA’s adjustments can be replicated.”

You can download the code used to homogenize (compute the adjustments for) NOAA’s USHCN dataset here:

This page also lists all the articles that describe how the software works. Many other homogenization codes are also freely available.

Homogenization adjustments are computed by comparing a candidate station with its neighboring stations. Nearby stations will have about the same climate signal. If there is a clear jump (relocation, change in instrumentation or weather shelter, etc.) or a gradual trend (urban heat island, growing vegetation, etc.) in the difference time series of two nearby stations, this is unphysical and needs to be corrected.

Rather than going through the code line by line, LazyTeenager is of course right that it is much smarter to try to understand the principle, write your own code, and apply it to the data. If you get about the same result, NOAA and you both did a good job; if you find differences, you try to understand why, and which of the two codes makes an error. That is how you normally do science.
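
[To make the principle concrete, here is a toy illustration of pairwise comparison — not NOAA’s actual pairwise homogenization algorithm, and all numbers are synthetic: form the candidate-minus-neighbor difference series, find the split that maximizes the step, and subtract that step from the later segment. – ed.]

```python
import numpy as np

def detect_step(diff):
    """Return (index, size) of the single best step change in a series,
    chosen as the split that maximizes the before/after mean difference."""
    best_i, best_step = 0, 0.0
    for i in range(2, len(diff) - 2):
        step = diff[i:].mean() - diff[:i].mean()
        if abs(step) > abs(best_step):
            best_i, best_step = i, step
    return best_i, best_step

rng = np.random.default_rng(1)
n = 120
climate = rng.normal(0.0, 0.5, n)          # shared regional signal
neighbor = climate + rng.normal(0.0, 0.1, n)
candidate = climate + rng.normal(0.0, 0.1, n)
candidate[60:] += 0.8                      # inserted inhomogeneity (relocation)

# The shared climate signal cancels in the difference series,
# leaving the jump plus a little noise.
i, step = detect_step(candidate - neighbor)
homogenized = candidate.copy()
homogenized[i:] -= step                    # remove the detected jump
print("break at", i, "step", round(step, 2))
```

[Real algorithms handle multiple breaks, many neighbors, and significance testing, but the cancellation of the shared climate signal in the difference series is the core idea. – ed.]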

The NOAA homogenization software was just subjected to a blind test with artificial climate data with inserted inhomogeneities. This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends. The test was blind in that only I knew where the inhomogeneities were inserted and the scientists performing the homogenization did not. More information on this blind test:

Alan Blue said:
“Pretend you have only two thermometers, their accuracy is ±0.1C. They’re “CRN1″, meaning there aren’t any barbeques, overhanging trees, nor jet exhaust (a criteria that 85% of the existing stations fail). Assume they’re auto-adjusted for humidity and elevation. (Some of the adjustments make perfect sense.) Put them anywhere you like so long as they’re fifty miles apart.

Now: Pretend the two thermometers are two corners of a rectangular area (projected on a sphere) and provide the average temperature of the -entire- box to an accuracy of 0.01C under all weather conditions.

Making one’s own code to do exactly that is non-trivial.”

I hope I’m just missing the sarcasm here. Not only is it non-trivial, it’s impossible! If the instrumental uncertainty is ±0.1 C, then it is literally impossible to obtain an average that has an uncertainty a factor of 10 better than what the instruments can actually read.

LazyTeenager says:
February 4, 2012 at 3:22 pm
“evanmjones says
As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.)
———-
I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.

So doesn’t that make access to the NOAA code kind of irrelevant since the actual principles involved are well known.”

This is proof that Lazy Teenager is in fact a lazy teenager without any experience. Lazy Teenager: even if it were obvious HOW to write such adjustment code, we would still not know whether NOAA managed to do it without introducing errors. Some time later in your young life, you might get introduced to the craft of computer programming, and you will learn that even the most experienced programmers make mistakes all the time. Obviously, you don’t know this yet, otherwise you wouldn’t have written what you wrote.

Of course, this total lack of experience also goes a long way to excuse your entirely unfounded trust in IPCC consensus climate science.

JohnWho wrote:
“Let me get this straight: Surface station monitors on the about 30% of the earths surface that is land, but are not equally representing the entire 30%, and are not sited or functioning properly, need to have their data adjusted or manipulated in some manner in order to provide what can never be considered a “global” temperature since they do not monitor the other 70% of the planet. Can anything be built upon this “rock”?”

It is only you guys that focus so much on the surface network, and in most cases even only on the surface network in the USA. Which is not very smart: even if you could show that your national weather service is in a big conspiracy, it would hardly change the global warming signal. America is not that large. If you would like to contribute to science and find a reason why the global warming signal is too strong, you’d better think of reasons that apply globally.

The oceans are lately covered by satellites and way back into the past by ocean weather ship and voluntary observations ships: International Comprehensive Ocean-Atmosphere Data Set (ICOADS).

The vertical dimension is covered by the radiosonde network, many of them on islands to cover the atmosphere above the oceans. Keywords: GUAN: GCOS Upper-Air Network and GCOS Reference Upper-Air Network:

So we now have evidence that they acknowledge the existing meteorological network is less than ideal for climate change monitoring and are motivated to improve it.

Ooops. Seems to contradict notions of nefarious behavior. If they wanted to produce fake data to support some climate conspiracy, they would not bother to try to improve the network now, would they?

Yet they didn’t. They did not convert to CRN. They did not even add CRN stations to the mix. Using NOAA/NWS or Leroy (1999) standards, <10% of stations are acceptable, and using the new and (very much) improved Leroy (2010) standards, ~15% are acceptable.

All they did was to replace 53 stations with 50 other stations that show greater warming than those they replaced. In proportional terms, they replaced 2% of the USHCN1 stations and that resulted in a 20% warmer trend for USHCN2.

This posting at Real Science is by far the most important and relevant posting concerning the whole AGW scam/science since it started… a must read: http://www.real-science.com/hadcrut-global-trend-garbage
It accounts for all that “cold area” before the 80’s. It’s THE graph that was used and still is, and it’s complete garbage, as I always suspected. This needs to be broadcast far and wide.

Hey Tom Nelson, I just want to say a BIG THANK YOU for all of your work in posting the climategate emails.

I also want to add that I read somewhere (I think on JoNova’s or Laframboise’s site) that Joe Romm is (I’m paraphrasing) upset over the climategate 2 emails. Nice to know that you’re contributing to this…

evanmjones wrote: “But homogenization is just one step in a long process. There is infilling, SHAP, TOBS, UHI, and equipment, to name just a few. (Not to mention the initial tweaking — outliers, etc.)”

What do you mean with SHAP and how is it different from homogenization? TOBS (Time of Observation bias), UHI (Urban Heat Island), and changes in the equipment (instruments and weather shelters) can be corrected for using parallel measurements. But if not corrected that way, these errors are corrected in the normal statistical homogenization, by comparison with neighboring stations, which was tested in the blind validation study.

evanmjones wrote: “And, of course, homogenization is not supposed to increase the trend. After all, if all you are doing is, in effect, providing a weighted averaging of stations within a given radius (or grid box or whatever), the overall average would not change (or at least not much, depending on the weighting procedures). Yet the adjusted data is considerably warmer than the raw data. ”

Homogenization is supposed to change the trend in the raw data in case this trend is wrong. If a station is moved from the city to the airport, there is typically a drop in temperature. If this drop is sufficiently large, you may find an erroneous cooling temperature trend in the raw data.

The mentioned weighted average of stations is used as a reference time series (for some homogenization methods). You compute a difference time series of this reference with your candidate time series. If there is just one jump, due to the move to the airport, this difference time series looks like a step function with some weather noise. From the size of the step you determine the temperature difference between the city and the airport. This step size is added to the data to correct for the relocation of the station.

The reference time series thus does not replace the data of the candidate station, which some people seem to assume, but is only used to compute the size of the jump. Thus if the other stations all moved to airports as well, the results would still be right (as long as they do not all move to the airport on the same day).
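
The correction step described above might look like this in outline. This is a sketch under stated assumptions (a single known breakpoint, a noise-free toy series, the hypothetical function name `correct_jump`), not the operational USHCN code:

```python
import numpy as np

def correct_jump(candidate, reference, break_index):
    """Adjust a candidate series for a breakpoint at break_index.

    The reference series is used only to estimate the size of the
    jump; it never replaces the candidate's own data. Earlier data
    are shifted to be consistent with the post-move segment, as is
    conventional for keeping recent data unadjusted.
    """
    diff = candidate - reference
    step = diff[break_index:].mean() - diff[:break_index].mean()
    adjusted = candidate.copy()
    adjusted[:break_index] += step  # bring the old segment in line
    return adjusted, step

# Toy example: station moves from the city to a cooler airport at index 10
reference = np.full(20, 14.0)   # stable nearby station
candidate = np.full(20, 15.0)   # warm city site
candidate[10:] = 14.2           # 0.8 C drop at the relocation
adjusted, step = correct_jump(candidate, reference, 10)
```

Note that the difference series isolates the relocation: the estimated step is the city-airport offset, and adding it to the pre-move segment removes the spurious cooling trend from the joined record.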

Why is there a difference between the trends in the raw and the homogenized data?
Menne et al. (2009): “The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum–minimum temperature system (MMTS). ”

I wrote: “This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends.”

evanmjones wrote: “In that case, homogenization is not going to explain the differences. So homogenization code is not terribly relevant to my objections. We need the full and complete adjustment code. The part that creates the — very large — differences between aggregate raw and adjusted data.”

The differences between the aggregate raw and adjusted data are due to homogenization. The blind validation study showed that the adjusted trends are closer to the true trends than the trends in the raw data. Thus there are changes in the trends, but no *artificial* additional warming trends, as many of you guys like to assume. My advice would be to look for weak points in the global warming theory elsewhere.

You can download the code used to homogenize (compute the adjustments) the NOAA’s USHCN dataset here:

But homogenization is just one step in a long process. There is infilling, SHAP, TOBS, UHI, and equipment, to name just a few. (Not to mention the initial tweaking — outliers, etc.)

And, of course, homogenization is not supposed to increase the trend. After all, if all you are doing is, in effect, providing a weighted averaging of stations within a given radius (or grid box or whatever), the overall average would not change (or at least not much, depending on the weighting procedures).

Yet the adjusted data is considerably warmer than the raw data. Using Steve McIntyre’s 20th Century data: The average USHCN1 station has warmed 0.14C per century using raw data. But it is +0.59 for adjusted data.

The data we used for Fall, et al. (2011), for the 1979 – 2008 (positive PDO) period — using USHCN2 adjustment methods — showed a warming of 0.22 C/decade for raw data and +0.31 C/decade for adjusted data.

This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends.

In that case, homogenization is not going to explain the differences. So homogenization code is not terribly relevant to my objections. We need the full and complete adjustment code. The part that creates the — very large — differences between aggregate raw and adjusted data.

As in the infamous Saturday Night Live “paraquat test”: It’s light! To conduct this test, we need an ounce. A FULL and COMPLETE ounce.

Rather than going through the code line by line, LazyTeenager is of course right, that it is much smarter to try to understand the principle and to write your own code and apply it to the data.

That statement is such an enormity it needs to be addressed separately.

Actually, for Independent Review purposes, not only does one have to understand the underlying principles, but one also has to go through the code line by line and be able to run the code and get the exact same results as NOAA.

For example, are they using the TOBS record from the actual B-91 and B-44 forms, or are they using some sort of mishmash regional guesstimate procedure? “Underlying principles” are not going to answer that one. Only a line-by-line review is going to shed any light on that.

Being “much smarter” would get you landed in the hoosegow if you tried to pull that in the private sector.

Finally, homogenization does not change the overall average. What it does do is smear SHAP bias around so it cannot be distinguished. We are going to hear a LOT more about this in the fairly near future. But that is a story for another day . . .

It is only you guys that focus so much on the surface network, and in most cases even only on the surface network in the USA. Which is not very smart: even if you could show that your national weather service is in a big conspiracy, it would hardly change the global warming signal. America is not that large. If you would like to contribute to science and find a reason why the global warming signal is too strong, you’d better think of reasons that apply globally.

We concentrate on the USA for the following reasons:

It is very difficult to locate even US stations. Having run down over 200 of them, I can speak to this personally. It is exceedingly difficult and time-consuming, given that NOAA has pulled the curators’ names and addresses from the MMS website. Also, the coordinates they provide are often faulty in the extreme, though there has been some improvement of late. It has taken us years and years to run down the bulk of USHCN stations.

Foreign stations are a near-impossible task. We do not have an international network of volunteers. As for locating them by satellite, GHCN provides coordinates to only two or three decimal places. That is entirely useless for our purposes. Some can be identified by airport, WWTP, or some other industrial structure, but satellite resolution outside the US (even inside the US) is generally so poor as to make distinguishing stations impossible. On top of that, there is no conformity of equipment, so unless it is a Stevenson Screen or an ASOS, we wouldn’t recognize them if they showed up on the blurry map images — which they generally don’t.

In any event, the USA is an excellent sample. First, the US shows much the same overall warming trend for the 20th century as does the world, overall (c. +0.7C / century for adjusted data — and much less for raw data), though the “1940 bump” is higher. Second, with the possible exception of Australia, the US has the highest quality historical station network in the world. This assertion appears to be supported by what few foreign stations we have actually managed to locate.

Furthermore, we are not dedicated to proving that NOAA is a “big conspiracy”. What we are after is determining whether their procedures are tight enough to cut the mustard in the private sector and whether the output is correct. There is a lot riding on the answers. Yet we are excoriated for even asking the question. That violates both the Scientific Method (and mores) and the principles of Liberalism by which I was raised and educated.

And finally, we have a surveyed and rated sample of just a bit over 1000 stations. That will allow us to examine well sited stations vs. poorly sited stations. It does not matter statistically whether only the US is covered or whether the sample is scattered over the world. What matters is the number of stations evaluated and how consistent the equipment is.

The question is: How does site quality affect the readings? I’d prefer to be looking at 6000 stations worldwide, but 1000 within the US will suffice to answer that question.

Previously, we used Leroy (1999) ratings, though that was a poor metric, as it accounted only for distance from a heat sink and took no account of the heat-sink area within a given radius, as does Leroy (2010).

The oceans are lately covered by satellites and way back into the past by ocean weather ship and voluntary observations ships: International Comprehensive Ocean-Atmosphere Data Set (ICOADS).

The methods of measuring ocean temperatures prior to ARGO (2005) are both inconsistent and abominable (c.f., the bucket/bag/bilge controversy). UAH and RSS provide reasonably reliable atmospheric readings over the oceans, but not prior to December 1978.

The vertical dimension is covered by the radiosonde network, many of them on islands to cover the atmosphere above the oceans.

Radiosonde readings show so little warming (or even cooling) that one must be suspicious of them. If the radiosonde readings are accurate, we have nothing whatever to worry about. I’d go with UAH and RSS for atmospheric readings until more is known.

What do you mean with SHAP and how is it different from homogenization?

By SHAP, I mean the Station History Adjustment Procedure: correction for the changing microenvironment over time. This is entirely unrelated to homogenization.

TOBS (Time of Observation bias), UHI (Urban Heat Island), and changes in the equipment (instruments and weather shelters) can be corrected for using parallel measurements.

Precisely. And we need to examine and audit NOAA procedure for doing so. Of course, it would be better to have automated Class 2-sited stations or better, with no adjustment needed or applied. Last I heard, NOAA no longer applies an adjustment for UHI. But without their algorithm, code, and manuals, we have no way of knowing the details.

But if not corrected that way, these errors are corrected in the normal statistical homogenization, by comparison with neighboring stations, which was tested in the blind validation study.

Incorrect. Homogenization has nothing to do with correcting for those factors. It just smears the error around between x number of stations so the problem shows up less per station — by a factor of x. Sort of like correcting a 5 point grading error by changing the grades of five students by 1 point.

Homogenization is supposed to change the trend in the raw data in case this trend is wrong. If a station is moved from the city to the airport, there is typically a drop in temperature. If this drop is sufficiently large, you may find an erroneous cooling temperature trend in the raw data.

Of course homogenization will alter the trends of every individual station in the network. However, you have stated definitively (with NOAA citation) that homogenization does not alter the overall trend average. Therefore, by definition, homogenization is merely distributing the errors of each individual station among all nearby stations, resulting in a net zero change in the average.

Therefore, the 20th century raw trend anomaly being increased by adjustment by over 400% for the 20th century and by over 40% for the past 30 years needs to be subject to audit. A FULL and COMPLETE audit.

Why is there a difference between the trends in the raw and the homogenized data?
Menne et al. (2009): “The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum–minimum temperature system (MMTS). ”

I notice that they adjust MMTS trends UP to match CRS rather than adjusting CRS trends DOWN to match MMTS. Despite the fact that MMTS is probably a better instrument (discounting siting issues, of course).

And nothing, of course, for microsite. Just upward adjustments to stations moved to airports (which show a large warming trend bias, didn’t you know?).

As Al Gore once put it, so far as adjustment procedure is concerned, everything that’s UP is supposed to be DOWN and everything that’s DOWN is supposed to be UP.

And since homogenization, in and of itself, does not affect the overall trend for USHCN, homogenization code is not relevant to the question.

Dear evanmjones, if you are not willing to invest a little time into understanding the main principle behind homogenization and how it is implemented (including how it can improve the aggregate trend), do not expect people to waste their precious life time for a complete audit to satisfy your unfounded distrust.

Your last two comments are so full of plainly wrong statements, so clearly display that you have no idea how homogenization is performed and no willingness to learn, that I do not expect that further clarifications would bring anything.

The NOAA homogenization software was just subjected to a blind test with artificial climate data with inserted inhomogeneities. This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends.

. . .

The differences between the aggregate raw and adjusted data are due to homogenization. The blind validation study showed that the adjusted trends are closer to the true trends than the trends in the raw data. Thus there are changes in the trends, but no *artificial* additional warming trends, as many of you guys like to assume. My advice would be to look for weak points in the global warming theory elsewhere.

Follow the pea.

What is going on, then, is that well sited stations that are running cooler are adjusted so their trends are as warmy as poorly sited stations (which also have been adjusted warmier).

Actually, good stations are adjusted even slightly warmer than bad stations. Quite a bit warmer, if airports are excluded. And, yes, I’ve checked.

Thanks for the advice, but I think we had better look for weak points in global warming right here.

“During the past few years I recruited a team of more than 650 volunteers to visually inspect and photographically document more than 860 of these temperature stations. We were shocked by what we found. We found stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/ reflecting heat source. In other words, 9 of every 10 stations are likely reporting higher or rising temperatures because they are badly sited.

It was WUWT evidence that station data was skewed by poor siting and selectivity that made me take the step from skeptical questioning of climate change models using this data to absolute skepticism of the theory itself.

In this day and age, with access to refined technology and enhanced communications, the global network of sites reporting raw data should be expanding rather than shrinking. I’m not into conspiracy theories, but as long as the reverse is true, it is hard not to conclude that a caucus is trying to control the data, and to manipulate it to meet its own agenda.

Homogenization is supposed to change the trend in the raw data in case this trend is wrong. If a station is moved from the city to the airport, there is typically a drop in temperature. If this drop is sufficiently large, you may find an erroneous cooling temperature trend in the raw data.

I don’t understand why you say “If a station is moved…”. What you describe cannot be defined in any way as movement.

Rather, one station is discontinued and a new station is built at a second location. Keeping the same name or identification number does not make it the “same station”. Does it make sense to you to then “adjust” the recorded values at either site in the name of homogenization? Would you do this for two previously “unrelated” sites?

Furthermore, you state:

If there is just one jump, due to the move to the airport, this difference time series looks like a step function with some weather noise. From the size of the step you determine the temperature difference between the city and the airport. This step size is added to the data to correct for the relocation of the station.

You, of course realize that an adjustment would rarely be a single value added or subtracted. The difference would be unlikely to remain constant over the various months if the geographic characteristics change. This means that the temporal structure of anomalies calculated for the appended series could also be affected even if such adjustments were done on a monthly basis. Doing multiple adjustments then becomes much more arbitrary.
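
The point about month-dependent offsets can be made concrete: instead of a single constant, one offset per calendar month can be estimated from the difference series. This is a sketch with made-up data; `monthly_step_sizes` is a hypothetical name, not part of any agency's code:

```python
import numpy as np

def monthly_step_sizes(candidate, reference, break_index):
    """Estimate one adjustment per calendar month, since the offset
    between two sites (e.g. city vs. airport) can vary seasonally.
    Both series hold monthly values starting in January."""
    diff = candidate - reference
    idx = np.arange(len(diff))
    months = idx % 12
    steps = np.empty(12)
    for m in range(12):
        before = diff[(months == m) & (idx < break_index)]
        after = diff[(months == m) & (idx >= break_index)]
        steps[m] = after.mean() - before.mean()
    return steps

# Toy data: 20 years of monthly values, break after 10 years, with a
# relocation offset that varies by calendar month
true_steps = np.linspace(0.2, 1.3, 12)
n = 240
reference = np.zeros(n)
candidate = np.zeros(n)
candidate[120:] += true_steps[np.arange(120, n) % 12]
estimated = monthly_step_sizes(candidate, reference, 120)
```

With only ten values per calendar month on each side of the break, each monthly estimate carries far more sampling noise than a single pooled constant, which is exactly the trade-off the comment above raises.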

Nor would any of this explain those adjustments which have a trend already built into them…

Great news. Congratulations to all involved! Made my Sunday! But… this post added an exclamation point to my day:
“John Billings says:
February 4, 2012 at 5:37 pm
The new improved HADCRUT4 lowers earlier recorded temps to make current actual temps look higher. It’s been found already, it’s old news. There was an Iceland story here on WUWT a week or two back on exactly that.

Two hundred years ago, people took to the streets. Now we take to the blogs. The military-industrial complex must be pissing their pants.

We’ve got nice graphs though.”

You see, although my nic here is HuDuckXing, my real name is John Billings! So;

Your last two comments are so full of plainly wrong statements, so clearly display that you have no idea how homogenization is performed and no willingness to learn, that I do not expect that further clarifications would bring anything.

Unless homogenization includes SHAP, FILNET, outliers, UHI, TOBS, and microsite effects, it is not full disclosure.

All I see is adjustments that increase good site trends to greater than bad site trends. After the bad site trends themselves have been increased.

That’s a fact. We have the raw and adjusted data trends. We have determined the ratings.

Unless that can be replicated and the code inspected — line by line — there can be no independent review. By definition. I do not see how you can dispute that. Yet you said earlier that LT was correct in saying that there is no need to check out NOAA adjustments, which hike 20th century temperature trends by over 0.4C per century and the last 30-year trends by over twice that amount.

As I say, we have the raw and adjusted data trends.

So we’ll just toodle along, as we have been, and submit my clearly wrong statements for peer review. No need to clarify.

Meanwhile, we would like a FULL and COMPLETE adjustment procedure including any/all working code, manuals and methods involved. If independent review demonstrates that NOAA’s procedures are legit (for example, that Time of Observation is taken for each individual station directly from B-91 and B-44 forms), then there is no problem. But until we can do that, there can be no independent review, and the adjustments, by definition, cannot be considered scientifically valid, much less legitimately used as a basis for multi-trillion dollar policy.

I don’t understand why you say “If a station is moved…”. What you describe cannot be defined in any way as movement.

That’s how NOAA defines it. If the station does not receive a new COOP number it is not considered to be a new station but, rather, a station move.

Stations move rather frequently. Sometimes they are merely localized equipment moves, particularly if there is a conversion from CRS to MMTS. Sometimes a curator passes away or moves, so they find another volunteer (or go for the old standbys of either an airport or WWTP) and relocate the station accordingly. More often than not, NOAA does not consider this to be a “new” station, only as a station move.

Dear evanmjones, if you are not willing to invest a little time into understanding the main principle behind homogenization and how it is implemented (including how it can improve the aggregate trend), do not expect people to waste their precious life time for a complete audit to satisfy your unfounded distrust.

Evan has spent plenty of his precious life time working on various WUWT endeavors. Don’t gripe about losing some of yours – you seem to cover the subject fairly well (at least for having no source code) at your blog.

More importantly, thousands of people read this blog every day. You’re losing out on a good chance to explain to a lot more people than you reach on your blog how raw data gets processed into climate data in Germany.

Also, please take some time checking out http://chiefio.wordpress.com/gistemp/ . EM Smith spent a lot of time studying the GISS adjustments, enough time to warrant starting his own blog. You might want to see if some of his criticisms of GISS also apply to German data.

Some people remaining sceptical of climate change claim that adjustments applied to the data by climatologists, to correct for the issues described above, lead to overestimates of global warming. The results clearly show that homogenisation improves the quality of temperature records and makes the estimate of climatic trends more accurate.

I confess that I’ve forgotten how some of the steps Evan mentioned are applied, but one adjustment in particular by GISS is really annoying. It’s the backfilling of missing data in a station’s record, something that I don’t think is covered by homogenization as you understand it.

EM Smith’s blog probably goes into much better detail, but essentially when a new month’s data is out, GISS code looks through the record for missing data for the month, and if it finds any, recomputes an estimate for that month. An effect of that code is that the historical record keeps changing, and so anyone wanting to reproduce research that used GISS data has to know the month and year it was released in order to stay in sync. Worse, the adjustments tend to make the old data colder, thereby increasing the rate of temperature increase in the record.

So, put me in the camp that thinks climate change is occurring (well, not very quickly the last decade or so) and that adjustments lead to overestimates of recent global warming.

Oh, I understand how homogenization can “improve” the trends, all right. It identifies stations that are running cooler and “adjusts” them so they are warmer.

That is pretty much the only way that the few good stations start out with much lower trends than the bad stations and then somehow wind up with higher trends than the (upwardly) adjusted data of bad stations.

Yes, you read it correctly: somehow the bad stations wind up with higher trends as well. And, yes, the adjusted trends for the good stations are adjusted even higher than that.

A lot of people here have been moving away from manual measurements to more automatable measurements with a more even coverage. For example, 10.7 cm microwave emissions instead of sunspot counts, satellite-derived temperature estimates of the lower troposphere instead of the ill sited US weather station network, and ocean energy storage instead of atmospheric temperature estimates. Perhaps you can compare your data with those other sources.

LazyTeenager says:
February 4, 2012 at 3:22 pm
> evanmjones says
>> As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.)

> I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.

> So doesn’t that make access to the NOAA code kind of irrelevant since the actual principles involved are well known.

I have seen it argued before (from warmist side) that scientists doing such work as gathering raw data (maybe also developing adjustment codes?) should not have to give their works away for free, not even if they are paid by governments to do this work.

I seem to think that data and codes gained at taxpayer expense should be free to taxpayers of the taxpaying jurisdiction. Maybe delay free publishing by 1-3 years (depending on the field of study), so that when something big hits, competing scientists have to do their own work. It appears to me this forces competing work that generates alternative codes and data, and I think that is good. When someone else redoes something already done, science is interested in whether the rework confirms, or does not confirm, something that can use confirmation by an independent effort.

If others develop adjustment codes of their own, it is interesting to see if they have similar results or significantly different results from the NOAA one. (Of course, this is easier with access to both the raw and adjusted NOAA data.)

I think that taxpayer paid data processing codes and compilations of raw data relevant to climate change should be published on the web for free to taxpayers that paid for it, no later than 1.5 years after they were generated, and no later than 9 months after publication of studies using them. I think subtract up to 6 months from these figures if necessary and sufficient, to the extent necessary, to have publication at least 15 days before a major election where candidates are running at least in part on climate change issues, and at least 10 days before major government body or international body voting events on appointing big players or on treaties concerning global warming or climate change issues.

Ric Werme to Victor Venema: EM Smith’s blog probably goes into much better detail, but essentially when a new month’s data is out, GISS code looks through the record for missing data for the month, and if it finds any, recomputes an estimate for that month. An effect of that code is that the historical record keeps changing, and so anyone wanting to reproduce research that used GISS data has to know the month and year it was released in order to stay in sync.

Ironically, the metadata presented in MMS has exactly the opposite problem. When station location coordinates are refined/corrected (recently done en masse with GPS), the old, inaccurate coordinates are not changed. Rather a fictitious location change is entered. So when you try to trace back a station history (necessary if you really want to go take a look on the ground) you never know which locations are to be taken seriously. At least you don't until you catch on to the game….

What you have to look for is coordinates ending in .33333, .5, .83, .66667 or whatever.

But even with what look like painstakingly precise coordinates, they can be anywhere from 5 feet to half a mile off. It’s a complete crapshoot.
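A quick way to flag such telltale coordinates is to check whether the fractional part of a degree falls on a ten-minute grid (multiples of 1/6 of a degree: .0, .1667, .3333, .5, .6667, .8333). This is a hypothetical helper, assuming positions were converted from degrees-and-minutes recorded to the nearest ten minutes:

```python
def looks_truncated(coord, tol=0.005):
    """Flag a lat/lon value whose fractional part sits on a
    ten-minute grid (1/6 degree) -- a hint that it was converted
    from coarse degrees-and-minutes rather than measured precisely."""
    frac = abs(coord) % 1.0
    # distance to the nearest multiple of 1/6 degree
    nearest = round(frac * 6) / 6
    return abs(frac - nearest) < tol
```

A coordinate like 42.33333 trips the check; a genuinely surveyed 42.12345 does not. The tolerance also catches hand-rounded entries like .83.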

Blue Hill, MA, is a poster child. I looked at several “station moves” over quite a patch of square miles and evaluated them in a rough sort of way. And then when I spoke to the curator, I discovered that there was one localized equipment move of 20 feet or so during the entire 100+ year history of the station.

So not only is it COMPLETELY impossible to judge microsite without an image or direct testimony of a curator (or other eye witness), but you can’t even rely completely on the larger picture. So we use the NOAA and GISS determinations of which stations are urban, semi-urban, and rural (we have no choice), but sometimes I wonder how accurate even that is.

And I know that the NOAA’s own microsite ratings — such as they even exist — are woefully inaccurate by examining Menne (2009) using Leroy (1999) standards. And, judging by my current studies, Menne, et algore, cannot be even close to accurate by Leroy (2010) standards.

“What you describe cannot be defined in any way as movement. Rather, one station is discontinued and a new station is built at a second location. Keeping the same name or identification number does not make it the “same station”. Does it make sense to you to then “adjust” the recorded values at either site in the name of homogenization? Would you do this for two previously “unrelated” sites?”

As long as the station moved over a distance much less than the average distance between stations, I see no problem in keeping the station number the same. You can also split up the record, as you suggest, that would also be fine. Every weather service has its own rules for doing so. If you split up the record you will have to take the jump due to the relocation into account when you compute a regional average over all stations. Thus you cannot avoid the homogenization problem.
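One rough sketch of "taking the jump into account" after splitting a record at a relocation: express each segment as anomalies from its own baseline before averaging, so the step at the break does not masquerade as a regional trend. This is illustrative code, not any weather service's actual procedure:

```python
def segment_anomalies(segments):
    """For a record split at a relocation, convert each segment to
    anomalies from that segment's own mean. The step between
    segments then drops out of any average built on the result."""
    out = []
    for seg in segments:
        baseline = sum(seg) / len(seg)
        out.extend(v - baseline for v in seg)
    return out
```

With a 10-degree relocation jump, the anomaly series is identical before and after the break, so the regional mean is not contaminated by the move.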

RomanM says:

“You, of course realize that an adjustment would rarely be a single value added or subtracted. The difference would be unlikely to remain constant over the various months if the geographic characteristics change. This means that the temporal structure of anomalies calculated for the appended series could also be affected even if such adjustments were done on a monthly basis. Doing multiple adjustments then becomes much more arbitrary.”

You are right, there is often also a change in the annual cycle due to an inhomogeneity. For temperature you can typically estimate the adjustments needed for every month quite well. Thus temperature is often homogenized on a monthly scale. Precipitation is more variable, thus the adjustments are more uncertain. Consequently precipitation is often homogenized on a yearly scale.

Trends are almost always computed on yearly mean values, so you only need to compute the annual adjustments, and the annual cycle is irrelevant.

RomanM says:

“Nor would any of this explain those adjustments which have a trend already built into them…”

In the blind validation study of homogenization algorithms we also inserted local trend inhomogeneities to model the urban heat island effect or the growth of vegetation, etc. Homogenization algorithms can also handle that situation. In most cases they solve the problem by inserting multiple small breaks in the same direction. Algorithms that use trend-like adjustments were not better than those inserting multiple small breaks.

————-
Ric Werme says:

“Evan has spent plenty of his precious life time working on various WUWT endeavors. Don’t gripe about losing some of yours – you seem to cover the subject fairly well (at least for having no source code) at your blog.”

It is nice to hear that someone puts in a good word for Evan. If he invested a lot of time in the surface temperature project to visit all the stations, I am very grateful. I wish there was a similar project in Europe as it has the potential to help our understanding of the quality of the measurements.

However, when it comes to homogenization, how inhomogeneities are removed, I am not able to understand the gibberish Evan is talking. He does not seem to be able or willing to understand how homogenization is performed. I am happy to answer your questions.

Ric Werme says:

“More importantly, thousands of people read this blog every day. You’re losing out on a good chance to explain to a lot more people than you reach on your blog how raw data gets processed into climate data in Germany.”

Actually, Roger Pielke put me in contact with Anthony Watts, who requested permission to repost my post on the blind validation study of homogenization algorithms. I guess he was no longer interested when he read the conclusions. The admission that at least a minimal part of climatology is scientifically sound is apparently too controversial for this blog. Conclusion: If you are interested in the truth, read the blogs of the “opponents”.

—————
Ric Werme says:

“I confess that I’ve forgotten how some of the steps Evan mentioned are applied, but one adjustment in particular by GISS is really annoying. It’s the backfilling of missing data in a station’s record, something that I don’t think is covered by homogenization as you understand it.”

I am not a climatologist. I am a physicist who normally works on the relation between clouds and (solar and heat) radiation. Being an impartial outsider was why they asked me to perform the blind validation. I now understand the homogenization problem somewhat, but I did not study the filling of missing data and cannot comment on this problem.

The International surface temperature initiative is working on a similar blind validation study, but now for a global temperature network. Because it is global, we can not only validate homogenization algorithms, but also the methods used to interpolate and compute regional and global averages. Stay tuned.

“One more thing. A lot of people here have been moving away from manual measurements to more automatable measurements with a more even coverage. For example, 10.7 cm microwave emissions instead of sunspot counts, satellite-derived temperature estimates of the lower troposphere instead of the ill sited US weather station network, and ocean energy storage instead of atmospheric temperature estimates. Perhaps you can compare your data with those other sources.”

A good idea. I do think we need to do both. Satellites also have their inhomogeneity problems. Their calibration can only be partially checked in space; the relation between the measured quantity and the climatological variable of interest also depends on the state of the atmosphere and may thus change in space and time. Furthermore, the time series are relatively short from a climatological perspective, the instrumentation has changed considerably over the decades, and the satellites themselves have short life spans.

The European Space Agency has a climate Satellite Application Facility (CM-SAF), which is coordinated by the German weather service. They try to solve these problems and produce a good dataset. Again this is a very different problem and I do not have the expertise to judge it.

“However, when it comes to homogenization, how inhomogeneities are removed, I am not able to understand the gibberish Evan is talking. He does not seem to be able or willing to understand how homogenization is performed. I am happy to answer your questions.”

The net result of homogenization is that well sited stations are adjusted upward to match warmer, poorly sited stations. That is a fact.

The problem arises because the ~15% of stations that are properly sited turn out to run significantly lower trends than the remaining 85%. They therefore show up as outliers and are “adjusted” to conform with the surrounding (poorly sited) stations.

You are not fixing bad microsite. You are unfixing good microsite.
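Evan's complaint can be caricatured in a few lines (a deliberately crude toy, not the actual pairwise algorithm of Menne 2009): if the majority of a network shares a warm bias, a naive outlier correction "fixes" the minority of good stations.

```python
def homogenize_toward_neighbors(trends, threshold=0.2):
    """Toy outlier adjustment: any station whose trend differs from
    the network median by more than `threshold` is reset to that
    median. If 85% of stations share a warm bias, the median sits
    with them, and the unbiased 15% are the ones that get 'fixed'."""
    ordered = sorted(trends)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2
              else (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    return [median if abs(t - median) > threshold else t
            for t in trends]
```

With six stations at +0.7 and one well sited station at +0.3, the lone good station is pulled up to +0.7 — unfixing good microsite rather than fixing bad.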

(When they homogenize the data, why do they always seem to feel the need to pasteurize it?)

As for UHI, I got a great idea: Take, oh, say, the USHCN. Average urban, semi-urban, and rural station trends. Compare the averages.

As for climate trends, simply classify, grid, and average the grid boxes for each classification. (Comparisons of good/bad stations within each grid are also recommended.)
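The classify-grid-average comparison could be sketched like this (a hypothetical helper; the 5-degree grid size, the input tuple format, and the function name are my own assumptions, not anyone's published method):

```python
from collections import defaultdict

def gridded_class_means(stations, box_deg=5.0):
    """Average station trends per (grid box, siting class), then
    average the box means for each class. stations is an iterable
    of (lat, lon, cls, trend) tuples."""
    boxes = defaultdict(list)
    for lat, lon, cls, trend in stations:
        key = (int(lat // box_deg), int(lon // box_deg), cls)
        boxes[key].append(trend)
    # collapse each box to its mean, then average boxes per class
    per_class = defaultdict(list)
    for (_, _, cls), trends in boxes.items():
        per_class[cls].append(sum(trends) / len(trends))
    return {cls: sum(means) / len(means)
            for cls, means in per_class.items()}
```

Averaging within grid boxes first keeps a dense cluster of urban stations from dominating the urban mean, so the urban/semi-urban/rural comparison is area-weighted rather than station-weighted.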