GISS Adjusts the Heartland

One of the surfacestations.org intrepid traveling volunteers, Eric Gamberg, has been traveling through Nebraska as of late, picking up stations as he goes.

He recently visited the USHCN station of record, COOP # 256040, in North Loup, NE, not to be confused with Loup City, which he also visited. Records describe this station as being in a rural area, which is true. As some might say, it is surrounded by a “whole lotta nothing”. See the map. According to the Nebraska Home Town Locator website: “North Loup had a population of 339 with 192 housing units; a land area of 0.41 sq.” Seems quite small.

At first glance, there doesn’t appear to be much wrong with this station.

Looking South – Click for a larger image

No obvious heat sources or asphalt/concrete nearby. But let’s take a look at another angle:

Looking West – Click for a larger image

Notice all the shadows and the trees? Unlike many stations I’ve pointed out in the past, this station has an uncertain cooling bias from all the shade around it. The bias varies with the seasons, with tree growth and pruning, and with windstorms that may remove leaves or whole branches from the trees.

Other angles of this station are visible in the surfacestations.org gallery for this USHCN station. This is a new submission, and the surveyor hasn’t submitted his written report yet, so I decided to look closer at some of the photos and screencaps he submitted.

One of the pictures caught my eye. It was a single picture of a Stevenson Screen with the name: “NE_North_Loup_CRS_now_at_Neighbors”

Click for a larger image

The neighbors location also has a significant shading issue with trees around the station. Since it appeared from that description that the station had been moved, I wondered if NOAA had logged the change, so I consulted the NOAA MMS database.

The station map showed only two locations as being recorded for this station even though data goes back to 1892 and is used by GISS in their data plots.

It seems sloppy to me that NCDC doesn’t have any location records going back further than 1948. This lack of historical metadata is becoming increasingly common.

The location tab of the MMS database gave a hint of what might have changed recently though:

Looking at the equipment tab in the NCDC MMS database I found that it appeared the station had been converted in 1986 to MMTS from mercury max-min thermometers:

1986-06-11 | 1988-01-25 | MMTS ELECTRONIC SENSOR

0001-01-01 | 1986-06-11 | MAX-MIN THERMOMETERS

But this still didn’t answer the question of how the Stevenson Screen ended up “at the neighbors”. I guess I’ll have to wait until Eric Gamberg submits his survey report to find out, or perhaps he’ll chime in here to tell us the story.

I thought maybe I’d try looking at the GISTEMP record to see if I could find any obvious shifts in the data that might provide clues for a relocation. Here is the raw USHCN data plot from GISTEMP:

Click image to see source file at GISTEMP

While it might look like there’s a jump coinciding with the relocation, other stations in the area within 100 km had similar jumps around that time. Click on the names below to see those plots from GISTEMP:

So it appears the jump was natural, likely due to the strong El Niño peaking in 1998, or we have another one of those data splicing errors like the one found back in August of 2007.

As a final check, I decided to look at the GISS homogenized data plot for the North Loup station, and here is where the surprise came: the scales of the two graphs didn’t match. There was a 0.5 °C difference in the Y scale. The raw GISS plot topped out at 12.5°C while the homogenized GISS plot topped out at 12.0°C:

Click image to see source file at GISTEMP

This was perplexing, so I thought I’d try a data overlay to see what had changed, and what I found was another one of those counter-intuitive downward homogenization adjustments which used the present as a hinge point and made the past cooler:

Click for a larger image

So this begs the question: why does a station that is classified as rural, has an apparent cooling bias due to tree cover, has a long history containing only a few moves, and sits in a small agricultural town with little growth in the last century get an adjustment like this, one that makes the past colder and creates an artificially enhanced positive temperature trend?

The nearest “large” city is Grand Island, NE, seen in this map, over 40 miles away. There is nothing but farmland all around North Loup, NE. I don’t care how you try to reason it, an adjustment for a station of this type wouldn’t need to be done for UHI. So what’s going on? Is this another one of those situations where other stations with UHI that are within a larger radius are affecting this station and forcing an adjustment?

This is why I have trouble trusting GISS data. We keep finding instances like this one where the historical temperature record has been adjusted for no discernible or apparently logical reason. Cedarville, CA, which I previously highlighted, is another prime example of a rural station with a long history, little growth, and no UHI, but with an artificially enhanced positive temperature trend.

The neighbors location also has a significant shading issue with trees around the station.

You know, there is a considerable cooling effect as well: cooling of the shelter/MMTS fixture and of the ground in the local area, due to the loss of the longwave IR path off into space when the view of the overhead sky is lost.

Any deviation from the standard, recommended station siting guidelines places the resulting data into an unknown category (unless suitable ‘correction factors’ can be ‘cooked up’ for the very specific siting conditions that particular station is found to inhabit).

“Is this any way to run an airline?” (advertising slogan for National Airlines and parodied in reviews of Come Spy with Me, 1967 as: “Is this any way to make a movie?”)

Is there history available so that we can see what GISS did with the data, say, in the sixties? Is the artificially cool past a recent introduction, or is it of long standing? I think it would be more damning if it were a recent “discovery” that the past was colder than the raw measured data.

It’s both the trees and the changes in trees (growth, loss of limbs, changes in types, etc) over time that matters. This almost hopelessly complicates the search for decadal temperature trends of several tenths of a degree.

The rules that dictate these changes, and the methodology by which they are made, need to be documented and available for public review. From the comments it certainly seems as though this has not been done. Anything less should be challenged as manipulation of the historical record.

Anthony: I suggest that what you’re observing may be coming not from GISS adjustments but from NOAA via the TOBS adjustment. If NOAA inferred a TOB change with one of the station moves, that could account for the “cooling of the past.”

How much do trees cool? When I bought my house 15 years ago, the deck was mostly in the sun and we could hardly use it. Now it is completely shaded and we eat out there all the time, don’t even need sunglasses, pleasant. I would guess an 8+ degree cooling over 15 years on hot days.

What if the NYSE adjusted GE’s 1972 stock price down by a penny or two? Would anyone mind?

I suspect the SEC may well institute criminal charges of fraud if some entity continuously revised the historical prices downwards to show a better growth rate.
Those responsible would be facing jail time.

My understanding is the whole purpose of the Stevenson screen, and of the later sun shields, is to provide a barrier to the infrared radiation from the sun, ground, and other objects in the vicinity. It gives the sensor a “shady” environment.
It seems the air temperature should be about the same under the tree as in direct sun. The reason it feels cooler is the blocking of the direct IR. If the air is a bit cooler under the tree, wouldn’t thermodynamics tell us the heat in the non-shaded area should move toward the cooler shady area to achieve equilibrium? Not to mention the effect of small breezes.
Surely there were tests done on the early Stevenson screens to determine their effectiveness. Anthony, are you aware of any?
I understand and agree that the rules for placement of these stations have not been followed and maintained, which is the purpose of surfacestations.org. Thanks Anthony.
I have self-snipped most of my snarky followup about this being a government project.

Anthony: I suggest that what you’re observing may be coming not from GISS adjustments but from NOAA via the TOBS adjustment. If NOAA inferred a TOB change with one of the station moves, that could account for the “cooling of the past.”

A station-specific TOBS adjustment should be a discrete permanent shift that ideally offsets the actual shift caused by the change in observation time so that nothing is noticeable in the data. If there wasn’t really a change, or if the adjustment isn’t exactly the right size, a permanent shift might be observable in the data. Since the adjustment here seems to build up gradually, I doubt that it is caused by TOB.
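The distinction is easy to test numerically: a discrete TOB-style adjustment leaves a single permanent step in the raw-minus-adjusted difference series, while a homogeneity hinge ramps gradually. A minimal sketch of that check (the toy data and function name here are my own illustration, not NOAA code):

```python
def step_size(diff, change_year, years):
    """Mean of the difference series after the change minus the mean before it."""
    before = [d for y, d in zip(years, diff) if y < change_year]
    after = [d for y, d in zip(years, diff) if y >= change_year]
    return sum(after) / len(after) - sum(before) / len(before)

# Toy data: a clean -0.3 C permanent step at 1980, the signature of a
# discrete TOBS-style adjustment (a homogeneity hinge would ramp instead)
years = list(range(1950, 2010))
diff = [0.0 if y < 1980 else -0.3 for y in years]
print(round(step_size(diff, 1980, years), 2))  # -0.3
```

On a gradually ramping difference series this statistic spreads across the record instead of landing cleanly at the change year, which is what makes the North Loup pattern look un-TOBS-like.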

Anthony — can you plot the difference between the two series?

The Year 2000 uptick in this and the adjacent series is certainly suspicious.

It’s both the trees and the changes in trees (growth, loss of limbs, changes in types, etc) over time that matters. This almost hopelessly complicates the search for decadal temperature trends of several tenths of a degree.

This is one location, one type of problem. How do we quantify, for just this location, how moves, shadows and tree cover impact the min/max on any given day or over any period of days, much less over the entire record combined, or compared to other locations with other types of problems? Especially given that the purpose of these stations is to track samples of temperature for weather purposes, not to derive climate.

Temperatures recorded to the whole degree can never be accurate to better than 1 degree. Heck, throw in at least a +/- 1 degree margin of error on top of that, if not more.

Make no mistake; blocking sunlight from both the air and the ground around the screen/sensor is not the same as shielding the sensor from direct sunlight, or from any effects of the shielding itself, while leaving the ground and air alone.

Temperatures are already presumed to be in the shade. That is the reason for the thermometers being in a shelter.

And those ‘shelters’ need a view of the open sky … no?

Would not the temperature ‘profile’ (temperature vs time plot) be affected by an ‘obscuring over-cover’?

Are these obvious ‘deviations’ from siting guidelines (with respect to over-cover, ground-cover, etc.), and the resulting inherent inability to make direct apples-to-apples comparisons (vis-a-vis nearby sites, and even temporal comparisons), to just be, well, ignored?

I like a nice, well-designed experiment as much as anyone else; tossing in one more uncontrolled, undefined, experiment-affecting parameter seems like an avenue ending in futility.

It seems to me that thermometers in a Stevenson Screen under shady conditions, such as shown here, would give a better measurement of true ambient temperature than those that are located “correctly” in the sun. Even if you paint the screens bright white and keep them that way, I’ll bet there will be some direct solar heating of the interior of the screen. It would be different, however, if the screen was located in a forest, because forests do actually cool the air through transpiration (another negative water feedback) :)

The point of the samples at 5 feet in the air is to get a reading of the surface air temperature that is indicative of the area and not affected directly by the surface right under the sampling point. If I remember correctly, Gavin said that 50 feet would be a better height but was impractical. The hope, then, is that the measurement location is indicative of the area a few hundred kilometers around it.

So we’re looking for a spot where the measurement device isn’t directly affected by the ground nor by vertical mixing from convective and turbulent transfers. Not one where either the ground or the air are shielded directly themselves by something other than whatever the sun and clouds are doing.

If you’re shading the ground around the sensor in various random ways and coverage amounts, and blocking the air, your reading is contaminated and biased, and you don’t know how. Not the plan of action here.

As far as the housing for the device goes, it doesn’t seem too difficult to figure out how to make the housing neither absorb heat externally nor radiate heat internally, or at least to derive an exact number for what it does by experimentation, like, you know, when it’s designed or at the factory that makes them.

It seems to me that thermometers in a Stevenson Screen under shady conditions, such as shown here, would give a better measurement of true ambient temperature than those that are located “correctly” in the sun.

While this “double-shading” might indeed result in an ideal temperature measurement, it would not a) be compliant with siting guidelines, or b) be easily implemented at every station.

Temp msmt sites in deciduous tree locations would be in compliance only when the trees were leafed out.

Temp msmt sites in desert areas would be ‘hard’ to implement with actual trees. Now come up with an applicable artificial ‘cover’.

The ‘temp msmt sites’ w/shade would need to have the amount of view to the open sky specified; we are having a tough enough time meeting the already minimum siting guidelines.

STANDARDIZATION (over-cover and under-cover, etc.), no matter how seemingly arbitrary, needs to be adhered to.

Seems to me those ‘standards’ exist, it is compliance with those standards that is the problem.

COMPLIANCE is a human-equation thing, a ‘judgement’ call about rules adherence, exercised optionally to the degree possible by those local ‘authorities’ who host said temp msmt site/apparatus at that site.

It seems to me that thermometers in a Stevenson Screen under shady conditions, such as shown here, would give a better measurement of true ambient temperature than those that are located “correctly” in the sun.

But only for the part of the year when the leaves are out. So if one uses a smoothing algorithm that emphasizes some months more than others when calculating an annual average, then the population of readings that are affected by seasonal sun exposure (due to leaves) will be further emphasized in the Northern Hemisphere, and further de-emphasized in the Southern Hemisphere.

I wonder what percentage of sites are affected by seasonal sun exposure.

RE23: Assuming a static cooling bias, yes. But there are several problems.

1) The cooling bias (for Tmax) from shade is unmeasured and unknown. It changes with seasons, tree health, tree-damaging wind, etc. It changes with growth, and it changes with station moves. Applying a linear trend change isn’t representative of the real change.

2) There’s also a warming bias (for Tmin) due to the foliage reflecting IR back to the surface at night. This also changes with seasons, tree health, damaging wind, tree growth, etc.

3) The homogenization adjustment used by GISS is a broad-swath adjustment, affecting many stations within a radius.

4) No mechanism has been proposed or used by GISS for microsite bias adjustments such as we see in this example.

5) Station siting specs call for avoidance of shade as well as of artificial heat sources. GISS uses stations that are both in and out of compliance with these specs without any attention paid to the microsite issues.

If there are so many issues with the placement of temperature stations, why not (for the purposes of global temperature) use only nighttime temperatures? For example, just use the 1am temperature to represent that day’s temp. For purposes of tracking global temp it should still work. It seems analogous to the logic of taking the temperature in the shade. Of course, it won’t address problems like having the station right next to exhaust fans, but nothing short of moving a station will fix that.

If I remember correctly (and it’s been a few months so I may not), GISTEMP adjustments are based on night lights. That is, stations that are not classified as 1/dark for night lights get urban adjustments. Since North Loup is classified as 2/dim it gets a homogeneity adjustment.

I plotted the difference between the raw data and the homogenized data. (The combined data is exactly the same as the raw data). I’m not going to upload the plot, but it’s a classic hinged homogeneity adjustment with a hinge point around 1922. The key adjustments that define the hinge are:

-0.2C in 1892
-0.6C in 1922
+0.0C in 2002

To answer a couple of points from Anthony Watts’ article:

another one of those counter-intuitive downward homogenization adjustments which used the present as a hinge point and made the past cooler

Not quite. The hinge point is actually in 1922. But that’s a minor point.

Why is a station that is classified as rural, with an apparent cooling bias due to tree cover, with a long history containing only a few moves, in a small agricultural town with little growth in the last century get an adjustment

The station is not 1/dark so it gets a homogeneity adjustment, exactly as documented in the GISTEMP procedure. The adjustment is done to make the trend of this station match its 1/dark neighbours.
It’s designed to correct for UHI issues but occasionally makes corrections in the other direction.

—
I’m on the record as believing that stations that require homogeneity adjustments should be removed from the analysis instead of adjusted. I still believe that. However, the effect of the homogeneity adjustment is to match the “bad” station trends to the “good” station trends — it is very similar to simply discarding the “bad” stations altogether.
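As a rough illustration of what "matching the trend to the 1/dark neighbours" amounts to (my own sketch, not the GISTEMP source), one can remove the excess linear trend of a station relative to a reference series, hinged at the present so the past is what gets shifted:

```python
def detrend_to_reference(years, station, reference):
    """Shift `station` so its linear trend matches `reference`, hinged at the last year."""
    n = len(years)
    ybar = sum(years) / n
    den = sum((y - ybar) ** 2 for y in years)

    def slope(series):
        sbar = sum(series) / n
        return sum((y - ybar) * (s - sbar) for y, s in zip(years, series)) / den

    excess = slope(station) - slope(reference)
    # present unchanged; earlier years shifted so the trends agree
    return [s - excess * (y - years[-1]) for y, s in zip(years, station)]

# Toy example: a station warming 0.1 C/yr against a flat reference
years = list(range(2000, 2010))
station = [0.1 * i for i in range(10)]
reference = [0.0] * 10
adjusted = detrend_to_reference(years, station, reference)
```

With these toy numbers the adjusted series comes out flat: the excess warming is removed by raising the past. For a station trending below its neighbours, the same arithmetic cools the past, which is the pattern seen at North Loup.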

Anthony, it would be interesting to see a plot of temperatures from North Loup and its 1/dark neighbours. Does North Loup match the nearby 1/dark stations? Is the agreement better before or after the homogeneity adjustment?

Perhaps I’m misunderstanding what you mean by hinge point, but how can the data adjustment be pivoting about 1922 when the adjustment for 1922 itself is homogeneity adjusted by around -0.5 C? Is there a typo in your comment or are you describing something other than a single hinge point adjustment?

Feh, operator error on my part with PgDn. It helps if I read all of your comment. I see the fine detail of your point. It would not surprise me if the adjustment actually applies to the entire record but rounds to 0 for 2002 and beyond.

Earle Williams:
I was not very clear in describing the adjustment. I’ll try again. It consists of two line segments that meet around 1922. The segment from the beginning of the record until 1922 slopes downward (from -0.2C to -0.6C). The segment from 1922 to the end of the record slopes upward (from -0.6C to 0.0C).

The adjustments are rounded to 0.1C so they have a “stepped” appearance. The current step (0.0C) starts in 1999 and extends to the most recent year in the downloaded data (2006).
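The shape described above can be reproduced with a small function (my reconstruction from John V's numbers, not GISS code):

```python
def hinge_adjustment(year):
    """Piecewise-linear adjustment: -0.2 C (1892) -> -0.6 C (1922) -> 0.0 C (2002),
    rounded to 0.1 C, which produces the stepped appearance in the plots."""
    if year <= 1922:
        # downward segment from -0.2 C in 1892 to -0.6 C in 1922
        adj = -0.2 + (year - 1892) * (-0.6 + 0.2) / (1922 - 1892)
    else:
        # upward segment from -0.6 C in 1922 to 0.0 C in 2002
        adj = -0.6 + (year - 1922) * (0.0 + 0.6) / (2002 - 1922)
    return round(adj, 1)

for y in (1892, 1922, 1962, 2002):
    print(y, hinge_adjustment(y))
```

The 0.1 C rounding means the final 0.0 C step begins several years before 2002, consistent with the observation that it starts in 1999.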

#27
Continued thanks, Anthony and others, for your continued efforts on this project. If nothing else, the project raises many interesting and useful questions. But I’m still wondering where all the “good” stations are that we are adjusting to. I agree with John V. that a closer look at the 1/dark neighbors may be edifying.

Of these nine stations both Saint Paul and Genoa are unadjusted. I presume that means they show up as dark/unlit. The other seven stations are adjusted as shown below. Grand Island, the most urban as shown by GISS, has the least severe adjustment of the adjusted stations. It also shows some unusual adjustments circa 1985-1995.

Adding up all the adjustments and dividing by the number of years, I get an overall adjustment of -0.07 deg C per year, including the unadjusted stations. Describing that in terms of a trend, the adjustments apply a trend of 0.7 deg C per decade to this part of Nebraska.

Trees use transpiration to respire. Transpiration is the evaporation of water from the aerial parts of plants, especially leaves but also stems, flowers and roots. Around a tree the levels of water are higher than in the surrounding area. Will this increase or decrease the measured Tmin?
It will tend to increase Tmin, and so increase (Tmax+Tmin)/2, leading to “global warming”.

There’s a short-circuit in that decadal trend logic, as these adjustments are not cumulative per year. I don’t know what you can take from the average -0.07 deg C per year adjustment beyond recognizing that it is non-zero.

What’s the margin of error? Unless it’s considerably lower than the whole-degree resolution of the samples, none of this discussion of a global mean temperature anomaly trendline has any meaning.

Yes, I realize that combining them all at least gives an indication of what could be happening, but that raises another question: if I measure 56.4 here and write down 56, aren’t I already at a -0.4 margin of error? What if 100 feet away at the same time it’s 57.8? I’m at 58 and +0.2.

But what is the temperature of the area, 56 or 58? Do I take a mean and say it’s 57? What does that really tell me about the general area, if I have a difference of 2 (or 1.4, or whatever) in the air within a 100-foot area, much less when comparing and combining with stations 100 or 1000 yards away?

And that assumes the temperatures I’m measuring for my samples come from unbiased and calibrated instruments in the first place.
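The commenter's rounding arithmetic can be verified directly (using the same toy numbers from the example above):

```python
readings = [56.4, 57.8]                # two simultaneous "true" readings 100 feet apart
recorded = [round(r) for r in readings]
print(recorded)                         # [56, 58]

# quantization error introduced by recording whole degrees
errors = [rec - r for rec, r in zip(recorded, readings)]
print([round(e, 1) for e in errors])    # [-0.4, 0.2]

site_mean = sum(recorded) / len(recorded)
print(site_mean)                        # 57.0
```

Even before any siting issues, whole-degree records carry per-reading quantization errors of up to half a degree, which bounds how finely any derived average can be trusted.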

I can’t find a reason for GISS (or GHCN) to make such a large homogeneity adjustment to this station.
The homogeneity adjustment is supposed to account for discontinuities in an individual station’s monthly record. This is a NASA lights = zero site, yet GHCN shows it as dim. I looked at the location on satellite and it is in a very small rural town, mostly residential (and it’s obvious from the surfacestations.org photos that it is in a residential part of town). MMTS was installed on 6-11-86. There have been no recorded station moves since 1948 when MMS started records on this site.
The time of observation changed several times between 1956 and the present, but that shouldn’t be part of any homogeneity adjustment, and since the time of observation moved back and forth between 0700 and 1700, the adjustment would jump up and down anyway.
The other stations in the area are also very rural. I can’t see a reason for the large negative adjustments that max out in the 1920s. I have to agree with Steve Sadlov in #37 that this is an effort to cool the warm 30’s, something that I would not put beyond Hansen given all of the money and support that he has received from political interests that are trying to use this issue to create new taxes and scare up votes.

Mike C:
The homogeneity adjustment is generally called the UHI adjustment. I believe it is based on NASA’s “lights” value, not on GHCN’s rating, or on brightness index. It is an automated procedure that adjusts any and all stations that are not dark. (Outside the USA48, the rural designation is used but inside the USA48 the night-lights designation is used).

The other adjustments you mention are not included in homogenization. The homogeneity adjustment is not “supposed to account for discontinuities in an individual station’s monthly record” as you state. Its purpose is to remove urban effects.

It’s probably best for this conversation if I ignore your conspiracy theory. I suspect it will soon be snipped anyways.

Adding up all the adjustments and dividing by the number of years, I get an overall adjustment of -0.07 deg C per year, including the unadjusted stations. Describing that in terms of a trend, the adjustments apply a trend of 0.7 deg C per decade to this part of Nebraska.

I guess you mean -0.7 deg/decade. Anyway, that is a LOT, considering the whole Global Warming Disaster is so far only 0.7 C per century. There is something really screwy in these adjustments (I guess we know that, eh?).

You need to review your homogeneity adjustment material. The homogeneity adjustment is supposed to resolve all of the old issues not related to initial QC, TOB and FILNET (missing values). That includes UHI, documented station moves, changes in equipment, equipment failures, etc., as well as those that are undocumented. So many undocumented discontinuities were found in the record, and the metadata was so incomplete, that NCDC changed to this new homogenization process, which is now used by GISS, as stated on their page.
Your conspiracy theory comment is also misplaced. I did not force Hansen to take 250K from the Heinz foundation (that’s right folks, John Kerry’s wife) or 720K in PR support from George Soros (not to mention all of the speaking fees, etc.). Had any scientist taken 1% of that amount from Exxon, they would have been kicked to the curb. I have seen you state your discomfort on this blog about the negative comments about Hansen; you’re gonna have to deal with it, man: Hansen took the money, no one here forced him to. He is part of a political movement and well paid for his part.

Your comment was removed since it was OT. However, since you apparently failed to read the update that was posted, I moved it further up, ahead of the references, where it may be more visible to people like yourself.

If you have issues with my blog’s presentation, address them there, not here.

RE John V “I’m on the record as believing that stations that require homogeneity adjustments should be removed from the analysis instead of adjusted. I still believe that. ”

On that issue we are in agreement. In the case of this small village called North Loup, the adjustment is simply wrong. If it were being adjusted for UHI, the present, not the past, should be cooler. UHI grows in magnitude with time as population grows, buildings are added, and more power consumption occurs.

So why would you make the past cooler to adjust for a UHI that grows in magnitude closer to the present? It makes no sense.

Since we know that GISS algorithms change data in the past, and since we know that the homogenization works on data AFTER infilling, don’t we really have to relook at all this stuff on a monthly basis?

Here’s another example of the consequences when a scientist doesn’t understand statistics. Note that the scientist, who created a theory that had gained consensus within his community, still claims the theory is right even if the method was wrong! Sometimes human behavior is universal.

Hmmm. Have we got history here. Turn of the century; Chicago; the meat packing industry and Meat packing houses.

Start with The Jungle (Upton Sinclair, 1906), which spurred the Pure Food and Drug Act of 1906, which worked to assure that meat, alcohol, etc. were labeled and sold correctly, with animals inspected at slaughter to assure sanitary standards …

Does data deserve any better; how about the public who consumes that data?

We need another Upton Sinclair, (Mosh?) to inspire a writ to be titled and enacted as “The Climate Truth-In-Methods and Data Act”

The homogeneity adjustments are apparently intended to correct for UHI at the various sites. As implemented, the correction factors adjust the trend over time, but the method of ‘hinging’ the corrections to 2002, or any recent year, introduces a significant bias error.

The influence of UHI is to bias the local temperature reading higher because of development near the station. UHI introduces a positive bias to the recorded temperature that should/could be corrected to eliminate that bias error. The way to correct the trend would be to adjust the temperature over time, ‘hinging’ the correction factor to a date when the site was ‘most rural’, and then reducing the temperature with time as development occurs. These adjustments should be zero in the early years and become progressively larger negative values as local development increases the error due to UHI.
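A minimal sketch of that alternative scheme (the anchor year and growth rate below are hypothetical placeholders of my own, not values from any dataset):

```python
def uhi_correction(year, most_rural_year=1920, uhi_growth_per_decade=0.05):
    """Zero before the 'most rural' anchor year; increasingly negative afterwards,
    offsetting UHI warming as development accumulates."""
    if year <= most_rural_year:
        return 0.0
    return -uhi_growth_per_decade * (year - most_rural_year) / 10.0

print(uhi_correction(1900))  # 0.0  -- early record untouched
print(uhi_correction(2000))  # -0.4 -- recent readings corrected downward
```

This is the mirror image of hinging at the present: the early record is left alone and the development-era readings are pulled down.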

It’s no wonder the most recent years are the highest temperatures on record and the trends are increasing with time. The bias error is built into the adjustments.

Point two:
The heat from the sun’s rays being absorbed by the leaves of a tree located well away from the sensor is certainly different from the rays’ energy being absorbed by the box the sensor is housed in. The local air temperature at the sensor would obviously be substantially different because the temperature of the box is different.

The radiant cooling to the night sky would be similarly influenced. Since the average temperatures for the day are evaluated using the min and max recorded levels, the results would be biased on any date with foliage on the trees and clear or lightly clouded skies.

My understanding of the CRS relocation at North Loup is that it occurred at some point after the installation of the MMTS, when the CRS would have been used just as a backup. Note that when in use as the primary instrument, the CRS was located in the current MMTS curator’s yard behind the garage (and likely shaded on the south by a Cottonwood tree, now removed). As is typical, the MMTS was installed closer to a building than the instrument it replaced. In any event, the installation of an MMTS typically results in a microclimate change.

S.M.:

since we know that GISS algorithms change data in the past. and since we know
that the homogenization works on data AFTER in filling, then dont we really have to relook at all this stuff on a monthly basis?

Well, since the data are taken daily, I’d guess that looking at the dailies would be the way to go. Given the bad math associated with averaging daily into monthly and then monthly into annual (months have differing numbers of days, but seem to be given equal weights), getting into more detail for possibly “good climate” temperature stations might result in a better analysis.

If February has a different trend than the rest, then since the monthly-to-annual step gives a heavier weight to the days in February, one gets a (slightly) distorted result. Don’t we have computers to handle this?
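The February point can be checked with quick arithmetic (toy anomaly numbers, non-leap year):

```python
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
# suppose February ran 1.0 C warm and every other month was normal (0.0 C)
monthly_means = [1.0 if m == 1 else 0.0 for m in range(12)]

equal_weight = sum(monthly_means) / 12                               # each month counted equally
day_weight = sum(t * d for t, d in zip(monthly_means, days)) / sum(days)
print(round(equal_weight, 4))  # 0.0833 -- February's 28 days over-weighted
print(round(day_weight, 4))    # 0.0767 -- the properly day-weighted mean
```

The gap is small for one station-year, but it is a systematic distortion that a day-weighted average would remove for free.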

Since the Childs, AZ station (being in the desert and away from town over its whole life) seemed a probable “good climate” site compared to many, I plotted up the monthlies to see just what adjustments had been performed in the handling of the data at NOAA before GISS starts its manipulations.

It’s interesting that there appear to have been seasonal adjustments, along with other adjustments that should have a good explanation.

I started to look at the dailies from the .pdf’d log sheets, but that effort is on the back burner for a couple of months.

In any event, it would be beneficial to select a few likely good stations and start with the daily data and follow the data manipulation along the whole gauntlet. (It’d be hard to beat such an approach with a stick.)

Eric # 61,
When Cotton Region Shelters first began being changed to MMTS, discontinuities were noticed in many of the temperature records. This was studied, and an average adjustment (the MMTS adjustment) was applied to all stations that changed, per Quayle et al:

This adjustment is no longer used. It has been replaced with the homogeneity adjustment, which is meant to catch all discontinuities, whether documented or not. This is done by comparing the station’s record to that of 10 to 40 nearby stations. This thread provides an example of how the homogeneity adjustment has problems, in this case with UHI. Another example, a station move, is the L.A. station here: http://www.climateaudit.org/?p=2935#comments

This paper is worth a thread in itself. I’ve only read the abstract. Anyone read the whole thing and can comment? It looks from the abstract as if Jones’ position on Chinese UHI may have serious problems when subjected to detailed study, and as if the divergence between the US and ROW instrumental record may have the beginnings of an explanation. Not all that surprising, given the pace of industrialization, but if confirmed, it’s something of a bombshell.

“On that issue we are in agreement. In the case of this small village called North Loup, the adjustment is simply wrong. If it were being adjusted for UHI, the present, not the past, should be cooler. UHI grows in magnitude with time as population grows, buildings are added, and more power consumption occurs.

So why would you make the past cooler to adjust for a UHI that grows in magnitude closer to the present? It makes no sense.”

But if you only adjust stations that warmed, and don’t adjust stations that cooled, then you are introducing a cooling bias into the adjustment. Developed sites usually get warmer over time, but some percentage of them will have local effects that cause cooling over time. If one is using a blanket adjustment for all developed sites, one must allow both directions of adjustment, or one is introducing an intentional bias.

A reason to automate the homogeneity adjustments is lack of manpower. Given that these procedures were put in place before all of this became so life-or-death, perhaps that needs to be reevaluated. Let us say that a person could figure out the proper adjustments for an individual station (for changed instruments, moves, UHI, etc.) in about a week. It would then take six people only four years to fix the whole US network. That hardly seems like an insurmountable cost. With Anthony’s and Steve’s audits, they would have a head start!
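The arithmetic behind that estimate checks out if one assumes the commonly cited figure of roughly 1,221 USHCN stations (a number not stated in the comment itself) and one analyst-week per station:

```python
# Back-of-envelope check of the manpower estimate above, assuming
# ~1221 USHCN stations (a commonly cited network size, not given in
# the comment) and one analyst-week per station.
stations = 1221
people = 6
weeks_per_year = 52
years = 4
capacity = people * weeks_per_year * years  # total station-weeks
print(capacity, capacity >= stations)       # -> 1248 True
```

Six people working four years supply 1,248 station-weeks, just enough to cover the network once.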

Ya know, as someone who’s done some research at the university level, all this really bothers me. We would never, and I mean never, have been allowed to manipulate the data to this degree while trying to draw conclusions from it. First of all, the U.S. has temperate, arid, subtropical, and continental climate zones. It goes from mountains to vast plains. How in the world are you going to homogenize the data across all of those zones? Why even try? If you want to compare urban to rural countryside, then compare them. Don’t try to turn urban readings into countryside readings and vice versa. All you are doing is confusing the data. The whole exercise is looking increasingly pointless. This data has been tortured to the point of being water-boarded!

That is a novel idea, except it does not apply to this station. The adjustment suggests that this station artificially warmed from 1890 to 1920 and then artificially cooled from 1930 to 2000. There is nothing in the metadata or in the surrounding environment to support such an adjustment. This does, however, provide evidence that the 1930s were artificially cooled in order to eliminate a natural signal from the record.

<blockquote>4) No mechanism has been proposed or used by GISS for microsite bias adjustments as we see in this example.</blockquote>

And that’s the point, isn’t it? Not only the point for the specific issue at hand but for the microsite issue writ large.

I have run the numbers on the 502 surveyed stations, according to their minimum estimated CRN violation ratings. And I will note further that Anthony Watts is VERY lenient when it comes to those violations; it seems to me from the photos that the actual violations are worse than he says.

The aggregate measure across all surveyed stations is a 2 degrees C warm bias.

1.) This does NOT include UHI.

2.) Most of these violations occurred or worsened AFTER 1980, mainly because of the MMTS switchover, but also because of exurban creep bringing with it both waste heat and massive heat sink violations as defined in the CRN handbook.

It is obvious that the CRN rating estimates are not in serious error, as has been suggested on other blogs: we have all seen the jumps in temperature as the sites have been changed for the worse.

Not only that, but FILNET spreads the bias around, thus masking it.

And the UHI effect is calculated against often badly compromised rural stations, thus resulting in a severe lowballing of UHI.

This post and the comments are a bit confusing because they fail to distinguish whether the documented adjustment procedures were applied correctly but the procedures themselves are not valid, or whether the adjustments are just completely inexplicable.

I may have to do the work myself. There have been references made to the standard adjustment procedures applied by GISS, but I can’t seem to find the canonical source for this information. Can someone provide a link?

what I found was another one of those counter-intuitive downward homogenization adjustments

Is the GISTEMP adjustment mechanism not documented? It certainly appears that the source code is available, and from what I understand, it’s very much possible for there to be positive temperature adjustments at some stations for some time periods using their adjustment mechanism (which doesn’t seem to have a “hey, let’s make 1930 colder” step).

As to the technique they use, my understanding is that they apply an adjustment based on the fact that climate variations measured at stations close in proximity are highly correlated. And on the whole, the homogenization technique was designed to adjust for UHI, and it should have the net result of reducing the severity of the global warming trend, not increasing it.

It would be fine to question the adjustment technique, but unfortunately that wasn’t the topic of this post; in fact, I’m not sure what the scientific point was. But I’m not impressed that someone who I think knows better would go around pointing out the counter-intuitive nature of the results of the homogenization technique in an attempt to imply to the layman that it is at best an incorrect thing to do and at worst, [snip]

I’m not an expert, and the early downward adjustment certainly baffled me at first, until I researched it. And judging by the comments, nearly all of the commenters have been suckered in by this trick as well.

Got to eliminate (or at least reduce) those warm 1930s.

Why the wild speculation about whether arbitrary adjustments were made? If you have all the evidence available to prove whether something was done incorrectly, why don’t you just prove it!?

Let me summarize: People are going to the effort of driving to take pictures of all of these stations (a laudable effort indeed!). But instead of doing the small amount of incremental work to actually analyze whether the adjustment technique was correctly applied, the blogger tries to make a big deal out of the “counter-intuitive” nature of the adjustment. There’s something fishy going on at this blog.

As long as we’re speculating wildly, let’s try this one: I’ll speculate that in the critical analysis of these data adjustments, the bloggers _have_ actually run the software themselves and _already_ know that the data adjustments were applied correctly to this station’s data but chose to let people infer that they might not have been.