GISS “raw” station data – before and after

I’ve been following this issue for a few days, looking at a number of stations, and had planned to make a detailed post about my findings. But WUWT commenter Steven Douglas recently posted in comments about this curious change in GISS data, and it got picked up by Kate at SDA, which necessitated my commenting on it now. This goes back to the beginning days of surfacestations.org in June 2007 and the second station I surveyed.

Remember Orland? That nicely sited station with a long record?

Note the graph I put in place in June 2007 on that image.

Now look at the graph in a blink comparator showing Orland GISS data plotted in June 2007 and today:

NOTE: on some browsers, the blink may not start automatically – if so, click on the image above to see it

The blink comparator was originally by Steven Douglas; however, he made a mistake in the “after” image, which I have now corrected. What you see above is a graphical fit via bitmap alignment and scaling of the images, which is why the dots and lines appear slightly smaller in the “after” image. I don’t have the 2007 GISS Orland data handy at the moment, but I did have the GISS station plots of Orland from that time, plus the current plots downloaded from the GISS website today. If I locate the prior Orland data, I’ll redo the blink comparator.

I believe this blink comparator representation accurately reflects the change in the Orland data, even if the dots and lines aren’t exactly the same thickness.

Douglas writes in his notice to me:

It appears that RAW station plots are no longer available, although NASA GISS (Hansen et al.) doesn’t put it quite that way. Here is the notice on their site:
Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).

I don’t know about the “renamed” option, but the RAW data appears to be NO LONGER AVAILABLE.

Here’s a detailed blink comparison of Orland. All their options now give you an “adjusted” plot of some kind. The “AFTER” in this graph shows the “adjustments” to Orland.

Here is what the GISS data selector looks like now, yellow highlight mine, click to enlarge:

And here is another blink comparator of Orland raw -vs- homogenized data posted by surfacestations.org volunteer Mike McMillan on 12/29/2008:

click for full size

And here is the “raw” GISS data for Orland today. Note that the vertical scale is now different: since the pre-1900 data has been removed, the GISS plotting software autoscales to the most appropriate range:

This raises a number of questions. For example: Why is data truncated pre-1900? Why did the slope change? The change appears to have been fairly recent, within the last month. I tried to pinpoint it using the “wayback machine,” but apparently this notice:

Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).

appears to span the entire “wayback machine” archive, even prior to 2007. If anyone has a screen cap of this page prior to the change, or can help pinpoint the date of the change, please let me know.

It is important to note that the issue may not be with GISS, but upstream, in the GHCN data managed by NCDC/NOAA. Further investigation is needed to find out where the main change has occurred. It appears this is a system-wide change.

“Nov. 14, 2009: USHCN_V2 is now used rather than the older version 1. The only visible effect is a slight increase of the US trend after year 2000 due to the fact that NOAA extended the TOBS and other adjustment to those years.
Sep. 11, 2009: NOAA NCDC provided an updated file on Sept. 9 of the GHCN data used in our analysis. The new file has increased data quality checks in the tropics. Beginning Sept. 11 the GISS analysis uses the new NOAA data set. ”

Well, this just illustrates the well-established and documented trend that late-19th and early-20th-century temperatures have been plummeting for years, at ever-increasing rates. Soon the ministry of truth will start photoshopping snowflakes into old historical photographs, such as the Wright brothers’ flight at Kitty Hawk.

Very interesting. I was noticing many of these changes (where your archived plots on surfacestations.org do not match the current GISS plots) when I was gathering the GISS data for StationLab. It was on my list to ask you about, if you were interested in my StationLab.

I always find it laughable how they pretend that they have made adjustments to improve the accuracy, when analysis shows they have done just the opposite. All these examples (as well as many previously discussed in relation to the surfacestations audit) show a lowering of historic temperatures and an elevation of modern temperatures. Of course, when trying to remove the UHI effect from the record, that is exactly the opposite of what should be done.

“Nov. 14, 2009: USHCN_V2 is now used rather than the older version 1. The only visible effect is a slight increase of the US trend after year 2000 due to the fact that NOAA extended the TOBS and other adjustment to those years.
Sep. 11, 2009: NOAA NCDC provided an updated file on Sept. 9 of the GHCN data used in our analysis. The new file has increased data quality checks in the tropics. Beginning Sept. 11 the GISS analysis uses the new NOAA data set. ”

REPLY: I had suspected as much, this is a likely source of the change, thanks for providing that notice. – Anthony

I’ve had it. GISS and CRU CAN’T be THIS incompetent. They KNOW that over the last 20 years, as the gatekeepers of the ‘data’, their continuous massaging of the numbers through ‘adjustments’ and dumping of ‘raw’ data records has allowed them to effectively manufacture their own data to match up with their own models. It IS a scam, not just incompetence. This is positively Orwellian, but it is really happening.

Ok, new version, same data. Why is the data incompatible with the version instead of the other way around? Is there a legit reason pre-1900 data showing more warmth has to be truncated from the record because of the version switch? I don’t get how that is an explanation, I guess.

I think it’s time we got some lawyer with a conscience to file a class action suit for us. This hiding of data that we in the US paid for is criminal – or it should be if it isn’t. These people should be fired AND prosecuted.

I made a download of raw data on 9-12-09 for the Dutch stations. This is now no longer available, so it seems that this adjustment was made yesterday or today. The data that is available now is really different.

And while I am on this, is there somewhere that one can acquire the station data that includes Tmin and Tmax? All I have found so far is the monthly means. I would like to look at data on a daily basis, but I can’t find it. Does it exist?

Back in 2007 Douglas Keenan was planning to file a fraud claim against Dr. Wei-Chyung Wang and his co-author over a paper they had written on rural/urban temperatures in China. The paper allegedly showed that there was no difference between temps in cities and in the rural villages, but Wang had been unable to produce any clear data on the stations used. Phil Jones was the co-author. Here’s a letter from Ben Santer to Jones.

Sorry about the delay in replying to your email – I’ve been out of my
office for a few days.

This is really nasty stuff, and I’m sorry that it’s happened to you. The
irony in this is that you are one of the most careful and thorough
scientists I know.

Keenan’s allegations of research misconduct, although malicious and
completely unfounded, clearly require some response. The bottom line is
that there are uncertainties inherent in measuring ANY properties of the
real-world climate system. You’ve probably delved deeper than anyone
else on the planet into uncertainties in observed surface temperature
records. This would be well worth pointing out to Mr. Keenan. The whole
tenor of the web-site stuff and Keenan’s garbage is that these folks are
scrupulously careful data analysts, and you are not. They conveniently
ignore all the pioneering work that you’ve done on identification of
inhomogeneities in surface temperature records. The response should
mention that you’ve spent much of your scientific career trying to
quantify the effects of such inhomogeneities, changing spatial coverage,
etc. on observed estimates of global-scale surface temperature change.

Will someone with some scientific credentials PLEASE get on television with some charts that lay people can understand and that expose this DATA manipulation? The alarmists keep beating the drum that the temperature is rising at an alarming rate, and no one is countering with understandable criticisms and evidence that the data was mysteriously adjusted upwards. This would make a huge difference in the credibility of their baseline argument.

There are many good points to focus on, but pick one such as this and drive it home.

As part of this, the GHCN data that feeds GISS/CRU/NOAA is not truly “raw” either. Note that the GISS station selector yields multiple sources for various locations. You can get the monthly data for those various sources in the GHCN files, but the archived daily data is not broken out that way; it has them combined.

This is important, because monthly means calculated using the archived daily data are DIFFERENT than what appears in the monthly file – sometimes by as much as 2 degrees C.

Note that the GHCN data has 2 versions, raw and adjusted. These comments apply to the RAW version. In other words, the raw version is not truly raw, either.
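The size of that discrepancy is easy to illustrate. A minimal sketch, with made-up numbers standing in for real station values, comparing a monthly mean computed from daily observations against the value archived in the monthly file:

```python
# Hypothetical values: compare a monthly mean computed from daily
# observations against the value found in the GHCN monthly file.
daily_means = [14.2, 15.1, 13.8, 16.0, 15.5]   # stand-in for a month of daily means
computed = sum(daily_means) / len(daily_means)  # 14.92 C

archived = 16.9  # stand-in for the archived monthly value

print(f"computed {computed:.2f} C, archived {archived:.2f} C, "
      f"difference {archived - computed:+.2f} C")
```

With these invented numbers the difference is about 2 °C, the size of discrepancy described above.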

All this talk about how the CRU “homogenized” the data to remove inconsistencies brought forward a comment from a reader. He pointed out that the data wasn’t just homogenized; it was also pasteurized: heat was added until the data was completely sterile!

USHCN v1 did have some downward trend adjustments, but twice as many upward. The effect was to change the US station average (equally weighted) from +0.14C per century (raw) to +0.59C per century (full FILNET adjustment).

I’ve started doing some research on what sort of adjustments are done when the data is homogenized … mainly with Pennsylvania stations … so far about half the stations show a step ladder of adjustments … i.e. greater adjustments down in the early years, 1900-ish, stepping up to zero adjustments for the last decade …
__
|
—
|
—
|
—

keep in mind these adjustments are down … if they were adjusting for UHI then we would expect the most recent adjustments to be the highest …

seems like a neat trick to get the slope to be steeper … don’t raise the near term temps, just lower the older temps …
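The slope effect described above can be demonstrated with synthetic data: apply a hypothetical step-ladder of downward adjustments to the early years of a perfectly flat record, and the fitted trend steepens even though no recent temperature was raised. The step sizes and dates below are invented for illustration, not taken from any station:

```python
import numpy as np

years = np.arange(1900, 2000)
raw = np.full(years.size, 15.0)  # flat synthetic record, 15 C every year

# hypothetical step-ladder adjustment: -0.3 C before 1930, -0.2 C before 1960,
# -0.1 C before 1990, zero for the last decade
adjustment = np.where(years < 1930, -0.3,
             np.where(years < 1960, -0.2,
             np.where(years < 1990, -0.1, 0.0)))
adjusted = raw + adjustment

# linear trends in C per century
slope_raw = np.polyfit(years, raw, 1)[0] * 100
slope_adj = np.polyfit(years, adjusted, 1)[0] * 100
print(f"raw trend {slope_raw:+.2f} C/century, adjusted {slope_adj:+.2f} C/century")
```

The raw record has zero trend; after the stepped adjustment the fit shows roughly +0.3 C/century of warming, without any recent temperature having been touched.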

If you are looking for another ‘odd’ set of stations- check out Valencia and Castellon-Almazora in Spain. They are 30 miles apart and both about 2 miles from the Med. But Valencia is a big city and Castellon a much smaller town.

Three things. First, the unadjusted data are in close agreement up until about 1972, at which point they diverge by over a degree, with Valencia much warmer (urban effect?).

Secondly, if you look at the adjusted data, they are suddenly in much closer agreement after 1972, but it is Castellon that has been adjusted UP, not Valencia down.

Finally- if you look at Valencia by itself between 1900 and 1950, you will see that they have adjusted those temps downward a full degree across the board.

Now there may be very good reasons for these adjustments (which obviously should be published; neither of the stations appears to have moved, at least according to the historical change db), but at first blush it would appear Valencia has an urban heat issue, and instead of correcting it, NOAA corrected the nearest neighbor upwards to match. Moreover, a large adjustment to Valencia’s temps in the first half of the century has created (surprise) a hockey stick that doesn’t exist in its neighbor, and much less so even allowing for the potential urban heating uptick.

(Also check out Alicante-Ciudad, the next town south, which was also adjusted way down in the 50s to produce a similar hockey stick. I’d really like to know why Valencia supposedly jumped a degree in temperature when none of her neighbors show this in their unadjusted data.)

It has come to the point where we need to go back and dig up the paper records to be sure the information has been faithfully copied. Are images of station forms available? Or, has the paper conveniently dissolved?

Great work, Anthony. The self-referential nature of “climate science” as practiced by the AGW proponents (using tax money) is appalling. It appears that reality is being “adjusted” to fit the models, which of course, yield the desired outcomes. The individuals and organizations engaging in apparent data manipulation, data destruction and secrecy need to be held accountable.

Have we reached a point where to audit the basic data of this debate credibly we must return to paper records and carefully deliberate each of the needed adjustments in public; while at the same time reviewing the quality of those paper records from the standpoint of station maintenance and UHI effect? Are we at this point?

I wish I were retired already and had the time to begin such an undertaking.

GISS data is full of this kind of stuff – trying to document it (with others) at present.

I have a previous blogpost from last month on adjustments – a GIStemp ‘Hall of Shame’. I have posted links here previously, but will do so again as it is directly relevant (Climate Fast Food)

Anthony, if it is of interest, you may use it. Us new bloggers don’t get much traffic. Writing it has been a result of learning about climate stuff from WUWT over the last two years and then finding out in greater detail through what E.M. Smith has been doing.

After seeing the adjusted temperature data for Orland, I realize that it is not as cold as I thought. Snow has appeared briefly on the front lawn twice since 1992 and was sorta expected this morning, given the overnight temperatures of recent days. All that happened was rain, beginning around 1:00 am to 1:30 am, and a relatively warm temperature due to the “green-house effect” of cloud cover. Since the Terminator signed CO2-limiting legislation, the effectiveness of the CO2 “green-house effect” here in California has been nil, nada, zilch, zero.

Jeff,
Please post which stations and summarize if you can. This stepwise adjustment is showing up everywhere with the same characteristics.

In every case so far (10 in CA, 1 in Nevada, 1 in Arizona, 1 for Calgary), the steps are in 10ths or some multiple and the overall curve is adjusted to moderate the early 20th century thereby increasing the apparent warming and rate in the late 20th.

The net result has been to push these toward a common curve: each station’s adjustments have been very mechanical and unique, but the same curve results if you fit a 6th-order polynomial in Excel.

I’ll be very interested if you see the same or something different.

Anthony…good work. We may have layers of unexplained adjustments going on.

Agree with Kevin K – starting over with the paper records is the path of real science now. Surfacetemps.org is (from memory on the Darwin temps thread comments) registered, now what’s needed is the open-source (and Transactional, please!) rebuild of those records. Let’s see what my trusty back of the ciggy packet calculator says about the size of this here job:

15,000 stations (from aging memory – E M Smith has station counts).
160 years of obs in the worst case
2 obs per day

Why, that’s only – um – about 1.75 billion data points.

Divide that over the number of hits on WUWT, and that’s around 60 data points per hit.

Big job, but someone has to do it. The ‘professionals’ clearly aren’t up to the task…

REPLY: we really don’t need to do all stations – the 1218 in USHCN would work, plus a few thousand in GHCN, but there are no paper records of those. – Anthony
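The back-of-the-ciggy-packet tally above can be redone in a few lines (the station count and record length are the commenter’s rough figures, not audited numbers):

```python
stations = 15_000     # rough worldwide station count (commenter's estimate)
years = 160           # worst-case record length
obs_per_day = 2       # Tmax and Tmin
data_points = stations * years * 365 * obs_per_day
print(f"{data_points:,} data points")  # 1,752,000,000 – about 1.75 billion
```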

Why don’t one of the “skeptics” just bring two placards to the studio the next time they are booked for a primetime interview or debate with a warmer?
My suggestion would be to hold up two Willis Eschenbach graphs as follows:

1) Here is the raw temperature data
2) Here is the same temperature data on drugs!

I can’t think of a better visual to drive home the point that the data has been corrupted.

I have previously posted this on WUWT but it seems desirable to post it again here.

It demonstrates that 6 years ago The Team knew the estimates of average global temperature (mean global temperature, MGT) were worthless and they acted to prevent publication of proof of this.

The most important email among those hacked (?) from CRU may turn out to be one that I wrote 6 years ago. I had forgotten it, but Willis Eschenbach found it among the hacked (?) emails and circulated it. I copy it here, then explain its meaning and significance.

The excuses seem to be becoming desperate. The unjustified assertion that I fail to understand “Myles’ comments and/or work on trying to detect/attribute climate change” does not stop the attribution study being an error. The problem is that I do understand what is being done, and I am willing to say why it is GIGO.

Myles Allen said;
In a message dated 19/11/03 08:47:16 GMT Standard Time, m.allen1@physics.ox.ac.uk writes:
“I would just like to add that those of us working on climate change detection and attribution are careful to mask model simulations in the same way that the observations have been sampled, so these well-known dependencies of nominal trends on the trend-estimation technique have no bearing on formal detection and attribution results as quoted, for example, in the IPCC TAR.”

I rejected this saying: At 09:31 21/11/2003, RichardSCourtney@aol.com wrote:
“It cannot be known that the ‘masking’ does not generate additional spurious trends. Anyway, why assume the errors in the data sets are geographical and not?. The masking is a ‘fix’ applied to the model simulations to adjust them to fit the surface data known to contain spurious trends. This is simple GIGO.”

“Richard’s statement makes it clear, to me at least, that he misunderstands Myles’ comments and/or work on trying to detect/attribute climate change.

As far as I understand it, the masking is applied to the model to remove those locations/times when there are no observations. This is quite different to removing those locations which do not match, in some way, with the observations – that would clearly be the wrong thing to do. To mask those that have no observations, however, is clearly the right thing to do – what is the point of attempting to detect a simulated signal of climate change over some part of (e.g.) the Southern Ocean if there are no observations there in which to detect the expected signal? That would clearly be pointless.”

Yes it would. And I fully understand Myles’ comments. Indeed, my comments clearly and unarguably relate to Myles’ comments. But, as my response states, Myles’ comments do not alter the fact that the masked data and the unmasked data contain demonstrated false trends. And the masking may introduce other spurious trends. So, the conducted attribution study is pointless because it is GIGO. Ad hominem insults don’t change that.

And nor does the use of peer review to block my publication of the facts of these matters.

Richard

The great importance of the matter in the quoted email may not be apparent to some. Therefore, I provide this brief background explanation.

Climate change ‘attribution studies’ use computer models to assess possible causes of global climate change. Known effects that cause climate change are input to a computer model of the global climate system, and the resulting output of the model is compared to observations of the real world. Anthropogenic (i.e. man-made) global warming (AGW) is assumed to be indicated by any rise in average global temperature (mean global temperature, MGT) that occurred in reality but is not accounted by the known effects in the model.

Clearly, any error in determinations of changes to MGT provides incorrect attribution of AGW.
The various determinations of the changes to MGT differ and, therefore, there is no known accurate amount of MGT change. But the erroneous MGT change was being input to the models (garbage in, GI) so the amount of AGW attributed by the studies was wrong (garbage out, GO) because ‘garbage in’ gives ‘garbage out’ (GIGO). The attribution studies that provide indications of AGW are GIGO.

I and others tried to publish a discussion paper that attempted to explain the problems with analyses of MGT. We compared the data and trends of the Jones et al., GISS and GHCN data sets. These teams each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends. Since all three data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers), and purport to be the same metric (i.e. MGT anomaly), this is surprising. Clearly, the methods of compilation of MGT time series can generate spurious trends (where ‘spurious’ means different from reality), and such spurious trends must exist in all but at most one of the data sets.

So, we considered MGT according to two interpretations of what it could be; viz.

(i) MGT is a physical parameter that – at least in principle – can be measured;
or
(ii) MGT is a ‘statistic’; i.e. an indicator derived from physical measurements.

These two understandings derive from alternative considerations of the nature of MGT:

If the MGT is assumed to be the mean temperature of the volume of air near the Earth’s surface over a period of time, then MGT is a physical parameter indicated by the thermometers (mostly) at weather stations that is calculated using the method of mixtures (assuming unity volume, specific heat, density etc). We determined that if MGT is considered as a physical parameter that is measured, then the data sets of MGT are functions of their construction. Attributing AGW – or anything else – to a change that is a function of the construction of MGT is inadmissible.

Alternatively:

If the thermometers (mostly) at weather stations are each considered to indicate the air temperature at each measurement site and time, then MGT is a statistic that is computed as being an average of the total number of thermometer indications. But if MGT is considered to be a statistic then it can be computed in several ways to provide a variety of results, each of different use to climatologists. In such a way, the MGT is similar in nature to a Retail Price Index, which is a statistic that can be computed in different ways to provide a variety of results, each of which has proved useful to economists. If MGT is considered to be a statistic of this type, then MGT is a form of average. In which case, the word ‘mean’ in ‘mean global temperature’ is a misnomer, because although there are many types of average, a set of measurements can only have one mean. Importantly, if MGT is considered to be an indicative statistic then the differences between the values and trends of the data sets from different teams indicate that the teams are monitoring different climate effects. But if the teams are each monitoring different climate effects then each should provide a unique title for their data set that is indicative of what is being monitored. Also, each team should state explicitly what its data set of MGT purports to be monitoring.

Thus, we determined that – whichever way MGT is considered – MGT is not an appropriate metric for use in attribution studies.

However, the compilers of the MGT data sets frequently alter their published data of past MGT (sometimes they have altered the data in each of several successive months). Hence, our paper always contained incorrect MGT data because the MGT data kept changing. The MGT data always changed between submission of the paper and completion of the peer review process. Thus, the frequent changes to MGT data sets prevented publication of the paper.

Whatever you call this method of preventing publication of a paper, you cannot call it science.

But this method prevented publication of information that proved the estimates of MGT and AGW are wrong and the amount by which they are wrong cannot be known.

It should also be noted that there is no possible calibration for the estimates of MGT. The data sets keep changing for unknown (and unpublished) reasons although there is no obvious reason to change a datum for MGT that is for decades in the past. It seems that the compilers of the data sets adjust their data in attempts to agree with each other.

Methods to correct these problems could have been considered 6 years ago if publication of my paper had not been blocked.

Additionally, I point out that the AGW attribution studies are wrong in principle for two reasons.

Firstly, they are ‘argument from ignorance’.

Such an argument is not new. For example, in the Middle Ages experts said, “We don’t know what causes crops to fail: it must be witches: we must eliminate them.” Now, experts say, “We don’t know what causes global climate change: it must be emissions from human activity: we must eliminate them.” Of course, they phrase it differently saying they can’t match historical climate change with known climate mechanisms unless an anthropogenic effect is included. But evidence for this “anthropogenic effect” is no more than the evidence for witches.

Secondly, they use an attribution study to ‘prove’ what can only be disproved by attribution.

In an attribution study the system is assumed to be behaving in response to suggested mechanism(s) that is modelled, and the behaviour of the model is compared to the empirical data. If the model cannot emulate the empirical data then there is reason to suppose that the suggested mechanism is not the cause (or at least not the sole cause) of the changes recorded in the empirical data.

It is important to note that attribution studies can only be used to reject the hypothesis that a mechanism is a cause of an observed effect. The ability to attribute a suggested cause to an effect is not evidence that the suggested cause is the real cause, in part or in whole. (To understand this, consider the game of Cluedo. At the start of the game it is possible to attribute the ‘murder’ to all the suspects. As each piece of evidence is obtained, one of the suspects can be rejected because he/she can no longer be attributed with the murder.)

But the CRU/IPCC attribution studies claim that the ability to attribute AGW as a cause of climate change is evidence that AGW caused the change (because they only consider one suspect for the cause although there could be many suspects both known and unknown).

Then, in addition to those two pieces of pure pseudo-science – as my paper demonstrates – the attribution studies use estimates of climate changes that are known to be wrong!

This does not give confidence that the MGT data sets provide reliable quantification of change to global temperature.

My first post here as I’m just an average punter (and voter) but I read this site daily as it is an education of the best sort. My question.

How many climate scientists rely on this ‘adjusted’ data believing it to be a solid foundation upon which they then do their thing?

My natural scepticism is firming up daily, and amongst the general public I am not alone. However, probably the last remaining barrier to total disbelief is that I can’t believe that so many serious scientists are ‘on the take’. If, however, the base science they are relying upon is seriously flawed, then I would expect to see an increasing number standing up to be counted.

Probably a lame question.

Randy

REPLY: The answer is, almost all of them. There are very few papers questioning the data integrity. Most take the data at face value, never questioning the measurement environment and the data procedures. I didn’t start questioning it myself until Spring of 2007. – Anthony

That’s what Anthony and Steve M. have been doing for a few years. Those are huge undertakings and we all owe them, big time.

I realise this, and I intended no slight to Anthony’s and McIntyre’s efforts, but the last time I checked surfacestations.org it seemed that only about one-fourth of the U.S. stations had gotten a thorough vetting, I have no idea where the UHI project currently stands, and problems with the various data sets available on the internet seem to be multiplying faster than the vetting process can identify them. So, where does data credibility stand at this point?

Anthony:
I have some tools that can easily parse all station data for hundreds of stations and write the temperatures into Excel-compatible files. Each column of the spreadsheet represents a single station and includes all temperature data for that station.

I have a copy of the raw GHCN archive (v2.mean.zip) from September 12, 2007 and from today. I plan to generate spreadsheets from the current archive and the old archive to see if there are differences in the raw GHCN data.
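For anyone wanting to run a similar diff, here is a minimal parser sketch for v2.mean lines, assuming the usual fixed-width layout (a 12-character station id including the duplicate-record digit, a 4-digit year, then twelve 5-character monthly values in tenths of °C, with -9999 marking missing months); check this against the README shipped with the archive before relying on it. The sample station id and values below are invented:

```python
# Sketch of a v2.mean line parser (layout assumed; verify against the
# README distributed with the GHCN archive).
def parse_v2_mean_line(line):
    station = line[0:12]           # country + WMO + modifier + duplicate digit
    year = int(line[12:16])
    temps = []
    for m in range(12):
        raw = int(line[16 + 5 * m : 21 + 5 * m])
        temps.append(None if raw == -9999 else raw / 10.0)  # tenths of C
    return station, year, temps

# hypothetical record: Jan = 1.2 C, Feb missing, Mar-Dec = 15.0 C
line = "4257259100011998" + "   12" + "-9999" + "  150" * 10
station, year, temps = parse_v2_mean_line(line)
print(station, year, temps[0], temps[1])
```

Parsing both the 2007 and current archives this way, station by station, would make the raw-data differences directly comparable in a spreadsheet.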

Does anyone have a copy of v2.mean.zip from GHCN prior to September 2007?


I have from time to time gone looking for climate data on-line or on CDRom or some other format, and I am often surprised at the variety of formats available. If one had no access to paper records for the GHCN stations, then what would be the next best alternative? Are all of the various data products derived from the same underlying data, so that with, say, SOD records from GHCN stations, one could be pretty well assured of having the raw GHCN daily maximum, minimum and average temperatures?

Coincidentally, I often use the term “homogeneity adjustment” as a colloquialism for the word “trick.”

Fix a leaky faucet: “That’ll do the homogeneity adjustment.”
On stage: “And now for my next homogeneity adjustment…”
Tax incentives: “I believe in homogeneity adjustmentle down economics”
Halloween: “Homogeneity adjustment or Treat!” (ALWAYS gets the treat)
Old saying: “You can’t teach an old dog new homogeneity adjustments.”
At a hockey game: “My son scored a Hat-Homogeneity adjustment”
Hot Rods: “His ride is totally homogeneity-adjustmented OUT!”
TMZ: “Seems some of Tiger’s women may have been turning homogeneity adjustments.”
Confidence man: “A real homogeneity adjustmenster.”
Common practice: “Homogeneity adjustment of the trade.”
Playing a practical joke: “Ah Ahhh! Homogeneity adjustmented you!”
Catching on to a joke: “I’m not falling for that homogeneity adjustment.”

Finding the UK GISS records were seriously tampered with in similar ways to all this, I prepared a page… it has been sitting in the backwaters but I think it belongs here with everything now coming out about the GISS records. Here it is. Here too, in the British Isles, GISS have been adjusting to INCREASE the trend… consistently.

Anthony
…How many climate scientists rely on this ‘adjusted’ data believing it to be a solid foundation upon which they then do their thing?…

Randy

REPLY: The answer is, almost all of them. There are very few papers questioning the data integrity. Most take the data at face value, never questioning the measurement environment and the data procedures. I didn’t start questioning it myself until Spring of 2007. – Anthony

This lack of questioning data integrity extends to a lot of other geophysical data as well. The “Palmdale Bulge” is a pertinent example. And the use of borehole temperature records to reconstruct past climate is rife with such problems. This is why I am now interested in the status of the UHI project that Anthony proposes on the “projects” page of this site. It looks like an interesting approach to this pesky issue, which will still plague the credibility of the data even if all the other issues of station maintenance and data adjustment are resolved.

Richard
…
These teams each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends….

Now why wouldn’t anyone notice allegedly independent estimates of the same quantity that differ by two times their respective 95% confidence intervals? 95% means something specific and to differ by two times such an interval is highly improbable. If Richard is right about this, and I have interpreted what he says correctly, why didn’t more alarm bells go off? This is exactly the type of data consistency issue that eventually deflated the “Palmdale Bulge”.

The extent of the alterations is breathtaking when you see them all together, and keep in mind, this is USHCN Raw data, not GISS’s Homogenized versions. Whoever did this is making Dr Hansen look like a piker.

These alterations have also affected the few GISS homogenized charts I’ve examined.

This is getting serious (no it’s been serious for some time, but this is a wake up call). The AGW juggernaut just keeps rolling. If the data are really being fraudulently ‘adjusted’ and there is no control or monitoring of the fixing, then I agree that we need a top lawyer and a class action. Otherwise they will continue to get away with one of the greatest scientific frauds in history. Truth and honesty will – must – ultimately prevail.

Thanks, that does work, but is one method any more valid than the other? I mean, is the average of daily averages “better” than averaging TMAX and TMIN separately for the whole month and then averaging these monthly outcomes? Obviously the two techniques produce significantly different results, but is one more meaningful than the other in terms of representing the mean temp of a station?
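For what it’s worth, with complete data the two methods are algebraically identical (the mean of daily (TMAX+TMIN)/2 equals the average of the two monthly means); they only diverge when days are missing from one series. A minimal sketch with made-up numbers:

```python
# Sketch (hypothetical numbers): with complete data the two methods agree
# exactly; they diverge once a day is missing from one series.

def monthly_mean_v1(tmax, tmin):
    # average of daily (TMAX+TMIN)/2, skipping days where either is missing
    pairs = [(hi + lo) / 2 for hi, lo in zip(tmax, tmin)
             if hi is not None and lo is not None]
    return sum(pairs) / len(pairs)

def monthly_mean_v2(tmax, tmin):
    # average TMAX and TMIN separately (each over its available days),
    # then average the two monthly figures
    highs = [t for t in tmax if t is not None]
    lows = [t for t in tmin if t is not None]
    return (sum(highs) / len(highs) + sum(lows) / len(lows)) / 2

tmax = [20.0, 25.0, 30.0, 28.0]
tmin = [10.0, 11.0, None, 16.0]   # one missing low

print(monthly_mean_v1(tmax, tmin))  # uses only the 3 complete days
print(monthly_mean_v2(tmax, tmin))  # uses all 4 highs, 3 lows
```

So which is “better” depends mostly on how each method copes with missing observations, not on the arithmetic itself.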

In 1888, American Weather by Gen. A. W. Greeley was published. On page 253 of that book, the following account is given:

“The most remarkable case, that of June 17th, 1859, was at that time said to be the most wonderful visitation of this character for thirty years. At San Francisco on that date, the thermometer is said to have registered a rise in the temperature from 77F to 133F, with a burning northeast wind, which fortunately lasted for a few hours only, the thermometer registering 77F at 7 p.m. At Santa Barbara, on the same day, (in the afternoon), a strong easterly wind set in, during which the burning air was filled with dense clouds of dust, which caused intense suffering, and drove everyone to the nearest shelter. The fruit was all destroyed, and although the burning blast lasted but a few hours, yet animals, such as calves, rabbits, and birds, died from the effects. The temperature was said to have reached 133F in Santa Barbara, 102F at San Diego, and 117F at Fort Yuma.”

He goes on to say that such heat waves have occurred less frequently as time has passed. So perhaps the early Orland record has some merit to it.

The listed record highs for June 17 for these 3 cities are 96F in 1957, 93F in 1957, and 115F in 1981 for Yuma AZ (is that Fort Yuma?).

“REPLY: No it is still intact at NCDC, on paper forms, with transcription data also available – Anthony”

I have some questions.

How does the process of transcription work?
Is it in-house or contracted out?
How do they do QA on the transcription process and on the transcribed data?
What is the traceability between the paper and the transcribed data?
Is the original data paper or microfiche?
How many pieces of paper are we talking about?
What is the condition of the paper records?
Are they handwritten or computer printouts?
How do we access either the paper or the transcription records?

Reloading this data from source with complete traceability is the only way to fix this.

I design and install very high volume machine-human hybrid data entry systems and associated state of the art OLTP VLDB systems.

IMHO, it’s technically trivial to re-digitize this data. The only thing “new” would be the QA process, and this would depend on the nature of the data and the source documents.

The systems costs I cannot cover would be the high volume scanners, systems hosting, and the bandwidth to serve the documents to volunteers. I’ll bet we can get the scanners donated for the duration. I can also design and build the VLDB and image storage systems as well as do all the back end data processing and interfaces. I would need help with the web front ends.

There are so many pro-AGW scientists now being interviewed in the news and claiming that the Climategate scandal doesn’t matter because the USA data still shows the warming, that they are obviously now working overtime to make sure that this misinformation is valid and that the adjusted raw data will now support them.

These pathetic, deluded souls are either ignoring, or have no clue that the stations are in poor shape and the data has been manipulated as much, or more so, than the CRU data; and that the freedom of information act in the USA is also being obstructed!

I hope I have missed some, but in all of the news stories I have watched so far, no one has brought up the facts of the USA data. This crucial information is being completely overlooked in the main media, so far! When that information gets out, then watch the public backlash, and interest in cap and trade, etc.! When the people finally realize they have been duped, heaven help the dupers.
Stephen

David (12:58:50) :
And while I am on this, is there somewhere that one can acquire the station data that includes Tmin and Tmax? All I have found so far is the monthly means. I would like to look at data on a daily basis, but I can’t find it. Does it exist?

Apologies if someone else has posted this. These are scans of the B-91 forms the observers keep. Some are a little rough.

You NEVER mess with the “raw” data. Whatever you do, the “raw” data should remain accessible, unaltered. It is a sin to modify “raw” data and present it as being raw. Value-added data are worthless without the “raw” data. It is better to discard value-added data in any dispute and rely on the “raw” data. And then I mean really raw data.

Speaking of getting back to the paper records, perhaps instead of using the data as actually written down, maybe they’ve switched to the far better trick of deriving past temperatures by oxygen isotope analysis of the paper used by the weather stations.

It would be nice if someone would go on television with a copy of modern data on a particular site and an old copy of a newspaper from the same place, showing the high and low for the previous day, arguing that “they’re lying about past temperatures, as this newspaper illustrates, because to make a bogus warming trend they have to either lie about the present or the past, and they figure that nobody from a hundred years ago is going to show up to demand a retraction.”

At http://cdiac.ornl.gov/epubs/ndp/ushcn/access.html I read
“Please note: for users already engaged in analysis using the previous version of these data, you may still access the previous version through the end of 2009 ”
and there is a link to http://cdiac.ornl.gov/ftp/ndp070/ there.
Maybe someone with a decent server and bandwidth should download all that data (looks to be a couple of hundred MBytes) before it goes away on 31 December 2009?

I have looked at the raw data straight from NOAA (GHCN) for a few dozen stations. I am comparing the data from September 2007 to the data from today.

So far there are *zero* changes to the raw GHCN data.

If this holds up for all the stations, then I think the differences that Anthony found either:
a. Occurred between June and September 2007
b. Occurred somewhere in the processing between GHCN and GISTEMP (is GISS now using USHCN v2?)

Everyone please calm down on the conspiracy theories and accusations of fraud.

After doing some semi-manual comparisons, I am now letting my computer do automated comparisons of all of the stations in the USA. I will report more when it’s done.
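For anyone attempting the same kind of check, here is a minimal sketch of a snapshot comparison (the station ID, keys, values, and tolerance are all hypothetical):

```python
# Minimal sketch of comparing two downloads of the same station data,
# keyed by (station_id, year, month), value by value.

def diff_snapshots(old, new, tol=0.05):
    """Return keys whose values changed by more than tol, plus
    keys added or removed between snapshots."""
    changed = {k: (old[k], new[k]) for k in old.keys() & new.keys()
               if abs(old[k] - new[k]) > tol}
    added = new.keys() - old.keys()
    removed = old.keys() - new.keys()
    return changed, added, removed

old = {("ORLAND", 1935, 7): 25.1, ("ORLAND", 1935, 8): 24.6}
new = {("ORLAND", 1935, 7): 24.3, ("ORLAND", 1935, 8): 24.6,
      ("ORLAND", 1936, 1): 7.2}

changed, added, removed = diff_snapshots(old, new)
print(changed)  # {('ORLAND', 1935, 7): (25.1, 24.3)}
print(added)    # {('ORLAND', 1936, 1)}
```

The useful part of such a diff is separating genuinely changed values from records that were simply added or dropped between versions.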

I’m not really swift at this stuff but it seems to me that with all these adjustments the “Warmers” have set themselves a trap that is going to burn them in the future. (Or maybe it already is and that is why temperatures have leveled off for the past 10 years). They can’t just keep adjusting temperatures ever upwards. Pretty soon, it is going to be obvious that the temperatures bear no relation to reality. Conversely, they can’t keep doing the Ministry of Truth routine of adjusting history downwards. There are too many copies of the current data sitting in skeptic archives. Therefore, they are trapped. The future has to level off and that is going to mean their AGW Theory which demands future warming is wrong and it will be proved so by their own lies.

I have a copy downloaded with all fields as of June 28, 2009 in case we want to cross check data revisions. It appears that their last update was July 09, so there will be some “change” between my version and what is currently available.

How much of a change *might* be interesting. If anyone from WUWT shop wants a copy of my downloads of data email me. I downloaded about 80 gigs worth of different climate data (yes gigs) in June.

I can open up an FTP connection for downloading if you decide you want access to it. Email me for contact info.

Your email address is invalid; I tried to email you and it was rejected. In keeping with policy here, your recent comments have been removed and you will not be able to post comments until you provide a valid email address.

FOR OTHER POSTERS:

I agree with John V. I’m not interested in conspiracy theory comments. My interest is in finding out the hows and whys.

SO PLEASE COOL IT!

I’ve already lost most of my day dealing with this thing when I wasn’t prepared to because a commenter posted up an erroneous blink comparator.

Surreptitious changes in the “raw” data at GISS are only half the story. Some stations have disappeared entirely off the map. If anyone has saved the Gonzales TX record from an older download, please post it. It’s no longer available elsewhere (KNMI) either.

On the other hand, from my link in post (14:58:55) : above, besides the rampant ramping up of temperature slopes in Illinois, it is interesting to note that 1934, the previously hottest year in America, has been reduced at most stations regardless of the slope change. Also, 1998, the formerly second hottest year, has also been reduced many places, but not so much as to keep it from replacing 1934, but enough that it won’t be quite so hard to beat in the current century.

In regards to the Orland CA data issue, 6 days ago I posted the following query. Obviously it happened before that date.

—————————————————————
5 Dec 2009
D L Kuzara (14:22:23) :

Can someone explain why the two temperature charts of Orland CA and Marysville CA on the surfacestations.org home page (bottom) do not match, in fact seem to be radically different than the current charts and data at http://data.giss.nasa.gov?

Has someone changed the underlying data at Nasa?
—————————————————————-

Investigations into the Climategate scandal are still stalled even after experts have claimed that the data ‘leaked’ from the CRU is authentic.

I wish the investigation would begin soon, because a lot of raw data that could be evidence in an investigation is suddenly disappearing or being destroyed.

This raw data is the evidence that the CRU was intentionally manipulating the figures. If no raw data is available for an investigation, then it will be that much more difficult to prove the CRU liable.

Just curious, I get that they switched to a new version, but why does the new version cut off all that data? Or is that what everyone is trying to figure out? A little confused as to how the version difference settles the question. Help?

Anthony
Do you really think that it’s possible to get back to the true raw data?

From what I’ve read of this thread, I understand that there are no paper records for either GISS or GHCN or both. The best that could be hoped for is to get back the data to a date when someone with foresight downloaded the prior data. But this assumes that the data at that prior date was valid.

I interpret the comments of those who are more familiar with the data that adjustments and corrections and deletions seem to have happened in a completely ad hoc way. I see no reason to assume that even that prior date data hasn’t been willfully corrupted.

Surely the only recourse is to ditch all the surface temperatures as unreliable unless corroborating paper evidence can be found to support it? This seems to be a huge forensic task – perhaps worthy of farming out to the internet for data collation, but huge nonetheless. I’m a relative newbie to all of this – at least in the blogosphere, but having lots of QA experience, I fail to see how the veracity of the surface temperature data can be assured.

Watts should do a story on this: http://www.youtube.com/watch?v=aUtzMBfDrpI
“Journalist Phelim McAleer (‘Mine Your Own Business’, ‘Not Evil Just Wrong’) asks Prof Stephen Schneider from Stanford University an Inconvenient Question about ‘Climategate’ emails. McAleer is interrupted twice by Prof Schneider’s assistant and UN staff and then told to stop filming by an armed UN security guard.”

I’m counting the days till a Hollywood actor goes on TV to explain that past temperatures decline due to the law of entropy and the expanding universe, that the planet’s past is as subject to the laws of thermodynamics as its present, so of course the old weather station data has to be changed to reflect this inevitable decline. ^_^

“It would be nice if someone go on television with a copy of modern data on a particular site and an old copy of a newspaper from the same place, showing the high and low for the previous day,”

There is an online site where you can do keyword searches through hundreds (maybe thousands by now) of newspapers, mostly from the US & Canada. Once you’ve set up your search template on the search page, you can search through a long time series of a paper and harvest the hits, then click on them one at a time to go straight to the temperature data for the day.

(Note–the site is a bit awkward and cranky and there are tricks to navigating it and using the search feature that take time to learn. Also, I haven’t done a temperature search myself–I’m drawing on my experience with searching for other material.)

The cost is $12/month on a month-by-month basis, or $6/month if an annual subscription is bought. (I.e., $72/year.) I think there’s a free one-week (or so) trial subscription. Here’s the link:

Please address technical questions about these GISTEMP webpages to Dr. Reto Ruedy. So I sent this email to him. We will see if I get a response.

Dr Ruedy

I noticed that sometime prior to 5 Dec 2009, when I last looked, the underlying temperature data for the Marysville CA and Orland CA surface stations were modified from the original (and I assume raw) data that was previously on the GISS website.

I think it is interesting that many of the adjustments are relatively small after 1979. Perhaps the availability of satellite temperatures from then on has “discouraged” further large adjustments over this period. Increasingly, it is looking like ever larger adjustments downwards are being made to earlier temperatures to maintain the appearance of rising temperatures over the long term. Correspondence with the satellite temperatures from 1979 is then used to justify the whole series including its cooled earlier temperatures.

Sorry if someone else has already suggested this. We need to either find and archive the ‘raw’ data, or create a new database just like surfacestations was done. We may have to create it by going to the library and getting High/Low and records from the local newspapers.

A dedicated website to the raw temperature record for every station on record.

Let’s evaluate the adjustments, fudging, UHI corrections, siting and equipment adjustments for each agency. Thus the ‘value added’ can be compared for any station.

It seems to me that with the billions upon billions of dollars we are forced to spend on this ‘research’, they could set up a data base of empirical observations for our temperature record.

Query to you experts: What about the UAH data? How does it compare to the corrupted data sets of CRU and NOAA?

Isn’t John Christy a steward of the UAH dataset? He appears to be genuinely interested in the truth in the data and not an agenda. He does not appear to be one who would monkey with the data to serve an agenda.

Further re: Prof Christy, I saw him on the tube, then watched his lecture from 2007. His lecture was absolutely outstanding… much more so than in the debates on TV. Would that he had used his personality while being interviewed live.

P.S., Laura Ingraham just debated an “environmentalist” Tyson Slocum. She appeared armed with some info against his recitation of the talking points…emails don’t disprove rigorous peer-reviewed science, the code didn’t modify any data used in any specific publication.

A little more tutoring would serve these talking heads to good end.

Someone is going to win a Pulitzer or its equivalent for taking on this issue in depth…either for or against.

[REPLY – UAH tracks fairly closely (but with a bit less of a trend). But the deal is not so much what CRU & crew have done since 1979, it’s what they’ve done to the record before that, before UAH records. Such as possibly flattening that inconvenient 1930s bump. ~ Evan]

Jack Kendrick (15:40:48) :
‘I’m not really swift at this stuff but it seems to me that with all these adjustments the “Warmers” have set themselves a trap that is going to burn them in the future. (Or maybe it already is and that is why temperatures have leveled off for the past 10 years). They can’t just keep adjusting temperatures ever upwards. Pretty soon, it is going to be obvious that the temperatures bear no relation to reality.’

Not to worry. We’ll just pull the old “Hansen/Gore trick”. We’ll just turn the heat up in your room until you are sitting in your underwear wondering why it’s getting soooo hot outside.

I cannot believe that any scientist would knowingly throw away raw data. It is so fundamental to their life’s work.

They may well have taken a copy and “enhanced it”, but the raw data will still be somewhere, and there must be an audit trail for colleagues, etc to find their way around.

I guess you have to be VERY specific in any future FOI requests.

If they claim, in response, that there is no audit trail, then that exposes them to further criticism.

Imagine a company’s Financial Controller saying, “Well, our accounts did not match the bank balance, so we just changed the numbers to make them agree”. Or worse yet, ” … we just took the excess money out of the bank to make the balance agree with our accounts”.

Isn’t everyone a little tired from all this leaping to conclusions? Mr. Watts has asked an interesting question. I see no factual basis for the conspiracy “answers” on this post other than a deep mistrust. That isn’t nearly enough.

Imagine what another “Super Duper Solar Storm” –similar to the 1859 event– would do to ‘modern’ science’s records. EMP sizzling all over the place. Not only would we literally be back in the Dark Ages, there wouldn’t even be a paper trail in Latin or Greek to fall back on. Tower of Babel kinda scary, don’t you think?

In effect yes. ISO9000 – Fail. (Although I work in Pharmaceuticals and compliance with ISO is somewhat superseded by the requirements to comply with MHRA and FDA guidance and regulations)

I was struck by Roy Spencer’s post “What if this was Cancergate?”. I understand your sarcasm in your follow-up remarks, but as far as I can see there is absolutely no traceability in who has made the temperature adjustments and why, a basic tenet of medicinal manufacture. The Pharma industry is not without its own faults but at least there is a legally responsible person who is liable if medicines are released for sale which do not meet prior and licensed specifications for manufacture and testing.

Without the traceability and audit trail any explanation, so long as it has some sort of grounding in science, could be advanced which would lead to further endless discussion regarding the adjustment’s validity. Alternatively no one could own up to some or all of the adjustments and we’d have no idea whether the data was raw or adjusted (value added!! :P – Seems to me that “value added” might in actual fact be better expressed as “devalued”).

Somewhat ironically, the “Harrys” of this world would probably end up carrying the can for adjustments when it seems clear to me that he didn’t enjoy the process of untangling the data and was entering often cynical comments in the code. Reading between the lines, it seems his cynicism came from being pressed in some way from above.

Speaking with 20/20 hindsight, a better approach would have been to state explicitly in the form of a data specification, exactly what adjustments would be made under what circumstances up-front. I see various statements on the data websites about adjustments for station moves etc. but not the methodology to be employed to derive the appropriate adjustments necessary.

If the raw data cannot be verified then I can’t begin to imagine the scale of scientific literature which would be undermined.

An example of this is that I went and checked State College PA and graphed it out. This is one of the best datasets I’ve seen so far, not 1 month missed since 1895. However, what you find is that GISS lowered the 1895 temp almost 1 full degree Celsius and then they gradually work their way downward, -0.8C then -0.6C and so forth, so that by the time you get to the present day the raw and the ADJ are exactly the same. This dataset went from a cooling trend to a warming trend. Then again, this is the home of Michael Mann, so is it any surprise there is Mann-Made Global Warming there?
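A tapered adjustment like the one described is easy to reproduce with made-up numbers: a negative offset that shrinks linearly to zero is itself a warming trend, large enough to flip a mild cooling into warming (all values below are hypothetical, not the actual State College data):

```python
# Sketch of the tapered-adjustment pattern: subtract an offset that
# starts near -1.0 C in the first year and shrinks linearly to 0 in
# the last, and watch the fitted trend change sign.

def slope(ys):
    # ordinary least-squares slope against year index
    n = len(ys)
    xs = range(n)
    xbar, ybar = (n - 1) / 2, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

years = 100
raw = [10.0 - 0.003 * i for i in range(years)]          # slight cooling
taper = [-1.0 * (1 - i / (years - 1)) for i in range(years)]
adjusted = [t + a for t, a in zip(raw, taper)]          # early years lowered

print(slope(raw))       # negative: cooling
print(slope(adjusted))  # positive: warming
```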

Anthony/Mods
Might it help with organising the incoming data if you make a thread on which those who are doing the analyses can post comments and findings, but those of us who are not suitably competent to do the analysis cannot post?

It would be easy to moderate – if the post doesn’t contain an analysis then it gets deleted. There’s some good work being done here but it’s a little swamped by well meaning but verbose comments – my own included.

That Orland before and after pretty much sums up the whole climategate. If this is what they have done to raw data that is easy to understand for the average person, then who knows what they have done to data that you need a degree to understand. I feel more than ripped off and cheated. I feel sick to the core that this deception has gone so far and its results have been so damaging to Western countries.

I have written a letter to Mr. Rutten, the author of the opinion piece you cite:

Mr. Rutten,

You really need to dig a little more deeply into the climate science situation. You ask, “Who benefits?” Indeed. Let’s think about politicians who would love to have a reason to raise huge taxes to spend for votes — you would disagree? How about scientists who know that to be funded by those politicians they need to demonstrate support for AGW? How about people who just “know” that modern civilization is “bad” and “must be” destroying the planet? You don’t think that superstition is what is selling your papers and paying YOUR salary? So, who benefits? Look in the mirror…

Second, have you taken any time to look at the GHCN or GISS data sets? I think you would be surprised at what you would find: consistent adjustments that lower temp data before 1940 and consistent adjustments that raise temp data after 1940.

Finally, you really must question the “establishment,” isn’t that the entire purpose of having a free press? Your article shows clearly you have abandoned that mission because nowhere, and I mean NOWHERE, do you or your paper indicate you have thoroughly investigated AGW science (which is, at this point, the clear “establishment”). You keep repeating “consensus” and “peer reviewed” as though that absolves you from your fundamental responsibilities.

I cannot find anywhere a coherent explanation for how “the science” has proven AGW that does not have huge scientific holes in it. I have an engineering background and a PhD, so I figure I’m smart enough to read this stuff and make sense of it. I’ve corresponded with significant AGW researchers and I can tell you they cannot give me explanations that aren’t fundamentally flawed.

If you cannot respond to my last paragraph in a way that makes sense, then you really ought to take a deep breath, look in the mirror, and ask yourself some hard, existential questions.

“Imagine what another “Super Duper Solar Storm” –similar to the 1859 event– would do to ‘modern’ science’s records. EMP sizzling all over the place. Not only would we literally be back in the Dark Ages, there wouldn’t even be a paper trail in Latin or Greek to fall back on. Tower of Babel kinda scarry don’t you think?”

On the Peter’s dad thread I noted that the current West Point NY GISS chart seemed to have changed from my recollection. And then this thread started so I started searching and found a previous West Point chart in the Surfacestations.org pdf.

I made a blink compare gif file of the two charts. If anyone wants it, let me know. I see a trend upwards that was barely there before if at all, and cooling introduced into the earlier decades of the record. ventana54@gmail.com
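A blink-compare GIF like that can be assembled in a few lines with Pillow (the filenames and frame duration below are placeholders, not the actual West Point files):

```python
# One way to build a blink-comparator GIF from two chart images,
# using Pillow. Filenames here are stand-ins for the saved charts.
from PIL import Image

def make_blink(before_path, after_path, out_path, ms=1000):
    # Load both charts, resize the second to match the first so the
    # axes roughly line up, then alternate the two frames forever.
    before = Image.open(before_path).convert("P")
    after = Image.open(after_path).convert("P").resize(before.size)
    before.save(out_path, save_all=True, append_images=[after],
                duration=ms, loop=0)

# make_blink("west_point_2007.png", "west_point_2009.png", "blink.gif")
```

Note the resize is a crude alignment; for a fair comparison the two charts should share the same axis ranges before blinking.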

Isn’t everyone a little tired from all this leaping to conclusions? Mr. Watts has asked an interesting question. I see no factual basis for the conspiracy “answers” on this post other than a deep mistrust. That isn’t nearly enough.
*******************
The FACT that climate scientists dodge FOI requests and “lose” data is more than enough. Now NASA seems to be going in the same direction. If We The People distrust them enough, we can get our reps to deal with them. We can tell our reps not to make policy based on a fairy tale. In an ideal world, we could have them canned or their funding cut off. It will be interesting to watch.

On my “to do someday” list is a comparison of USHCN and USHCN.v2 (they are different in a few ways…). The v2 file is in 1/10 F, no longer 1/100 F. The older stations now have MANY more ‘missing data’ flags. It looks like more of our past has been selected for “deletion for QA reasons”. There is also a significant difference in the actual values. (I was doing a before/after benchmark to see what the effect was of the “putting back in” of 2007 to date in GIStemp when I found I could not directly compare them to see what happened at the 2007 transition…)
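Before diffing the two file versions, the values have to be put in common units. A tiny sketch of the normalization (the -9999 missing-data flag and the example raw values are assumptions, not verified against the actual USHCN readmes):

```python
# Hypothetical normalization of the two storage formats before
# comparison: the old file stores hundredths of a degree F, v2
# stores tenths; -9999 is assumed here as the missing-data flag.
MISSING = -9999

def old_to_f(raw):
    # e.g. 7216 -> 72.16 F
    return None if raw == MISSING else raw / 100.0

def v2_to_f(raw):
    # e.g. 722 -> 72.2 F
    return None if raw == MISSING else raw / 10.0

print(old_to_f(7216), v2_to_f(722))  # 72.16 72.2
```

One consequence of the precision change: a value that round-trips through tenths can differ from the hundredths original by up to 0.05 F even with no real adjustment, so a diff needs a tolerance at least that large.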

Oh, and of course, the V2 version has the data from May 2007 to date that is not in the USHCN old format file.

This is a giant “Dig Here” that I’d have done more work on by now, but that whole Climategate thing interrupted… not that I’m complaining, mind you ;-)

FWIW, I suspect the reason the USHCN.v2 was tossed in in such a hurry might have something to do with my showing how to do it and starting to run benchmarks. I put an “Easter Egg” in the comments in the posted FORTRAN just for NASA 8-0

“Check the rerun of tonight’s O’Reilly Factor. Laura Ingraham, sitting in for Windbag, slices and dices Tyson Slocum with a very sharp blade.”

Not as much as she might have, had she had more complete information at her disposal. It’s frustrating to watch these people – their hearts are in the right place, but they should have assigned one person to the file to update them on a daily basis.

All concerned citizens should be writing their US Senators and Reps. I know my efforts to do this with my US Senator have been considered in taking constructive action (see below email). Also, it would be good if someone started a class action lawsuit that concerned citizens could help fund.

==================================
Dear Peter,

Thank you for contacting me regarding global climate change. It is good to hear from you.

I understand your concerns with the recently disclosed e-mails from the University of East Anglia’s Climatic Research Unit (CRU). The American and British scientists that comprise this unit are major contributors to the United Nation’s Intergovernmental Panel on Climate Change (IPCC). Their work provided the foundation of climate data used to create the UN IPCC climate change reports.

The released e-mails raise serious questions about the data used in the IPCC climate reports. This includes the most recent Fourth Assessment. These e-mails demonstrate a coordinated effort by trusted climate scientists to suppress dissenting views and manipulate data and methods to skew the IPCC reports to reach a unified view of climate change. In addition, the elimination by CRU of all the raw data on which these scientists based their models prevents other scientists from replicating their work and raises additional questions about the accuracy of the data used in the IPCC reports.

The actions by these scientists and others to suppress data that contradicts their conclusions is unacceptable. I have sent a letter to the Chairman of the Senate Subcommittee on Oversight in the Senate Environment and Public Works Committee, Senator Sheldon Whitehouse (D-RI), requesting an immediate investigation into this matter. I have also sent a letter to EPA Administrator Lisa Jackson calling for a thorough and transparent investigation into the actions of the scientists that were disclosed in their own emails. Given that the EPA has relied on this data to formulate a variety of policies, I have asked Administrator Jackson to withdraw all climate change regulations until an investigation and independent analysis of the data can be completed.

Thanks again for taking the time to share your views with me. I look forward to hearing from you in the future.

@Jim: I suggest that it isn’t NASA, but NOAA. And NOAA has an even more direct connection with corruption than has NASA. Remember the “would you please delete re AR4” e-mail? Guess whom Michael Mann was asked to forward that to? Gene Wahl at NOAA/NCDC! What does that tell you, put together with everything we’ve seen in the last six days?

Anthony, maybe it’s just my suspicious nature, but it might be a good idea if you have a backup of your surfacestations.org temperature plots, just in case someone with a perverse nature decides to hack the site and delete the plots.

Jim (12:58:09) : I guess the Wayback machine wouldn’t work to get the old data since it was stored on a server.

Or is there just a basic time lag for what is available? When the (USAFF) forum was discontinued by the owner I went wayback to collect the 20 months of my “Stay warm, World…” thread (96 pages) and found only the early months available. My assumption is that more recent months will become available over time.
Is my assumption valid?

Note to AGW quacks, don’t bother trying to convince me of heating or cooling until you can regain my trust and clearly demonstrate (prove beyond show of doubt) that the data is sound. Until then, I will do everything I can to ensure not one bloody dime of my money goes towards this BS (bad science).

Tim Heyes, good point on the traceability in pharma. I can’t imagine an excuse for this, especially given that even Halo or Super Mario Brothers lets you look at the change logs and bug fixes. Heck, everybody’s mouse drivers seem to have more openness and accountability, even rollback in case the new version is junk!

I’d also again note that site relocations should have no statistical impact on the data unless a century ago weather folks kept deciding to move stations to a cooler place to write down the readings in greater comfort. Any other reason for a move should see as many stations go up as down, and by similar amounts, which should all cancel out.

In short, the adjustment to a station’s data should be a matter only of local concern, for the purpose of accurately recording and reporting the area’s temperature for local use and reference because it impacts many decisions in a variety of endeavors.

Yet knowing that truly justified adjustments to station data should produce, world-wide, a statistically insignificant effect on climate records, the only reason to undertake such a global and sweeping revision is to produce an altered account of the climate’s history.

Put simply, the idea that ALL those old geezers put ALL their temperature stations in the wrong place, and all in the same flawed way, and nobody noticed it till now, is absurd. Especially since the discovery of any such fundamental flaw in the site selection behavior of weather geeks would itself merit mention everywhere from Nature to USA Today, since AGW alarmists would’ve trumpeted it to the heavens.
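The cancellation argument above can be made concrete with a quick Monte Carlo. This is a purely illustrative sketch with invented numbers (1,000 stations, one relocation each, offsets of random sign up to 0.5 °C), not real station data:

```python
import random

random.seed(0)
n_stations, n_years = 1000, 100

def station_series():
    # flat "true" climate plus one relocation step of random sign and size
    move_year = random.randrange(n_years)
    offset = random.choice([-1, 1]) * random.uniform(0.0, 0.5)  # deg C
    return [offset if year >= move_year else 0.0 for year in range(n_years)]

series = [station_series() for _ in range(n_stations)]
network_mean = [sum(s[y] for s in series) / n_stations for y in range(n_years)]
drift = network_mean[-1] - network_mean[0]
# drift stays close to zero: symmetric relocations largely cancel in the mean
```

With symmetric offsets the network-mean drift comes out near zero; a persistent drift would require the moves to be biased in one direction, which is exactly the point being made.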

I scan the comments to the articles on a daily basis at The Huffington Post et al, to measure what I call the “Mass Brainwash Index”, that publication being one of the best places to get accurate results from the populace. Six months ago my index was at a reading of 9.5, 10 being the most brainwashed and 0 being the least. Today my index has fallen to a reading of 7.5. Something dramatic is happening to the psyche of the American population.

Today I post this concerning the Huffington Post:

“Michael (01:15:38) :

Top story on the Huffington Post’s Green Tab has this as the first comment about the Copenhagen Summit. Is somebody handing out brains over there?

“Mogamboguru:

” An Incredibly Expensive F o l l y ”

“Why Failure in Copenhagen Would Be a Success”

CO2 Emissions Cuts Will Cost More than Climate Change Itself

Based on conventional estimates, this ambitious program would avert much of the damage of global warming, expected to be worth somewhere around €2 trillion a year by 2100. However, Tol concludes that a tax at this level could reduce world GDP by a staggering 12.9% in 2100 — the equivalent of €27 trillion a year.

It is, in fact, an optimistic cost estimate. It assumes that politicians everywhere in the world would, at all times, make the most effective, efficient choices possible to reduce carbon emissions, wasting no money whatsoever. Dump that far-fetched assumption, and the cost could easily be 10 or 100 times higher.

To put this in the starkest of terms: Drastic carbon cuts would hurt much more than climate change itself. Cutting carbon is extremely expensive, especially in the short-term, because the alternatives to fossil fuels are few and costly. Without feasible alternatives to carbon use, we will just hurt growth.

Secondly, we can also see that the approach is politically flawed, because of the simple fact that different countries have very different goals and all nations will find it hard to cut emissions at great cost domestically, to help the rest of the world a little in a hundred years.”

On my “to do someday” list is a comparison of USHCN and USHCN.v2 (they are different in a few ways…). The v2 file is in 1/10 F, no longer 1/100 F. The older stations now have MUCH more ‘missing data’ flags. It looks like more of our past has been selected for “deletion for QA reasons”.
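For anyone wanting to run the same comparison, counting the -9999 “missing” flags in each version is a short script. The whitespace parsing and the example filenames below are assumptions for illustration; the real USHCN files are fixed-width, so the layout should be checked against NCDC’s format notes before trusting the counts:

```python
# Hypothetical sketch: tally -9999 "missing" flags in a station file.
# The whitespace split is a simplification of the real fixed-width layout.
def count_missing(path, flag="-9999"):
    with open(path) as f:
        return sum(line.split().count(flag) for line in f)

# e.g. compare count_missing("orland_v1.avg") vs count_missing("orland_v2.avg")
```

If the v2 count comes out much higher for the early decades, that would document the “more of our past deleted” observation in one number per station.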

Umm, am I missing something basic? This is an important product; I assume it’s used in a lot of ongoing research. How could they possibly release this without a decent explanation of what changed and why? Wasn’t there even a README file to explain some of that?

Wayback machine won’t grab the old data because of the way the website is set up. The data is not embedded in the page, you complete a form that retrieves the data from elsewhere, so there is nothing for wayback to save.

David (12:55:39) :
Ok, new version, same data. Why is the data incompatible with the version instead of the other way around? Is there a legit reason pre-1900 data showing more warmth has to be truncated from the record because of the version switch? I don’t get how that is an explanation, I guess.

Answer: It isn’t the same data. See above…

David (12:58:50) : And while I am on this, is there somewhere that one can acquire the station data that includes Tmin and Tmax? All I have found so far is the monthly means. I would like to look at data on a daily basis, but I can’t find it. Does it exist?

It’s the closest I’ve found so far (other than images of printed pages..)

David (18:12:03) :
Isn’t everyone a little tired from all this leaping to conclusions? Mr. Watts has asked an interesting question. I see no factual basis for the conspiracy “answers” on this post other than a deep mistrust. That isn’t nearly enough.

OK, how about the mail where they talk about not releasing data, talk about how to get on IPCC committees and who from NCDC and CRU ought to write which parts of the IPCC report? Oh, and commiserate about the need to suppress FOIA requests. Is that good enough? It’s only two folks talking, but they reference others. Is 2 enough for a conspiracy?

they also mention having “raw data” on a server in 2003, but it isn’t completely clear to me if this is the “raw raw data” or merely the “cooked raw data” or the “warmed over cooked raw data” ;-) This being the data from their prior data leak episode.

…I mean where is the grass covered enclosure in which the Stevenson Screen is meant to be mounted, according to internationally recognized meteorological standards…it’s all outside the very area in which it is supposed to be growing …. Why? … that’s back to front!

…some may argue that we are comparing ‘apples with apples’, since it has always been exposed in this manner – over a shingle bed, coarse gravel/ stones, whatever, with inherent difference in albedo values, (to mention one thing)… from a grass covered surface, which is the ideal and recognized surface, internationally….

….has anyone checked to see how long the surface has been like this… has it been modified over its lifetime… (like from grass to shingle/loose metal)… to say that the station has always been there, and that’s the end of the matter, really is not good enough…

Regarding 1934, my father will be delighted to hear that the dustbowl wasn’t as bad as he’d thought. Perhaps with further revisions he could avoid living through it entirely! :)

Anyway, is there a link between the revisions and the reformatting of data? I can understand why they’d update the format, given the amount of money pouring in to the field from tax dollars that allow a host of long-delayed programming projects to move forward, but I don’t see why that would involve temperature adjustments back to the Great Depression.

On a side note, or perhaps a point in itself, the story of all these unjustified adjustments will come out, whether wholescale or in dribs and drabs, and each will be another example of how the data was manipulated. We’ve all spent years watching every new photo of a glacier, every newly noticed bird behavior, and every polar bear suicide spun as anecdotal proof of man-made global warming.

Now the shoe is on the other foot. Every crazy adjustment made to the historical temperature data will be analyzed, debated, and probably hung like an albatross around somebody’s neck. The press likes a narrative, and though the climate is hard to understand, if not fundamentally unfathomable without gross oversimplification, fraud is simple to grasp. There’s only so many times AGW scientists can go on TV and recite “peer-reviewed literature” before the public notices their uncanny resemblance to tobacco executives. I will bet that by the time a few AGW proponents go on a news show and get unexpectedly grilled like the estranged ex-boyfriend in a triple homicide, this whole house of cards will collapse and we’ll get back to unbiased science.

“I don’t know? Maybe I can create a bridge between the WUWT blog and the Daily Paul blog, a political blog and a science blog, to bookmark these threads and keep the conversation going between these two blogs. The conversation that would come out would be most fascinating.”

Randy (14:12:09) : How many climate scientists rely on this ‘adjusted’ data believing it to be a solid foundation upon which they then do their thing?

As Anthony said, almost everyone. I’ve spent a year or so digging through GIStemp looking for the “magic sauce” and always accepted the data as valid. It was only when I started to “characterize the data” for doing a GIStemp benchmark series that I found “odd trends” in the “raw” data. That led to what I thought at the time was a boondoggle, but turned out to be a gold mine: GHCN was fudged, big time.

Since GHCN looks to be the input for the 4 major temperature series (GIStemp, CRUt, NCDC, and the Japanese series) and then the modelers and other investigators take their output… this all comes down to one thing:

Who does what to the really raw data at NCDC as it is turned into GHCN? And why?

It would involve anywhere from 1 to a dozen people. (One manager or key researcher setting a standard, up to a dozen managing parts of the data set process – personal max estimate).

THAT is the Grand Prize in this treasure hunt…

My natural scepticism is firming up daily and amongst the general public I am not alone. However, probably the last remaining barrier to total disbelief is that I can’t believe that so many serious scientists are ‘on the take’.

They are not. Most are honest, but duped. Some are willingly duped. Many have simply fallen in love with their theories and are not seeing the warts. And frankly, of the “thousands of scientists”, most of them seem to produce reports of the form “IFF AGW is real, then this bad thing will happen in my field.” I’d put the number of ‘intentional rats’ at about a half dozen max, but in high places. They will likely have a couple of dozen of willing accomplices, but those folks will be a mix of the ‘duped’ and the ‘suspected but the boss said to’ and the ‘Hey, It was a paycheck’ folks.

If, however, the base science they are relying upon is seriously flawed, then I would expect to see an increasing number standing up to be counted.

When the rocks, arrows, and bullets are flying, folks do “duck and cover” not “stand up and be shot”.

Probably a lame question.

Nope. A very fine question. One asked in every fraud investigation. Who is the Alpha, who are the grunts, who are the dupes, who are the marks.

A joke involves the 200W aquarium heater they got from Petco to keep the thermometer from freezing, allowing the station to maintain accurate readings in the coldest of cold spells.

:D

Frankly, the weather stations are so wacky that years ago I figured we’d just as well use OnStar satellites to collect surface temperatures from the onboard computers of speeding luxury cars. The system already uses GPS to track car locations, so the only hangup is convincing Lincoln and Lexus drivers to speed through fields in the middle of Iowa at 3:00 AM, and convincing NOAA to quit deleting datasets with the claim that spurious readings came from impacts with sleeping cows.

Thanks. I’ll be here all night (or until Anthony tells me to get serious!). Try the veal. It’s delicious!

It looks to me like an artifact of the USHCN vs USHCN.v2 change. The USHCN data set is date stamped 2007. Any changes before that are not available, but both the USHCN and USHCN.v2 files are available for download from NOAA / NCDC (at least for now… but I have saved copies ;-) See here:

“Want to Join the Debate” is another invention of mine. If you don’t mind me laying claim to it right here right now.

It is a website that brings all the Blogosphere together.
It connects blogs talking about similar subjects together. Tiers of competence are created, voted on by their peers. May the best blog debate WIN!

Great stuff! But as a scientifically educated simpleton, I’m wondering why they bother adjusting the raw data at all, if all we’re looking for is ‘change over time.’

Adjustments would presumably be in order if we were trying to establish the actual temperatures, but as we’re not, it doesn’t matter if instruments are out of calibration, or anything else, as long as the site hasn’t been moved or the instruments replaced/recalibrated, and even those changes would show up as “step changes,” easily corrected for.

Using only raw data from the earliest records (which can be confirmed, apparently, by newspaper accounts), simply compare them to today’s raw data and a trend (if any) should jump off the page. No doubt different stations will show different trends in both directions, so average them.

The only obvious problem would be UHI effects, so rather than attempt to correct for them, simply exclude stations that are affected. There should still be plenty remaining for the survey to be sufficiently accurate for the argument, or “government work” for that matter. UHI impact change could be corrected for each station only if a record of the times for the changes exists and that seems highly unlikely in most cases. So, blow ’em off.

Similarly, it shouldn’t matter when a station became operational, 1880 or 1950, the trend will show, one way or the other. It might even be better if the set of stations showed different start dates, as that too might show a trend if there is one. Forget everything that happened in between – that’s weather, not climate.

So, somebody with patience, show me why this wouldn’t work, instead of spending gazillions of dollars arguing over adjustments.
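A minimal sketch of the proposal above: take each station’s raw annual means, difference the latest decade against the earliest decade, and average across stations. The station series here are invented for illustration:

```python
# Per-station trend = mean of the last decade minus mean of the first decade
# of raw annual data, then a straight average across stations.
def station_trend(annual_temps):
    early = annual_temps[:10]
    late = annual_temps[-10:]
    return sum(late) / len(late) - sum(early) / len(early)

# made-up 100-year station records (deg C), purely for illustration
stations = {
    "A": [14.0 + 0.01 * y for y in range(100)],   # slight warming
    "B": [12.0 - 0.005 * y for y in range(100)],  # slight cooling
}
trends = [station_trend(t) for t in stations.values()]
network_trend = sum(trends) / len(trends)
# here A gives +0.9, B gives -0.45, so the network trend is +0.225
```

Real data would still need the UHI screening and missing-year handling discussed above, but the arithmetic of the proposal really is this simple.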

OK, this is getting annoying. The “raw” data, from which the adjusted is made, does not exist for the earlier period. OK, here we can see at least that the file name, 9641C_200908_raw.avg contains the word “raw” and I’ve got the right data set. But the first year of data is 1903.

So the first couple of years have more -9999 missing data flags. The annual averages are a bit flakey too, but once it stabilizes, the two annual series again show the old version has been made cooler in the new version.

In my defense, I can only offer that trying to keep straight what adjusted data was adjusted a lot, and which unadjusted data was adjusted almost as much, vs the raw that was also adjusted… well, it’s easy to get lost.

At any rate, at least now you can compare “Old ‘unadjusted’ we hope” with “New ‘adjusted a lot'” and with “New ‘unadjusted’ just changed some”…

And we still have the past cooled off relative to the present, though the 1930s have gone nutty in the early years.

And we’re supposed to get excited about things in the 1/10 or 1/100 C place when the “raw” data bounces around in 1/10 to 1/2 C jumps?

Sometime after I’ve gotten a night’s sleep, I’ll do a more formal posting on this. Doing it “on the fly and live” is not my favorite way to work…
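On the precision point above: the standard statistical defense is that averaging many stations shrinks independent noise roughly as 1/√N, so a network mean can be quoted more finely than any one station’s jitter. The sketch below shows the mechanism with purely synthetic numbers; it is only as good as the assumption that station errors are independent and unbiased, which is exactly what these adjustments put in doubt:

```python
import math
import random

random.seed(1)
n_stations, n_trials = 1000, 200
sigma = 0.3  # assumed single-station annual noise, deg C (made up)

# repeatedly average n_stations independent noisy readings of a 0.0 "truth"
means = [
    sum(random.gauss(0.0, sigma) for _ in range(n_stations)) / n_stations
    for _ in range(n_trials)
]
spread = math.sqrt(sum(m * m for m in means) / n_trials)
# spread lands near sigma / sqrt(n_stations), i.e. about 0.0095 deg C
```

If the per-station errors share a common cause (a network-wide adjustment, say), they do not average away, and the 1/√N defense collapses.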

I think I am beginning to understand how “global warming” has been man “made”. Having once published a peer-reviewed scientific paper I am sickened. This is not science. Scientists DO NOT alter their data and say they just changed its format.

I also understand why they are getting desperate enough to try that.

IF they can ram through the carbon derivatives bubble before the whole world has noticed it has stopped warming, then they can lie and pretend that their scam has saved the world. If not, they are so busted.

After reading this post earlier today, I let it percolate a bit before checking out the info for Orland, CA, at USHCN, NCDC, & GISS.

First, though, I needed to do some adjusting of my own. Got out the pizza data (it’s not delivery) and adjusted it with an extra dose of shredded cheeses, additional pepperoni, and a fiery, heavy sprinkling of crushed red pepper. Consumption of the data was then further tweaked with a grape product (Sangria).

Now I was ready to peruse the data. Wow! They all have different start dates, the earliest 1895, the latest 1931. And USHCN’s data, I found, was in °F while the others were in °C. I also note that the graph I got from USHCN had nothing looking like the “missing” data in the earlier GISS plot for Orland (pre-1900); its slope is mostly flat.

Trying to better understand this mess, I looked up earlier postings on WUWT for Orland, including Steve’s discussion at:

Looking through the mind-numbing jumble of data for just this one supposedly stable station is enough to make one wonder, “Does anyone know what the real temperature is anywhere at any time? And in a few years, how many times will today’s temperature data have been ‘adjusted’, and by how many agencies?”

While taking a stroll through WUWT’s memory lane (June 2007), I found a couple posts that seem especially relevant now. With daily hits for WUWT now averaging over 200,000, just look at this one about volume experienced back on June 19, 2007:

Today was a record setting day, not only for this blog, but for any blog sponsored by the Enterprise Record. And to think I used to be excited when I got 3000 hits a month. My traffic today was over 20,000 visitors!

Yes, those were the good old days when many posts elicited comments numbering in the single digits!

And finally, after noting all the shenanigans by all these adjustments to data from governmental agencies worldwide, here’s a light fun piece from the archives that we all ought to take a moment to enjoy:

This manipulation of truth reminds me of the Secret Gospel of Mark, which reads:

For even if they say something true, still the lover of the truth should not agree with them. For not all true things are truth. One must not value what human opinion considers truth more than the true truth, which is recognized through faith.

Or, in modern terms:

For even if the Sceptics say something true, still the Warmists should not agree with them. For not all true things are truth. One must not value what human opinion considers truth more than the true truth, which is recognized through the new religion.

“If anyone has a screen cap of this page prior to the change or can help pinpoint the date of the change, please let me know.”

I think I found the answer.

By using the Wayback Machine and starting at http://www.giss.nasa.gov I was able to navigate to various versions of the form. If the “Last Updated” date on the form is reliable, then it looks like the change you’re talking about was made between 11/24/2000 and 03/20/2001.

Here are links to three versions of the page as archived by the Wayback Machine.

wtf (21:55:54) asks:
Has anyone else come to the conclusion that from this point out, virtually all past climate data are possibly suspect?

Well, as my above post (Richard S Courtney (14:07:21) ) shows, 6 years ago I and the other 18 signatories to my paper tried to publish that all past climate data are certainly suspect. But our paper was blocked from publication, and my complaint at the blocking is part of the hacked (?) Climategate emails.

Kevin Kilty (14:57:24) commented on statements from my above post where I said of the Jones et al., GISS and GHCN data sets of mean global temperature (MGT) time series:
“These teams each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends….”

His comment on those statements said:
“Now why wouldn’t anyone notice allegedly independent estimates of the same quantity that differ by two times their respective 95% confidence intervals? 95% means something specific and to differ by two times such an interval is highly improbable. If Richard is right about this, and I have interpreted what he says correctly, why didn’t more alarm bells go off? This is exactly the type of data consistency issue that eventually deflated the “Palmdale Bulge”.”

At least 19 of us did notice and we ‘heard alarm bells’ but our paper was blocked from publication.
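Kilty’s improbability point can be put in numbers. Assuming each team’s estimate is unbiased and normally distributed, with a 95% interval of half-width h (= 1.96σ), the difference of two independent estimates is N(0, σ√2). Taking “double those limits” to mean a gap of 2h, the chance of such a gap is well under 1%:

```python
import math

sigma = 1.0                          # scale drops out of the final probability
h = 1.96 * sigma                     # 95% half-width
z = 2 * h / (sigma * math.sqrt(2))   # gap of 2h, in units of the diff's std dev
p = math.erfc(z / math.sqrt(2))      # two-sided tail P(|diff| > 2h)
# z is about 2.77 standard deviations; p is roughly 0.006, a ~1-in-180 event
```

So if the quoted 95% intervals were honest, gaps of “double those limits” should be rare single events, not something seen in several years of the record.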

1. I can prove that we submitted the paper for publication.
2. I can prove that Nature rejected it for a silly reason; viz.
“We publish original data and do not publish comparisons of data sets”
3. I can prove that whenever we submitted the paper to a journal, one or more of the Jones et al., GISS and GHCN data sets changed, so either
4. The paper was rejected because
(a) it assessed incorrect data
or
(b) we had to withdraw the paper to correct the data it assessed.

But I cannot prove who or what caused this.

pat (17:06:38) makes a comment that goes to the heart of the problem when he says:
“This wholesale substitution of opinion for real data is simply scary.”

Yes, as I said in my above posting:
“It should also be noted that there is no possible calibration for the estimates of MGT. The data sets keep changing for unknown (and unpublished) reasons although there is no obvious reason to change a datum for MGT that is for decades in the past. It seems that the compilers of the data sets adjust their data in attempts to agree with each other.”

In the absence of any possibility of calibration, what can the data be compared to except “opinion”?

I could have added that the recent reduced trends in the Jones et al., GISS and GHCN data sets imply that they are now adjusted in an attempt to also agree with the satellite (RSS and UAH) data sets.

E.M.Smith (21:10:33) seems to have understood the importance of my point, which said:
“although there is no obvious reason to change a datum for MGT that is for decades in the past”
because he writes:

“Here is a sample of the “old” Orland data from USHCN:

Not only do we lose 1883 and 1884 in their entirety, but it looks to me like the “new” version has cooled the past.

1934, for example, is 1/2 F colder “now” than it was before…

So, I ask again: Anyone know how to do a FOIA request for the changes made, code, reasons, emails,…”

Adjustments to individual station records are only one of the ways the data have been changed over the years.

So, to put it kindly, it has been known for at least 6 years that the data sets of mean global temperature (MGT) time series are uncalibrated guesswork that have been repeatedly altered for a variety of unknown and unpublished reasons but publication of this knowledge has been prevented until now.

So, let me get this straight. The high priests have performed an ‘immaculate conception’ in ‘turning water into wine’, which may indeed be a miracle in turning a cooling trend into a warming one, but it is not a justification for global climate change action.

Whosoever took that decision to ditch the raw data should go to jail and ALL climate change work based on those datasets should be put on ‘indefinite hold’ until this scandal of epic proportions is fully exposed, its perpetrators taken to the proverbial guillotine and ALL politicians who trusted them or conspired with them strung up and pilloried. And I’m REALLY, REALLY looking forward to the Royal Society trying to defend primary data destruction……..

Let the exposing begin.

But remember: not every country is as generous as the US in its exposure clauses. Tiger Woods just banned the UK from publishing pictures of him nude. I don’t think I want to see those pictures, but I sure would like to see the complete exposure of climate data destruction…….

Squeaky bum time for Big Al and the Hoaxers… We in Ireland have just been dealt the roughest budget in living memory. And whilst we cut unemployment benefits to 20-21 year olds by 50%, it is all the sweeter that we can afford €150,000,000 to ‘combat climate change in Africa’, as announced by our esteemed Minister of the Environment, John Gormley of (yes, you guessed it)… the GREEN PARTY. God, save me from your followers.

Whilst on the subject of blink comparators, perhaps we in the UK should have our very own reflecting the disparity between the CET as calculated by the Met office in http://hadobs.metoffice.com/hadcet/cet_info_mean.html and that calculated by Philip Eden in http://www.climate-uk.com/index.html.
Intrigued by the Met’s anomaly increasing over the past few days, whilst temperatures were falling across the UK, I took myself to Philip Eden’s site where I found that his anomaly to 10th December was fully 0.6C less than that stated by the Met.
Investigation reveals that the Met are not comparing like with like and that their current recording sites differ substantially from those that were employed to make up the bulk of the early records.
Philip Eden however constructs his charts from sites that are as close as is possible to those originally used. In the country of the blind the one eyed man is king!

I got the raw and adjusted graphs for Marysville. I made a mistake when I didn’t save the originals. It took quite a bit of adjusting to make the scales match. The raw data graph says “Marysville” at the top, and of course the homogenized version has no label. I feel bad that I didn’t keep the originals…who would have thought they’d change their website and “lose” the raw data.

You assert to me:
“how ‘the team’ could have conspired to do this is gobsmacking.”

I have made no accusations of conspiracy. Please note that I have only reported demonstrable, documented facts. Indeed, I see no need to invoke ideas concerning conspiracy.

‘Group think’ and self interests are sufficient to explain what has happened, and, therefore, a conspiracy seems unlikely.

Everyone who has conducted work to compile MGT data series and/or has conducted work that depends on or utilises MGT data sets has a personal interest in preventing publication of work that shows the MGT data series are complete rubbish. They have invested time, money and effort into such work, which has provided each of them with career status and enhancement. So, all of them could be expected to dismiss, to denigrate and to oppose publication of anything that shows the MGT data sets are complete rubbish.

Indeed, it is hard to imagine that any one of them could bring himself or herself to consider the possibility that the MGT data sets are complete rubbish. So, action to prevent publication of a paper that indicates the MGT data sets are complete rubbish could be seen (by themselves) as a reasonable defence of their work.

But the fact is that the MGT data sets are complete rubbish, and actions to prevent publication of this fact are a matter of record.

Questions:
When making comparisons between the archived surfacestations.org plots and current GISS plots, which version of adjusted GISS data should be used?

Also what does the time of observation adjustment do? As far as I can tell both the old Liquid in Glass thermometers and the MMTS systems record the minimum and maximum daily temps, so how does the time of observation affect the result?
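On the time-of-observation question: the reading time matters because a max/min thermometer reset in the late afternoon can carry one hot afternoon’s peak into the following day’s 24-hour window, double-counting warm spells (an early-morning reset does the same with cold nights). A synthetic two-day sketch, with a made-up diurnal cycle:

```python
import math

# synthetic hourly temps: a hot day (base 25 C) then a cool day (base 18 C),
# each with a sinusoidal cycle peaking at 3 PM (invented numbers)
temps = [
    (25.0 if h < 24 else 18.0)
    + 8.0 * math.sin(2 * math.pi * ((h % 24) - 9) / 24)
    for h in range(48)
]

# reset at midnight: each calendar day keeps its own afternoon peak
midnight_maxes = [max(temps[0:24]), max(temps[24:48])]  # 33.0 and 26.0

# reset at 5 PM: the second 24-hour window starts in day 1's warm evening,
# so the cool day inherits a max near 31.9 C instead of 26.0 C
afternoon_maxes = [max(temps[0:17]), max(temps[17:41])]
```

So even with the same instrument reading the same max and min, shifting the observation time (the US co-op network broadly moved from afternoon to morning readings over the century) shifts the recorded averages, which is what the TOB adjustment claims to correct.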

Yes, the Ministry of Truth could adjust the older data downwards. But if they want to keep the data on an upwards trajectory, eventually, in the not too distant future, the “older data” is going to include 1998 and 2007, which are kind of like Holy Grails for them. They’ll have to do a lot more than just adjust the datasets. They’ll need to do a “1984”-like adjustment to the peer-reviewed literature and all of the articles written in the MSM. I think they have caught themselves in a trap created by their own deceit, and that is why they are so desperate in Copenhagen to get something done before the deceit becomes obvious.

This is probably pointing out the obvious. But since most of us apparent heathens, heretics, and deniers (and the occasional troll) don’t exactly match the general description of rocket scientists, wouldn’t it be a bit easier for people to comprehend if all the logic of following the data (instead of the money) was put into a nice tidy flow chart, which would no doubt look like a family tree?

As everyone ought to know, all them “thousands of [IPCC] scientists” aren’t crunching temperature data, a whole lot of ’em, probably most, are in fact just crunching the results of others, i.e. the results of others are the base for the context of their work assignment, so to speak.

From what I have read, in articles and comments alike, it often is just like the author offered in a reply: the data is taken at face value, which might be sound in a lot of cases, since it’s not up to the economy guy to question the integrity of the temperature data if his/her job is about calculating the cost.

However, a flow chart would probably make it easier for average people to visualize the problem if the data gets corrupted at one stage or another, since it can be easily pinpointed with one’s own two eyes. And besides, since the climatology department at NASA also seems to be lacking in rocket scientists these days, and what with the bureaucrats’ love for anything flow-charty in color, even they might enjoy the wonders of a simple, colorful visualization tool to know whom to blame’n’sack.

It is imperative that all the old records be preserved and archived in an accessible file. There is a circling of the wagons going on, and a burning of files that could be used in the investigations which all the usual suspects fear are about to occur.

Richard S Courtney (14:07:21) :
“It demonstrates that 6 years ago The Team knew the estimates of average global temperature (mean global temperature, MGT) were worthless and they acted to prevent publication of proof of this.”

Breathtaking, how gatekeeper functions of those people damaged climate science. IMHO all ‘corrections’ and ‘adjustments’ of raw data should be carefully reexamined wherever possible.

Some interesting insight into motivations for adjustments
FOIA\mail\1254147614.txt

” At 06:25 28/09/2009, Tom Wigley wrote:

Phil,
Here are some speculations on correcting SSTs to partly
explain the 1940s warming blip.
If you look at the attached plot you will see that the
land also shows the 1940s blip (as I’m sure you know).
So, if we could reduce the ocean blip by, say, 0.15 degC,
then this would be significant for the global mean — but
we’d still have to explain the land blip.
I’ve chosen 0.15 here deliberately. This still leaves an
ocean blip, and i think one needs to have some form of
ocean blip to explain the land blip (via either some common
forcing, or ocean forcing land, or vice versa, or all of
these). When you look at other blips, the land blips are
1.5 to 2 times (roughly) the ocean blips — higher sensitivity
plus thermal inertia effects. My 0.15 adjustment leaves things
consistent with this, so you can see where I am coming from.
Removing ENSO does not affect this.
It would be good to remove at least part of the 1940s blip,
but we are still left with “why the blip”.
Let me go further. If you look at NH vs SH and the aerosol
effect (qualitatively or with MAGICC) then with a reduced
ocean blip we get continuous warming in the SH, and a cooling
in the NH — just as one would expect with mainly NH aerosols.
The other interesting thing is (as Foukal et al. note — from
MAGICC) that the 1910-40 warming cannot be solar. The Sun can
get at most 10% of this with Wang et al solar, less with Foukal
solar. So this may well be NADW, as Sarah and I noted in 1987
(and also Schlesinger later). A reduced SST blip in the 1940s
makes the 1910-40 warming larger than the SH (which it
currently is not) — but not really enough.
So … why was the SH so cold around 1910? Another SST problem?
(SH/NH data also attached.)
This stuff is in a report I am writing for EPRI, so I’d
appreciate any comments you (and Ben) might have.
Tom.”

I wouldn’t worry too much about any of the data being permanently gone or destroyed. Whoever is posting these new datasets is as naive as the authors of the, now famous, “emails”.

All of these emails and all of this data passes through and is stored on servers which are far removed from the control of the authors. All of these bureaucracies have large data centers with untold numbers of “virtualized” servers (mail, ftp, file and web) which are dutifully backed up daily. It is even likely many of the participants’ laptops are backed up regularly.

These agencies are so anal with their data they even have armies of serfs scanning paper copies continuously.

It’s in three phases: beginning with the RAW data plot (archived at surfacestations.org), to USHCN corrected (“value added”), and onward to the final plot with Homogeneity Adjustments (“quality controlled, homogeneous”) applied.

The transformation is ASTOUNDING. If it wasn’t for the graphs stored at surfacestations.org, I wouldn’t have found what I found, but would have instead assumed that I was essentially seeing raw temperature station data from NASA (which they call “RAW DATA+” – kind of a “value added” thing).

Don’t forget that a lot of the raw data plots are still stored at surfacestations.org as part of each site survey – just not in number form. Those are historical records now, and still extremely valuable in establishing a pattern. Too bad ALL of NASA’s data wasn’t archived. (anyone?)

Now, IF I was a warmist, in charge of data, and was one who felt that adjustments were needed to the station data to fit a given hypothesis, my tasks would be different depending on the type of stations considered.

1) adjust pre-1940 data to appear as trendless as possible — kind of a pre-industrial age climate change denial, and

2) adjust post-1940 data to show a gradual incline that accelerates from about 1960 onward (i.e. make it “agree well” with a hockey stick)

3) For improperly placed stations (e.g., mostly urban, like those with nearby artificial heat sources): The Urban Heat Island effect signature is unmistakable, with an obvious trend from many of these stations starting out low in 1900’s and continuing almost linearly upward. The tasks would be the same as above, but the FIRST order of business would be to mask (“hide”/”contain”) the “putative” UHI (Urban Heat Island effect).

Yes, the Ministry of Truth could adjust the older data downwards. But if they want to keep the data on an upwards trajectory, eventually, in the not too distant future, the “older data” is going to include 1998 and 2007, which are kind of like Holy Grails for them. They’ll have to do a lot more than just adjust the datasets. They’ll need to do a “1984”-like adjustment to the peer reviewed literature and all of the articles written in the MSM. I think they have caught themselves in a trap created by their own deceit, and that is why they are so desperate in Copenhagen to get something done before the deceit becomes obvious.

Yes, like an embezzlement scam: you can only cook the books so much. Eventually the adjustments you made to make the books balance 5 years ago will make it impossible to balance the books next year without it being patently obvious that something is amiss.

It appears to me we are entering that “chickens coming home to roost” phase, where it will be physically and logically impossible to mask reality with “tricked up data”, as your adjustments will be confounded by historical records you have no control over, like news accounts of storms 20-30 years ago which state the month was the coldest on record since 1858, while your current graph does not agree with that news account. You can only juggle so many balls for so long.

What we need to do, in addition to all the forensic analysis going on of the code and the data, is to start to data mine large libraries for those historic records that cite official weather service records of the day, and then ask why those accounts do not agree with their “raw data” for the same area.

1) VALUE ADDING

Take raw GHCN data and mix it well with USHCN corrections. This is now “value added” data, so post it for public viewing as:

“raw GHCN data+USHCN corrections”.

2) QUALITY CONTROL

Discard dangerous original raw GHCN data, making it “quality controlled”, by removing it from public view.

3) HOMOGENIZING

The value added (USHCN corrected) data will not be considered complete or useful until after it is homogenized. Do this by folding in homogeneity adjustments as appropriate or necessary. Post this for public viewing as:

“after homogeneity adjustment”

For an example of the above, using Santa Rosa (38.5 N 122.7 W) click on the following link:

With Santa Rosa, they’ve taken what was essentially a raw set of real temperature data that showed long term steady upward trend since 1900, and turned it into something that is essentially trendless, until a final sharp upper tick at the end. AKA – one more variation of a hockey stick.

JerryB (08:31:52) : Perhaps the title of this post should be changed, since GISS has been using NCDC adjusted data, not raw data, for USHCN stations for at least 8 years.

GISS does not use “NCDC adjusted”; it uses the NCDC-produced GHCN “unadjusted” AND USHCN (until about a month ago, when it changed to USHCN.v2), where “unadjusted” is in fact adjusted in some ways but is labeled “Unadjusted” on their web sites… The bulk of the planet’s data is the GHCN “unadjusted” dataset, while USHCN covers only the USA.

In STEP0, GIStemp glues together the GHCN data and the USHCN data into a bastard mix of the two (details only to folks with strong stomachs…). Do you call that raw? Adjusted? Cooked? Half cooked? Half baked? Unadjusted? Maladjusted?

Sometimes it passes GHCN unmodified through, sometimes it passes USHCN straight through, and sometimes it “sort of averages” the two to get a smooth blend of two different offset curves. It all depends on what chunks of which it has…

Jerry

REPLY: GISS in their previous presentation advertised it as “raw” so that is where the reference comes from. -A

That is correct. From the GISS web site point of view the “GHCN unadjusted” data set was called “raw”. That’s what they called the “after STEP0” graphs on their web site.

Now; it has “USHCN corrections” but before it said something more like “Raw GHCN + USHCN combined”. The “corrections” word is a new twist…

hcn_calc_mean_data.Z Time of Observation and Filnet Adjusted Mean Monthly
Temperature (Calculated from hcn_doe_max_data.Z and hcn_doe_min_data.Z)

So I’m left to wonder if “fully adjusted” means the same as “TOBS and Filnet”? What about those other types of adjustments? SHAP? was it?

Also, FWIW, in the other description file, status.txt, we have:

07 August 2009
Raw (unadjusted) data series and series adjusted only for the Time of
Observation bias (TOB) have been added. See the readme.txt file for
file naming conventions and data formats.

So the “raw” USHCN.v2 file is fairly new. Though this still leaves open the question of why “raw” and “fully adjusted” are both different from USHCN The Original and from GHCN.

This is just maddening.

Yes, I’m processing the “right” copies through my copy of GIStemp code, but figuring out what the various data sets and various data set manipulations mean is the nutty bit.

So now we’re taking the GHCN “unadjusted” and the USHCN v2 “fully adjusted” and blending them, then in STEP2 applying UHI adjustments all over again?

Does anyone have any guidance as to IF NCDC definition of “fully adjusted” includes a UHI adjustment?

I’m beginning to think that it is impossible to get anything resembling “raw” out of NOAA / NCDC no matter what it is called (and no matter how often they, or GISS, call it “raw”).

I’m going to take a break before I fulminate…

Someone needs to take the samples of the “New USHCN V2 Raw” posted above and check them against the online pdfs of the paper forms and see if the USHCN.v2 “raw” is remotely like what was put on paper. (Yes, I can always hope…)

“Satellite-based measurements of decadal-scale temperature change in the lower troposphere have indicated cooling relative to the surface in the tropics. Such measurements need a diurnal correction to prevent drifts in the satellites’ measurement time from causing spurious trends. We have derived a diurnal correction that, in the tropics, is of the opposite sign from that previously applied. When we use this correction in the calculation of lower tropospheric temperature from satellite microwave measurements, we find tropical warming consistent with that found in surface temperature and in our satellite-derived version of middle/upper tropospheric temperature.”

Why weren’t the ground data corrected to match the satellite data? If the ground data is corrected, then the satellite data is corrected to which ground data…corrected or uncorrected?

Most of the blink comparators I’ve seen have what look to be minor adjustments, ones that just happen to nudge the plots slightly toward a more “hockey-stick” shape. But adjustments upwards of 3ºC that span decades?

Those are ENORMOUS adjustments/corrections.

The “trend” in the corrections on most of what I’m looking at now: The farther you go into the past, the greater the adjustment or correction applied, but always essentially toward the same end. Flatten the overall trend of the “shaft” part (pre-1980-90), and make that “shaft” part lower than the recent past (1980-present).

Is it possible that something along the lines of a Mann/Briffa reconstruction has become so accepted by the collective mind that it’s actually being used to “calibrate”, or otherwise “quality control”, real temperature data from the past? It would be simple enough to do with the entire dataset – just feed in an algorithm that checks all the raw data against some governing assumption, calls it error on the data’s part, discards anything that strays too far from some predetermined envelope, and adjusts and corrects as necessary. Automatically, no manual intervention needed. Could that be part of the “quality control” to which the raw data sets have been subjected – and without explanation, no less?
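To make that speculation concrete, here is a minimal Python sketch of the envelope-style screening described above. Everything in it – the function name, the tolerance, the reference series – is hypothetical and invented purely to illustrate the idea; it is not any actual GISS or NCDC procedure:

```python
def envelope_screen(raw, reference, tolerance=1.5):
    """Split raw yearly values into (kept, flagged) lists of
    (year, value) pairs. A value is flagged as 'error' when it
    departs from the reference series by more than `tolerance`.
    All names and thresholds are invented for illustration."""
    kept, flagged = [], []
    for year, value in sorted(raw.items()):
        ref = reference.get(year)
        if ref is None or abs(value - ref) <= tolerance:
            kept.append((year, value))
        else:
            flagged.append((year, value))  # strays outside the envelope
    return kept, flagged

raw = {1900: -0.4, 1901: 2.8, 1902: -0.1}        # hypothetical anomalies, deg C
reference = {1900: -0.3, 1901: 0.0, 1902: -0.2}  # the "governing assumption"
kept, flagged = envelope_screen(raw, reference)
# 1901 departs from the reference by 2.8 deg, so it is flagged
```

Run against a whole dataset, a filter like this would silently discard or "correct" exactly the kind of inconvenient excursions discussed above, with no manual intervention.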

What else could possibly justify wholesale swings in past data like the ones seen in Santa Rosa – where the “shaft” is literally SLAMMED upward, smoothed and flattened to the ceiling, then “homogenized” back downward, but only those parts that are at least a few decades old?

Also, do we really need to attack the entire data set (the way they have)? A small, but extremely detailed sampling of a few of the most egregiously adjusted/corrected stations, thoroughly investigated (beginning with the original paperwork, much of which is still available in .pdf form from the servers), should be sufficient, once debunked, to call the value of the entire data set into question.

Good luck with that. They lose information faster than CRU. They are useless when it comes to important stuff. And if anyone wants them to remove material, all they need do is request it, and poof, it’s gone. I know, because I tried to find stuff that was there, and then the next year it was not, and that was about 7 years ago, so if anything they are worse now. Don’t get me wrong, I’m not saying it’s not there, only that if it is, you had better not dally in looking for it.

I was finally able to finish comparing raw GHCN from Sept 2007 to GHCN from Dec 2009. Here are the results:

Out of 2.9 million station monthly temps prior to Sept 2007:
2339 (0.1%) temps were added (previously blank, now have values)
833 (0.03%) temps were removed (previously had values, now blank)
2 were changed

So whatever changes exist in GISS data are probably due to the change in GISS algorithm. The raw data from GHCN still looks to be essentially the same.
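For reference, the kind of version-to-version tally reported above can be sketched in a few lines of Python. This is a generic illustration, not the commenter’s actual script; each snapshot is represented here as a dict mapping (station, year, month) to a value, with None for a blank month:

```python
def compare_versions(old, new):
    """Tally added, removed, and changed monthly values between two
    dataset snapshots. Each snapshot maps (station_id, year, month)
    to a temperature, or None where the month is blank."""
    added = removed = changed = 0
    for key in set(old) | set(new):
        a, b = old.get(key), new.get(key)
        if a is None and b is not None:
            added += 1      # previously blank, now has a value
        elif a is not None and b is None:
            removed += 1    # previously had a value, now blank
        elif a is not None and b is not None and a != b:
            changed += 1
    return added, removed, changed

old = {("046506", 1900, 1): 45.0, ("046506", 1900, 2): None, ("046506", 1900, 3): 50.0}
new = {("046506", 1900, 1): 45.0, ("046506", 1900, 2): 44.0, ("046506", 1900, 3): None}
# one value added (Feb), one removed (Mar), none changed
```

Applied to the full 2.9 million station-month records, the same three counters give the 2339 / 833 / 2 breakdown quoted above.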

Wow. Gives new meaning to the phrase “October surprise”. I wonder what the October’s counter-month, April, looks like. Usually limited heating and air conditioning in those months as well.

And the fact that he was able to glean that from existing GHCN data tells me that…

…NASA has some more adjusting and homogenizing to do. Obviously, if it’s not agreeing with the GCMs, and only agrees with the tree rings (which we already know are completely valid and reliable save for the past 60 years), then there’s probably something wrong with the data, and a compelling reason to confine the selection to other months – even if it means defining what the other months are!

“The actual yearly output numbers [for the “millennium simulation” – AR4] are in the email. So, I took the column identified as global average and plotted it. What a surprise. The hypocritical hot air coming out of the climatologists that all their models show unprecedented warming is simply not true.”

we switched on November 13 from USHCN-version 1 data to USHCN-version 2 data. My guess is that you must have compared the data shown on Dec 5 to data shown before Nov 13, the most recent update. The next update will be early next week.

we download from the web data sets prepared by NOAA and SCAR that are available to anybody and start from there. Any changes you notice in the station data if you select “raw GHCN + USHCN corrections” occurred before we get the data and you’d have to contact NOAA for further information.

As you know, you can download all station data as they were before and after our homogeneity adjustment.

Over the US we are using satellite night light data to determine whether to adjust a record or not rather than population data, and Orland’s data were bright enough to trigger our adjustment.

9641C_200907_raw.avg.gz is the file you might want (we use F52). Orland’s data are the lines starting with “046506”. Notice that this file is compressed (use gunzip to uncompress) and the data are in units of one tenth deg_Fahrenheit.
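The note above gives just enough to sketch an extraction in Python. Only what the email states is assumed: lines for Orland begin with “046506”, the file is gzip-compressed, and stored values are in tenths of a degree Fahrenheit. The field layout itself is not specified, so this sketch only filters whole lines; consult the USHCN readme for the real fixed-width format:

```python
import gzip

STATION_ID = "046506"  # Orland, per the note above

def station_lines(path, station_id=STATION_ID):
    """Yield the raw lines for one station from the gzip'd USHCN .avg
    file. Only the line-prefix convention from the note above is
    assumed; see the USHCN readme for the actual field layout."""
    with gzip.open(path, "rt") as f:
        for line in f:
            if line.startswith(station_id):
                yield line.rstrip("\n")

def tenths_f_to_celsius(stored):
    """Convert a stored value in tenths of a degree Fahrenheit to deg C."""
    deg_f = stored / 10.0
    return (deg_f - 32.0) * 5.0 / 9.0
```

So a stored value of 320 decodes to 32.0 ºF, i.e. 0 ºC; forgetting the tenths convention inflates every reading tenfold.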

“Over the US we are using satellite night light data to determine whether to adjust a record or not rather than population data, and Orland’s data were bright enough to trigger our adjustment.”

I could almost see this as a generalized explanation, but Orland was specifically discussed. Orland, with a 2000 census population of 6,281, “bright enough” to trigger an adjustment (primarily) to its pre-industrial era temperatures, including a complete discard of all pre-1900 data?

Actual adjustments to Orland’s *recent* temperature data (which is, ostensibly, what comparisons should be looking for to trigger adjustments) were negligible. The bulk of the adjustments, and they weren’t minor, were all pre-1942. How could satellite data tell us ANYTHING about data from 1900-1942, which is where all the largest adjustments were made, let alone data recorded from 1880-1900, now discarded?

And Winston looked at the sheet handed him:
“Adjustments prior to 1972 shall be -0.2 degrees and after 1998 shall be +0.3 degrees.”

Winston wondered at the adjustment to the data. At this point, no one even knows if the data, prior to his adjustments, was raw data or already adjusted one or more times previously.

It didn’t matter. All Winston was sure of was that one of the lead climatologists needed more slope to match his computer model outputs. He punched out the new Fortran cards and then dropped the old cards into the Memory Hole, where they were burned.

“There!” Winston exclaimed to himself. “Now the temperature data record is correct again. All is double-plus good.”