One-Way Adjustments: The Latest Alteration to the U.S. Climate Record

On Thursday, March 13, 2014, the U.S. National Climatic Data Center switched the data set it uses to report long-term temperature trends to a gridded GHCN-D product – with a resulting dramatic change in the official U.S. climate record. As always seems to happen when somebody modifies the temperature record, the new version of the record shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.

The new dataset, called “nClimDiv,” shows the per decade warming trend from 1895 through 2012 for the contiguous United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?

As will be illustrated below in the side-by-side comparison graphs, the increase in the warming trend in the new data set is largely the consequence of significantly lowering the recorded temperatures in the earlier part of the century.

It should be noted that the 0.088 degree figure above was never reported by the NCDC. The Center’s previous practice was to use one data set for the national figures (nClimDiv or something similar to it) and a different one for the state and regional figures (Drd964x or something similar). To get a national figure using the Drd964x data, one has to derive it from the state data. This is done by taking the per decade warming trend for each of the lower forty-eight states and calculating a weighted average, using each state’s geographical area as the weighting.
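The area-weighted average described above is straightforward to sketch. Here is a minimal Python illustration; the state names, trends, and areas are invented stand-ins, not the actual NCDC or Census figures:

```python
# Hypothetical sketch of the area-weighted national trend described above.
# All names and numbers below are illustrative, not the real NCDC data.

def area_weighted_trend(state_trends, state_areas):
    """Average of per-decade trends, weighted by each state's area."""
    total_area = sum(state_areas.values())
    return sum(state_trends[s] * state_areas[s] for s in state_trends) / total_area

# Toy example with three made-up states:
trends = {"A": 0.10, "B": 0.05, "C": 0.20}     # degrees F per decade
areas  = {"A": 100.0, "B": 300.0, "C": 100.0}  # geographical area (arbitrary units)

print(area_weighted_trend(trends, areas))  # 0.09
```

With real data one would substitute the forty-eight state trends and the states’ actual land areas; the structure of the calculation is the same.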

The chart below shows a state by state comparison for the lower forty-eight states of the per decade warming trend for 1895-2012 under both the old and the new data sets.

In the past, alterations and manipulations of the temperature record have been made frequently and are often poorly documented. See D’Aleo and Watts. In this instance, it should be noted, the NCDC made considerable effort to be forthcoming about the data set change. The change was planned and announced well in advance. An academic paper analyzing the major impacts of the transition was written by NOAA/NCDC scientists and made available on the NCDC website. See Fenimore et al., 2011. A description of the Drd964x dataset, the nClimDiv dataset, and a comparison of the two was put on the website and can be seen here.

The relatively forthcoming approach of the NCDC in this instance notwithstanding, looking at the temperature graphs side by side for the two datasets is highly instructive and raises many questions – the most basic being which of the two data sets is more faithful to reality.

Below are side by side comparisons under the two data sets for California, Maine, Michigan, Oregon and Pennsylvania for the period 1895-2009, with the annual data points being for the twelve month period in the respective year ending in November. The right-side box is the graph under the new nClimDiv dataset; the left-side box is the graph for the same period using the discarded Drd964x dataset. (The reason this particular period is shown is that it is the only one for which I have the data to make the presentation. In December 2009, I happened to copy from the NCDC website the graph of the available temperature record for each of the lower forty-eight states, and the data from 1895 through November 2009 was the most recent that was available at that time.)

I will highlight a few items for each state comparison that I think are noteworthy, but there is much that can be said about each of these. Please comment!

California

Left: Before, Right: After – Click to enlarge graphs

For California, the change in datasets results in a lowering of the entire temperature record, but the lowering is greater in the early part of the century, turning the 0.07 degree increase per decade in the Drd964x data into a 0.18 degree increase per decade under the nClimDiv data.

Notice the earliest part of the graphs, up to about 1907. In the graph on left, the data points are between 59 and 61.5 degrees. In the graph on the right, they are between 56 and 57.5 degrees.

The dips at 1910-1911 and around 1915 in the left graph are between 57 and 58 degrees. In the graph on the right they are between 55 and 56 degrees.

The spike around 1925 is above 61 degrees in the graph on the left, and is just above 59 degrees in the graph on the right.

Maine

· The change in Maine’s temperature record from the dataset switch is dramatic. The Drd964x data shows a slight cooling trend of negative .03 degrees per decade. The nClimDiv data, on the other hand, shows a substantial .23 degrees per decade warming.

· Notice the third data point in the chart (1898, presumably). On the left it is between 43 and 44 degrees. On the right it is just over 40 degrees.

· Notice the three comparatively cold years in the middle of the decade between 1900 and 1910. On the left the first of them is at 39 degrees and the other two slightly below that. On the right, the same years are recorded just above 37 degrees, at 37 degrees and somewhere below 37 degrees, respectively.

· The temperature spike recorded in the left graph between 45 and 46 degrees around 1913 is barely discernible on the graph at the right and appears to be recorded at 41 degrees.

Michigan

Michigan’s temperature record went from the very slight cooling trend under Drd964x data of -.01 degrees per decade to a warming trend of .21 degrees per decade under nClimDiv data.

In Michigan’s case, the differences between the two data sets are starkly concentrated in the period between 1895 and 1930, where for the entire period the temperatures are on average about 2 degrees lower in the new data set, with relatively modest differences in years after 1930.

Oregon

· Notice the first datapoint (1895). The Drd964x dataset records it at slightly under 47.5 degrees. The new dataset records it at slightly over 45 degrees, almost 2.5 degrees cooler.

· The first decade appears, on average, to be around 2.5 degrees colder in the new data set than the old.

· The ten years 1917 to 1926 are on average greater than 2 degrees colder in the new data set than the old.

· As is the case with California, the entire period of the graph is colder in the new data set, but the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.

Pennsylvania

· Pennsylvania showed no warming trend at all in the Drd964x data. Under the nClimDiv data, the state experienced a 0.10 degree per decade warming trend.

· From 1895 through 1940, the nClimDiv data shows on average about 1 degree colder temperatures than the Drd964x data, followed by increasingly smaller differences in later years.

How do they justify the lowering of the older temperatures in the new models? Are we saying that “gridding” the data applies lower temperature readings to greater geographical areas and thus brings about a lower average temperature? Has anyone tried to reproduce the grids to see whether there is any “gerrymandering” taking place? Moving the grid around slightly could definitely change the outcome of this type of analysis, I would think. That would be a type of sensitivity analysis that I think should be done to determine whether there is an impact to where the grid is placed.
I would be interested to hear whether other more knowledgeable readers see this as a change promoting consistency, which could at least be defended, or a cynical form of data manipulation. The most important information would be why the authors or producers of this data believe that it is important to produce a gridded product versus a data set that is closer to the raw data. I would also expect that the authors would have to show some form of analysis on different grid placements. I think that it is clear that when you start to average things, you run the risk of completely muddying the waters so that no useful information can be discerned.
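The grid-placement sensitivity check suggested above is easy to prototype. A toy one-dimensional sketch in Python, with invented station locations and temperatures, showing that shifting the grid origin alone can move the “gridded” average:

```python
# Toy sensitivity check: bin stations into grid cells, average within each
# cell, then average the cell means. All station data below are invented.
from collections import defaultdict

def gridded_mean(stations, cell_size, origin=0.0):
    """1-D gridded average: mean within each cell, then mean of cell means
    (each occupied cell weighted equally)."""
    cells = defaultdict(list)
    for x, temp in stations:
        cells[int((x - origin) // cell_size)].append(temp)
    return sum(sum(v) / len(v) for v in cells.values()) / len(cells)

# Three stations clustered near x = 0.9 and one cold outlier at x = 2.5:
stations = [(0.8, 60.0), (0.9, 61.0), (1.0, 59.0), (2.5, 50.0)]

print(gridded_mean(stations, cell_size=1.0, origin=0.0))  # 56.5
print(gridded_mean(stations, cell_size=1.0, origin=0.5))  # 55.0
```

Simply sliding the grid by half a cell changes which stations share a cell, and here moves the regional average by 1.5 degrees. A real sensitivity analysis would repeat this over many offsets (and in two dimensions) to see how stable the published figures are.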

I’m just trying to figure out why, when you look at those graphs as a whole, you can’t clearly see World War I, the Depression, or World War II.
You can see the Depression in the Michigan chart (but not the others), but if temps are tied so strongly to CO2, you’d think a couple of world wars (with attendant increases in overall production and emission of CO2) would show up, at least a little.

As with all such changes, whatever the data, this puts the lie to the uncertainty they were claiming before, or after, or maybe both.
“the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.”
That’s over a full degree per century. What was the claimed accuracy on the data beforehand and what is it now? Has the uncertainty been reduced by 1 degree per century.?

According to the abstract, these adjustments were intended to fix:
“1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially undersampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
2. For the TCDD, all divisional values before 1931 stem from state averages published by the U.S. Department of Agriculture (USDA) rather than from actual station observations, producing an artificial discontinuity in both the mean and variance for 1895-1930 (Guttman and Quayle, 1996).
3. In the TCDD, many divisions experienced a systematic change in average station location and elevation during the 20th Century, resulting in spurious historical trends in some regions (Keim et al., 2003; Keim et al., 2005; Allard et al., 2009).
4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).”
The land-based temperature data no longer has any meaning. Good stations without siting problems don’t show these same trends, thus invalidating this exercise. The foxes are guarding the henhouse.
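For readers wondering what the “bias when a division is spatially undersampled” in point 1 of the quoted abstract amounts to, here is a toy Python illustration (all numbers invented): a plain station average jumps when a cold high-elevation station stops reporting for a month, while averaging fixed subregions first is more stable against dropout.

```python
# Toy illustration of point 1 above. All temperatures are invented.

valley = [70.0, 71.0, 69.0]   # three valley stations (one subregion)
mountain = [50.0]             # one mountain station (another subregion)

def plain_average(stations):
    """Arithmetic average of whatever stations happen to report."""
    return sum(stations) / len(stations)

def subregion_average(*subregions):
    """Average each fixed subregion first, then average the subregion means."""
    means = [sum(s) / len(s) for s in subregions]
    return sum(means) / len(means)

print(plain_average(valley + mountain))     # 65.0 when all four report
print(plain_average(valley))                # 70.0 when the mountain station drops out
print(subregion_average(valley, mountain))  # 60.0, each subregion counted equally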

Well I was assuming that they actually provided an error estimation for their data. So far I have not been able to even find one!
However, this is interesting on the removal of degree-day fields, since the underlying data no longer exist 😕 Huh?
>>
Qualitative Note on Degree Day Values: Preliminary analysis of degree day
data indicate that some values may be noticeably different from current
values. There are two primary reasons for this:
1) General: New dataset and methodology (see prior notes on changes to how
climate division data are computed)
2) Specific to the degree-day products: Population weights have been updated
to use 2010 Census data.
>>
Run that again? Degree-day products depend on population count? WTF?
If any aspect of your temperature record moves because of a change in “population weights” you have a problem with your data processing!
The closer you look the worse it gets.

I suspect gridding ends up applying more homogenization resulting in upward adjustments to the very best, rural data. Once that data has been “upgraded”, the UHI contamination is complete. This is not science.

Michael E. Newton says:
April 29, 2014 at 8:38 am
How can climatologists predict the future when they can’t even predict the past?
>>
Sure they can. We can safely predict that in the next few years the past will get colder.
This is obvious from the lack of warming or “pause”. Since there is no warming this decade, the effect of GHG forcing is to make the past cooler.
You have no grasp of how climatology works. Where have you been???

If the observed data do not match the predicted data, then just change the observed data sets to make the predictions look better. Most would call this pure corruption of data, while others seem to pass it off as sound climate science. This has little resemblance to any science.

Look guys I’ve been telling you this stuff FOR YEARS.
1. GHCN-D is daily data.
2. There are more daily stations in the US than monthly (GHCN-M)
3. Daily data is not adjusted. Monthly data is adjusted.
4. D’Aleo and Watts looked at GHCN-M, not GHCN-D
5. Based on my experience with daily data ( since 2007) and based on all my work
with GHCN-D, the following is true:
A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.
Skeptical theory was this: if we use more data (D’Aleo and Watts specifically complained about the great thermometer drop-out) and if we use unadjusted data, then you will see that warming trends are diminished.
Nice theory. That theory has been tested, as far back as 2010 when I processed all GHCN-D data. That theory is busted.
When you switch to daily data, which is not adjusted, and when you use all of it, you find that the past tends to cool and the present tends to warm or stay the same. This leads to a slightly warmer trend.
Calling Dr Feynman. He would say if you have a theory (the great thermometer drop-out and using adjusted data skews the record warm), you test that theory (add more stations, use daily data).
If the results disagree with your theory, your theory is busted.
The skeptical theory about adding more stations and moving to unadjusted data is busted.
It’s been busted since 2010, not that folks noticed.
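Mosher’s claim is at least testable in the way he describes: compute the trend from a subset of stations, then from the fuller network, and compare. A minimal sketch of the mechanics (the series below are invented, so this only shows how the comparison would be run, not which way real GHCN-D data actually moves):

```python
# Sketch of the station-count test described above. The two regional mean
# series are invented for illustration only.

def ols_slope(ys):
    """Least-squares slope of ys against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Regional mean built from a few stations vs. the fuller network (invented):
few  = [50.0, 50.2, 50.1, 50.4, 50.3]
many = [49.8, 50.0, 50.1, 50.5, 50.6]   # cooler start, warmer end

print(ols_slope(few))                    # 0.08 per step
print(ols_slope(many))                   # 0.21 per step
print(ols_slope(few) < ols_slope(many))  # True
```

If adding stations cools the early part of a series and warms the later part, the fitted slope necessarily steepens; the empirical question, which the sketch cannot settle, is whether the real added stations actually behave that way.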

Perhaps they are preparing for that 20 year cold spell that may be coming. Temperatures are trending flat now. What if they start to trend down? Big trouble in Big Green.
Adjusting downward becomes a method by which you retain the illusion of global warming, even if it’s not happening.
Call me a “conspiracy ideationist,” if you’d like.

Bernd Palmer says:
April 29, 2014 at 8:30 am
“I challenge anybody to give a valid reason for changing the temperature values in 1895.”
Bernd, they are constrained at the present end by satellite data, so if they want to steepen the hockey stick blade, they are obliged to lower the earlier data. It does reveal their nefarious motives. The last fiddle with recent data was HadCrut going from their 3rd to 4th version. They warmed it marginally but they’ve now pretty well run out of headroom at this end of the record. The thing started with Hansen at GISS in 1998 when he pushed the still standing record high temp for the US in 1937 down several tenths of a degree because he realized, if his dream of a new world record was to come in his lifetime, he had better take advantage of the strong 1998 El Nino.

These postings + S. Goddard’s will hopefully be used in court as prima facie evidence of fraudulent tampering with data, and then used to assess liability and damages deliberately caused by these institutions.

Steve Mosher did comment here, and he offered more than rhetoric or cryptic remarks.
One observation of this would be:
– that even if the results for the US show slightly more warming, what has been the actual impact of these changes?
Not much.
Nothing historically unusual. No increase in extreme weather. No huge or dangerous changes.
Let us not do as the climate obsessed do and reject the facts and data.
This small change does not make wind power any less wasteful. It does not make a CO2 tax any less insane. It does not make the hockey stick less hokey.

Michiganders Celebrate, we are the winners!
A trend of .02 has increased an order of magnitude to .24. I feel warmer already.
But how does this explain this year’s crocus blooms that (at my home) will survive past May 1st? Oh, that’s right. Everything cold is “weather” and everything hot is “climate”

Yes, I see that Mosher commented above, same BEST drivel as usual. He hasn’t read the posting, evidently. Doesn’t want to. I would not blame him; he probably gets paid by BEST etc. to keep the AGW industry going. BTW, no offense meant; I have appreciated much of his delving work into Gleick especially, LOL. He should stick to that.

United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?
=====
Probably neither one…..

Mosher,
“A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.”
No, when YOU comment on temperatures we find that YOU always conform to the Warmer narrative.
So, we can either decide that temperatures always conform to the Warmer narrative, or we can decide that YOU conform to the Warmer narrative.
Which is more likely?
Andrew

Greg says:
April 29, 2014 at 8:41 am
Run that again? Degree-day products depend on population count? WTF?
==================================================================
By God man, you found it!!! AGW is PEOPLE!!!!! Remove the people, all the body heat dissipates into space in a few years and the problem is solved— once the Eloi decide on how many Morlocks are actually required in Utopia.

ftp://ftp.ncdc.noaa.gov/pub/data/cirs/drd/divisional.README
Differences of the biases were small (>
So no clear statement of the uncertainty in the data presented, but an implicit statement that there remain some biases of the order of 2 deg F, and that an error of 0.3 is regarded as being “small”, presumably meaning in relation to other errors like the 2 F ones.
There’s an error in my last : sqrt(120) should have been sqrt(1200)
Since Dan Travers gives us decadal trends, let’s stick with sqrt(120) as an uncertainty scaling for 120 monthly readings.
Now taking the undeclared but inferred uncertainty to be +/-2F, that gives, for the old DRD data:
2/sqrt(120) = +/-0.18 F/dec on a decadal trend.
“Errors in nClimDiv are likely less than 0.5°C for temperature.” Rather unhelpful for data given in Fahrenheit. Did they even mean 0.5°C??? Taking it as stated, that’s 0.9F.
0.5*9/5 / sqrt(120) = +/-0.082
California goes from 0.09 +/-0.18 F/dec to 0.22 +/-0.08 F/dec
a change of 0.11 F/dec for a drop in uncertainty of 0.1 F/dec
i.e. all the errors in the earlier version of the data were in the same direction, and then some.
So we’re back to the same old story: all errors that were ever made in temperature measurement were masking the true horrors of global warming, and only the new “algorithms” and “bias corrections” allow us to see the true scale of what is happening to the planet.
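The back-of-envelope scaling used in this comment can be reproduced directly, assuming, as the commenter does, that the trend uncertainty scales as the per-reading uncertainty divided by sqrt(N) for N = 120 monthly readings per decade:

```python
# Reproducing the commenter's uncertainty arithmetic. The +/-2 F figure for
# the old DRD data is the commenter's inference, not a stated NCDC value.
import math

n = 120                        # monthly readings in a decade of trend
old_per_reading = 2.0          # inferred +/-2 F for the old DRD data
new_per_reading = 0.5 * 9 / 5  # stated 0.5 C for nClimDiv, converted to F

print(round(old_per_reading / math.sqrt(n), 2))   # 0.18 F/decade
print(round(new_per_reading / math.sqrt(n), 3))   # 0.082 F/decade
```

This confirms the two figures quoted above (0.18 and 0.082 F/decade) follow from the stated assumptions; whether the sqrt(N) scaling is appropriate for autocorrelated monthly data is a separate question.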

Steven Mosher says:
April 29, 2014 at 8:54 am
quote:
A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.
endquote
yes, Steven – there is logical reason for both these statements, such as:
1) if, for example, we have had some global earth recovery from the last ice age – this will show up AND will be increased by the addition of more (later) stations especially in any area grid weighted system.
and
2) UHI – a known and significant effect – will also cause the same effect when adding more stations, as again, more stations simply exaggerate the differences introduced by UHI. If there was one New York station in 1850, compared to 10 today, obviously one expects the 10 of today to show the UHI more significantly!
and equally
3) if, there is any actual measurable anthropogenic GLOBAL effect on temperatures, (whether caused by CO2, deforestation or even cow farts!) – this too will be further exaggerated by addition of more stations.
Logically, if, as is obviously the case, there has been an ever increasing number of stations – we all know that there will be an increase in any measured trend due to the number-of-stations ‘bias’ that is introduced into the weighting/gridded scheme. Is this subsequently accounted for and ‘removed’? You would think something like the BEST project would have done this, but AFAIK no major datasets make this adjustment as part of their weighting adjustment? Do you know if they do within the gridding code?
Whilst I agree with your observation, I’d like to know if this is being considered in the actual adjustment procedure – by Hadcrut, Giss, Best or whoever?

@Mosh
I should add that I presume no such ‘correction’ can or has been made, otherwise how can your observations be seen? Again, logically, if the correction/adjustment procedures are statistically and physically ‘sound’ – one would presume significant differences due to ‘number of stations’ should be relatively unnoticeable? Just asking……

Alarmists must loathe the satellite era, as there is much less opportunity to fiddle with the numbers.
But go back a century and anything goes; if the current trend continues we will soon find the LIA ending around 1915.

Mosher says:
A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.
….
No way, you simply can’t add more stations in the past, the past is gone.
Are you saying that the number of stations influences the “observed” data?

To suggest that scientists of the early 20th century were misreading thermometers by three degrees F is simply unsupportable. If there is a reason to believe there was a 3°F error that can be detected and corrected, now the proper correction is to throw out the bad data. At the very least, cease concatenation for the purpose of establishing a trend.

Zeke,
In your first link, you are linking to yourself blogging.
Are you asserting that you are an authority on this “obscure product”?
Are you officially connected with the product, or are you commenting as an objective observer?
Andrew

“I’m just trying to figure out why, when you look at those graphs as a whole, you can’t clearly see World War I, the Depression, or World War II. ”
Assuming you’re serious:
Understand that during the current 17 year “pause” in global warming, we’ve seen emitted into the atmosphere fully 25 percent of all the anthro CO2 produced since the beginning of the industrial revolution. Expecting short term fluctuations of CO2 to show up, especially from a much lower baseline, is not realistic, despite all the simplistic (some might say propagandistic) talk about “control knobs.”

At 9:06 AM, pablo an ex pat says…
“and they expect to be credible, how exactly ?”
I’ll give you three ways:
1) How would you like to lose your job, today?
2) How would you like to be investigated by the IRS?, and
3) ‘We have found some irregularities in your medical records; you’ll notice these impacts January 2017.’
That’s how.

I’d like to ask something, don’t know if it’s OK or not, hope it is. I love WUWT, think it’s extremely valuable, and I can’t begin to think how to properly express how grateful I am to Anthony Watts and how much I respect his work.
With that clear, I understand Steven Mosher’s argument. I haven’t heard the counter argument. Anthony, if you’ve got time, what do you make of this? Agree, disagree, indifferent? I guess what I’m getting at is, I understand Steven’s position. I haven’t heard anyone address why he’s wrong other than what I consider to be implausible arguments about his integrity or motivations.
I don’t know if I should have posted the question here or at Lucia’s. If it was rude to ask here, I apologize in advance and withdraw the question. At any rate, again, thanks so much for all you do and keep up the great work!

OT
Just for fun on weather current. http://www.arapahoebasin.com/
Check out Current Conditions and view the web cams: 11°F, 12″ in the last three days, 4″ of new snow, and the management’s comment: “It’s just like winter, so we are going to stay open.”

I wanted to tell Trenberth that the reason for the “pause” was because they couldn’t eliminate the coldest weather stations any more–we’d catch them.
We caught this right away. That says they don’t think we can communicate this easily to the public at large.
The believers may be wrong, but they’ve had enough of a stranglehold on the press and public opinion to crash Western economies by sabotaging oil and coal and now fracking. This has killed myriads to millions of people by freezing and boiling people to death with high heating and air conditioning prices, and by the food riots we called “the Arab Spring.” It has also harmed wildlife enormously, by vilifying the basis of terrestrial Life (CO2) and by preventing ecologists from researching any harm that is going on by cheaply assigning it all to “global warming” or “climate change” and most of all by harming the economy. The wealth of a nation determines whether it can afford to invest in pollution controls, and how much.
It MATTERS–enormously–that we win the battle for public opinion.
Imo, the ways to do that are :
1. Follow the money. The least of it is the fact that the Koch brothers are a standing joke in our community. More importantly, the alarmists want YOUR money, thru taxes. Also important is that they crash your ability to pay any taxes.
2. WE are the scientists. The alarmists are POLITICIANS–Algore ran for President, remember? We also need to hit hard that IPCC stands for InterGOVERNMENTAL Panel on Climate Change. Anybody who ever took a college logic class knows that politicians lie 100% of the time. We can tell the public that the Null Hypothesis means that you don’t just make something up and consider it proven if it seems to make sense (Trenberth’s redefinition).
3. The Scientific Method is heavily under attack, which threatens all our well-being. See points 1 and 2, and then show them America’s NSF calls for research–very, very heavily weighted to “proving global warming.” The public may not understand science very well, and solving the hysteria will improve that, but showing them what the NSF has done to climate science is simple enough that they will “get it.”
4. Show the politicians–desperate for money–that what really gets more tax income is improving people’s economic freedom. AGW is counterproductive to them. Politicians like to communicate and are quoted in the lamestream controlled presses. They’re also a little smarter than average (with exceptions to be sure) and can understand our points a little ahead of the general public. But we have told them a little too much that the actual science is stable climate not warming. Money is where their heart is. Deal with that, and we’ll see a sea change in public opinion.

The reason the entire record cools is that NCDC is now adding an adjustment for elevation. There is a sampling bias towards low-elevation sites, since that is where most people reside and take weather observations. Cooler high-elevation sites are under-sampled, so they are attempting to adjust for that, hence a lower temperature.
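If this elevation explanation is right, the arithmetic is simple to illustrate (all numbers below are invented): when high terrain covers a large fraction of a region but holds few stations, weighting by the area of each elevation band lowers the regional mean relative to a plain station average.

```python
# Toy illustration of the elevation-sampling argument. All numbers invented.

# (mean_temp_F, n_stations, fraction_of_region_area) per elevation band:
bands = [
    (60.0, 9, 0.5),  # low elevation: many stations, half the region's area
    (45.0, 1, 0.5),  # high elevation: one station, the other half
]

# Plain average: every station counts equally, so the low-elevation
# stations dominate.
plain = sum(t * n for t, n, _ in bands) / sum(n for _, n, _ in bands)

# Area-weighted average: each elevation band counts by its share of area.
weighted = sum(t * frac for t, _, frac in bands)

print(plain)     # 58.5
print(weighted)  # 52.5
```

Such a reweighting would lower the whole record, as observed; it would not by itself explain why the lowering is larger in the early part of the century, unless the station mix by elevation also changed over time.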

Mosh’ says:
A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.
===
How do you explain that ?
This necessarily means that the “more stations” are from a different distribution to the ones you already have included.
So what is it about the stations that were not used previously that causes the past to be cooler and the present warmer? This is quite a surprising result that requires explanation, not just “that’s the way it is; your theory is busted.”

Mark Bofill says
April 29, 2014 at 9:53 am: [ “…” ]
Mark, are you advocating censorship?
I don’t think so. But if people can’t express their views, that’s what it amounts to.
Steve Mosher has his opinions, some of which I agree with. But I disagree with his idea that he, or anyone else, should be the gatekeeper of the debate. Who gets to draw the line? Who do we exclude? WUWT got very popular by allowing all points of view. Alarmist blogs censor skeptics, and as a result their traffic is pathetic.
I agree with you about S. Goddard, too. He is every bit as sincere and ethical as Mosher, or anyone on the skeptic side [skeptics are the only honest kind of scientists, BTW]. To label Goddard’s point of view as “conspiracy crap”, and to tell folks they should not read Lord Monckton, is a reflection on Steven Mosher, not on them.

Mark Albright says:
April 29, 2014 at 10:12 am
The reason the entire record cools is that NCDC is now adding an adjustment for elevation. There is a sampling bias towards low-elevation sites, since that is where most people reside and take weather observations. Cooler high-elevation sites are under-sampled, so they are attempting to adjust for that, hence a lower temperature.
====
Sounds like a plausible explanation, but I did not see that mentioned; where do you find this information?

You know what they say, “If you can’t beat ’em then cheat ’em”.
Which roughly translated says that every time you feel you are losing the AGW argument just make the data more extreme. Science, honesty and objectivity be damned.

Indeed – I’d saved the earlier plot for CA, and had recently run it again on the site, as the rhetoric was heating up out here about our current drought cycle.
I’d ask the author, or Anthony – can you guys improve on the quality of the graphics here (preferably such that the before and after images appear in one single file), so that readers can more easily save the graphics for our future efforts to share these examples with others?
The best way to catch others’ attention here is to be able to shoot a quick email off to someone who’s blaming it all on AGW. I even have a habit of printing out a good graphic – like these examples – and keeping it in my wallet. We’ll be out at dinner with folks, and someone will bring up how we’re causing CA to cook because of ACC. I’ll pull it out of my wallet and say, “perhaps it’s not even true,” while handing them the graphic.
It works wonders.

No! Quite the opposite. I think Steven’s correct in this.
The trouble is, of course I’d think Steven’s correct in this. I’ve heard his argument, and I haven’t invested the time that he or AW has in investigating temperature records. What I’m really asking is, does anyone I respect have any good counterarguments that I haven’t heard. Just trying to be a good skeptic. Having heard why Steven thinks he’s right, why is Steven wrong?

DBStealey,
I’m a little alarmed that you thought I was advocating censorship. Nothing could be further from my mind. I guess I missed something in the comment I linked, or phrased something poorly? It wouldn’t be the first time I’d proven myself to be a careless dumb-ass. Still, if this is the case, I’d be grateful, behind my embarrassed blush, if you could point me to it. 🙂

Just a minor adjustment of the temperature record to better fit the CO2 record. What’s the problem!
The problem is that it doesn’t better fit the CO2 record. The slope is steeper prior to 1950, but it is not until 1950 that CO2 becomes a significant player. So it actually supports the “recovery from the LIA by unknown mechanism” theory.

“Just trying to be a good skeptic”
Mark B,
The first step in being a good skeptic is to not just take someone’s word for something. So, if someone on the internet says:
“A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.”
You should require that they provide all the evidence that is relevant to their assertion, or conclude it’s mere assertion.
In the case of the above, we can conclude it’s just assertion, since no relevant evidence has been provided.
Andrew

They can modify the temperature data all they want. What they can’t change is the physical property of water: that it freezes at 32 degrees F. Thus the day will come when the claims of record heat will conflict with the observation that it is snowing and the lakes are still frozen. Then what?

We’ve gone from “Hide the Decline” to “Ride the Incline.”
Maybe we just need to add more “high elevation” stations in the present to get everything just right…
And, how does one add a high elevation station in the past if it wasn’t there to begin with?

My eyes opened up when, out of the blue, 1934 was adjusted down from the hottest year on record with little or no explanation. Hence, 1998 could take the top spot.
Don’t get me wrong, I have little confidence in any of the temp sets based on surface obs. The US based record sets are poor, but they are considered the best globally. They can’t be cured through adjustment or concatenation theory.

Bad Andrew,
I’m willing to wager within reason that Steven Mosher has been transparent with BEST, and that if I cared to spend the time, I’d find the data on which he bases his assertions. To save time, for the sake of argument I assume Steven isn’t B.S.ing me. I can always check later.
Thanks.

Steven Mosher;
I have to admit to considerable confusion over your explanation. Perhaps there’s something I am missing, or perhaps the matter just exceeds my mental capabilities. But I distinctly recall you saying, on multiple occasions, that station drop-out doesn’t change anything; that fewer stations gave the same answer as more stations. I simply cannot reconcile those statements with your current position, which seems to be that adding stations does affect things. If adding stations affects the result, then removing them should as well.
Could you please explain how you can stand by both positions at the same time?

Greg says:
April 29, 2014 at 8:46 am
Sure they can. We can safely predict that in the next few years the past will get colder.
This is obvious from the lack of warming or “pause”. Since there is no warming this decade, the effect of GHG forcing is to make the past cooler.
You have no grasp of how climatology works. Where have you been???
————————-
But a few years after that, they’ll adjust it again, making their previous prediction of the past incorrect. Every time they adjust the data, they prove that their previous predictions of what already happened were incorrect.

This new method of deriving temperatures appears to completely eliminate CO2 as a climate driver. The graphs seem to indicate that, since temperatures have been on a fairly steady upward trend since 1895, long before large amounts of CO2 were emitted by industry and transportation, there is less evidence that man-made CO2 had much, if anything, to do with the increase in temperatures (at least in the US). If temperatures rose dramatically from 1895 to 1940, as the graphs indicate, with no possible connection to CO2, why does the increase after 1940 need to be attributed to CO2, and why was the increase before 1940 apparently at least as rapid as, or more rapid than, the increase after 1940? (I was unable to download anything from NOAA, so my observation is based on the graphs above and therefore on limited input.)

It’s simply amazing how EVERY “error” they find in the record always seems to require cooling the past and warming the present. If someone flipped a coin ‘heads’ 10 times in a row I think I’d have already asked that someone else do the tossing with a new coin and do it where everyone can see it.
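[For what it’s worth, the coin-toss odds the commenter invokes are easy to check. A quick Python sketch, purely illustrative:]

```python
# Probability that a fair coin lands heads 10 times in a row.
p_all_heads = 0.5 ** 10
print(p_all_heads)  # 0.0009765625, i.e. slightly under 0.1%
```

If ten independent "errors" really were coin flips, ten corrections all in the same direction would be roughly a 1-in-1024 event.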

I thought one of the early Hansen adjustments was to remove the higher elevation reporting stations and increase the number of lower elevation stations — so shouldn’t the downward adjustment for elevation be made on the younger data rather than on the older data?

Guest essayist by Dan Travers said,
“As seems to always happen when somebody modifies the temperature record, the new version of the record shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.”

– – – – – – – – – – –
Dan Travers,
Yes, your quoted statement above does seem to be the case. Changes to datasets from the major institutional bodies who make them do seem to have the effect of showing higher warming rates after the changes compared to before the changes.
POSSIBLE CAUSE CATEGORIES (PCC) – these are the possible reasons all the major institutional bodies who make GAST datasets always increase the rate of historical warming when they make changes to their datasets:
PCC #1 – It is just a coincidence
PCC #2 – Climate science is a young developing science so all changes are improvements from the earlier state of the dataset construction techniques and data measurement analysis. Improved science is just finding more warming rates in reality, so it is expected that all the major institutional bodies will always have increases in the historical warming rate.
PCC #3 – The major bodies are just professionally collaborating with each other closely and they are all openly and professionally coordinating closely the changes. So it is expected that the changes are in the same direction across all dataset changes and all major institutional bodies.
PCC #4 – outside pressure on each of the major bodies is causing creeping warming rate exaggerationism in their datasets
PCC #5 – the scientists involved with the major institutional bodies have a commonly shared consensus that there is a need to communicate better to save the planet. They are increasingly changing the historical warming rate because they think it is a better way to communicate climate science in order to save the planet.
PCC #6 – it’s karma
So, which of those PCCs are the more likely reasons, and which the less likely, that all the changes to datasets from the major institutional bodies who make them seem to have the effect of showing higher historical warming rates after the changes than before?
Take your pick.
For me it is a toss-up between PCC #2 and PCC #5 as the reason the changes are always increasing the historical warming rate. Although PCC #6 should be given some focus.
John

I have had it with this sort of politically corrected data crap. Surely it must be official now that AGW is a pure, unadulterated HOAX and accordingly I formally declare myself a denier. This latest boondoggle with the numbers is risible.
Jo Nova’s current post about sea level rise numbers looking like they need an upward fiddle (cos they are going against the AGW thesis) is getting ahead of the curve. I am sure they will be politically “corrected” shortly.
The closest analogy to this sort of figure fiddling I can think of comes from the porn industry where they have girls help the guys “keep the wood” between scenes. That relegates AGW to science porn for the political raincoat brigade which is about right.

E. Martin says:
April 29, 2014 at 11:44 am
Wouldn’t reporting the names and positions of the “adjustments” perpetrators at NCDC inspire them to justify their changes?
=======
This may have less to do with the actual data, it looks like the criteria used for Anomalies is of interest.

This is a good illustration for the U.S., and the like, far beyond U.S. data alone, is shown at http://hidethedecline.eu
Rewriting of data by the usual suspects is paramount, as without it climate history is consistent, logical, and believable, not having events occurring without a cause (from the “pause,” to the global cooling scare of the 1960s-1970s, to the LIA), basically ending up with my usual illustration: http://tinyurl.com/nbnh7hq .

This is more evidence that as someone on WUWT said a few months ago: “In climate science only the past is uncertain.” Does anyone think this will be the last attempt to modify the temp records? Has the record been perfected now or will there be more adjustments to it?

The semantics are evolving rapidly in the area of anthropogenic sins, but this rings a bell. Isn’t this how the Intentionally Pathetic Central Committee is mandated to appease the catastrophically Angry God of Weather?

A matter of considerable popular interest investigated during the year was the proposed establishment of a number of suburban meteorological stations around large cities with a view to getting at truer records. It appears, however, that, with the shelters now in use and considering the various corrections applied, the readings are substantially the same. The question was submitted to observers at a number of selected stations, and lengthy and valuable reports made thereon. It has long been established in meteorology that the average temperatures observed in large cities are higher than those observed in the country nearby….
-The Report of the Chief of the Weather Bureau for 1892, p. 577
What does “the various corrections applied” mean? Anybody know?

Hi Steve Mosher…
Very simple question: how on earth can you ‘add’ stations to the past?
Can you just explain this one statement please. It makes no logical sense. At all.
There is another argument as well….
Data is just pure data. Climate science treats its adjustments as valid because they are peer-reviewed adjustments.
If a business adjusted its books, it would get done for fraud.
I genuinely don’t see how any adjustments of the climate record are valid at all.
The data is what it is. You can’t justify modifying it because it doesn’t fit your theory.
If the temperature in place A was recorded at 110 degrees 20 years ago, that’s the only valid data we have.
Why do climate scientists think that just because their methodology of adjusting the data has been peer reviewed, it makes the adjustments valid?

M Seward says: “I have had it with this sort of politically corrected data crap. Surely it must be official now that AGW is a pure, unadulterated HOAX and accordingly I formally declare myself a denier. This latest boondoggle with the numbers is risible.”
That recalls the “Harry_read_me” file, where programmer Harry makes some comments about the preposterous state of whatever passes for ‘data’ in mainstream climatology. A few excerpts:

Just how off-beam are these datasets?!!
I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight… I suspected a couple of stations were being counted twice, so using ‘comm’ I looked for identical headers. Unfortunately there weren’t any!! So I have invented two stations, hmm… Who added those two series together? When? Why? Untraceable, except anecdotally… I am beginning to wish I could just blindly merge based on WMO code. the trouble is that then I’m continuing the approach that created these broken databases.
Here, the expected 1990-2003 period is MISSING – so the correlations aren’t so hot! Yet the WMO codes and station names /locations are identical (or close). What the hell is supposed to happen here? Oh yeah – there is no ‘supposed’, I can make it up. So I have 🙂
You can’t imagine what this has cost me – to actually allow the operator to assign false WMO codes!! But what else is there in such situations? Especially when dealing with a ‘Master’ database of dubious provenance (which, er, they all are and always will be).
False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database… there is no central repository for WMO codes – especially made-up ones – we’ll have to chance duplicating one that’s present in one of the other databases…
I added the nuclear option – to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don’t think people care enough to fix ‘em…
Oh, GOD. What is going on? Are we data sparse and just looking at the climatology? How can a synthetic dataset derived from tmp and dtr produce the same statistics as an ‘real’ dataset derived from observations?
I disagree with publishing datasets that are simple arithmetic derivations of other datasets published at the same time, when the real data could be published instead.. but no.
The suggested way forward is to not use any observations after 1989, but to allow synthetics to take over.
Oh, ****. It’s the bloody WMO codes again. **** these bloody non-standard, ambiguous, illogical systems. Amateur hour again.
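[The false-WMO-code scheme Harry describes in the excerpt above, multiply the legitimate 5-digit code by 100, then add 1 at a time until there is no match, can be sketched as follows. This is an illustration of the described procedure, not CRU’s actual code, and the station numbers are made up:]

```python
def make_false_wmo(real_code: int, existing: set) -> int:
    """Generate a 'false' WMO code per the scheme described in the
    Harry_read_me excerpt: multiply the legitimate 5-digit code by 100,
    then increment until the candidate matches nothing already known."""
    candidate = real_code * 100 + 1
    while candidate in existing:
        candidate += 1
    return candidate

# Hypothetical example: station 72530, with two false codes already assigned.
taken = {7253001, 7253002}
print(make_false_wmo(72530, taken))  # 7253003
```

Note Harry’s own caveat applies: with no central repository of made-up codes, nothing stops two databases from independently generating the same “false” code.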

Any major adjustment of US temperature records that reflects an increase in the apparent warming trend for the contiguous US by some 50% (0.088°F/decade to 0.136°F/decade between 1895 and 2012) deserves close scrutiny. The adjustments for states like Maine, Michigan, Texas, Pennsylvania and North Carolina are especially troublesome. The Maine change from 0.01 to 0.26 is a 26-times-greater warming trend than before. These call into question the entire process. Either we have bad previous records or bad current adjustments, or both. Who said that climate science is sound, when we are now making such huge corrections to our base data going back a century, not to mention the failed models and bad predictions of the last 17 years?
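[As the post above explains, the national trend under the old dataset has to be derived as an area-weighted average of the state trends. A minimal sketch of that calculation, with invented trends and areas rather than the actual state data:]

```python
# Area-weighted average of per-decade state warming trends.
# The trends and areas below are made up purely for illustration.
trends = {"StateA": 0.26, "StateB": 0.05, "StateC": 0.10}     # deg F / decade
areas  = {"StateA": 35000, "StateB": 97000, "StateC": 69000}  # square miles

total_area = sum(areas.values())
national = sum(trends[s] * areas[s] for s in trends) / total_area
print(round(national, 3))  # 0.104
```

The point of the weighting is that a large state like Texas moves the national figure far more than a small one, so a big revision in a few large states can shift the contiguous-US trend substantially.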

Thanks Mod for the correction to my last post — it was very kind!
Reading over the comments, I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.
If alterations to data sets collected over time are required, then there is a logical reason for doing so. They didn’t destroy the original data sets.
I do have an issue with projections based on an evolving understanding of control data, but you’ll find scientists never do this.
Your target(s) for the misplaced anger should be more insightful and should be based on the “conclusions” from those who release “results”.

“I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain.”
So you think that NOAA/NCDC/NASA are an arrangement of magical characters that confer innocence on people?
Has anyone ever talked to you about how human beings behave? Did you listen?
Andrew

There’s only one reason I can come up with for why older averages would get cooler while newer ones get warmer. Good old UHI or micro site contamination (MSC). In the past there were more rural stations because more of the US was rural. So, the odds are better that any added station will show less UHI/MSC. For recent times the odds now shift to more stations being urban or at a minimum having some MSC problems. Hence, when these are added in they increase the trend. The bottom line is the increase is purely phantom based on the problems we all have seen over the years.
If there was some kind of systematic problem it should impact old and new records about the same. You’d think this logic should have been easy to figure out by the scientists. I wonder why they continue to ignore simple logic.

John McClure says:
April 29, 2014 at 1:52 pm
Reading over the comments, I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.
Yes but … the problem is simple researcher bias. They have no motive to look for anything that lowers the trend and completely accept anything that increases the trend. So, when they find something that increases the trend they stop thinking immediately and accept it. When they find anything that reduces the trend they either just assume it’s an error or figure out some reason (whether true or not) to ignore it.
Researcher bias is the entire reason for double blind studies in medical research. They found it was almost a 100% rule. It isn’t necessarily intentional, it just happens.

When you see a temperature set expressed in °F, and which averages temperatures across regions without using anomalies, it’s likely to be carrying a lot of baggage. And that’s the case with Drd964X. To understand why they need to do what they are doing, you need to read the paper of Fenimore.
Drd964X has been built up over years, primarily for agricultural uses etc. It’s not intended for climate science. So it isn’t even internally consistent. It uses different methods for pre-1931 data than later. Why? Because that’s how states used to do it. Continuity was more important than physical climate accuracy.
Not using anomalies caused problems with inhomogeneity. They describe “valley effects” – regions where stations were lower than average topography. And “ridge effects”.
The change in absolute values is inevitable when you change the station set. We saw that trying to compare USHCN and USCRN. USCRN reports cooler temperatures. Why? Their stations are, on average, higher. And there are latitude effects etc.

John McClure says:
April 29, 2014 at 1:52 pm
Thanks Mod for the correction to my last post — it was very kind!
Reading over the comments, I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.

– – – – – – – – – –
John McClure,
I quite agree with your caution about attributing anything pejorative to the scientists. But consider the possibility that they think that what they are doing is good for saving the planet, which is one of my six ‘Possible Cause Categories’ (PCC).

John Whitman says:
April 29, 2014 at 11:34 am
[. . .]
PCC #5 – [one possible reason to consider is that] the scientists involved with the major institutional bodies [making datasets] have a commonly shared consensus that there is a need to communicate better to save the planet. They are increasingly changing the historical warming rate [higher] because they think it is a better way to communicate climate science in order to save the planet.
[. . .]

I do not know that that is true. But it is a fact that all the major institutional bodies making the datasets have consistently (always?) increased the historical warming rate when they make changes. Why? See my other 5 PCCs.
John

Richard M says:
April 29, 2014 at 2:07 pm
I completely agree if we’re discussing Pharma, but climate science isn’t a hard science. It’s a soft science based more on “educated guess” than scientific fact. It’s in its infancy yet evolving well, thanks to all the global milk.
The issue isn’t the real need to understand; the issue is the rush to judgement! Scientific dialogue over an issue we can share globally is what we need, not action, as it’s poorly defined. Stupid solutions to an issue that has yet to be properly defined is a Carnie hire.

Hi Steve. If a station reports a temperature of 59 degrees at an elevation of 3,000 feet, then that is what the temperature is at that station and elevation. The world is a 3D place.
Doing the maths and stating that if the station were at sea level the temperature would be 57 degrees at that location isn’t valid. It’s an assumption.
The data is what it is.
The temperature at place A, with an elevation of 3,000 feet, was 59 degrees.
You can adjust it all you want, but fundamentally any data post-adjustment isn’t a record.
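[For readers following this exchange: the kind of elevation adjustment being debated is usually a lapse-rate reduction to a common reference level. A sketch, assuming the commonly quoted standard-atmosphere lapse rate of roughly 3.57°F per 1,000 ft; this is an illustration of the general idea, not any agency’s actual homogenization method:]

```python
STD_LAPSE_F_PER_1000FT = 3.57  # approx. standard-atmosphere lapse rate

def reduce_to_sea_level(temp_f: float, elevation_ft: float) -> float:
    """Estimate a sea-level-equivalent temperature by adding back the
    temperature lost with altitude. Purely illustrative: real methods
    are far more involved than a single fixed lapse rate."""
    return temp_f + STD_LAPSE_F_PER_1000FT * elevation_ft / 1000.0

# The 59 F at 3,000 ft from the comment above:
print(round(reduce_to_sea_level(59.0, 3000.0), 1))  # 69.7
```

Whether such a reduced value is a useful estimate or, as the commenter argues, no longer a “record” at all is exactly the disagreement in this thread.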

Steven Mosher says:If the results disagree with your theory, your theory is busted.
That is what we’ve been pointing out incessantly here: the catastrophic AGW conjecture is well and thoroughly busted, because it doesn’t agree with empirical evidence. It’s a dead parrot.

It has warmed so much that Lake Superior is still 60% ice covered, probably the highest level for this time of year since 1895. How can it be so much warmer? The freezing temperature of water has not varied in 13 billion years, so it must be colder!
We cannot rely on the numbers produced by the NCDC. They have adjusted the records about 10 times now. Are they saying that the 9th adjustment, made last year, was not accurate? (At the time, they said it made the record perfect.) Are they saying that the 1st adjustment, in 1988, was not accurate? (At the time, they said the record was fully corrected.)
The climate is no different than it was in 1900, and there is no record from 1900 produced by the NCDC that we can rely on anymore.
We need a forensic audit team to go in and fix our temperature history.

JustAnotherPoster says: April 29, 2014 at 2:28 pm
“If a station reports a temperature of 59 degrees at an elevation of 3,000 feet, then that is what the temperature is at that station and elevation. The world is a 3D place.”
Yes, it is. And you can find all that in the GHCN-D data, unadjusted. But it’s not what people generally want to know. If you want to know the average temperature in Maine, or the US, you then have to worry about which set of stations you have used. If your 3,000 ft station in Maine misses a month, the state’s average goes up. That doesn’t mean it was warmer. That’s why you need anomalies for regional averaging.
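[Stokes’s point about the missing station can be demonstrated with a toy example. The station values below are invented:]

```python
# Two stations: a warm lowland one and a cold mountain one (made-up values).
# Month 1: both report. Month 2: the mountain station's report is missing.
month1 = {"lowland": 60.0, "mountain": 40.0}
month2 = {"lowland": 60.0}

avg1 = sum(month1.values()) / len(month1)  # 50.0
avg2 = sum(month2.values()) / len(month2)  # 60.0 -- a spurious 10 F "warming"

# Anomalies (departure from each station's own long-term mean, here assumed
# to equal its month-1 value) are immune to the dropout:
baseline = {"lowland": 60.0, "mountain": 40.0}
anom2 = sum(month2[s] - baseline[s] for s in month2) / len(month2)  # 0.0
print(avg1, avg2, anom2)
```

The absolute average jumps 10 degrees purely because the cold station went silent; the anomaly average correctly shows no change. This is the mechanical reason averaging absolute temperatures over a shifting station set is hazardous.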

John Whitman says:
April 29, 2014 at 2:18 pm
John,
Thanks for the comment, but the part no one seems to understand, or refuses to accept, is the reality. Scientists are free thinkers here, bound by nothing political.
No question we have the characters who game the system but they lose all merit when they do.
I think the distinction you’re looking for is the notion “Scientist” in soft science in relation to the real thing?

Bill Illis says:
April 29, 2014 at 2:41 pm
We need a forensic audit team to go in and fix our temperature history.

– – – – – – – – – –
Bill Illis,
I agree.
My view: the audit team cannot depend in any way on government funding or have any connection to the major institutional bodies that make the datasets. It needs to be privately funded and to include statisticians, people experienced in medical and software research, professional industry QA/QC experts, professional IT data-handling experts, professional auditing company personnel and, of course, some skeptical climate scientists.
From my perspective, that is the only way to increase the confidence in climate science from its low level.
John

John McClure: “I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.”
That is the mindset that is responsible for so much acceptance of stuff like “The brutal cold is a result of global warming; the scientists say so.”
In the first place, there’s plenty of gain on offer, since producing results helpful to the skeptic cause can be a career-shortening move in this administration. Listen to Christy recount an example in his interview over at Alex Epstein’s site.
But there’s something else, something that I do myself all the time. When a computer program I’ve written gives me a result that makes sense to me, I don’t much look for problems. Otherwise, I look for the bug. Many’s the time I’ve gone round and round to find the bug, only to realize eventually that the program was actually giving me the right answer; it just hadn’t seemed right to me. So I’m biased, and only because I’m trying to get it right. Feynman’s recounting of the elementary-charge experiments comes to mind in this context.
So if you’re convinced, because other scientists who you think must be right have told you so, that the physics tells us warming has to be occurring, you think your results are wrong if that’s not what you see.
And, by the way, there’s something that I’ve seen first hand a lot. Those PhDs routinely make errors first-year undergraduates could detect. (Or even lawyers; I once had to send a tiny, around fifteen-line, central-to-the-system swatch of C code back to a couple of–very bright–PhDs three times before they finally got it right.) So there are a lot of things to correct all the time and therefore plenty of opportunity for (even unintentionally) selective correction.
And, judging by how misleading so much that comes out of the IPCC and other catastrophic-global-warming proponents is, I also wouldn’t discount the possibility of intentional falsehoods just because the folks involved are scientists.

Bill Illis,
You said, “We cannot rely on the numbers produced by the NCDC.” When one sees this level of adjustments, I tend to agree with you. The new apparent warming rates by state have changed in a major way. About a quarter of the 48 states had their apparent warming rate increased by a factor of 2, 4, 12 or even 26. Another quarter had their warming rate stay the same or be slightly reduced. The rest all had their warming rate increased, but by lesser amounts. How can one go to states like Maine or Michigan or Texas and say, oops, your climate has actually warmed 26, 12 or 9 times more since 1895? Eventually the NCDC may get to the point where the public no longer believes or trusts any of their data. To announce this apparent 50% warming-trend increase for the entire contiguous US just after the US experienced its 34th coldest winter, and when annual and winter temperatures have been declining for 17 years, is even more peculiar.

John McClure says:
April 29, 2014 at 2:43 pm
@John Whitman on April 29, 2014 at 2:18 pm
John,
Thanks for the comment, but the part no one seems to understand, or refuses to accept, is the reality. Scientists are free thinkers here, bound by nothing political.
No question we have the characters who game the system but they lose all merit when they do.
I think the distinction you’re looking for is the notion “Scientist” in soft science in relation to the real thing?
– – – – – – – – – – – –
John McClure,
You point out the possibility of making a distinction between ‘real’ and ‘soft’ science. ‘Real’ scientist as in one who conforms to an objective concept of science and ‘soft’ scientist as in one who conforms to a subjective concept of science? I would say that is the distinction.
With that distinction in mind, the point is, prima facie, that there are reasonable issues raised by a broader science community and by responsible citizens interested in the science. The reasonable issues are about whether the major institutional bodies producing datasets are ‘soft’ (subjective) scientists doing ‘soft’ (subjective) science or ‘real’ (objective) scientists doing ‘real’ (objective) science.
In order to establish which kind of scientists and science is involved, would you agree with Bill Illis (April 29, 2014 at 2:41 pm) there should be a forensic audit team to go over the processes and products of the major institutional bodies doing the datasets?
John

The clever thing about increasing the warming trend by cooling the past is that today’s temperatures can be verified and those of the past cannot, and the past’s larger error bars provide an additional excuse. Given how useful the modifications are to the Warmistas, I can only conclude it is a deliberate, dishonest strategy.

Did they offer a justification for this adjustment? Whatever justifies the adjustment also justifies the claim that their original data was not trustworthy. So they have established that the original data was not trustworthy. Why was it not trustworthy? What guarantee can they give us that they have eliminated the problem that made the original data untrustworthy? To cut to the chase: what are their criteria for making adjustments in the future? Could they please state those? Honest people would do that. Then we could use their criteria to make our own assessments of how well they make adjustments. Honest people would welcome such assessments.

I think you folks are all missing the most important post in this thread, by JonF, I believe….
Let them make 1895 cooler by whatever means they choose.
This introduces a HUGE rise in temperatures well before manmade CO2 could possibly have influenced our climate. It simply destroys the theory that the 1970–1997 rise HAS to be CO2….
Game over…..

This data churning is the equivalent of ‘confusion marketing’.
Anyone who has tried to study cellphone/internet sales literature will accept that it is not possible to come to firm conclusions, on the basis of the information given, about which is the best deal.
With all the major data running against their modelled scenarios they are throwing in another dollop of confusion in an attempt to maintain their failed narrative.

Someone tell me how in Michigan where I live the ice is still at record levels yet NCDC claims it has been colder in the recent past, that being the previous 100 years.
As observed by Steven Goddard, there have been massive adjustments to Michigan temperatures: http://stevengoddard.wordpress.com/2014/04/17/march-madness-in-michigan/
Steve Mosher and Nick Stokes, please explain 5 degrees of adjusting the past cooler and the present warmer. How is that justified, given the amount of ice on the Great Lakes this past winter and spring?
This is where I doubt either one of you will respond, because there is no logical explanation, no double-speaking barf mulch, that can offer any account other than that the temperatures are 100% phony, and it likely applies to virtually all of the “adjustments” coming out of the temperature gatekeepers.

It took until April 29 10:41 am for a commenter (evanmjones) to point out the obvious: The changes have made the slope steeper prior to 1950. But it is not until 1950 that CO2 becomes a significant player. So it means that climate sensitivity to GHGs must be lower than previously thought. As evanmjones says, it adds support to the “recovery from the LIA by unknown mechanism” theory.

charles nelson says:
April 29, 2014 at 3:40 pm
This data churning is the equivalent of ‘confusion marketing’.
==========
The marketing aspect has been an issue from day one. The initial IPCC effort was the basis for the creation of the UNFCCC. Stupid Is as Stupid Tells It To Do ; )

DR says: April 29, 2014 at 3:53 pm
“As observed by Steven Goddard, there have been massive adjustments for Michigan temperatures.”
They aren’t massive adjustments to the same data. They are different datasets with a different mix of stations. You just can’t compare regional averages unless they use anomalies.

Thanks for the link to your source code, Zeke. I’m interested in your breakpoint and kriging logic and whether it’s applicable to sea-level and precipitation trends. Any thoughts? Personally I use R, but I should be able to muddle through your Matlab code.

Given that so many sites have so many unique characteristics, and given the long-term quality of the data, the simplest approach is the one most likely to give an accurate idea of what is happening in the long run. I’m not interested in weighted averages, etc. What is the trend for maximum temperatures for all stations, unadjusted? Possibly an urban heat island adjustment will be required, but even that should be avoided if possible, by excluding stations in built-up areas that weren’t previously built up.

@ Nick Stokes
Utter nonsense. NCDC claims this past winter/spring in Michigan was warmer than 4 other winters while we are looking at record ice.
The ice tells the truth on the Great Lakes and inland lakes for that matter as well. NOAA is adjusting the raw data. It doesn’t matter if their reasoning is “peer reviewed”, it is an impossible physical occurrence that it can be warmer yet have hugely more ice area on the Great Lakes. Stop obfuscating. If you use a base period of 1901-2010 and the “anomaly” is higher in 2014 than 4 other years, that means it was warmer in 2014. The adjustments are there for anyone to see.
Michigan has some of the better rated weather stations in the country. I want you to explain how Spring 2014 can be warmer than 4 other years when the total ice of all the Great Lakes, not just Superior, is at all time record highs.
To adjust temperatures upward in 2014 is a load of bullshit, and as far as I’m concerned it is a concerted effort by a few in control of how and why it is done to fit their political masters’ narrative, while they themselves are political activists, including one man who on his own bio page claimed he had a PhD, and didn’t, and is now in charge of it all.
You still have not explained how Michigan can have record ice even today, yet it is supposedly warmer than four other years. It is entirely accomplished by adjustments to the raw data.

omnologos says: April 29, 2014 at 8:04 am
“This of course being completely pointless, since we all agree that the USA is only 3% of the planet ((c) J Hansen)”
And apparently misses the whole point of the article:
Which is about adjustments to historical temperature records, not about whether USA temperatures are indicative of global warming.

I’d like to ask something, don’t know if it’s OK or not, hope it is. I love WUWT, think it’s extremely valuable, and I can’t begin to think how to properly express how grateful I am to Anthony Watts and how much I respect his work.
With that clear, I understand Steven Mosher’s argument. I haven’t heard the counter argument. Anthony, if you’ve got time, what do you make of this? Agree, disagree, indifferent?

There’s nothing there worth replying to. The comment you refer to isn’t an argument about facts. The first part is a critique of skeptics’ strategy, the second an undirected reading list including a disgraceful and unsubstantiated ad-hom against Monckton. Re (1), there is no central skeptical command directing strategy, so everyone does what they like, and good on them. Re (2), it isn’t our job to read an open-ended list of general material (least of all an ill-mannered and abusively presented one) and construct Mosher’s argument for him. Let him put a clear concise argument up and there will be people willing to look at it.

DR says: April 29, 2014 at 5:52 pm
“I want you to explain how Spring 2014 can be warmer than 4 other years”
I’d like you to give your evidence for saying that it isn’t. NCDC has at least added up the data. I can’t see that you’ve done anything like that. In fact, in one place you’re saying winter was fifth coldest – here it’s spring.
What I was saying about Goddard’s plot is that it doesn’t show the effect of adjustments at all. It shows the mean of different sets of stations. It looks like the first one he calculated himself. There could be some difference due to adjustments, but you can’t tell until you account for the effects of latitude etc.

AJ,
I suspect kriging would be useful for both (or at least more useful than other spatial interpolation techniques). As far as breakpoint detection goes, it really depends on the structure of inhomogeneities in the data, as well as the underlying “true” field. Pairwise homogenization approaches tend to work well when the underlying field is smooth and well correlated over distance, or the field is very densely sampled, such that you can easily identify localized biases without removing real signals through false positives (as happens with GHCN and some arctic stations). It also works well when inhomogeneities tend toward larger step changes, and less well (though it’s still useful) when they are gradual trend biases or lots of small breaks. This paper by Williams et al. looks into cases where homogenization does and doesn’t work for temperature records, which should help a bit: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf
It might also be useful to do some tests on synthetic data with different known inhomogeneities added, to see how well different methods perform.
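That kind of synthetic test is easy to sketch. The following is a toy illustration only, not the pairwise algorithm actually used for GHCN: generate a noisy series, add a known step change (as from a station move), and check that a crude difference-of-means scan recovers its location.

```python
import numpy as np

def detect_breakpoint(series):
    """Return the index that best splits the series into two
    segments with different means: a crude single-breakpoint
    search, similar in spirit to SNHT-style homogeneity tests."""
    n = len(series)
    best_idx, best_stat = None, 0.0
    for i in range(5, n - 5):  # avoid tiny end segments
        left, right = series[:i], series[i:]
        # two-sample statistic: difference of means over pooled standard error
        se = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        stat = abs(left.mean() - right.mean()) / se
        if stat > best_stat:
            best_idx, best_stat = i, stat
    return best_idx, best_stat

rng = np.random.default_rng(42)
true_break = 60
series = rng.normal(0.0, 0.5, 120)
series[true_break:] += 1.5   # known step inhomogeneity (e.g. a station move)

idx, stat = detect_breakpoint(series)
print(idx)  # should land near the true break at index 60
```

Real homogenization is pairwise (each station against its neighbors, so regional climate signal cancels and only the local break remains), but the synthetic-truth testing idea is the same: you know where the break is, so you can score the detector.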

Nick Stokes,
I caught them red handed lying
In February we had snow, 3 to 6 inches of snow. Enough snow to bury my feet in and it was reported as RAIN!
Don’t tell me it was because of different weather stations or micro climate or whatever, because I am close enough to that darn weather station that I can walk, through the snow, to it.
Oh and when I told them I caught them lying they readjusted the data to say we had 0.37 inches of snow but above-freezing temps. You CAN’T BURY YOUR FEET in 0.37 inches of snow, and I do not get 3 inches of ice on my water tanks when the temperature is ‘above freezing’.
Also ‘Adjustments’ are not needed at the weather station in question because it is only about a decade old.
THEY LIED! And the raw data was ‘adjusted’ before it was 24 hours old.

I find the whole Mosher/Zeke/Stokes fusillade amusing; it’s SOP for them. They all miss the point by focusing on minutiae.
The main point to remember: all national temperature averages are a mathematical construct, with choices made by the creator of the construct. Some constructs are closer to reality than others, but none of them match reality exactly.

Let’s assume that they are correct and the past was cooler and the past warming trend is correct.
Since the present warming trend is flat, the total temperature record is very good evidence that the effect of increased CO2 levels is cooling.
Sometimes it isn’t necessary to do anything, maybe we should just let them dig a deeper hole.

@Nick Stokes
Has the freezing temperature of water changed? Come on Nick, you’re a smart guy. Explain how there can be more ice than ever recorded on the Great Lakes in January-March, yet 2014 is still warmer than 4 other years recorded in Michigan during the same period.
You said there are no massive adjustments, but of course in ‘Nick Stokes’ speak, massive could mean anything from zero to infinity. Yet, there are the before and after charts at the beginning of the thread and somehow in your mind there is no massive adjustment while it stares you in the face.
I’m sure if Anthony or someone would post Michigan’s weather station siting quality ratings, they will be higher than most other states, yet a 100+ year trend of -.010/decade turned into +.21/decade in just a few years’ time by the geniuses who figured out that the people who kept thermometer records were complete idiots.

Anthony,
I’ve lived in Michigan my entire life, 1/2 mile from Lake Huron. As one who has been ice fishing since 5 years old, in the next 57 years it didn’t take a lot of brain power to figure out that when temperatures go above freezing and the sun gets higher in the sky, the ice melts. Throw in some rain and it goes away quickly. Instead it remained very cold and continued to snow. Our heating bills speak for themselves and can be lowered by making it colder in the house, but water cannot change its freezing temperature.
Virtually nobody I know living in Michigan remembers such a cold winter and extended ice fishing season. Spring 1960 was colder than 2014? Not a chance. I just find it incredible that the “new and improved” Michigan temperature records NCDC spews out can be taken seriously. I don’t care how many peer-reviewed approvals Mosher/Stokes/Zeke reference, THE ICE DOES NOT LIE. My mother is 93 years old this July; she has lived in Michigan her whole life.
There is no other state in the country that can validate/invalidate temperature data by simply looking at the ice. Seriously, these adjustments are absurd; even the 2009 version is a joke. She would ask what school taught you that the last ten years in Michigan were warmer than the 1930’s. http://stevengoddard.files.wordpress.com/2014/04/screenhunter_253-apr-17-06-04.gif

“A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.” SM quote
How does that tie in with the mass altering of individual station records?!

Mike Jonas – excellent point.
Steve Mosher – I don’t understand how it automatically follows that, if the past was cooler than the present, and you then add more stations from the past to the existing records, that this will cool the past further. As a thought experiment, what if there was an extra 100,000 stations out there which we haven’t yet added the records from? Would the past be cooled every time we added another batch? How large would the temperature adjustment be, by the time they were all added?

DR says: April 29, 2014 at 7:29 pm
“Explain how there can be more ice than ever recorded on the Great Lakes in January-March, yet 2014 is still warmer than 4 other years recorded in Michigan during the same period.”
Again, how about some actual data? That’s what NCDC was looking at. I can’t see that you’ve done anything like that. In fact, in one place you’re saying winter was fifth coldest – here it’s spring.
Here’s Alpena/Phelps, right on the shore of Lake Huron. GHCN V3 unadjusted data. And yes, March was cold. Av -6.7°C.
But how about March 1885: -10.3°C. Or 1960: -8.2°C. Here’s the ten coldest:
1885 -10.3
1960 -8.2
1883 -8.0
1888 -7.7
1923 -7.0
1926 -6.9
1877 -6.8
1887 -6.8
2014 -6.7

1) What is the margin of error in the measurement? Is it less than 0.01 F? If not, we cannot say if the trend is flat or positive or negative.
2) Are the data adjusted to account for UHI? Large towns can be 1 to 3 °C warmer than the surrounding rural area.

Nick Stokes: GHCN V3 unadjusted data.
Methinks I should have said a bit more about my thoughts at 1:09 PM. It is clear that way back in 1892 the Weather Bureau was concerned with UHI and how to deal with it in area records. The report clearly states that corrections have been made and that the result is that the urban temperatures are substantially the same as in the nearby rural areas. It is not clear whether ‘corrections’ means physical location of the instruments, but it could just as well mean ‘adjustments’ to the readings, done on an ad hoc basis by persons unknown. It is not beyond imagination that current ‘corrections’ are being inflicted on numbers that were already ‘corrected.’

John Slayton says: April 29, 2014 at 10:57 pm
“The report clearly states that corrections have been made”
People here have funny ideas about adjustments. They aren’t “alterations to the historic record”. They are usually someone taking a mass of digitized data and putting it through a computer algorithm prior to calculation of a regional or global average. It’s not “correcting the past”. It’s trying to estimate what is representative of the region.
There was no capability to do this in 1892. Thermometer data was read, written into a log book, and sent to some central office via telegrams or written forms, where it may have got into the newspaper. Then it sat, until unearthed by digitizers in the 1970’s or thereabouts.
Those records still largely exist, as do the newspaper reports. No-one has gone through manually altering them. And they were transcribed into GHCN as part of a major project in the early 1990’s. GHCN re-issued them on CD’s. That’s not how adjustment happens.
GHCN publishes daily and monthly unadjusted data. I’ve compared some of that with contemporary news reports (some reported here). It matches. And you can check that the monthly data is indeed an arithmetic average of the daily data.

Nick Stokes says:
April 29, 2014 at 4:19 pm
DR says: April 29, 2014 at 3:53 pm
“As observed by Steven Goddard, there have been massive adjustments for Michigan temperatures.”
===============
Nick says..”They aren’t massive adjustments to the same data. They are different datasets with a different mix of stations. You just can’t compare regional averages unless they use anomalies.
=================================
What Mr. Goddard’s graph shows is the total difference between NCDC reported temperatures and GHCN HCN raw temperatures. What Mr. Goddard often does is take ALL GHCN continuously active stations, and use those as a basis. In this case, using those stations, the reported T matches the record-smashing ice on the northern and more southern Great Lakes as the clear-cut coldest March on record. This is, IMV, both constructive and informative. In the case of the US as a whole this method far more closely matches the early official US charts, shown in this post of Mr. Goddard here… http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/
You see, when you attempt to factor in THOUSANDS of station changes, and just as many disparate siting issues at each station, and a very uncertain science on UHI with disparate views by educated people, and add in a legitimately questionable TOB adjustment, and you shake and bake the entire record into regional averaging and anomalies of ever-changing stations, the chances of ending up with a FUBAR record grow exponentially.
EM Smith did some excellent articles on this cauldron of mashed assumptions and station changes. Mark Boffill, I highly recommend you go to his site here. E.M. is both brilliant and a very, very rare straight shooter. http://chiefio.wordpress.com/category/agw-and-gistemp-issues/
The concomitant cooling of the past (both the attempted scrubbing of the MWP, and the false cooling of the late ’30s and early ’40s in the US and other places), in combination with questionable adjustments and the recent divergence from RSS, and, in addition, UHI questions etc., all lead to my perspective that the VERY MINOR warming detected (certainly nothing outside of past periods of warming), plus the failed predictions of doom, is “much ado about nothing”.

One difference between Nick Stokes and Steve Mosher: SM does a drive-by, and NS comes back and continues what he has started. Whether you agree with what Nick says or not, he is to be congratulated for getting into a discussion. Until Steve Mosher does the same he should be ignored, as a troll would/should be.

Fascinating stuff, but you guys should be very wary of seeing conspiracy in everything. A sober analysis of the justifications and the methodology is badly needed, but the changes should not be dismissed as part of THE AGENDA out of hand.
Remember that this is USA-only ground station data, and it will be no more than a footnote to the greater debate, unless of course the same kind of adjustments can be rolled out globally…
But what is also clear is that, if we take the revised data at face value, it becomes much more difficult to establish any ACCELERATION in the warming trend for USA, as it backdates much of the warming to the earliest part of the graph, a time when CO2 emissions were substantially less, thereby flattening the curve. So if it IS a conspiratorial move it may be counter-productive.

David A says: April 30, 2014 at 1:10 am
“What Mr. Goddard’s graph shows is the total difference between NCDC reported temperatures and GHCN HCN raw temperatures. What Mr. Goddard often does is take ALL GHCN continuously active stations, and use those for a basis.”
And that’s a problem with making up your own index. The answer you get depends on the mix of stations. More North, and the average goes down. USHCN has tried to make their distribution reasonably representative, though USCRN is more systematic. But GHCN historically was just a collection of whatever data could be retrieved. No one ever tried to make it representative of Michigan.
As to whether it “matches” the ice – that’s a very subjective judgment. The data for Alpena suggests otherwise.

Alpena is only one location, assuming raw data and no siting issues, and assuming no adjustments to the past record at that location. The ice is not subjective. The Great Lakes are less affected by currents than the ocean ice shelves (no warm or cold currents from far-away locations). Wind is likewise less of a factor on the Great Lakes (a landlocked body of water) and can be both positive and negative for ice coverage.
Beyond that, the Great Lakes cover a massive area, and the ice is, to a degree, one massive thermometer covering a vast area. It is the coldest year on record ON the Great Lakes, and in the continuously active GHCN stations, and not by just a little. (This covers a very large area.) Adding in the testimony of the Great Lakes ice being massively above the previous record is a cogent and constructive argument.
Also, my general arguments on using all continuously active locations are sound, as are my points with regard to the likely errors of the current system you advocate, which I call FUBAR.
Your point on the differences between the two data sets is not without potential merit, however. A map of the differences in the locations would be informative. Keep in mind, the location of the storm tracks, and the cold air following those weather systems, can easily overwhelm a hundred or more miles of N/S difference. In addition, Goddard’s number-one coldest ranking for those stations is ALSO based on the past of those same locations.
Additionally, most of Mr. Goddard’s maps are not composed of different data sets, as this link shows: http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/

One final point about the continuously active GHCN stations: if they were in error due to random station location, they would show a randomly warmer or cooler past and present, compared to your “well chosen” stations. However, the GHCN stations CONSISTENTLY show a warmer past, and a cooler present, than the adjusted, homogenized current data sets. A random selection of stations should average out.
This consistency is testimony to the assertion of a manmade cooling of the past, and warming of the present, and is in line with Jim Hansen’s early US T charts.

David A says: April 30, 2014 at 3:16 am
“One final point about the continuously active GHCN stations: if they were in error due to random station location, they would show a randomly warmer or cooler past and present, compared to your “well chosen” stations. However the GHCN stations CONSISTENTLY show a warmer past, and a cooler present”
Yes, of course. They aren’t located randomly. GHCN stations are predominantly in the south of the state. That’s why you get a warmer average. But SG attributes that to “adjustment”.
You get more warmth in the past because that’s when GHCN was just collecting whatever it could. When it became an ongoing program in the mid 90s, that got rationalized, with a more even distribution.
People who do averages properly know all this. They use gridding and area weighting to avoid distribution biases. And they use anomalies to counter the effects of things like latitude.
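The point about anomalies and changing station mixes can be demonstrated with a toy example (two invented stations, not real data): both warm at exactly the same rate, but the colder northern one only enters the record halfway through, so a plain average of whatever stations happen to be available manufactures a spurious trend that the anomaly method removes.

```python
import numpy as np

years = np.arange(1900, 2000)
trend = 0.02 * (years - 1900)        # both stations warm at 0.02 deg/yr
south = 10.0 + trend                 # warm, low-latitude station (invented)
north = -5.0 + trend                 # cold, high-latitude station (invented)

# Suppose only the southern station reports before 1950,
# and both report afterwards: a changing station mix.
naive = np.where(years < 1950, south, (south + north) / 2)

# Anomaly approach: subtract each station's own 1951-1980 mean first,
# then average whatever anomalies are available.
base = (years >= 1951) & (years <= 1980)
south_anom = south - south[base].mean()
north_anom = north - north[base].mean()
anom = np.where(years < 1950, south_anom, (south_anom + north_anom) / 2)

naive_trend = np.polyfit(years, naive, 1)[0]   # comes out negative: spurious
anom_trend = np.polyfit(years, anom, 1)[0]     # recovers the true 0.02 deg/yr
```

The plain average cools sharply when the cold station joins the mix, even though every station is warming; the anomaly average is immune because each station is measured against its own baseline. Gridding and area weighting address the analogous spatial problem.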

Nick Stokes: People here have funny ideas about adjustments.
May well be true. Your description of adjustment would leave one with the impression that adjustments are only about groups of stations, and are applied uniformly across those groups. Things might be heading in that direction, but they haven’t been so restricted up to now, at least if we are to believe NCDC’s description of the procedure at http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL (yeah, my bold):
The data for each station in the USHCN are subjected to the following quality control and homogeneity testing and adjustment procedures.
A quality control procedure is performed… to identify suspects (> 3.5 standard deviations away from the mean) and outliers (> 5.0 standard deviations). Until recently these suspects and outliers were hand-verified with the original records.
….
The homogeneity adjustment scheme described in Karl and Williams (1987) is performed using the station history metadata file to account for time series discontinuities due to random station moves and other station changes.
….
Currently all data adjustments in the USHCN are based on the use of metadata.
I find it hard to believe you are serious when you say adjustments are not ‘alterations to the historic record’ or that they are not ‘correcting the past.’ If I go to http://cdiac.ornl.gov/cgi-bin/broker?id=048839&_PROGRAM=prog.gplot_meanclim_mon_yr2012.sas&_SERVICE=default&param=TMAX&minyear=1893&maxyear=2012 and get the station history graph for Tejon Ranch TMAX, I get a very different history than if I pull down Tejon Ranch TMAXTOBS at http://cdiac.ornl.gov/cgi-bin/broker?id=048839&_PROGRAM=prog.gplot_meanclim_mon_yr2012.sas&_SERVICE=default&param=TMAXTOBS&minyear=1893&maxyear=2012 .
In the cited 1892 document, the head of the Weather Bureau explicitly says ‘corrections’ have been made that bring urban temperatures into line with surrounding areas. Are you suggesting he was mistaken? My intention here was not to imply that the current processes of systematic adjustment are in any way similar to his ‘corrections.’ I probably should have stated that in the first place, but I thought it was adequately suggested by the phrase ‘ad hoc basis by persons unknown.’

John F. Hultquist says: @ April 29, 2014 at 10:16 pm
Gail Combs says “I caught them lying …”
Perhaps the NWS reported “water equivalent” rather than snow depth, since different snows yield very different depths for the same water content.
>>>>>>>>>>>>>>>>>>>
NO
Jeff Masters originally reported the February 12th readings as RAIN and the temperatures as above freezing. I called him a liar on his blog. He then changed it to a high above freezing and snow. When I called him a liar a second time, he changed the records to a high of 31°F and 0.37 inches of snow.
I wish I had gotten screen shots to prove what I am saying but I did not. One does not expect the past to change like it does in some Orwell novel.
Another example is the Norfolk City vs Norfolk International Airport I posted here on WUWT a few years ago.

…Here is a quick look at the only city & close-by airport listed for North Carolina. The city is on the North Carolina/Virginia border and right on the ocean. Take a look at the city vs the airport! Norfolk City and Norfolk International Airport
Here is a graph of the raw 1856 to 2009 Atlantic Multidecadal Oscillation. Amazing how the temperatures follow the Atlantic ocean oscillation as long as the weather station is not sitting at an airport, isn’t it?

The Norfolk International Airport was a straight line of ever increasing temperatures while Norfolk City followed the oscillations of the Atlantic Multidecadal Oscillation.
Now those charts are no longer available and the Norfolk International Airport chart matches Norfolk City and the Atlantic Multidecadal Oscillation.
What is also interesting is NONE of the NC ‘official weather stations’ are any further into the mountains than Chapel Hill in the middle of the state (Piedmont.)
Again from that older comment of mine.

I live in North Carolina and did a quick look at my state. It is VERY interesting.
There is nothing closer to the mountains than Chapel Hill which is just west of Raleigh. All the areas with longitudes further west are also further south and that puts them on the seacoast. Chapel Hill is on the plains. Seems the mountain areas are no longer part of the record, imagine that.
I also found that at Wunderground the Moncure NC station is no longer available; it flips you to the new Sanford NC airport. It was available the last time I looked. Asheville NC is the big city in the mountains, home of the Biltmore Estate (1895). Its weather station now only goes back to 2005, at the AIRPORT of course. The site also directs you to the “nearby” (80 miles) city of Greenville (downtown), which only goes back to 1970. That site has been declared “unofficial.” The “official” station is now Greenville/Spartanburg, South Carolina (Airport)….

E. M. Smith in looking at the “Death of the Thermometers” found similar situations where the thermometers that were dropped were in the mountains.

GHCN – GIStemp Interactions – The Bolivia Effect
Alright Already, what is this Bolivia Effect?
Notice that nice rosy red over the top of Bolivia? Bolivia is that country near, but not on, the coast just about half way up the Pacific Ocean side. It has a patch of high cold Andes Mountains where most of the population live….
One Small Problem with the anomaly map. There has not been any thermometer data for Bolivia in GHCN since 1990.
None. Nada. Zip. Zilch. Nothing. Empty Set.
So just how can it be so Hot Hot Hot! in Bolivia if there is NO data from the last 20 years?
Easy. GIStemp “makes it up” from “nearby” thermometers up to 1200 km away. So what is within 1200 km of Bolivia? The beaches of Chile, Peru and the Amazon Jungle.
Not exactly the same as snow-capped peaks and high cold desert, but hey, you gotta make do with what you have, you know? (The official excuse given is that the data acceptance window closes on one day of the month and Bolivia does not report until after that date. Oh, and they never ever would want to go back and add data into the past after a close date. Yet they are happy to fiddle with, adjust, modify, and wholesale change and delete old data as they change their adjustment methods…)

Yes, this part is what I was referring to. More specifically Steven’s argument that the temperature record isn’t the way to go.

Re (1), there is no central skeptical command directing strategy, so everyone does what they like, and good on them.

So who’s been sending me the checks? Sure, they’re signed ‘Big Oil’ but the check says the bank name is First Skeptic Central. 😉
No seriously, obviously we all know that. I’m pretty sure Steven knew it when he wrote the comment. I was just curious about Anthony’s take.
Thanks for your response.

John Slayton says: April 30, 2014 at 3:47 am
“and get the station history graph for Tejon Ranch TMAX, I get a very different history than if I pull down Tejon Ranch TMAXTOBS”
There’s no mystery there. You can get the original data, or you can get data with TOBS correction applied. It’s clearly labelled. Are you saying that knowledge of the effect of TOBS should be suppressed?
As to the head of the Weather Bureau, he didn’t “explicitly say ‘corrections’ have been made that bring urban temperatures into line with surrounding areas” at all. He said, in your quote: “with the shelters now in use and considering the various corrections applied, the readings are substantially the same.”
He said, “the readings are substantially the same”. He didn’t say the corrections made them so, or that that was their purpose. I suspect it’s a reference to instrument calibrations. Incidentally, what “Weather Bureau” are we talking about?

Re Nick Stokes @5.09;
“Are you saying that knowledge of the effect of TOBS should be suppressed? ”
Are you saying that there is some new knowledge of TOBS that we did not previously possess and that this knowledge resulted in a unidirectional change to 100 year old data?
The fact that there have been a series of these corrections, and that they always go in the same direction, tells me more about the scientists than the data or our knowledge.

Bob Kutz says: April 30, 2014 at 5:57 am
“Are you saying that there is some new knowledge of TOBS that we did not previously possess and that this knowledge resulted in a unidirectional change to 100 year old data?”
It’s as old as USHCN itself. But data with and without TOBS correction has always been available. Do you think it should be otherwise?
TOBS is clearcut. Many observing times changed. The changes have been recorded. It was a systematic change from the previously set 5pm to a now common 9am or 10am. It affects readings in a predictable way. It’s hard to justify not correcting.
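The mechanism at issue in this exchange can be illustrated with a toy simulation (synthetic hourly temperatures with a fixed diurnal cycle, not the empirical Karl et al. model NCDC actually uses): a max thermometer reset in the late afternoon can carry a hot afternoon’s heat into the next day’s 24-hour window, inflating the mean of recorded daily maxima relative to a morning reset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 365
hours = np.arange(n_days * 24)
# Synthetic hourly temps: random day-to-day weather plus a
# diurnal cycle peaking around 3pm (hour 15). Purely illustrative.
daily_mean = np.repeat(rng.normal(15.0, 4.0, n_days), 24)
diurnal = 8.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)
temps = daily_mean + diurnal

def mean_tmax(temps, reset_hour):
    """Mean recorded daily maximum when the max thermometer is
    read and reset once a day at reset_hour, i.e. each 'day' is
    the 24-hour window ending at that hour."""
    n = len(temps) // 24
    maxima = [temps[(d - 1) * 24 + reset_hour : d * 24 + reset_hour].max()
              for d in range(1, n)]
    return float(np.mean(maxima))

tmax_5pm = mean_tmax(temps, 17)  # afternoon reset: a hot afternoon's evening
                                 # heat can dominate the next window too
tmax_9am = mean_tmax(temps, 9)   # morning reset
```

In this toy setup the afternoon-reset series comes out warmer than the morning-reset series, which is the double-counting direction described above; the actual TOB adjustment is estimated from station metadata and an empirical model, not from a simulation like this.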

TOBS is clearcut. Many observing times changed. The changes have been recorded. It was a systematic change from the previously set 5pm to a now common 9am or 10am. It affects readings in a predictable way. It’s hard to justify not correcting.

Ah, no. This is an utter fabrication. The adjustments are not at all in line with any reasonable TOBS adjustment & you know it.

Nick, your understanding of TOBS differs from mine in such a way that one of us is just making up a story that does not reflect reality.
From GHCN;
The TOB software is an empirical model used to estimate the time of observation biases associated with different observation schedules, and the routine computes the TOB with respect to daily readings taken at midnight. Details on the procedure are given in “A Model to Estimate the Time of Observation Bias Associated with Monthly Mean Maximum, Minimum, and Mean Temperatures,” by Karl, Williams, et al., 1986, Journal of Climate and Applied Meteorology 25: 145–160.
SO . . . Mr. Stokes, since your story varies wildly from what the GHCN claims on their own website, maybe you ought to think about looking into things before making assertions.
“From the previously set 5pm to a now common 9am or 10 am.”???
Typical warmist sheep. You don’t bother looking into things and asking why. You just accept the gruel you are fed and defend those who provide it, to the point of just making things up as needed.
Nick, the data set without TOBS shows about half the warming of the data set with TOBS added. The most recent data set increases the difference. How is it that our understanding of temperature observations 100 years ago now allows us to more accurately adjust them ALL DOWN?
Doesn’t that even pique your curiosity, just a little bit???

Nick Stokes: Incidentally, what “Weather Bureau” are we talking about?
Ah, good, a question where we might find an agreed-upon answer. It was, per the title page: U.S. Department of Agriculture, Report of the Chief of the Weather Bureau for 1891.
This was the first report since the Weather Bureau was moved from the Signal Corps to the Ag Department. The Chief referred to was Professor Mark W. Harrington of the University of Michigan.
A short publication history and easy archive of Bureau reports is available at: http://www.lib.noaa.gov/collections/imgdocmaps/reportofthechief.html
Well, it’s a new day in LA and it’s going to be hot. Been nice chatting with you.
: >)

Nick Stokes says…
David A says: April 30, 2014 at 3:16 am
“One final point about the continuously active GHCN stations: if they were in error due to random station location, they would show a randomly warmer or cooler past and present, compared to your “well chosen” stations. However the GHCN stations CONSISTENTLY show a warmer past, and a cooler present”
==============================================================
Nick…”Yes, of course. They aren’t located randomly. GHCN stations are predominantly in the south of the state. That’s why you get a warmer average. But SG attributes that to “adjustment”.
======================================================================
David… That is assertion without evidence, and, as my post made abundantly clear, this is consistently true throughout the US. BTW, I notice you forgot to mention the cooler present, which I stated. If they are further south, why are they not warmer in the present?
=================
Nick says…You get more warmth in the past because that’s when GHCN was just collecting whatever it could. When it became an ongoing program in the mid 90s, that got rationalized, with a more even distribution.
=======================================
David… Hmm? That fails basic logic. I again stated that a random distribution would produce random results. The warmist side has consistently been criticized because all their adjustments, cooling the past and warming the present, just happen to lead one to an increased perception of CAGW.
Once again, if those continuously active stations were consistently [too] warm in the past, they would be too warm today; but now, those random unbiased stations just happen to be [too] cool in the present. Sounds like you are trying to have your cake and eat it too.
Again, they are warmer in the past, and cooler in the present; clear evidence of the artificial and likely wrong nature of the adjustments.
=========================================================
Nick says…People who do averages properly know all this. They use gridding and area weighting to avoid distribution biases. And they use anomalies to counter the effects of things like latitude.
===============================================================
David..”Nick, I am sorry if I do not bow at the feet of those who can decipher the holy book of CO2, but rudimentary logic precludes your assertion. By only answering one small part of my post, and very poorly at that, you are not convincing. Also, as I already stated, when you attempt to factor in THOUSANDS of station changes, and just as many disparate sitting issues at each station, and a very uncertain science on UHI with disparate views by educated people, and add in a legitimately questionable TOB adjustment, and you shake and bake the entire record into regional averaging and anomalies of ever changing stations, the chances of ending up with a FUBAR record grows exponentially.
The truth is the station choices have a curious tendency to move towards airports, cool the past, warm the present, and, through algorithms and the elimination of many stations, spread [their] anomalies to large areas. Now, we are not talking about a major movement of whatever minor or nonexistent warming, but even .10 or .05 over ten or so years is a large swing on the graphs that are produced. As I said, the increasing divergence of RSS from the surface data sets is as great as the predicted warming trend. I also challenged you or Mr. Mosher to just explain the Iceland adjustments. I also linked you to a series of graphs by Mr. Goddard here, http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/ where your mix of data sets does not even happen for the most part. And you and Mr. Mosher failed to respond.

@Nick Stokes.
How can you compare a station that recorded a max temp of, let's say, 65 degrees 50 years ago at 5 pm with one of 60 degrees today at 9 am?
And why do the adjustments ALWAYS go the same way…
This blog post on an SAP Data Analytics site is still perhaps one of the best I've seen with regard to global warming: http://scn.sap.com/community/hana-in-memory/blog/2013/11/06/big-data-geek–is-it-getting-warmer-in-virginia–noaa-hourly-climate-data–part-2 (and featured here a while back).
The pure unadjusted data is loaded in from raw CSV files, the methodology is open… and guess what: the data shows absolutely zero warming trend at all.
The biggest mystery of climate science is that when other clever people analyse the data, it shows flat trend lines, yet after NOAA and other agencies process it, a warming trend appears.
It's one of the world's intriguing mysteries… the fact that non-climate scientists often struggle to come up with hockey sticks, warming trends, or any sort of "human fingerprint," save for the odd UHI effect.
Once you start adjusting data, the data isn't a record of anything anymore; it's your interpretation of what the data should look like.
Again, just because you think the adjustments are valid and they have peer-reviewed papers doesn't make them valid. It's your own opinion that they are valid. I don't think they are, and many others don't think the data adjustments are valid.
Peer review isn't a particularly solid backstop.
Raw data is what it is. Adjusting the data means it's no longer an observation.

If we ignore for the moment the mechanics of the recent temperature adjustments, do the resulting warming rates make any sense? The data shows that the greatest warming rates (0.2 to 0.29 F/decade) are for the NORTH EAST, EAST NORTH CENTRAL and WEST NORTH CENTRAL, mostly states close to the Canadian border. This looks strange, as one would expect the SOUTH or CENTRAL parts to have warmed more. However, when one looks at the warming rates in Canada, the pattern makes more sense. Canada as a whole has warmed 1.6 C since 1948, mostly due to the extra warming of the ARCTIC and the Atlantic Ocean. This equates to 0.432 F/decade, compared to 0.135 F/decade for the CONTIGUOUS United States. However, the CANADIAN regions next to the United States have warmed less. For example, the Great Lakes and St Lawrence Valley region only shows an annual warming rate of 0.9 C over about 64 years, or 0.25 F/decade. This is similar to the 0.2 to 0.29 F/decade for the US states close to the Canadian border as per the new adjustments. So I cannot say that the new warming rates are high. The figure of 0.135 F/decade also equates to about 0.75 C/CENTURY, which is similar to the warming rate for the globe for the past century.
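Those per-decade figures are simple unit conversions; a sketch of the arithmetic (the function name is illustrative):

```python
# Convert a total warming in deg C over a period into deg F per decade,
# the units used throughout these comparisons.
def c_total_to_f_per_decade(delta_c, years):
    return delta_c * 9.0 / 5.0 / (years / 10.0)

# The quoted 0.9 C over ~64 years for the Great Lakes / St Lawrence region:
print(round(c_total_to_f_per_decade(0.9, 64), 2))   # 0.25 F/decade
# Canada's 1.6 C since 1948 (assuming ~66 years): roughly 0.43-0.44 F/decade
print(round(c_total_to_f_per_decade(1.6, 66), 2))
```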
What is interesting is that the annual and winter temperatures for the Contiguous US have been declining since 1998, or the last 17 years, and the greatest cooling is in the very states next to the Canadian border. So things are changing recently from the 1895-2013 warming trend.
Winter temperatures in the United States have been declining now for 17 years at about 1.78 F/decade according to NCDC/NOAA CLIMATE AT A GLANCE data. As a matter of fact, in the United States, 8 out of 12 months of the year are cooling. Winter, spring [2 months] and fall are all cooling, while only 3 months, namely March, June and July, are still warming. ANNUAL US temperatures are declining at (-0.36 F/DECADE) since 1998. So there is very little evidence of global warming in the United States for the last 16-17 years.
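A per-decade trend like the -0.36 F/decade figure is just an ordinary least-squares slope on the annual series, multiplied by ten. A minimal sketch with a synthetic series (illustrative values, not actual NCDC data) built to decline at exactly that rate:

```python
import numpy as np

# Synthetic annual mean temps (deg F) for 1998-2013, declining at
# exactly -0.036 F/year, i.e. -0.36 F/decade.
years = np.arange(1998, 2014)
temps = 54.0 - 0.036 * (years - 1998)

# OLS slope of temperature against year, scaled to per-decade
slope_per_year = np.polyfit(years, temps, 1)[0]
trend_per_decade = slope_per_year * 10
print(round(trend_per_decade, 2))   # -0.36
```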

Nick, concerning TOB adjustments: as several links I have given you show, the adjustments do not start there, or end there. In fact they are changing the current record as well, as the link in my post above shows with regard to Illinois. Also, there are reasonable arguments that show the TOB issue is overstated.

One of the reasons for changing the temperature record and pushing down past temperatures is that it also biases the proxy record. Often, proxy records are calibrated by comparing to past temperatures. Therefore, by pushing down past temperatures in the official record, it also pushes down proxy estimates of past temperatures and makes them unreliable.
One of the major goals of climate change and global warming believers is to make it appear that the current temperature is warmer than past periods like the Medieval period. Pushing down past temperatures to bias the calibrations is a good way to do that.

Bob Kutz says:
April 30, 2014 at 7:14 am
Nick, your understanding of TOBS differs from mine in such a way that one of us is just making up a story that does not reflect reality.
From GHCN;
The TOB software is an empirical model used to estimate the time of observation biases associated with different observation schedules, and the routine computes the TOB with respect to daily readings taken at midnight. Details on the procedure are given in “A Model to Estimate the Time of Observation Bias Associated with Monthly Mean Maximum, Minimum, and Mean Temperatures,” by Karl, Williams, et al., 1986, Journal of Climate and Applied Meteorology 25: 145-160.
SO . . . Mr. Stokes, since your story varies wildly from what the GHCN claims on their own website, maybe you ought to think about looking into things before making assertions.
“From the previously set 5pm to a now common 9am or 10 am.”???
Well if you read the paper you referenced, for example Table 1, you’ll see that in 1931 14% of the stations used AM vs 79% who used PM, whereas in 1984 the split was 42% vs 47%.
Which seems to agree with what Nick said.
It also reports that “it is rare to find cooperative stations with uniform observing throughout their history, and in many instances observers have changed their time of observation several times in a single decade”.

Nick, the data set without TOBS shows about half the warming of the data set with TOBS added. The most recent data set increases the difference. How is it that our understanding of temperature observations 100 years ago now allows us to more accurately adjust them ALL DOWN?
That the mean trend changes doesn't mean that all the past observations are adjusted downwards. Clearly, if one wants to accurately present past data along with today's, it's necessary to put them on a common basis; this is what the TOBS adjustment does.

@Phil “Because the mean trend changes doesn’t mean that all the past observations are adjusted downwards. Clearly if one wants to accurately present past data along with today’s it’s necessary to put them on a common basis, this is what the TOBS adjustment does”.
You're adjusting the temperatures, pure and simple.
Wouldn't it be cleaner to just admit the historical temperature records are a mess?
You can't accurately present past data with present data if the data isn't similar or a match; otherwise you're making subjective adjustments to what you think the data should be, not what the data was.
The fact is, you can't really compare past data to present, because when you do… it doesn't show the global warming trend.
Thus you have to adjust the past data to make it "comparable" and, hey presto, a slight warming trend appears.
Why on earth should a temperature on a given date and time be adjusted?
That was the temperature recorded accurately then. You can't legitimately say that because it was X degrees at 9 pm it should be Y degrees at 3 pm. That's impossible.
The NOAA and NCDC data then becomes a temperature estimate and not a temperature record… no error bars are shown and the entire GHCN data set is a complete mess… but no one has enough balls to say this.
The adjustments aren't really legitimate, and deep down I think all you guys know that.
It's your opinion that they're legitimate, based on a few peer-reviewed papers.
Go ask an accountant what would happen if he started adjusting a company's books…

Bean says: @ April 30, 2014 at 9:14 am
How is temperature measured to 1/1000 of a degree going back to 1895? What is the error band?
>>>>>>>>>>>>>>>>>
That is one of the big LIES.
The data point represents one instrument, one point in time, taken at one location. It is a SINGLE, unique, unreplicated data point. However, they then apply statistics meant for multiple readings taken of the same thing. (Think 100 readings of a bathtub full of water.) This is how they get the dubious extra precision.
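The statistical point here is the standard error of the mean, which shrinks as 1/sqrt(n) only when the n readings are of the same quantity:

```python
import math

# Standard error of the mean: averaging n readings of the SAME quantity
# reduces the uncertainty of the average by 1/sqrt(n). A single station
# reading at one place and time is never replicated, so this reduction
# does not apply to any individual reading.
sigma = 0.5            # assumed error of one thermometer reading, deg F
for n in (1, 100, 10000):
    print(n, sigma / math.sqrt(n))
```

With 100 readings of the same bathtub the mean is known to about 0.05 F; 100 readings of 100 different things each still carry the full 0.5 F uncertainty.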
See: Australian temperature records shoddy, inaccurate, unreliable. Surprise!

The BOM say their temperature records are high quality. An independent audit team has just produced a report showing that as many as 85 -95% of all Australian sites in the pre-Celsius era (before 1972) did not comply with the BOM’s own stipulations. The audit shows 20-30% of all the measurements back then were rounded or possibly truncated. Even modern electronic equipment was at times, so faulty and unmonitored that one station rounded all the readings for nearly 10 years! These sloppy errors may have created an artificial warming trend. The BOM are issuing pronouncements of trends to two decimal places like this one in the BOM’s Annual Climate Summary 2011 of “0.52 °C above average” yet relying on patchy data that did not meet its own compliance standards around half the time. It’s doubtful they can justify one decimal place, let alone two…

Phil. says: @ April 30, 2014 at 9:25 am
….Because the mean trend changes doesn’t mean that all the past observations are adjusted downwards. Clearly if one wants to accurately present past data along with today’s it’s necessary to put them on a common basis, this is what the TOBS adjustment does.
>>>>>>>>>>>>>>>>
The Six's max-min thermometer, invented in 1782, has been around for over two hundred years. A TOBS adjustment that produces “warming,” which is what the raw vs. ‘adjusted’ data shows, therefore just does not make sense. All you are doing is assigning Thursday's “high reading” that was read on Thursday morning back to Wednesday. Only when the pattern of reading was changed should an adjustment actually matter, and given all the other blanks, station moves… it is a really minor factor unless operators changed frequently.
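The double-counting mechanism the TOBS adjustment targets can be shown with hypothetical hourly temperatures: an afternoon reset lets one hot spell set the recorded max for two consecutive days, while a midnight reset cannot (all numbers below are illustrative):

```python
# Two days of hourly temps: a 95 F hot spell on day-1 afternoon, 60 F otherwise.
hourly = [60.0] * 48
for h in range(14, 18):          # 2 pm - 5 pm on day 1
    hourly[h] = 95.0

def recorded_maxes(reset_hour):
    """Max-min thermometer: each reading reports the max since the last reset."""
    out, prev = [], 0
    for r in (reset_hour, reset_hour + 24):
        out.append(max(hourly[prev:r]))
        prev = r
    return out

print(recorded_maxes(17))   # 5 pm reset: [95.0, 95.0] -- spike counted twice
print(recorded_maxes(24))   # midnight reset: [95.0, 60.0]
```

Whether the size of this effect justifies the adjustments actually applied is the point under dispute in this thread.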

By the way, for those of you who would like to study temperature trends back before the **current** global warming scare, I give you the Monthly Weather Review, Vol. 61, No. 9: http://www.odlt.org/dcd/docs/kincer-061-09-0251.pdf
Article Title: “IS OUR CLIMATE CHANGING? A STUDY OF LONG-TIME TEMPERATURE TRENDS”
Money quote:
“In concluding this study, other weather features directly related to general temperature conditions were examined such as the occurrence of frost in the fall and spring, the number of days in winter with certain low temperatures, the occurrence of freezing weather in the fall and spring seasons, the length of the winters, as indicated by the first frost in fall and the last in spring, etc. All of these confirm the general statement that we are in the midst of a period of abnormal warmth, which has come on more or less gradually for many years.”
Published: September 1933

Frank K. says:
April 30, 2014 at 9:28 am
Does anyone have access to the TOBS adjustment algorithms that NOAA uses? I can’t access the software at the link Zeke H. posted earlier? I’m looking for actual code. Thanks.
=============================================
Try EM Smith's site; on this subject he has at least two main headers with over fifty posts. Some of them dealt extensively with just this, I think. He may be starting to post more, and he responds to over 90 percent of comments.
BTW Frank, the old records, even Jim Hansen's through the early 1990s, confirm the very warm US of the 1930s and 40s.
USHCN stations in continuous operation from the early 1900s to now set the vast majority of their record highs in the 1930s and 40s. (Same station, and there is no adjustment on the record highs.) I have linked these to Nick, Zeke and Mosher several times. They never comment.

@David A.
Thanks. I want to see if I can get hold of the actual code NOAA uses to do their raw data adjustments such as TOBS, station moves, UHI, etc. TOBS is a big one, and it's empirical (which is to say, it's a model). While it is described in papers, no one (as far as I know) has published the actual code used to calculate the TOBS adjustment at NOAA/NCDC.
I find the 1933 article amusing since apparently we were going through our first “global warming” crisis back then. It’s also neat to see annual temperature averages reported for places like Philadelphia and New Haven all the way back to the late 1700s! (and no TOBS)

Frank K. says:
April 30, 2014 at 12:42 pm
@David A.
Thanks. I want to see if I can get hold of the actual code NOAA uses to do their raw data adjustments such as TOBS, station moves, UHI, etc. TOBS is a big one, and it's empirical (which is to say, it's a model). While it is described in papers, no one (as far as I know) has published the actual code used to calculate the TOBS adjustment at NOAA/NCDC.
The authors of the method referred to by NOAA used to provide copies of the original FORTRAN program.http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281986%29025%3C0145%3AAMTETT%3E2.0.CO%3B2
The formula used is given in the paper.

JustAnotherPoster says:
April 30, 2014 at 9:39 am
Why on earth should a temperature on a given date and time be adjusted ?
Read the paper and you’ll find out why and how.

David A says: April 30, 2014 at 7:53 am
“I also challenged you or Mr. Mosher to just explain the Iceland adjustments.”
I did a post on that topic here.

“However the GHCN stations CONSISTENTLY show a warmer past, and a cooler present”

It isn’t the stations – it’s the subset average, which just depends on what Steven Goddard has put in the subset. I’ve made a Google Maps based station display here. You can zoom in on Michigan to see the station distribution – sparser in the North. You can sub-select time periods to see how it changed. It’s a relative change – I don’t have a similar gadget for USHCN, but they probably have a south bias too. The comparison just depends on which is more biased at any point in time.

JustAnotherPoster says: April 30, 2014 at 8:04 am
“The pure unadjusted data is loaded in from raw CSV files, the methodology is open…. and guess what. The data shows absolutely zero warming trend at all.”

I run a program which every month analyses the raw GHCN data. Here is an early post, which also compares the results of some other bloggers, and the main indices. No adjustments used at all, and very little different from the main indices.

Bob Kutz says: April 30, 2014 at 7:14 am
“Nick, your understanding of TOBS differs from mine in such a way that one of us is just making up a story that does not reflect reality.”

You can read more about my misunderstanding here.

“From the previously set 5pm to a now common 9am or 10 am.”???

Yes. Here is Vose of NOAA:

“[6] There has been a systematic change in the preferred observation time in the U.S. Cooperative Observing Network over the past century. Prior to the 1940s most observers recorded near sunset in accordance with U.S. Weather Bureau instructions, and thus the U.S. climate record as a whole contains a slight warm bias during the first half of the century. A switch to morning observation times has steadily occurred during the latter half of the century to support operational hydrological requirements, resulting in a broad-scale nonclimatic cooling effect. In other words, the systematic change in the time of observation in the United States in the past 50 years has artificially reduced the temperature trend in the U.S. climate record”

Frank K. says:
April 30, 2014 at 12:42 pm
@David A.
Thanks. I want to see if I can get a hold of the actual code NOAA uses
======================================================
I think EM did that. Besides being an economist, he is a computer geek as well. You can look at Phil’s source, but I am confident EM can show you many issues with the code.

Hi David A.
I think you’re confusing the awful GISTEMP code (which of course is a NASA GISS product) with the NOAA/NCDC codes, which nobody has seen (yet). I want to see the NOAA/NCDC codes. NOAA/NCDC supplies the data used to churn out the ever-changing temperature “anomaly” plots…

Lots of justifiable arguing here over exactly how to measure the temps. Here are a few points that come to mind…
… virtually all the warming before 1945 had to be natural. This almost always gets lumped in as AGW, but that's very unlikely considering the low CO2 emissions compared to now. So, lowering the pre-WWII temps only makes the previous NATURAL warming stronger, meaning more recent AGW is less extreme by comparison.
… even with all the people working on this, the errors are as big as the signals. That's reason enough to be a skeptic.
… humans are only sensitive to about a 3 deg difference in temperatures. So, in my lifetime, the temperature has changed only a fraction of what I can even sense, and that fraction is based on a rather dubious data set. That makes me realize that exactly no one would have noticed global warming at all if not for the hype.
… why is it that climate model forecasts have always busted high and observed data has always been adjusted to make the trend bigger? An unbiased forecast or data adjustment should have a 50-50 probability of either direction, so what is the probability that ALL went in the direction of the CAGW enthusiasts?
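That last probability is easy to put a number on: if each of n independent, unbiased adjustments were equally likely to go either way, the chance that all n land on the same side is 2 × 0.5^n:

```python
# Probability that ALL n independent, unbiased (50-50) adjustments go the
# same direction: 0.5**n for each tail, times 2 for either-all-warm or
# either-all-cool. The n values are illustrative.
for n in (5, 10, 20):
    print(n, 2 * 0.5 ** n)
```

Already at n = 10 the chance is about 0.2%, which is the commenter's point; whether the adjustments really are independent coin flips is, of course, the contested assumption.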

Frank K says…
Hi David A.
I think you’re confusing the awful code GISTEMP (which of course is a NASA GISS product) with the NOAA/NCDC codes, which nobody has seen (yet).
====================================
Frank, you are exactly correct, and when I checked out Nick Stokes' link, where he says this…
Nick Stokes says:
April 30, 2014 at 3:24 pm
David A says: April 30, 2014 at 7:53 am
“I also challenged you or Mr. Mosher to just explain the Iceland adjustments.”
============================
Nick responds, “I did a post on that topic here.”
========================================
Nick has a curious way sometimes, because that post confirms the following about the Iceland adjustments….
1. Regarding the GISS adjustments, the current situation is absurd. GISS starts from the GHCN adjusted data, which includes (erroneous) adjustments for breaks and homogeneity adjustments. It then applies its own algorithm to remove ‘suspicious records’, and then applies its own homogeneity adjustment!
2. The data for Reykjavik shows that the v2 adjustments cool by about 0.9 in the 60s and 70s, increasing to 1.3 before 1940 and 1.7 before 1920.
So the adjustment-fabricated warming in v2 is similar in magnitude to that shown in the graph for v3.1, though more monotonic. It is a similar picture for Stykkisholmur, with downward adjustments of around a degree before 1970.
3. The counter argument is that the GHCN algorithm is all about neighboring station records. That’s how it works. I recommend you read the papers on it.
4. Which clearly fails because…in all 8 of the Iceland stations unadjusted data, there is a sharp drop around 1965, and this is well established from a number of sources.
In 7 of the 8 cases the GHCN adjustment algorithm incorrectly puts in a break here and gets rid of this sharp drop.
Similarly, the raw data consistently shows a warm period around 1930-1940 which is also established in the literature on air temps and SST temps (papers by Hanna et al). In most cases the GHCN algorithm adjusts these downwards.
So whatever is flagging the GHCN adjustments, it’s not nearby neighbors. If the algorithm looked at neighbors in any sensible way it would know that the raw data was valid.
5. Here’s what the Iceland Met Office says: “The GHCN “corrections” are grossly in error in the case of Reykjavik but not quite as bad for the other stations. But we will have a better look. We do not accept these “corrections”.” Of course, if they would publish the code, the error could be found fairly easily.
6. In the absence of complete metadata as to changes at each station, there will be inevitable uncertainty in how to do the adjustments. (I just want to initially have explained how ONE AREA of Iceland’s adjustments are done.)
7. It is all mixed up. The GHCN adjustment code is NOT available for download. The GISS code is, and it gets changed periodically. We do not know how they interact. The interaction itself possibly creates further erroneous artifacts.
——————————————————————————————————
Nick misdirects when he pretends that his post solved my challenge to explain just the Iceland adjustments. The truth is those adjustments clearly illustrate the problem and the answer is FUBAR.
I wish to take a brief sidebar with regard to Nick's lowball tactic of starting his post with hurled insults at WUWT. WUWT is a community far larger than any that will ever read Nick's posts. On any given thread it is common to see more than a hundred comments. To cherry-pick some uninformed and uneducated comments is the same as making unjustified adjustments to Iceland, or starting Arctic ice coverage graphs in 1979; it is cherry-picking for the purpose of creating a false impression. WUWT has, in every post, very good and very poor comments from a very wide audience.
Finally Nick responded to this comment of mine.. “However the GHCN stations CONSISTENTLY show a warmer past, and a cooler present” by stating….It isn’t the stations – it’s the subset average, which just depends on what Steven Goddard has put in the subset.”
Nick ignored entirely my direction to numerous graphs which clearly are direct station-to-station comparisons, raw vs. adjusted. Those graphs, comparing a station to the SAME station, show the raw data warmer in the past and cooler in the present; versus the “official record” they are also consistently warmer in the past and cooler in the present. (If location were the cause, they would not flip-flop, but would remain warmer in the present as well.)
All the best
David A

Still no NOAA data access:
“NCDC recently experienced an IT failure leading to a degradation of services. We are returning services to full functionality in a methodical fashion. We expect full services to be available by Thursday, May 1, however the exact date and time cannot be predicted with full confidence. Thank you for your patience.”

David A says:
May 1, 2014 at 4:58 am
7. It is all mixed up. the GHCN adjustment code is NOT available for download. The GISS code is, and gets changed periodically. We do not know how they interact. The interaction itself possibly creates further erroneous artifacts.
First, forget about the GISS code – it’s just one interpretation of the GHCN data. As they are fond of saying, they don’t do anything but process the NOAA data, which anyone can do on their own. What I am more interested in is how the “raw” data gets processed by NOAA, and particularly how the TOBS model is implemented. Did you know that TOBS only affects the U.S. data? That is, every other climate/weather station around the world, from 1880 to the present day, took its measurements at the same clock time every day of every year of every decade. And we KNOW this is true because… well, just because! Of course, the U.S. is just 2% of the earth’s surface, so maybe all of this doesn’t matter…

NCDC is abiding by an old saying in the garment district: “The man wants a blue suit, turn on the blue lights.” Nothing short of criminal penalties for forging government data will curtail this nonsense.
