
How the US Temperature Record is Adjusted

There has been criticism of the potential for official weather stations in the USA to record artificially high temperatures because of the changing environments in which they exist, for example new asphalt, new buildings or new air-conditioning outlets. Meteorologist Anthony Watts has documented evidence of the problem, and Canadian academic Ross McKitrick has attempted to calculate just how artificially elevated temperatures might be as a consequence.

A reader of this blog, Michael Hammer, recently studied the official data from the US official weather stations and in particular how it is adjusted after it has been collected. Mr Hammer concludes that the temperature rise profile claimed by the US government is largely if not entirely an artefact of the adjustments applied after the raw data is collected from the weather stations.

Does the US Temperature Record Support Global Warming?
By Michael Hammer

In the US, the National Oceanic and Atmospheric Administration (NOAA) collects, analyses and publishes temperature data for the United States. As part of the analysis process, NOAA applies several adjustments to the raw data.

If we consider the above graph, which shows their plot of the raw data (dark pink) and the adjusted data (pale pink), it is obvious that the adjustments have little impact on data from early in the 20th century but adjust later temperature readings upwards by an increasing amount. This means that the adjustments will create an apparent warming trend over the 20th century. [This chart can also be viewed at http://cdiac.ornl.gov/epubs/ndp/ushcn/ndp019.html.]

NOAA state that they adjust the raw data for five factors. The magnitude of the adjustments is shown in Figure 2.

Figure 2. Form of the individual corrections applied by NOAA. The black line is the adjustment for time of observation. The red line is for a change in the maximum/minimum thermometers used. The yellow line is for changes in station siting. The pale blue line is for filling in missing data from individual station records. The purple line is for UHI effects (this correction has since been removed). [The chart is available at the same website as Figure 1.]

It is obvious that the only adjustment which reduces the reported warming is the UHI correction, a linear adjustment of 0.1F (about 0.06C) per century (Figure 2). Note also that the latest indications are that even this minimal UHI adjustment has been removed in the latest round of revisions to the historical record. To put this in perspective, in my previous article on this site I presented Bureau of Meteorology data showing that the UHI impact for Melbourne, Australia was 1.5C over the last 40 years, equivalent to 3.75C per century, and highly non-linear.

Compare the treatment of UHI with the adjustments made for measuring stations that have moved out of the city centre, typically to the airport. These stations show lower temperatures at their new location, and the later readings have been adjusted upwards to match the earlier readings. The airport readings are lower because the station has moved away from the city UHI. Raising the airport readings, while not adding a downwards compensation for UHI, results in an overstatement of the amount of warming. This would seem to be clear evidence of bias. It would be more accurate to lower the earlier city readings to match the airport readings rather than vice versa.

Note also the similarity between the shape of the time of observation adjustment and the claimed global warming record over the 20th century especially the steep rise since 1970. This is even more pronounced if one looks at the total adjustment shown in Figure 3 (again from the same site as Figure 1). As a comparison, a recent version of the claimed 20th century global temperature record downloaded from www.giss.nasa.gov is shown in Figure 4.

Figure 3. Magnitude of the total correction applied by NOAA


Figure 4. Temperature anomaly profile from NASA GISS

Since the total corrections for the US look so similar to the claimed temperature anomaly, it raises the question of what the raw data looks like without any corrections. Does it show the rapidly accelerating warming trend claimed by the AGW advocates? To determine this I took the raw data from the USHCN graph shown in Figure 1 and plotted it using a 5 year mean (blue trace), matching the smoothing in the NASA GISS profile shown in Figure 4. The result is shown in Figure 5. Please note that while the plot is one that I generated, the data comes directly from the raw data in Figure 1 published by NOAA.
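
The 5 year smoothing used here is a plain centred running mean; a minimal sketch (with made-up anomaly values standing in for the USHCN series, computed in Python rather than the author's spreadsheet) is:

```python
import numpy as np

def running_mean(series, window=5):
    """Centred running mean, as used for the 5-year smoothing
    described in the text (illustrative data only)."""
    kernel = np.ones(window) / window
    # 'valid' mode skips the edges where the window is incomplete
    return np.convolve(series, kernel, mode="valid")

# Hypothetical annual anomalies, degrees F (not the USHCN values)
raw = np.array([0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.1])
smoothed = running_mean(raw)
print(smoothed)  # 3 values: the means of each consecutive 5-year window
```

The same function applied to the full raw series would reproduce the blue trace in Figure 5.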

Clearly the shape of this graph bears no similarity at all to the graph shown in Figure 4. The graph does not even remotely correlate to the shape of the CO2 versus time graph. The warming was greatest in the 1930’s before CO2 started to rise rapidly. The rate of rise in 1920, the early 1930’s and the early 1950’s is significantly greater than anything in the last 30 years. Despite the rapid rise in CO2 since 1960, the 1970’s to early 1980’s was the time of the global cooling scare and looking at the graph in Figure 5 one can see why (almost 2F cooling over 50 years).

A linear least-squares trend line, created using the Excel trend line function (red trace), shows a small temperature rise of 0.09C per century, which is far less than the rise claimed by AGW supporters and clearly of no concern. However, the data shown in Figure 5 bears little if any resemblance to a linear function. One can always fit a linear trend line to any data, but that does not mean the fitted line has any significance. For example, if instead I fit a second-order trend line (a parabola) the result is very different: it suggests a temperature peak around 1950 with an underlying cooling trend since. Which trend line is the more significant one? If there were really a strong underlying linear rise over the time period it should have shown up in the 2nd-order trend line as well. This suggests it is questionable whether any relevant underlying trend can be determined from the data.
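
The linear-versus-parabolic comparison is easy to reproduce on synthetic data (the numbers below are made up for illustration and are not the NOAA series):

```python
import numpy as np

# A hypothetical series that rises to a mid-century peak and then
# declines, plus noise, to show how the two fits can disagree.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
x = years - 1950                      # centre the years for numerical stability
temps = -0.0004 * x**2 + 0.1 * rng.standard_normal(x.size)

lin = np.polyfit(x, temps, 1)         # first-order (linear) fit
quad = np.polyfit(x, temps, 2)        # second-order (parabolic) fit

peak_year = 1950 - quad[1] / (2 * quad[0])   # vertex of the parabola
# The linear slope is near zero, yet the parabola clearly captures
# a rise to about 1950 followed by a decline.
print(round(lin[0], 5), quad[0] < 0, round(peak_year))
```

The point is not that either fit is "right", but that a near-zero linear slope and a pronounced rise-and-fall parabola can describe the same data.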

It would appear that the temperature rise profile claimed by the adjusted data is largely if not entirely an artefact arising from the adjustments applied (as shown in Figure 3), not from the experimental data record. In fact, the raw data does not in any way support the AGW theory.

On this data, the US temperature record does not correlate with carbon dioxide levels. The warming over the last three decades is completely unremarkable and, if present at all, is significantly less than occurred in the 1930's. It is questionable whether any long-term temperature rise over the 20th century can be inferred from the data, but if there is any it is far less than claimed by the AGW proponents.

The corrected data from NOAA has been used as evidence of anthropogenic global warming yet it would appear that the rising trend over the 20th century is largely if not entirely an artefact arising from the “corrections” applied to the experimental data, at least in the US, and is not visible in the uncorrected experimental data record.

This is an extremely serious issue. It is completely unacceptable, and scientifically meaningless, to claim experimental confirmation of a theory when the confirmation arises from the “corrections” to the raw data rather than from the raw data itself. This is even more the case if the organisation carrying out the corrections has published material indicating that it supports the theory under discussion. In any other branch of science that would be treated with profound scepticism if not indeed rejected outright. I believe the same standards should be applied in this case.

Michael,
You’ve compared Fig 4, which is global temperature, with Fig 5, which is “raw data”. What you haven’t made clear is that they are different things. Fig 5 is an unadjusted plot of continental US temperature, not global. The features you have discussed are present also in the corrected record. The 0.09C per century is the US figure, not global. I’m not aware of any IPCC estimate that this contradicts.

But “raw data” is itself a misnomer. By far the biggest component of the adjustment is the TOBS (time of observation) effect. This arises from two main causes:
1. Actual time shift of recording data. Historically, data was often recorded at three-hourly intervals. "Raw data" simply uses the clock time as written, but there are time shifts to be accounted for due to the introduction of daylight saving etc. The "raw data" is simply wrong in that respect.
2. Adjustment for missing values. In older times, there are many. “Raw data” simply leaves these out. However, there is a strong tendency for missing values to be at awkward times for human observers, like early morning. Simple omission creates a warm bias, because the missing values are at cold times of day. In effect, it replaces the missing values with the daily average. This is a very rough estimate. It’s much better to replace them with a time-of-day based estimate.
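
The warm bias from simple omission can be illustrated with a toy day (a hypothetical sinusoidal profile, not the actual USHCN method):

```python
import math

# Hypothetical 3-hourly observations over one day: a sinusoid with
# its minimum near 5 am and maximum near 5 pm.
hours = range(0, 24, 3)
temps = {h: 15 - 10 * math.cos(2 * math.pi * (h - 5) / 24) for h in hours}
true_mean = sum(temps.values()) / len(temps)

# Drop the awkward 6 am reading, as a human observer might.
# Simple omission discards a cold value, so the average of what
# remains is biased warm relative to the true daily mean.
observed = {h: t for h, t in temps.items() if h != 6}
biased_mean = sum(observed.values()) / len(observed)
print(round(true_mean, 2), round(biased_mean, 2))
```

Replacing the missing value with a time-of-day estimate (rather than leaving it out) removes most of this bias, which is the point being made above.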

I did not say that all the corrections are wrong. Maybe time of day correction has some merit – but since NOAA don’t say how they applied it and what justification there is in the raw data for doing so I can’t judge. On the other hand, I am suggesting that some of the corrections are very questionable. Specifically (for reasons I went through in my previous article) I believe the treatment of urban heat island and measurement site changes are both extremely questionable and I presented data in my previous article to support that claim. UHI would have the opposite effect to their time of observation corrections so maybe the two would cancel each other out.

The important issue is that without the corrections there is no long term warming trend in the US data. In my previous article I showed there is also none in the Victorian temperature record. In the discussion that followed the previous posting I also documented data from New South Wales which similarly showed no long term warming trend. The trend is created by the adjustments and from a scientific point of view this is an exceptionally dangerous way to argue.

One cannot show lack of bias simply by looking at one or two of the corrections and arguing that qualitatively they are justified. Bias can come in from the quantitative level of the adjustment, and even more can come about from corrections not applied. Overall, if there is no trend in the raw data it is extremely questionable to claim one in the corrected data.

I wonder what your reaction would be to a researcher who came to you with experimental data in support of his pet theory which showed no trend and claimed: "Ah, but there are several corrections which I believe are appropriate to apply, and when I apply them an obvious trend emerges." I can tell you my reaction: exactly the reaction I am expressing in this post.

With regard to your point that the NOAA data is for the US and the GISS data is for the world, you are quite right and I do note that in the text. However, I have now looked at about 1.3 large countries (South Eastern Australia and the US) and in both cases there is no warming trend in the raw data (I plan to look at the rest of Australia). That's 2 out of 2, and to me that's very significant. Are you suggesting that the global warming trend is restricted to only a few selected countries? If so it is not "global" at all. If it is global, it should show up in all country records.

“You’ve compared Fig 4, which is global temperature, with Fig 5, which is “raw data”. ”

There seems to be some romantic idea here about "raw data" being "pristine", holding some secret purity that the nasty 'greens' are abusing. Anyone who has worked with "raw data" knows it is called that for a reason. There are all kinds of problems with raw data, from any source. The radiosonde raw data is useless unless it is adjusted; read up on it. The satellite raw data is useless unless it is adjusted. The UAH team adjust the raw satellite data.

Michael,
There’s a lot written on the USHCN adjustments – how they are done and why. The NOAA summary is here.

It is not true that adjustment causes a big uptrend to emerge in CONUS temperatures. In your comparison to Fig 5, you should have shown adjusted US temps, not adjusted world temps. Then you would see that adjusted US temps also show an apparently small rising trend. It’s due largely to a local hot period in the 1930’s. The reasons for this have been much discussed.

There goes Luke, winning arguments by profoundly deep and compelling reason, as always.
You look at land, Luke, because that is where people live and experience climate as weather.
You look at land, because the AGW promotion industry claims their data on the land, and predictions about the land, are meaningful.
But if the land data is garbage – and it is – then why have confidence in the oceans anyhow?
Of course you know that, because you pitch out an abstract from Hadley, which has not covered itself in glory these past years, and curse us proles for daring to raise our questions.
I think the better term for you is ‘faux intellectual’.

Well if you don’t have enough confidence in the land data due to UHI (but that’s you not me) you have TWO alternative ocean data sets back to the 1800s, you numb nuts denialist. We’ve had a gutful of lying denialists – time to learn from the Motty’s School of Arts, Bush Pig Etiquette and Personal Diplomacy and drop you lot. Fuck – I’ve just smashed another keyboard. My best Logitech too …

Or perhaps you could do species behaviour, plant flowering. A whole bloody massive Nature paper on that. Or bore holes.

Good ol’ Luke, demonstrating that the difference between AGW and a cult is precisely nil.
You may actually think you are helping to save the world and or humanity.
God save us from the apocalyptic believing fools of AGW.

Thanks to all for the links – and especially Sod’s to NOAA’s attempt to rebut Watts’ survey of surface stations (www.surfacestations.org), where we read NOAA’s view that “for detecting climate change, the concern is not the absolute temperature … but how that temperature changes over time”. Mirabile dictu – so the IPCC has indeed got it wrong, with its emphasis in countless graphs over nearly 20 years on “anomalies” (i.e. absolute levels) of global surface temperatures against some arbitrary average over 30 years, say 1960 to 1990. So can we look forward to NOAA announcing in January next year that 2009 was not one of the n hottest years ever but showed merely a “rising trend”? (Even if it won’t.)

The subject of UHI has been discussed in detail on several websites; WUWT and Climateaudit are just two.

Some of Steve McIntyre’s postings several years ago showed how polluted the land temperature records were, by using a study intended to show no UHI existed. The study, completed by Peterson, was trashed by Steve using a few graphs he produced from the exact data utilized by Peterson.

So now we are supposed to swallow the line that the land is cooling, but that is irrelevant. The air is cooling; that’s irrelevant too.

We must now look to the oceans. What next? There will be a next, because already those figures are sus.

In the 1950s my father reported all the local (in the bush) weather on a big monthly sheet for the Dept of Met: cloud, temp, type of cloud, how much, wind direction, strength. He taught us as kids to do this. Mate, I have to tell you that your disastrous climate crisis seems to be specialising in cities.
I can put up with that.

Ian T and Hunter – are you guys that bereft of intelligence? The reason to use TWO INDEPENDENT ocean data sets is to conduct an “alternative” analysis. Guess what – you get essentially the same answer as the land-based analysis, i.e. the planet has warmed. Kinda strange, eh?

I like Fig 5; the upside-down, bilious green smile/frown is about right for the ‘caught with their hands in the till again’ approach of the temperature data boys.

Can anyone point to an adjustment which decreased temperature trends?

Anyway, this is all by the by; Will Steffen, after the interview with Senator Fielding, which left the Wong wabbits ibbitie ibbiting, has released the new AGW model: ocean heat; and voila, adjustment city.

A problem with the whole argument is that the data has been collected over a long period of time with continuously changing methods and standards. The only real fact we can glean from the data (raw or adjusted) is that the Earth as a system is far too complex for our technology to gain absolute meaning from. I regard the statistics as similar to climate in that they may give us an overview of the general direction things are headed, however infinite variables make it impossible to predict the next week let alone the next 100 years. I can read any number of inferences from the given data but still there are factors that are not included in the article, which could turn the data on its head in a small period of time.
I think the bigger picture has been lost amongst the bickering and small talk. Examine all current methods of food production and natural resource management. Figure out the methods that have worked over the last ten years or so in individual regions and try to apply these principles on a regional scale rather than relying on the tried and true(or not so) methods.
I think there needs to be less focus on the minutiae of global warming or climate change theory and more focus on our ability to work with nature’s complex, if disrupted, rhythm. And not to forget to look after the innocent organisms that are witnessing our rise towards our flawed climax.
Shoot me down if you like but this is just the opinion of a grain of sand on a beach.

Michael Hammer: Have you plotted the SST anomalies for the U.S. coastal waters and compared them to your before and after US surface temperature data? I prepared a post about Gulf of Mexico, Eastern Pacific, and Western Atlantic SST anomalies (and a combination thereof) a couple of months ago that you might find interesting: http://bobtisdale.blogspot.com/2009/03/sst-anomalies-of-us-coastal-waters.html

There have been further adjustments done to USHCN in the new Version 2.

The total adjustments are now about 0.425C or 0.765F (from about 1920).

The (different method of) adjustments in USHCN V2 can be seen in Figure 4 (page 41) and Figure 7 (page 44) here. [There is one other set of adjustments in Figure 10 (page 47), but there is no net change from this adjustment.]

To all of you who don’t think the UHI effect influences temperature, take a look at an example I found.
In Casino, a rural town on the NSW North Coast, there are two weather stations situated within 300m of each other. The manual station is located 5m from a tarred road and surrounded by buildings (a new house has been built recently within 25m). The AWS is situated on a grassed oval with no buildings within 50+m and open to the elements from all sides.
Over the past 10 years, the manual station’s average maximum and minimum temperatures have been 0.5C above the automatic station’s (you can check for yourself at the BOM’s recent observations site).
I believe this is a practical example of how temperature is affected by the immediate environment. If there are any other examples of stations within such close proximity and with such different surroundings, it would be interesting to see if the same effect applies.

If accountants did smoothing and data shifting on annual reports, they would go to jail.
Wishful thinking is not science. Someday these forced errors will come back to haunt people. Steroids are illegal in sport, but pumping a little steroid into thermometers – so harmless.

To start with some of the earlier comments. The suggestion was made that adjusting data is not necessarily unreasonable or indicative of bias, and in fact there are many cases where data needs considerable processing to extract the information of interest. All this is quite true, but I would not have thought reading a thermometer and writing down the reading to be one of those situations.

Nick made the comment that the time of day correction is valid because the data is read every 3 hours and things like daylight saving change the measurement times. At first thought that sounds reasonable, but consider: the average temperature (excluding abrupt changes from fronts and other weather phenomena, which should average out over climate time scales as distinct from weather time scales) varies roughly in a sinusoidal fashion. Given that, I did a very simple calculation in Excel. Compute the value of a sinusoid with a period of 24 hours at hourly intervals. Now sum and average every third reading (corresponding to reading the temperature every 3 hours). Do that starting with the first reading, then starting with the second, and then starting with the third. This corresponds to varying the time of measurement. The result? The maximum difference in the average over 24 hours is 0.0002% of the true mean. Thus if the true mean is 14C the difference is a whopping 0.000028C.
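
The spreadsheet exercise described above is easy to reproduce (a sketch of the same calculation in Python, not the author's actual workbook):

```python
import math

# Hourly values of a pure 24-hour sinusoid about a 14 C mean.
hourly = [14 + 5 * math.sin(2 * math.pi * h / 24) for h in range(24)]

# Average every third reading, starting at hour 0, 1 and 2:
# this mimics shifting the 3-hourly observation times.
means = [sum(hourly[o::3]) / len(hourly[o::3]) for o in range(3)]
print(means)  # all equal to 14 to floating-point precision
```

Eight evenly spaced samples of a sinusoid always sum to the same value regardless of the starting phase, which is why the offsets make essentially no difference here.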

Ah, but maybe we should not look at the daily average but at the maximum and minimum, discarding other data. So if before daylight saving the daytime reading coincided exactly with the maximum, then after daylight saving it is 1 hour displaced; for a sine function that corresponds to a phase shift of 15 degrees and a change in value of 1 − cos(15°) ≈ 3.4% of the amplitude (half the maximum swing). So if the maximum to minimum variation is say 30 to 10, the amplitude is 10 and it corresponds to a change of 0.034 × 10 ≈ 0.34 degrees. Substantial, you might say. But hang on a minute: time zones move in 1-hour increments, so different locations have their maximums and minimums at different times of the day. Thus there is an equal probability that before daylight saving the measurement times did not coincide with the maximum or minimum temperature. In that case a 1-hour shift could just as easily increase the reading as reduce it, and on average across many locations the difference should again average out to zero.
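
The size of a one-hour shift at the daily maximum can be checked directly; note that for a sinusoid the 1 − cos(15°) factor applies to the amplitude, i.e. half the daily swing (the 30/10 figures are illustrative):

```python
import math

t_max, t_min = 30.0, 10.0
amplitude = (t_max - t_min) / 2        # half the daily swing = 10 C
shift = 2 * math.pi / 24               # one hour = 15 degrees of phase

# A reading taken one hour off the true maximum falls short of the
# peak by amplitude * (1 - cos(shift)).
drop = amplitude * (1 - math.cos(shift))
print(round(drop, 2))  # about 0.34 degrees
```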

So what about that missing data? Nick’s claim is that the missing data corresponds to awkward times of the day when it was colder, so leaving this data out creates a false warm bias which needs to be corrected. Again plausible at first sight, but is it true that the missing data is always odd measurement times during the day, or is it entire days, weeks, months or even years missing? My understanding is that often it is much longer periods. Then again, does missing data only apply to old data and not to more recent data? Have observers become more disciplined over the years? If not, then the effect is uniform across the record, and adjusting older values down while not changing newer results represents bias. AGW supporters would no doubt claim that record keeping has indeed become more disciplined. However, what about station dropout? This has indeed been observed in recent times, and it is nearly always the rural stations which drop out; even more, it is the rural stations from cold regions such as Siberia. These dropouts represent recent missing data which undoubtedly creates a warm bias. The 30% dropout of mainly rural stations across Siberia with the breakup of the USSR has been widely documented. To avoid bias that would also have to be compensated for, but the well documented step in the temperature record at about that time strongly suggests it has not been. To compensate for perturbing factors when they support a theory and not when they run counter to it is the most common form of bias.

Then again, what about the removal of compensation for UHI? UHI is so well documented it is beyond dispute. Not to compensate for it is bias.

I can only say again that where a trend is not visible in the raw data and is created by the adjustments to that data, the trend must be viewed with considerable circumspection. At the very least, the adjustments need to be scrutinised with extreme care to avoid bias. I do not think such scrutiny has been applied, and even a cursory inspection suggests grave concern that the trend observed is a result of bias.

With regard to the many comments supplying evidence of recent warming: if you look at the raw data for the US in Figure 5, it certainly does show warming from 1980 to 2000, so the raw data does indeed support warming over this period. However, the record strongly suggests this is not a long term trend but merely a short term (by climate standards) fluctuation, not supporting the AGW theory.

There was a suggestion that my fitting a parabolic curve to the US data was absurd and irrelevant. I have to disagree. When one tries to fit a trend line to data (unless you have prior knowledge of the form of the expected underlying trend) there is a small suite of curves to try. These would be an exponential, a logarithmic function, a power law and a polynomial (hence these functions being included in Excel). Now, given the claims about AGW, the logical function to try first would be an exponential, yet AGW proponents do not do this; they use a linear fit, which is the simplest polynomial. Why?

OK, so why didn’t I try an exponential? Because an exponential rises faster than linear, and if the linear fit shows more or less zero slope then the exponential fit is not going to show anything useful. I could have tried a logarithmic fit. This rises more slowly than linear, and given the large peak in the 1930s it would have shown significant rise in the early part of the 20th century flattening off later – which runs counter to the AGW hypothesis. However, such a fit presupposes a trend in one direction, and in this case we want to leave open the possibility that the data could be varying both up and down. Hence a polynomial fit is a very reasonable one to try.

The simplest polynomial fit is the first order or linear one, and as I show this gives almost zero trend. The next simplest is the second order or parabolic fit. This shows a trend far from zero, and indeed suggests a rise followed by a fall. Such a trend is exactly what is claimed from events such as the medieval warm period (rise to a maximum followed by a falling away again) and the little ice age (drop to a minimum followed by a rise), and to explore the possibility that something similar is happening again is entirely reasonable. One could also try the 3rd order polynomial fit or cubic.

SJT
Thanks for the link. I see that the Bureau gives the following criteria for the stations that it has chosen.
* high quality and long climate records (at least 30 year records -RCS definition),
* a location in an area away from large urban centres, and
* a reasonable likelihood of continued, long-term operation.

Just checked 3 stations in my area to see if they meet requirements.
Yamba Pilot station. In the past fourteen months, data is missing for 11 of these months (so does not meet point 1).
Inverell (Raglan St) – started in 1995 (so does not meet 2nd point).
Coffs Harbour – appears OK except records only go back to 1943.

All stations appear to be reasonably removed from buildings. I will look at the others but 2/3 stations failing to meet their own
requirements does not inspire confidence.
Anyway, my point was a 0.5C difference between 2 stations 300m apart with different surroundings – one with buildings and one without.

“Nick made the comment that the time of day correction is valid because the data is read every 3 hours and things like daylight saving change the measurement times. At first thought that sounds reasonable but consider”

This makes me wonder a little, because when I first started out as a trainee chemist at the Port Kembla steelworks, 45 years ago, one of my tasks was to go out to the Stevenson screen and record the maximum and minimum temperature of the previous day and night, as well as maintain the recording psychrometer and check it against a wet and dry psychrometer. The thermometer looked something like this: http://www.brannan.co.uk/images/pl/12-407-3.jpg

If something like that is in the Stevenson screens, and since they only need to be read and reset once daily (as I imagine most of them would have been), then why would time of observation matter, as long as it’s after the minimum and before the maximum if read in the morning? Does anyone know what is being used today?

Michael,
There’s an unfortunate tendency on blogs to resolve issues of fact by armwaving (sometimes excel-fuelled) rather than reference to people who know about the actual data. My own comments related to my rather ancient contact with the Australian data – it seems that US issues are somewhat different. The paper usually referenced is Karl et al 1986. US relies more on max/min observations, and a key issue is the “climatological day”, basically determined by when someone looks at the thermometer. That has been changing. We don’t need to speculate – Karl et al have looked in detail at the data and described what they saw. There are other factors which they also describe.

The fact is that an adjustment is clearly needed. There is lots of data on the diurnal pattern, so the adjustment is based on good information. Full details about the actual time discrepancy are available. What do you suggest – should they not do it?

I have to comment that sceptics have many complaints about the complexities involved in accurately measuring average temperatures. Well, there are complexities, much studied, and dealing with them usually involves the use of adjustments (how else?). Then sceptics complain about the number of adjustments 🙁

It’s now off-topic, but I can’t resist a comment on your calc about the small effect of shifting the time of periodic sampling, based on a sinusoid assumption. It’s very small for a sinusoid, but that is a particular property, resulting from the fact that the sine is an entire function in the complex plane (see Poisson summation formula etc). Deviation from sine makes the effect much bigger.
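
This can be demonstrated numerically: with 3-hourly sampling, a pure sinusoid's mean is exactly independent of the sampling offset, while a skewed non-sinusoidal cycle (a hypothetical sawtooth here) is not. A sketch:

```python
import math

def sample_mean(profile, offset, step=3):
    """Mean of 3-hourly readings starting at the given hour."""
    samples = [profile(h) for h in range(offset, 24, step)]
    return sum(samples) / len(samples)

def sine(h):
    # Smooth sinusoidal day about a 15 C mean
    return 15 + 10 * math.sin(2 * math.pi * h / 24)

def saw(h):
    # Skewed, non-sinusoidal day (hypothetical): steady warming
    # through the day with an abrupt reset at midnight
    return 10 + 20 * (h / 24)

sine_means = [sample_mean(sine, o) for o in range(3)]
saw_means = [sample_mean(saw, o) for o in range(3)]
print(sine_means)  # identical: the offset has no effect on a sinusoid
print(saw_means)   # the offset shifts the mean by nearly a degree
```

So the near-zero result from the sinusoid calculation above is a special property of the sinusoid, not a general property of diurnal cycles.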

All the chat about whether the adjustments are needed or not is all very interesting, but I think you have *all* missed the point – if “raw” data needs to be adjusted, then those adjustments *must* increase the error margins of the data. No ifs, no buts – this *must* happen, because these adjustments are *not* made by reference to other, known-good data sources; they are *estimates* of what people *think* is required. So if the adjusted data shows a trend that is on the same order as the adjustments, that trend *cannot* be said to be significant. Ever. For any reason. It’s in the noise, and *no* post-hoc adjustment will magically pull out the “real” data. None. We must live with the uncertainty, collecting more and better data until we have a trend that lies well outside the error margins of the data – error margins that include such “adjustments”, as they must.

And one other thing – the first thing with statistical analysis is *use the raw data*. Do NOT use averages, do NOT filter the data to remove high frequency noise – this is throwing away valuable data. Yet temperature and CO2 data used to “prove” AGW is almost always an average of one form or another. And when you average data sets and perform a correlation test, you *always* get higher correlation coefficients than you would with the raw (un-averaged) data. Always. Such correlations are completely meaningless. Completely. Always.
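
The claim that averaging inflates correlations is easy to verify by Monte Carlo: smooth two completely independent noise series and the typical (entirely spurious) correlation between them grows, because smoothing leaves fewer effectively independent points. A sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
window = np.ones(10) / 10   # 10-point moving average

raw_r, smooth_r = [], []
for _ in range(300):
    # Two completely independent noise series
    x = rng.standard_normal(120)
    y = rng.standard_normal(120)
    raw_r.append(abs(np.corrcoef(x, y)[0, 1]))
    # The same series after smoothing
    xs = np.convolve(x, window, mode="valid")
    ys = np.convolve(y, window, mode="valid")
    smooth_r.append(abs(np.corrcoef(xs, ys)[0, 1]))

# Typical |correlation| between unrelated series, raw vs smoothed:
# the smoothed figure is several times larger.
print(np.mean(raw_r), np.mean(smooth_r))
```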

Nick, your complaint about sceptics being sceptical at both ends of the adjustment issue would have more merit if you and other leading spokespersons for AGW dealt more critically with the fact that temperature ‘adjustments’ by the official sources are always upwards; as for hand-waving, this analysis of Australian adjustment procedures is more substantial than that;

SJT has raised the RCS categories by BoM; these are supposed to be the remedy for the failings of the old data and the failings of the modern adjustments; Ian has already noted some problems with the RCS; another significant problem is that the majority of these sites show no AGW effect [I have only looked at about 60 of them] yet national trend graphs from BoM invariably show AGW consistent trends; this would, on the face of it, appear to be a massive contradiction with the designated creme de la creme of the old sites having temperature records inconsistent with the national agglomerated data history.

Cohenite – if you were any good at looking after your sceptics party properly – you’d have got the best selection of Aussie non-UHI, properly-sited met stations from BoM’s ADAM database and worked it up for us in an analysis. What’s your problem?

Even ask BoM if they’re happy with your final list.

Don’t keep banging on about Melbourne UHI – leave the bastard out! Don’t look at ANY capital city.

Then, instead of these interminable try-on posts by players – looking at the negative end – give us a proper analysis of the positive end, where the data are OK. Do extremes and frost while you’re about it!

You’d have thought the Aussie Sceptics Party would have done that by now if they were any good. Imagine if Senator Fielding asked you? Surely you’re not going to settle for a BoM analysis?

Coho – the RCS is an investment in the future. We now have a quality reference network. Every nation should have one. Most of the US sites should simply be scrapped. Every site should have a pre-inspection and checklist, and be photographed and GPSed. All sites should be revisited every 2-3 years.

However the RCS won’t help you with the long-term analysis – you’ll have to do the best you can. And if you weren’t such a conspiratorial varmint you’d have a relationship with David Jones and know exactly what they’d done. But that’s what you get for playing secret-handshake fifth columnists.

You see – some of us might have come to climate change by finding big trends in rural locations. Those of us building plant models. It may have even come as a surprise. Just like when growers reckon frosts aren’t as bad as they were and you find a bloody big trend in the data.

All this was found, incidentally – almost by accident – before climate change was even popular!

Ok, our very own Abbott and Costello deplore the “appalling” mistakes of the sceptics; “cubics ARE a 3rd degree polynomial” intones sod; sod by name, sod by nature. I tell you what, guys, you compile your list of sceptic ‘mistakes’ and I’ll present the mistakes from the other side of the fence; here’s my counter to the earth-shattering ‘mistake’ that supposedly shows MH has “ZERO knowledge on statistics.”

sod, you clearly did not understand what I wrote. I know a 3rd order polynomial is a cubic. You can see I said third order or cubic, and I also said first order or linear, second order or parabolic. Your comment is so unreasonable given the other two references that I am forced to the conclusion you are trying to make up reasons to criticise. Presumably that means you can find nothing of substance to criticise in what I wrote – thanks.

Nick, you comment that in America they use max/min thermometers. I think that is indeed correct; it slipped my mind when I was writing the last response. However, if max/min thermometers are used the situation is even worse. The weather reporting is based on an assumption that there is one maximum and one minimum per 24-hour period. Changing the time of measurement does not change the max and min recorded unless one assumes silly measurement times such as close to the maximum temperature.

However, to your more significant point that all the discussion on these blog sites is more or less arm-waving: I have to say that’s exactly the way I see the entire AGW issue. Sure, CO2 is a greenhouse gas and increasing its concentration must increase the retained heat to some degree. The serious question is how much. So AGW is qualitatively supportable, BUT the issue is whether or not it is significant, and that’s a quantitative issue. I strongly question whether AGW at a significant or dangerous level is quantitatively supportable.

It seems there is general agreement that the direct effect of doubling CO2 is somewhere between about 0.5 and 1C (most people incline closer to the 1C). We have already had half a doubling, with a further half predicted by 2070, so maybe a further 0.25 to 0.5C from the direct effect of this CO2 rise. To get this up to a significant or dangerous level, AGW supporters have to call on all sorts of positive feedback effects. Positive feedback from water vapour, positive feedback from clouds, positive feedback from everything. Where does most of the evidence for positive feedback come from? From models – a claim that only by assuming such feedback can the models be made to match what we observe. But the models do not match what is observed in forward predictive trials. And every natural system I can think of shows net negative feedback. Given the near-universality of negative feedback in natural systems, claiming not just net positive feedback but very strong positive feedback should require overwhelming proof, yet very little is offered.

I see article after article claiming to show that warming has occurred over the period from 1970 to 2000. Sure it has; the uncorrected USHCN record shows it clearly. But that warming is less than occurred in the first 3 decades of the 20th century and looks like random fluctuation. So what? It does not show correlation with the long-term CO2 record, so it does not support the AGW hypothesis.

To me the bottom line is: we have had half a doubling of CO2 over the 20th century, and the AGW supporters claim a 0.5C rise over that time. So if we accept that (I don’t accept the claimed causation) we should expect another 0.5C to 2070. Hardly an issue to destroy our economy over. The only way I could see that being wrong would be if there were long time constants, so that we have not yet seen all the effects of the first half doubling. Well, I explored that issue in a previous post and showed that no matter what time constant one assumes, it is not possible to get 0.5C from 1900 to 2007 with a further 3C by 2070 (IPCC 4th assessment report, summary for policy makers, page 12 of 18, 4th bullet point: “global average surface warming following a doubling of carbon dioxide concentrations is likely to be in the range 2 to 4.5C with a best estimate of about 3C”) unless one assumes a positive feedback coefficient for water greater than 1. Nick, you commented on the analysis and accepted it as generally valid.
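The amplification being argued about can be made explicit with the standard zero-dimensional feedback-gain relation ΔT_total = ΔT_direct / (1 − f), valid for f < 1. The Python sketch below is my own arithmetic illustration (it says nothing about the time-constant analysis above – it only shows the equilibrium feedback factor implied by the numbers quoted):

```python
# Standard zero-dimensional feedback gain: dT_total = dT_direct / (1 - f), f < 1
def required_feedback(dT_direct, dT_total):
    """Feedback factor f needed to amplify dT_direct into dT_total."""
    return 1.0 - dT_direct / dT_total

# Direct no-feedback responses quoted above, versus the IPCC best estimate of 3C
for dT0 in (0.5, 1.0):
    f = required_feedback(dT0, 3.0)
    print(f"direct {dT0}C -> 3C total requires f = {f:.2f}")
```

With a direct response of 1C, reaching the 3C best estimate implies f ≈ 0.67; with 0.5C it implies f ≈ 0.83 – large, but still below the runaway threshold of 1 in this simple equilibrium picture.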

Coming back to the subject of this thread: if your raw data shows no trend and the claimed trend is the result of data corrections, you are on shaky ground. If the people deciding on the corrections strongly support the hypothesis the trend is hoped to prove, you are on even shakier ground. If the adjustments to the historical record change with time, and always in the direction that further supports the hypothesis (GISS), the ground gets still shakier; and if adjustments that clearly have some relevance going the other way are omitted (the UHI adjustment), it gets still worse. Overall, AGW seems to me an edifice built on quicksand.

“However, to your more significant point that all the discussion on these blog sites is more or less arm-waving: I have to say that’s exactly the way I see the entire AGW issue. Sure, CO2 is a greenhouse gas and increasing its concentration must increase the retained heat to some degree. The serious question is how much. So AGW is qualitatively supportable, BUT the issue is whether or not it is significant, and that’s a quantitative issue. I strongly question whether AGW at a significant or dangerous level is quantitatively supportable.”

I sometimes wonder if sites like this exist just to create the impression there is a huge debate. I really do wonder why Jennifer persists in putting up topics that push junk science such as G&T and Miskolczi, or political musings that are pointless, when the issue is about science.

There is still much debate here about whether or not the greenhouse effect even exists, or whether it breaks the second law of thermodynamics.

I said years ago that the only real debate is the extent to which CO2 will cause warming. It’s the claim at the basis of the whole schemozzle this has become; it surely can’t be too hard to focus on it and sort it out.

“sod you clearly did not understand what I wrote. I know a 3rd order polynomial is a cubic. You can see I said third order or cubic and I also said first order or linear, second order or parabolic. Your comment is so unreasonable given the other two references I am forced to the conclusion you are trying to make up reasons to criticise. Presumably that means you can find nothing of substance to criticise in what I wrote – thanks.”

I have seen other analyses of trends using polynomials, and the criticism is that they don’t handle endpoints well if there is a temporary excursion from the trend. It appears to me to be a valid criticism.

“Hardly an issue to destroy our economy over. The only way I could see that being wrong would be if there were long time constants so that we have not seen all the effects of the first half doubling as yet.”

There are positive feedback effects that are only just starting to appear, such as albedo change and methane released from permafrost. The heat content of the oceans is clearly having a dramatic effect on the Arctic. The rate of change there is ahead of schedule. The air temperature in models will show periods where it drops. The scientific advice to Penny Wong http://www.environment.gov.au/minister/wong/2009/pubs/tr20090624c.pdf tells the story.

How strange: I am accused of knowing nothing about statistics because I supposedly don’t know a 3rd order polynomial is a cubic. When I point out that I do and that my words were misread, I’m then told I don’t know about statistics because of the handling of endpoints. The really interesting point, of course, is that the issue is not statistics at all, it is curve fitting. Not quite the same thing.

I must apologise to SJT – I missed mentioning the further positive feedback factors he mentions – albedo change, methane release. Any others I should know about? One question: do you acknowledge any negative feedback factors at all, or do you seriously think there are none in Earth’s climate system? If so, how do you reconcile that with the fact that negative feedback plays an overwhelming role in natural systems? Do you understand how a system with multiple positive feedback loops and no negative feedback loops is likely to behave?

The talk about ocean heat content is interesting. You see, a few years ago it was all about warming of the land – no mention of the oceans at all. Then, when it became too obvious that the land and air were not warming, the argument became ocean acidification and coral bleaching. When that did not serve, suddenly ocean warming was discovered to save the day.

I have worked successfully in research now for more than 30 years and I have seen such situations before. Every time one line of evidence ceases to serve, a new line of evidence is discovered and the earlier one dismissed as irrelevant. I need hardly say that the eventual outcome for the hypothesis was never good. If you put up a line of evidence and it ceases to support the hypothesis, it is not valid to simply drop it and change to a different argument. This alone should be enough to raise major red flags. Apart from which, I am seeing reports that the oceans have started to cool. To me it’s like a rerun of about 2003. Any guesses on what the next argument will be?

Please, readers, don’t you see that money can only be spent once? If we spend trillions on climate change and it is a false hypothesis, that’s trillions we don’t have to spend on real issues that need attention. Even worse, if in the process we destroy the economies of our countries, we might never have the resources to find solutions to our real problems. There is no argument that mankind is outgrowing reliance on chemical energy, but wind power and solar (at least on Earth’s surface) cannot replace it. Why not spend the money to find a real alternative energy solution? Yes, that will take time, but we have time so long as we don’t squander our resources on false crises.

“Any others I should know about? One question, do you acknowledge any negative feedback factors at all or do you seriously think there are none in Earth’s climate system?”

Of course there are. That is why there is so much debate and research still going on. The new generation of models on the more powerful hardware now available will enable a better understanding of clouds.

“The talk about ocean heat content is interesting. You see a few years ago it was all about warming of the land – no mention of the oceans at all. Then when it became too obvious that the land and air was not warming the argument became ocean acidification and coral bleaching. When that did not serve, suddenly ocean warming is discovered to save the day.”

You are too cynical. The research has been extending on many fronts, including the oceans, land, models and several expensive space projects. The Argo project is one such relatively new effort. You don’t get something like that going overnight. That’s the reason they can now start to give informed information on the ocean heat content.

Argo started deployments in 2000, it finished that phase in 2007. They must have been planning it for years before 2000.

“sod you clearly did not understand what I wrote. I know a 3rd order polynomial is a cubic. You can see I said third order or cubic and I also said first order or linear, second order or parabolic.”

you used a different term on the other ones. i think there is a difference with “second order or parabolic FIT” (you moved the fit to the front); at best, that got me confused.
but if you say you meant the right thing, i will of course accept it. it doesn’t make sense to dispute the meaning of his words with the author.

“Your comment is so unreasonable given the other two references I am forced to the conclusion you are trying to make up reasons to criticise. Presumably that means you can find nothing of substance to criticise in what I wrote – thanks.”

pretty lame. the time of observation bias link i gave above is to John Daly, who is a denialist like you. he is using actual data to show the effect. time of observation bias is a fact.

“The really interesting point of course is that the issue is not statistics at all, it is curve fitting. Not quite the same thing.”

this is complete nonsense. you were not talking about splines, but about “trends”, “significant” and “linear least squares” – words that i will of course find in my university statistics textbook (Ulrich Krengel, “Einführung in die Wahrscheinlichkeitstheorie und Statistik”).

the really interesting thing is that you are discussing the curve fitting without looking at the statistics behind your claims!

i did a simple test. as i didn’t have your data, i just took 20 datapoints from your graph (eyeball..) and ran the regressions in excel.
both the linear and the second order polynomial have extremely bad R² values (0.004 and 0.02).

things change pretty dramatically with the 3rd order polynomial (R² 0.5). the problem with that cubic regression is that it bends UPWARD at the end!
higher polynomials again don’t significantly increase the correlation and look very similar to the cubic one.

i am curious: why don’t you show us the graph with the cubic and the R² values?

“A linear least squares trend line, created using the Excel trend line function (Red trace) shows a small temperature rise of 0.09C per century which is far less than the rise claimed by AGW supporters and clearly of no concern. However, the data shown in figure 5 bears little if any resemblance to a linear function. One can always fit a linear trend line to any data but that does not mean the fitted line has any significance. For example, if instead I fit a second order trend line (a parabola) the result is extremely different. That suggests a temperature peak around 1950 with an underlying cooling trend since. Which trend line is the more significant one? If there was really a strong underlying linear rise over the time period it should have shown up in the 2nd order trend line as well. This suggests that it is questionable whether any relevant underlying trend can be determined from the data.”

you are making pretty strong claims about the trends. but you do NOT even use the most basic statistical tools to back up your claims.

every person with any knowledge of statistics will tell you the same: stick to the linear regression. use others only if you have a very strong reason to do so (like everyone doing work on that subject using it, or the graph looking like a perfect fit to a non-linear regression).

but do NOT randomly choose a pretty bad non-linear fit (your parabolic one) and ignore a much better one (the cubic one) just because one of them fits your ideas (the parabolic pointing downwards in recent times) while the other does not (the cubic pointing UP).
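Since the 20 eyeballed data points aren’t reproduced here, the statistical point can still be illustrated on synthetic data: for nested least-squares polynomial fits, R² can never decrease as the degree rises, which is exactly why a bigger R² by itself doesn’t make the cubic the “right” trend model. A hedged Python sketch (the anomaly series below is invented, NOT the USHCN record):

```python
import numpy as np

def poly_r2(x, y, deg):
    """R-squared of a least-squares polynomial fit of the given degree."""
    xc = x - x.mean()                       # centre x for numerical stability
    resid = y - np.polyval(np.polyfit(xc, y, deg), xc)
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
years = np.arange(1900, 2008, dtype=float)
# Invented warm-cool-warm anomaly series: a stand-in for the eyeballed points
anom = (0.3 * np.sin((years - 1900) / 108.0 * 2 * np.pi)
        + rng.normal(0, 0.25, years.size))

r2 = {deg: poly_r2(years, anom, deg) for deg in (1, 2, 3, 5)}
for deg, value in r2.items():
    print(f"degree {deg}: R-squared = {value:.3f}")
```

The monotone ordering of R² across nested fits is guaranteed by construction; choosing between these models needs an out-of-sample or information-criterion argument, not raw R².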

Of course we are cynical, SJT. It was ‘global warming’ until it began to cool – then it was ‘climate change’. When they couldn’t find the ‘hot spot’ in the atmosphere, they ‘discovered’ it in the oceans.
They dismissed the MWP on the basis of one study in the N Hemisphere, disregarding all others taken in both hemispheres. They based pre-1950 CO2 levels on Callender’s cherry-picking and disregarded all the measurements done by scientists from 1850. The Siple ice core sample shows CO2 levels were 330ppm around 1900, but the data was manipulated by moving the dates forward to match the Mauna Loa data.
NASA dropped lots of weather stations in 1990 and uses up to 1250km shading to cover the non-stationed areas. Yearly average temperatures prior to 1980 were ‘cooled’ whilst post-1980 were ‘warmed’ after NASA made ‘necessary adjustments’. Even our own BOM has begun to use 1950 (a cooler time) as a start date for comparing temperatures.
They say this is ‘unprecedented warming’ and disregard the 1910-1940 spike with less CO2 input. They totally disregard the low ice extent in the early part of the 1900s and conclude the present situation is the worst.
And you say we are too cynical.

Michael,
I reiterate my complaint about arm-waving. Karl et al carefully studied the real data on TOBS and computed the effect of the choice of climatological day. You are just speculating. You should read what he says.

The cutoff does matter for min-max. Suppose the day runs from 9am to 9am. That would be common – the postmaster reads and resets the thermometer when opening the office, say. Suppose there is a big frost on Monday morning. The coldest temp is at 6am, and that counts as the Sunday minimum. But 9am is still cold, and that is the Monday minimum. The same frost is counted twice, which creates a cold bias. If the thermometer was read at 8am, Monday would have been counted as even colder.

But if instead the day ended at 5pm, the frost would have affected only one day. Instead, you’d be counting hot afternoons twice. A warm bias.

Hammer is just another dyed-in-the-wool denialist masquerading as an objective scientist. I note he’s totally ducked a reasonable analysis of the centennial warming signal. Fails on the literature review – as with all denialists, a rampant cherry-picker, as our little Melbourne episode demonstrated.

Perhaps insufferable sod can give us another lecture, this time on smoothing? Now where to next: the long term with luke’s newest toy, EOF time-variable correlations, or the micro-climate effects at particular sites? Ah, AGW, the growing textbook of dissembling, artifice and the gobemouche.

You need to sort your scales out before making this sort of wild claim: “Since the total corrections for the US look so similar to the claimed temperature anomaly, it begs the question as to what the raw data looks like without any corrections. Does it show the rapidly accelerating warming trend claimed by the AGW advocates?”

sod; you miss [ignore? I don’t know how smart you are] the point; the actual data contradicts the trend produced by the smoothing; “just a minor caption error”?! The difference is between a false trend [that is, one contradicted by the data] which sustains alarmism, and a declining trend which disproves the AGW bandwagon; wicked. And anyway, how can you overlook a fundamental shift in the smoothing parameter from 11 to 15? Steffen is accumulating quite a body of form; his outrageous comments fresh from Copenhagen about AGW being worse than before, when it was only going to destroy the world! Then his post-Fielding comment that atmospheric temperature was old hat, the important thing was ocean heat, which had an increasing upward trend; this is just plain wrong, and now another ‘oversight’ which allows alarmism to be cranked up again. Steffen has no shame and neither do you bozos for defending him.

[…we chose M=15. In hindsight, the averaging period of 11 years that we used in the 2007 Science paper was too short to determine a robust climate trend. The 2-sigma error of an 11-year trend is about +/- 0.2 ºC, i.e. as large as the trend itself. Therefore, an 11-year trend is still strongly affected by interannual variability (i.e. weather). You can tell from the fact that adding just one cool year – 2008 – significantly changes the trend line, even though 2008 is entirely within the normal range of natural variability around the trend line and thus should not affect any statistically robust trend estimate. -stefan]
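Stefan’s ±0.2 ºC figure for an 11-year trend is straightforward to reproduce in spirit with a Monte Carlo sketch. The numbers below are my own toy assumptions (a 0.02 C/yr underlying trend with 0.1 C interannual scatter, not actual GISS data); the point is only that the spread of fitted trends shrinks as the window lengthens:

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 2000
noise_sd = 0.1      # assumed interannual scatter about the trend (deg C)
true_slope = 0.02   # assumed underlying trend (deg C per year)

def trend_spread(window):
    """Std-dev of OLS slope estimates over many noisy synthetic samples."""
    t = np.arange(window)
    slopes = [np.polyfit(t, true_slope * t + rng.normal(0, noise_sd, window), 1)[0]
              for _ in range(trials)]
    return float(np.std(slopes))

spreads = {w: trend_spread(w) for w in (11, 15, 30)}
for w, s in spreads.items():
    # 2-sigma uncertainty of the total change across the window
    print(f"{w:2d}-year window: 2-sigma trend error = +/-{2 * s * (w - 1):.2f} C")
```

With this (invented) scatter level the 11-year figure lands close to the ±0.2 ºC Stefan quotes, and the uncertainty falls off steeply for longer windows.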

but i guess, just like Michael, you are not interested in that “statistics” stuff. you prefer your own little view of things and conspiracy theories.

Slightly off topic, but hopefully Jennifer will indulge me. I have recently seen many comments to the effect that the last 10 years of cooling is not significant – just random fluctuation and not abnormal. Yet the entire AGW hypothesis is based on the period from about 1973 to 1998, a 25-year period. This implies 10 years is just random fluctuation but 25 years is a definitive trend with an underlying cause. A bit odd that a 2.5:1 change in time scale makes such a huge difference in outlook.

However, look at the raw data for the US. From around 1930 to about 1975-1980 there was a distinct 45-50 year cooling trend. Well, if 25 years is definitive then 45 years must be even more so. So what caused this cooling trend, and how do we know the subsequent 25 years of warming was not, say, a rebound because the cause of the cooling trend stopped?

sod; really, you have no idea; the post by David Stockwell shows that expanding the 11-point smoothing period to 14 years completely reverses the downward trend of the 11-point version; what Steffen says is garbage, because the 15-point smoothing he used to produce the non-linear ‘trend’ [remember your little exposition on the dangers of non-linear trends?] is actually worse than the 14-pointer proposed by Jean S; David Stockwell explains it here;

michael; don’t you know the cooling period in the middle of the 20thC still had an underlying warming trend which was masked by either natural variation, EOF2 from luke’s Parker et al paper, or global dimming caused by aerosols 🙂 BTW, if you are interested in a new unpublished paper which looks at a different statistical approach to the temperature history of Australia and the globe, with a specific rebuttal of the Easterling denigration of the post-1998 cooling, drop me a line and I’ll send you a copy.

My mistake for not differentiating Stefan Rahmstorf and Will Steffen; of course Stefan is the one with quaint ideas about preserving robust trends, while Steffen is the expert in faux and phony reasons for alarmism; come to think of it, there is no difference.

Coho,
No, you are pinning your faith on a mirage here. David doesn’t show a trend reversal – he shows different endpoint behaviour. But in all these smoothing processes, endpoint behaviour is arbitrary. Smoothing implies taking account of past and future values, and near the end, future values have to be guessed, one way or another. That is why choosing 11 yr or 15 yr makes a marked difference near 2009, but not at earlier times. The 15-yr plot just uses more of the guessed values. It will then depend on what the guess is, as DS shows when looking at MRC etc.

If you could really find a magic way of reliably improving estimates of current trend, you could make a killing on the stock market.
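Nick’s point – that the 11-versus-15-year difference lives almost entirely in the guessed endpoint values – can be shown with a toy moving average. In this Python sketch (an invented random-walk series, and two common but arbitrary ways of extending it past its last value), the two smoothers agree exactly mid-series and diverge only near the ends:

```python
import numpy as np

def smooth(y, window, pad):
    """Centred moving average; the endpoints depend on how the series is
    extended past its last value (i.e. on the 'guessed' future values)."""
    h = window // 2
    if pad == "mirror":   # reflect the series about its endpoints
        ext = np.concatenate([y[h:0:-1], y, y[-2:-h - 2:-1]])
    else:                 # "constant": repeat the first/last value
        ext = np.concatenate([np.full(h, y[0]), y, np.full(h, y[-1])])
    return np.convolve(ext, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0, 0.1, 120))   # invented random-walk "temperature"

s_mirror = smooth(y, 15, "mirror")
s_const = smooth(y, 15, "constant")

mid_gap = np.abs(s_mirror[30:90] - s_const[30:90]).max()
end_gap = np.abs(s_mirror[-5:] - s_const[-5:]).max()
print(f"max difference mid-series:  {mid_gap:.6f}")
print(f"max difference at the ends: {end_gap:.6f}")
```

Away from the endpoints the padded values never enter the average, so the two curves coincide exactly; only the last half-window of points depends on the guess, which is where the 11-vs-15 argument is actually being fought.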

As Luke has pointed out before, the correlation between the satellite data and the surface data is good; just look up woodfortrees. The baseline is different, but the wiggles up and down indicate both sources are measuring the same changes pretty closely.

Luke says ‘the RCS is an investment in the future. We now have a quality reference network.’

So I just checked the NSW weather stations used in ‘Australia’s Reference Climate Station Network’. These stations have been chosen on the basis of their ‘high quality, long-term climate monitoring, particularly with regard to climate change analysis.’ Preference was given to the following criteria:
* high quality and long climate records, (the BOM gives a figure of 30 years only)
* a location in an area away from large urban centres, and
* a reasonable likelihood of continued, long-term operation.

All the stations appear to be in open areas and would not be influenced by the UHI effect. Of the 21 NSW stations they have selected I have noted the following.

* Two stations appear to have been closed but are still listed (Nowra RAN Air in 2000 and Point Perpendicular in 2004).
* Seven stations began recording data in the 1990s. (Richmond RAAF seems to have reverted to data from a previous station 0.4km away but their long-term average temps are not consistent).
* Four stations go back to 1950, 3 go back to 1940 and 1 goes back to 1939.
* Only 4 go back to 1910 and before (Yamba, Tibooburra, Bathurst and Moruya).

So, of the 21 stations listed, 9 do not fit criterion 1 to be an RCS (42%), and 8 fit the criteria but don’t have the past history to assure ‘long-term climate monitoring, particularly with regard to climate change analysis’. Of the four that have a long, uninterrupted history, Yamba has temperature gaps in the past 11/14 months, so I would question its ‘high quality’ data.

As a matter of interest, I ran the temperature data for Tibooburra, Bathurst and Moruya for 1911-1940 and compared it with the 1971-2000 averages, finding a mean temperature rise of less than 0.2C between those periods.

Maybe someone from the BOM can let us know if the above is correct (the source I use is at http://www.bom.gov.au/climate/change/reference.shtml#rcsmap) as these stations may have been updated.
Also, do some of the above have neighbouring WS which go back some time and that data is used to calculate long-term averages?

What could possibly explain that sort of correction, where it seems that for half a century the observations have been made earlier and earlier in the morning (or later and later in the evening)…

The Time of Observation correction should look like noise… randomly earlier or later in the day. It should not show a trend.

What explanation could one offer? That those making the observations over the last 50 years are getting more and more lazy, and don’t get around to logging the time till later than they did in previous years?

Perhaps the observers are all getting older… and walk more slowly out to the thermometers… slower and slower each year, delaying their arrival at the monitoring station?

Is it possible that television commercials have been gradually shifting such that observers pry themselves away from the tv at a subtly different time?

Maybe people are talking more on the phone, and as a result, get to their data collection when they are done yapping?

For the life of me…. I just don’t get it.

Well, I suppose there is one more. If it really is getting warmer, and observers prefer to stay out of the heat… they might take readings more in the mornings or evenings…. but really, I don’t think people can notice a 0.3C difference in temperature.

OK…. I just read where Time of Observation adjustments come from… I should have guessed. Software. It is empirically derived. “The TOB-adjustment software uses an empirical model to estimate and adjust the monthly temperature values so that they more closely resemble values based on the local midnight summary period.”

Ah, this still seems very funky. How come this software sees fit to generate gradual 50-year upward trends?

And since it is supposed to be correcting the data to look like the “midnight summary period”, which one might assume is a cold part of the daily temperature cycle, and the observations were probably made during warmer times of day… then the corrections should be *downward*.

“The systematic time of observation bias would be of little concern with regard to temperature trends provided that the observation time at a given station did not change during its operational history. As shown in Fig. 3, however, there has been a widespread conversion from afternoon to morning observation times in the HCN. Prior to the 1940s, for example, most observers recorded near sunset in accordance with U.S. Weather Bureau instructions. Consequently, the U.S. climate record as a whole contains a slight positive (warm) bias during the first half of the century. A switch to morning observation times has steadily occurred during the latter half of the century to support operational hydrological requirements. The result is a broad-scale reduction in mean temperatures that is simply caused by the conversion in the daily reading schedule of the Cooperative Observers. In other words, the gradual conversion to morning observation times in the United States during the past 50 years has artificially reduced the true temperature trend in the U.S. climate record (Karl et al. 1986; Vose et al. 2003; Hubbard and Lin 2006; Pielke et al. 2007a).”

if you don’t understand or believe the articles, you can make a simple test in an Excel or Calc sheet. generate two random columns of numbers, low minima in the first (0-5, for example) and high maxima in the second (20-25).
for a measurement at the “best” time of observation, those two numbers are the min and max value of the day. (generate a day average from them)
to simulate a worst case scenario, simulate an observation time close to the maximum by calculating the day maximum as the higher of this day’s max and the previous day’s max value.
do the same for the minimums. the difference is massive!
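That spreadsheet recipe translates directly into a few lines of Python. This is a sketch of the test described above (the 0-5 and 20-25 ranges are the comment’s illustrative numbers, not real station data): resetting a min/max thermometer near the daily maximum lets yesterday’s maximum be recorded again today, and the resulting bias in the daily mean is substantial.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 1000
tmin = rng.uniform(0, 5, days)     # invented true daily minima
tmax = rng.uniform(20, 25, days)   # invented true daily maxima

# "Best" observation time: each day's reading captures that day's true extremes
true_mean = (tmin[1:] + tmax[1:]) / 2

# Reset near the daily maximum: the still-warm thermometer means yesterday's
# max can be recorded again today (carry-over), biasing the maxima upward
carry_max = np.maximum(tmax[1:], tmax[:-1])
warm_biased = (tmin[1:] + carry_max) / 2

# Symmetric case near the minimum: yesterday's cold minimum is counted twice
carry_min = np.minimum(tmin[1:], tmin[:-1])
cold_biased = (carry_min + tmax[1:]) / 2

print(f"true mean:             {true_mean.mean():.2f}")
print(f"reset near max (warm): {warm_biased.mean():.2f}")
print(f"reset near min (cold): {cold_biased.mean():.2f}")
```

The sign of the bias depends entirely on where in the daily cycle the reset falls, which is why a systematic historical shift in observation time (afternoon to morning) shows up as a systematic trend in the correction.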

————————

if you on the other hand prefer to believe a denialist, just check the Pielke article

Very ‘smooth’, Stokesy; so changing the “endpoint behaviour” is different from changing the trend; as always I defer to your greater wisdom on these crucial matters. Besides, I’m still wondering why no one wants to talk about Karoly and DTR, and why the adjustments are always upwards.

And then all I said was that it *seemed funky*, and asked a simple question, based on what I had already read.

Read for yourself:

“Ah, this still seems very funky. How come this software sees fit to generate gradual 50-year upward trends?”

sod goes apoplectic:
—————————
all this uneducated posts about TOB are simply stupid. please read something about the subject, before you comment on it!!!!
—————————

BTW, your first link seems broken.

I am finding it a little hard to believe that it is taking 50 years to get observers to change to morning readings. They must be stubborn old b*stards. 50 weeks, maybe… but regardless:

I checked the Pielke article… and there is scant information about the TOB adjustment – just a cite of a paper that claimed it is robust.

So, the idea is that as more and more stations added the same biases to their data, the overall shift represents an accumulation of error for the entire dataset. But in reference to what? Is the bias being normalized to the first year in the data? What year is considered to be a year with no bias?

We all know that global warming is a scam to get more tax dollars from already struggling Americans. Anyone with common sense can see that. But they will pass this ridiculous bill anyway, because ignorance is running this country right now. We will pay $250.00 extra a month on electric bills and have our hard-earned money stolen from us for something fake. All of government is a scam. We need to get them all out of office and start new and fresh. I mean, even the scientists who first raved about global warming had to retract their story, because it was misread. Those were their words: we misread the data. That is something you didn’t see on CNN and the AllBarackChannel. Thank you for putting this out there for all who care to know the truth to see. You are a patriot!

Notice that it’s not a case of blaming Goddard alone. The Goddard criminals are far more visible than the people who fake up the HadCRU figures and whoever that third group of frauds are (their name escapes me, but who cares?). These other two groups keep a lower profile than the Goddard members, who are media queens. But they are all purveyors of lies just the same.

The job that all three outfits had was quite simple: apply quality control to the data before going to work on it with statistical techniques. Then they ought to have been using the satellite/balloon data to test their output.

None of these three supposedly scientific outfits has reached so much as a Sesame Street level of ability in logical inference. If you look at all three of the ground data amalgamations, well, they don’t agree with each other perfectly. But they disagree far more with the satellite and the balloon data than they do with each other.

So we have the balloon data. We have the satellite data. Both agree with each other. Convergent evidence. Good data. If the two sets of data were not good data, they would not confirm each other.

Then we come to the ground data aggregations. And suddenly we find that “One of these things is not like the others. One of these things just doesn’t belong. Can you tell me which thing is not like the others…”

So we see that these people are not scientists of any sort but total failures. And an ideology has crept in that you just throw all the data in together, with no quality control, and then you go to work on it. Aggregate it all, and don’t let anyone else check your work if you can help it.

Well, how about if the Soviet Union collapses and we aren’t getting their Siberian data all of a sudden? If they just throw everything in, how is that going to affect things? It’s going to make the ’90s warming look a lot higher than it ought to have looked. Siberia being cold, right? If I’m not going too fast for the fraud side of the argument.

What about if areas in Africa or other hotter-than-average regions are steadily coming on board over time? How is that going to rig the figures? It’s going to produce a rising gradient of temperature on top of whatever ups and downs are already there. That’s what it’s going to do. And did revelations of the heat island effect and inappropriately placed measuring stations lead to a massive culling of historical data? I think we all know the answer to that, don’t we?
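The coverage argument in the last two paragraphs reduces to simple arithmetic. Here is a toy sketch with wholly invented station temperatures (the specific numbers are assumptions for illustration only), showing how a naive unweighted average of whichever stations happen to report can shift with no real warming at all:

```python
# Hypothetical mean temperatures for three stations (invented values).
cold_siberia = -10.0   # a cold, high-latitude station
warm_africa = 28.0     # a hot, low-latitude station
temperate = 12.0       # a mid-latitude station

# Year 1: all three stations report. Year 2: the cold station drops out
# (e.g. a reporting network collapses).
year1 = [cold_siberia, warm_africa, temperate]
year2 = [warm_africa, temperate]

avg1 = sum(year1) / len(year1)   # (-10 + 28 + 12) / 3 = 10.0
avg2 = sum(year2) / len(year2)   # (28 + 12) / 2 = 20.0

# The naive average jumps 10 degrees purely from the change in station
# mix, which is why serious analyses average anomalies relative to each
# station's own baseline rather than raw absolute temperatures.
print(avg1, avg2)  # 10.0 20.0
```

The same arithmetic runs in the other direction when hot regions join the network over time: the naive average drifts upward as coverage changes, independent of any trend at the stations themselves.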

So all of them, not just Goddard, are failures. They had the balloon and satellite data that would have allowed them to see if they were on the right track. Then, when they had nutted out their methodology and exercised enough quality control over the data they had, they could have worked backwards to the pre-satellite era.

It’s no use getting any of this same crowd to try to reform their act. Each country ought to just sack them all. No need to replace them either. We know what needs to be done: full steam ahead on governmental cost-cutting and clearing the road of obstructions to synthetic diesel and nuclear power. Just release all the raw data onto the internet and let people argue it out. No conclusion anyone came up with could change the fact that we need to move quickly to make nuclear and synthetic diesel investor-friendly prospects. Billions of lives are counting on us doing just this.

We have to understand that something has happened in the English-speaking world with our public servants. There was a time when, for the most part, they were reasonably content with merely getting a free ride on their hosts. Now the parasite has turned nasty, and it wants to actively damage its host for no reason, just out of spite and nihilism, or so it seems. We see this tendency rearing its head at every turn.
