Update Alert: New graphs up, updating text now. – new text is in red, silly mistaken old text is in strike out. Sorry for the lousy quality control, there is a reason I am not paid to do this! The fixes made my case stronger I think.

Update Alert: I have errors in the last four graphs (global averages) which I will fix as soon as I can. Sadly this one person shop is juggling too much right now – my bad. It only changes the magnitude of the results, not the contours. Now if I can keep MS Excel from bombing again! – end alert

What I appreciate Chiefio doing here is just staying with the basic data and asking a basic question – what was the accumulated change in temperature over time (what he calls dT/dt). He posted his data and did some graphs, but I wanted to do my own look at his results and did my own graphs of the data – hopefully Chiefio will not mind.

As far as I know Chiefio was using the raw data from GHCN, which is the basic data for GISS, CRU and NCDC before they do all their special ‘adjustments’. So here are the regional results in my own format with some additional information added.

Let’s begin with the Pacific Region:

Let me first highlight what is being shown. The blue line is the annual cumulative dT/dt produced by Chiefio. The black line is a 3rd power polynomial trend line that shows what the temperature is doing over time. The yellow bars are the one standard deviation range – I like to see where the data’s normal variance is when determining if there is any significance to trends. Note these are not error bars.

Most important are the red and green lines. The red line is the number of stations (in thousands) used to generate the temperature for each year. It has a huge bearing on what we see in the graphs. The green line indicates when this region of the world went through a massive culling of stations – usually in rural and high-altitude areas. This is something Chiefio notes over and over again in his analysis.
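To make the graph construction concrete, here is a minimal sketch in Python of how a cumulative dT/dt series, a 3rd-order polynomial trend, and a one-standard-deviation band like those described above can be built. The yearly dT values here are random placeholders, not Chiefio's actual data:

```python
import numpy as np

# Placeholder yearly temperature changes (°C vs. the prior year) —
# hypothetical stand-ins for the real regional dT/dt data.
rng = np.random.default_rng(0)
years = np.arange(1900, 2010)
dT = rng.normal(0.0, 0.3, size=years.size)

# Blue line: cumulative dT/dt — the running sum of year-over-year changes.
cum_dT = np.cumsum(dT)

# Black line: 3rd-order polynomial trend fitted to the cumulative series.
coeffs = np.polyfit(years, cum_dT, deg=3)
trend = np.polyval(coeffs, years)

# Yellow band: one standard deviation of the series around its own mean —
# a normal-variance range, not an error bar.
mu, sigma = cum_dT.mean(), cum_dT.std()
band_low, band_high = mu - sigma, mu + sigma

print(f"mean={mu:.2f}, band=({band_low:.2f}, {band_high:.2f})")
```

As in the graphs, the band marks the series' own normal variance rather than measurement error.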

All graphs can be clicked to enlarge in a new window.

So what do we see in the Pacific region? We see that the region was warmer than today around the First World War, and about the same as today around World War II (+.2 vs +.4, which is insignificant and within the error bars). We also see massive station drop-outs in 1997, after which the temperature finally breaks out of the standard deviation range and starts rising in true hockey stick fashion. The number of stations in this region began at around 30, peaked at around 710, and then dropped back to around 120-160.

Looking at this region alone, there is no clear indication of ‘global’ warming, human-made or otherwise. Let’s move on to South America:

Here we see something completely different: a pretty steady rise in temperature all through the period. But the number of stations is much smaller, peaking around 300 before dropping back below 200. It is interesting how the modern warming accelerates with the culling of stations (one of which, as we all know, is the station in the high-altitude capital of Bolivia). But let’s agree that in this case we see a steady increase in warming. Conversely, there is not a huge population or industrial base in this region, which has one of the lowest CO2 footprints in the world. How odd – why such a strong signal there?

Let’s move to Africa now:

Very strange indeed, and totally different from the other two. As Africa became more industrialized after WW II it actually cooled off. Not until 1979 (which coincidentally marks the culling of temperature stations) do we see signs of rampant warming. This makes little sense if CO2 was the driver. Why would it kick in all of a sudden in 1979 – seems a little late.

One thing Africa shows, which also shows up in other regions, is how an increasing number of temperature stations initially results in decreasing temperatures.

My guess: for many decades the small number of readings probably came from large population centers, and only when electricity and other modern infrastructure moved out into the country do you see the number of stations rise and expand – and then the global temperature falls. The cold spell in the middle of all these graphs could just be the march of technology out of the (urban heat island) cities to the cooler countryside and higher elevations. Something worth considering before claiming the end of the world is nigh. In Africa the station numbers grew to a peak of around 500 before being culled back by over half.

Now let’s move on to North America – which is unique for many reasons.

Like the Pacific region there is no warming here – global or otherwise. Which is probably why NASA GISS has been telling the news media there is no sign of global warming in the US and it will not show up (if it ever does) for 3-4 decades to come.

The period 1930-1960 looks to be the same as 1997-2009. Even with massive station drop-out there is no CO2-induced warming, even though the US is supposed to be the CO2-producing monster of the world! Again, how strange and incoherent with conventional wisdom.

The US once boasted over 2,500 stations in the creation of its temperature value – a strong sample size that would even out all sorts of biases and errors and UHI. Now the sample size is one tenth that amount. Why? Why go with spottier data?

One last stop on our world tour, and that is Europe:

Here we see something very bizarre – runaway global warming on an alarming scale! Europe shows the steepest ‘global’ warming of all the regions Chiefio processed (Asia was left out). But almost all of Europe’s warming coincides with the culling of stations, from a high of around 800 to around 250 now. It is interesting to note that the 1930-1940 period was about as warm as now though. Is this CO2? Doubtful.

So what does this mean globally? Some regions show very recent warming, and others show none at all. I decided to produce four global average graphs to see if I can tease out what is happening. Because of the scaling of the number of ground stations, the red line is now 10,000’s of stations instead of 1,000’s as in the previous graphs.

The first graph simply averages the yearly dT/dt for each of these regions into one global average:

Interestingly we start to see the form of the classic IPCC, CRU, NCDC, GISS graph, including a warming of 0.8°C. But what I found really interesting was how closely global temperature followed the number of stations used in the computation. In the early years, with sparse global stations, the world looked pretty cold. As the number of stations grew, the temperature followed in lockstep. Was this due to technology moving from the higher latitudes of Europe to other regions of the world?
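For reference, the unweighted combination used for this first global graph is just an arithmetic mean across regions. A minimal sketch, with hypothetical regional dT/dt values standing in for the real data:

```python
# Hypothetical yearly dT/dt values (°C) for the five regions in one year —
# placeholder numbers, not the actual results.
regional_dTdt = {
    "Pacific": 0.10,
    "South America": 0.30,
    "Africa": 0.20,
    "North America": 0.05,
    "Europe": 0.60,
}

# Equal-weight global average: every region counts the same,
# regardless of how many stations it contains.
global_avg = sum(regional_dTdt.values()) / len(regional_dTdt)
print(f"{global_avg:.3f}")  # 0.250
```

Note how a single hot region (Europe here) pulls the whole global figure up under equal weighting.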

Then around 1950-1960 (surely only coincidentally aligned with the jet age and space age, etc.) the number of stations increased explosively – and the world cooled down again. Imagine that. Is this about the time the Cold War was going global? Is this when we began putting sensors and weather stations all over the planet?

Then the great station culling took place and the temperature started to rise again. Hardly all coincidence.

By this graph alone we can see global temperature is tied more to the number of stations than to CO2 production or atmospheric levels. But what really bothered me was how high the hockey stick was rising! Did we vindicate Michael Mann? I mean, there is a 3°C increase in temp from 1980 to 2009! One would think this would be headline news.

I decided to run the same graph and this time take out Europe (which seems to be a hotbed of hot air), and was not surprised to see the world’s overheated temperature drop 1.5°C. Without Europe, cooler heads seem to take over:

But we still have the same driver here – the number (and, one would assume, altitude and distribution) of stations. What I realized is I was treating all regions as equal when averaging, which is mathematically wrong. Weighting should be based on the number of temperature readings, because that is the temperature of the world at those points. Some areas have more thermometers than others, but that should not affect the overall trend over the century. I needed to remove the regional idiosyncrasies.

So this next graph computes a weighted temperature increase (dT/dt) based on the percentage of stations used to create the value: basically, multiply each region’s share of the total stations for the year against its dT/dt. This removes the problem with the number of stations in any given year, since the numbers appeared to grow and be culled about the same in all areas. What came out of this exercise was pretty damn interesting:
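The weighting scheme described above – each region's dT/dt multiplied by its share of the total stations that year – can be sketched as follows. The station counts and dT/dt values are hypothetical placeholders, not the actual data:

```python
# Hypothetical per-region data for one year: (station count, dT/dt in °C).
regions = {
    "Pacific": (150, 0.10),
    "South America": (180, 0.30),
    "Africa": (220, 0.20),
    "North America": (250, 0.05),
    "Europe": (250, 0.60),
}

total_stations = sum(n for n, _ in regions.values())

# Station-weighted global dT/dt: each region contributes in proportion
# to its share of the world's thermometers that year.
weighted_dTdt = sum((n / total_stations) * dt for n, dt in regions.values())

# Equal-weight average for comparison.
equal_dTdt = sum(dt for _, dt in regions.values()) / len(regions)

print(f"weighted={weighted_dTdt:.3f}, equal-weight={equal_dTdt:.3f}")
```

The contrast with the equal-weight mean is the point: a region's influence now tracks how densely it is actually instrumented rather than counting the same as every other region.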

Hmmm, where did all the global warming go? Yes, we do see temperatures rising after the great thermometer extinction, but it is now not very significant at 0.225°C, and is not much different from the temperature range of 1925-1961 (a difference of 0.05°C). One could argue that without the culling there may have been very little change – but we don’t know for sure.

I changed the red line here to be the average of all the values, which comes out to +0.28°C in this case. But note how the recent spike is not all that high beyond the variation in temperature marked by the standard deviation [1] range. Moreover, the rise in temp did not build up with time as if driven by increasing CO2 levels. Everything stayed flat until the great thermometer drop-off.

So what happens if we remove pesky Europe from the rest of the global data and see what everyone else shows ‘globally’:

Under this approach, having Europe in or out doesn’t matter much. The great thermometer extinction has some minor effect, but the current warm period is not any different than 1925-1961. Coincidence? Clearly not!

Conclusions:

It is clear if we just use the temperature readings and give them equal weight there is little to no global warming detected over the last 130 years.

Instead of being driven by man-made CO2, temperature seems to follow the number of stations in use, first showing a cooling planet as technology moved out of the large cities, and then showing a warmer planet as they were removed in recent years.

Europe’s data is a mess and an outlier, and should not be used to outweigh the Pacific and North America record. It is not by accident that most or all of the warming disappears when you pull Europe out of the mix – which means there is something statistically wrong here.

What I would like to see is equal numbers of high altitude and high latitude stations compared to low altitude and low latitudes used to create a more evenly spread temperature record. I would bet a more balanced sample would show the Earth is still operating within its normal parameters, and the only source of man made warming is unproven extrapolations and adjustments.

Having access to the historic instrumental records enables me to look closely at the circumstances of each one on an individual basis and form impressions about the veracity of the data and the manner in which it is collected and used.

Firstly, whilst I appreciate that part of the reason for Dr Hansen’s GISS start date of 1880 was the number of US stations that commenced around that year, the actual scientific logic of using this date escapes me, as coverage in both hemispheres is so poor. Having written a number of articles examining the individual records, I am forced to conclude that the start date of 1880 was chosen primarily because it enabled GISS to commence from a low point rather than from the rather higher one a start date a decade earlier would have given.

However, that is by the by as I was intrigued by your comment

“I share a severe skepticism about the global climate indexes since they are 99.99+% extrapolations and adjustments – not measurements! Only .01% of the data are measurements…”

Now I have been following EM Smith’s posts for some time but don’t think I have ever seen this figure used. Did you say it for dramatic effect or is it an actual verifiable comment?

If so, I guess that many of the historic records on my site constitute a good proportion of the genuine measurements, and they are so beset by station moves and UHI that any sign of man-made warming over the last 300 years is impossible to see – in fact I would think that the 1720s were as warm or warmer than today.

So, can you clarify if your comment is meant to be colourful or factual? Thanks. Great article!

Welcome and thanks for posting. I calculated the number based on a crude experiment I did to ascertain how quickly temperature decays with distance (not to mention time – but just assume there are no ‘time of day’ issues going back to 1880!).

The post is here with some back-of-the-envelope calculations on the accuracy one might expect from a temperature reading as you go out 100 km. Depending on what confidence level you want (1, 2 or 3 standard deviations) you get different accuracy numbers, but at best I claim any reading is only good to +/- 2° over 100 km due to simple natural variability. That is the ‘error’ around all readings as you move away 100 km.

When dealing with 500×500 km grids created from a few readings inside the grid, the gridded value is probably slightly worse (I did not do the math, but I assume root-sum-squaring (RSS) the accuracy n times would suffice). This is the way GISS and others build up a regional ESTIMATE of temperature for comparison day to day, month to month, quarter to quarter and year to year – using these grids. Each comparison compounds the core uncertainty/error.
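As a sketch of the compounding being described – using the ±2 °C per-reading figure from above, an assumed 4 readings per grid cell, and the comment's own root-sum-square assumption (all illustrative numbers, not a definitive calculation):

```python
import math

# Assumed single-reading uncertainty once extrapolated out ~100 km,
# using the ±2 °C back-of-the-envelope figure from the comment above.
sigma_reading = 2.0  # °C

def rss(*sigmas):
    """Root-sum-square combination of independent uncertainties."""
    return math.sqrt(sum(s * s for s in sigmas))

# Gridded value built from n readings, per the comment's RSS assumption.
# (Strict RSS of n equal sigmas is sigma * sqrt(n); the uncertainty of a
# *mean* of n truly independent readings would instead shrink by sqrt(n).)
n = 4
sigma_grid = rss(*([sigma_reading] * n))

# Comparing two gridded estimates (day to day, year to year) RSSes the
# two uncertainties together again, so each comparison is less certain
# than either input.
sigma_comparison = rss(sigma_grid, sigma_grid)

print(f"grid ±{sigma_grid:.2f} °C, comparison ±{sigma_comparison:.2f} °C")
```

The grid step follows the comment's pessimistic assumption; the comparison step is standard uncertainty propagation for a difference of two independent estimates.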

Worse yet, they fill in 500×500 km grids that have no readings by extrapolating from nearby grids (even ones which also had no readings). This really compounds the error, since these are not measurements but predictions based on values thousands of km away.

So if you want tenth-of-a-degree accuracy (their claimed error bars over 130 years), you reverse this and realize you need a thermometer every 10 kilometers to keep the accuracy/error down to a level where you can detect tenth-of-a-degree effects outside the noise.

From there you can do the math based on land surface area and realize less than 0.01% of the land is measured, and the rest of the data is crude and inaccurate guesstimates (due to the decaying accuracy with distance).
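The coverage arithmetic can be sketched like this. The land area, station count, and spacing below are illustrative assumptions, not measured values:

```python
# Illustrative inputs — assumptions, not authoritative figures.
LAND_AREA_KM2 = 148.9e6   # approximate land surface of the Earth
SPACING_KM = 10           # spacing claimed above for ~0.1 °C accuracy
N_STATIONS = 1_500        # rough post-culling station count (assumed)

# Stations needed at one per 10x10 km cell to hold the claimed accuracy.
stations_needed = LAND_AREA_KM2 / (SPACING_KM ** 2)

# Fraction of that requirement actually met by the assumed network.
coverage = N_STATIONS / stations_needed

print(f"needed ~{stations_needed:,.0f} stations; coverage = {coverage:.4%}")
```

With these placeholder numbers the coverage works out to roughly a tenth of a percent of the requirement; the exact 0.01% figure quoted in the comment depends on the footprint one assumes for each individual reading.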

Works in orbital mechanics as well, but orbits are simple compared to generating a global index of temp daily, weekly, monthly, quarterly and annually.

BTW – I agree wholeheartedly about the silliness of going back to 1880. In another post I noted how you have to account for the antiquated technology and the inability to synchronize time globally back then (since the measurement has to be made at the same time each day).

Great site! No comments, just questions. Who made the decision to cull the weather station data in each of the regions, and on what basis? Where are the papers justifying this, and who are the authors? Who is the controlling authority? Who funds them? What happened to the weather stations that were culled? Are they still producing information available locally and databased locally by whoever is running them? Are they still operational, and are their data sets still available or accessible? Why did the number of weather stations used drop in different regions at different times in these databases you referenced? Who controls these databases, and how does one access them? Do these databases still contain the raw data from all the weather stations that were culled? Are there older papers in the literature, local or international, which reference temperature data from stations that were culled? For instance, are there any European or country-specific papers, such as Russian ones, that use data from weather stations culled from 1988-2000? Thanks to all of the people doing the investigative leg work on this issue!

There are no papers justifying the process of computing a global index from these measurements, or of working backwards in time. If there were, we would see the huge uncertainties.

Surprisingly, the controlling authorities are the governments funding these institutions. The UK funds CRU, we fund NASA GISS, NOAA NCDC and GHCN to some extent.

There are thousands of weather sites in operation producing data, and the data supposedly is still archived.

No idea about the strategy behind the culling, but I can tell you most scientists are heading in the other direction given the rise in computational power. Today’s commercial computers can easily process hundreds of thousands of data points instantly – there is no reason to lower the number of measurements if you want more accuracy.

Not sure on the rest of your questions; the data is spread around the world, but the above-mentioned groups have agreements to share data.

and as for anecdotal, observational evidence – I was joking about everyone freaking out about a little snow here in East Texas. We woke up this morning to more snow on the ground than in anyone’s living memory, it’s beautiful but also totally amazing.

By the way, I forgot to add that I believe in the relative accuracy of a thermometer when measuring the microclimate immediately around it. However, when that thermometer moves elsewhere or the area around it changes substantially, we are no longer comparing like for like.

I find the notion of sticking together thousands of peripatetic microclimate readings and believing it to be any sort of measure of a global temperature to be completely ludicrous.


[…] I noted then that the data shows the distinct possibility that global temperature cooled as thermometers actually spread outside major cities, along with technology. Instead of just taking measurements in and around human enclaves, as a result of WWII and the Cold War we saw weather stations proliferate into rural areas across the globe. It seems pretty clear the expansion of measurements could honestly be a cause for some of the cooling seen in the last century, prior to the deletion of thousands of measurement sites from the estimates of the global temperature index. […]