Back to the 6th grader whom some would prefer to believe over trained climate scientists. His analysis covers 28 rural stations showing no increase and 28 urban stations showing an increase, and is restricted to the USA.

This is a common theme in the recent spate of analyses claiming to find flaws in our temperature history: pick a very small number of stations and show a result. With thousands of stations in our temperature record, this data is a cherry-picker's goldmine. It does not even need to be done deliberately. Consider what happens if dozens or hundreds of people get curious and decide to pick a few stations at random and check the trends. Most of them will see that most of their stations have a genuine warming trend, conclude that the world really is warming, and go about their business. I have done so myself. Others will happen to pick stations from the subset that is not warming, conclude that something must be wrong, and write up their analysis.
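That selection effect is easy to demonstrate with a toy simulation (all the numbers here are invented for illustration): even if the great majority of stations show warming, small random samples will occasionally land mostly on the non-warming ones, and it is exactly those fluke samples that tend to get written up.

```python
import random

random.seed(0)

# Made-up network: 80% of 2000 stations have a genuine warming trend
stations = [True] * 1600 + [False] * 400

# 500 curious people each check 10 stations at random
flukes = 0
for _ in range(500):
    sample = random.sample(stations, 10)
    if sum(sample) <= 5:  # half or fewer warming: looks anomalous
        flukes += 1

print(flukes)  # a small but non-zero number of misleading samples
```

Only the fluke samples get reported, so the write-ups look far less like warming than the network as a whole.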

By comparison, consider an analysis based on the data collected by Anthony Watts's Surface Stations project. This analysis compares the 70 stations rated CRN 1 or 2 against the rest of the stations, and finds an identical trend.

Mike, this isn't a thread about global warming. I would have thought you'd rather comment on the trend issues raised in earlier posts instead of bringing GW into it.

If GW (in the sense of its attribution to man, which seems to be the only reason it's ever raised) is so critical to trend analysis then perhaps the moderators might consider changing the thread name. I think we need a break from the GW debate.

For mine, the older data just isn't solid enough to support reasonable conclusions about whether the temperatures of the past 30-50 years represent an average increase or decrease over longer-term trends.

The data over the past 30-50 years appears to be a little more solid but then shorter term climate cycles come into play in analysing the trends and the drivers.

I guess the question for me then is whether this thread is about measuring temperature trends only or is it also interested in trying to identify the drivers behind the trend and then predict where we are headed in the future.

Edited by Locke (11/12/2009 10:50)

_________________________
This post and any other post by Locke is NOT an official forecast & should not be used as such. It's just my opinion & may or may not be backed by sound meteorological data. For official information, refer to Australian Bureau of Meteorology products.

It could cover all sorts of things, but the thing that annoys me is the prior assumption on the part of some GW proponents that GW is the main, if not the only, driver, and that it therefore qualifies to pop up everywhere and go unchallenged simply because a group of scientists said it should.

For example, if one looks at the graphs I posted, how on earth is it possible for man's contributions to disrupt an obvious natural pattern (noting that these cycles were occurring long before CO2 became a factor)?

Anyway each to their own I guess. It's getting impossible to discuss such a thing nowadays.

For mine, the older data just isn't solid enough to make reasonable conclusions regarding whether the temperatures of the past 30-50 years represent an average increase or decrease over longer term trends.

The data over the past 30-50 years appears to be a little more solid but then shorter term climate cycles come into play in analysing the trends and the drivers.

I guess the question for me then is whether this thread is about measuring temperature trends only or is it also interested in trying to identify the drivers behind the trend and then predict where we are headed in the future.

I tend to agree with this. Really, the older data will most likely have been adjusted in some way to cater for time of day, thermometer type, etc., and the reliability of all the data, including the current, is not two decimal places; it cannot even be claimed to be accurate to one. Add to that the problem of longer-term cycles, and the temperature record is a very flimsy piece of evidence for ANY case, cooling or warming. It just should not be used as evidence or proof of anything.

When calculating averages, the measurement uncertainty carries through to the result. If you have a reading of 22.5 °C with an accuracy of plus or minus 0.2 °C, and another reading of 24.5 ± 0.2 °C, the sum is 47.0 ± 0.4 °C (worst-case errors add), so the average is 23.5 ± 0.2 °C. Independent random errors will partly cancel out as more readings are averaged, but systematic errors, such as a biased thermometer or a poor site, do not cancel at all; they carry straight through into the average. Now suppose the accuracy is only ±0.5 or ±0.6 °C: with readings of 24.5 ± 0.6 and 22.0 ± 0.6 you cannot even say with certainty whether a given reading was 22 or 24 degrees. To quote an average to one decimal place with confidence, the individual readings need to be accurate to better than that decimal place, and any systematic biases need to be known. Does that level of accuracy exist in the early record? I doubt it, and even if it was shown I would not trust it, yet still we are expected to accept 0.6 °C of warming as a result.
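The difference between random and systematic error under averaging can be checked with a quick simulation (the figures here are invented for illustration):

```python
import random
import statistics

random.seed(1)

true_temps = [22.5, 24.5] * 50  # 100 "true" readings

# Case 1: independent random error, uniform within ±0.2 °C
noisy = [t + random.uniform(-0.2, 0.2) for t in true_temps]

# Case 2: a constant systematic bias of +0.2 °C (e.g. a poorly sited screen)
biased = [t + 0.2 for t in true_temps]

true_mean = statistics.mean(true_temps)
print(round(statistics.mean(noisy) - true_mean, 3))   # close to 0: random error averages down
print(round(statistics.mean(biased) - true_mean, 3))  # 0.2: the bias carries straight through
```

So the averaging itself is not the weak point; undetected systematic biases are, which is why siting and instrument changes matter so much.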

I raised the question of Alice Springs a while ago. This is the record:

Early:

After the station close:

This is Blair's answer:

"There's two separate issues here. The site move from the PO to the airport can be and has been corrected for (the long overlap helps here). The instrument shelter problem is separate - before about 1910 a lot of instruments were in places like under tin verandahs, on walls or even indoors. This is very difficult to correct for, given that there's generally not much documentation about the former instrument position (and Alice Springs is especially difficult because of the lack of neighbouring stations)."

The average is a bad measure anyway, in that it scarcely ever reflects an actual observation. The median is better.

I think some sort of reasonable estimate is possible if we (a) start with raw data and (b) ensure that it's continuous.

As for inconsistencies due to changing instruments and the like, a simple resolution would be to run a standard two-sample t-test comparing the data from each instrument, with the null hypothesis that the data come from identical distributions. If the data do not follow the normal distribution required for this particular test, they could be transformed towards normality using logarithms, a power transform, or the like. Alternatively, a rank-based test suitable for non-normal data, such as the Mann-Whitney (Wilcoxon rank-sum) test, could be applied. All tests have their drawbacks of course, but if in the end we have to reject our null hypothesis (that the data from each instrument come from identical distributions), then I think that for the purpose of prediction we would use the data for which the instrumentation was more reliable. There should also be at least 30 years of continuous data.

What do people think of this approach? It has weaknesses, but is the principle sound?
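A minimal sketch of the two-sample test idea, using Welch's t statistic (which does not assume equal variances) on invented overlap data; the station values are made up, and the 1.0 °C step is exaggerated for illustration:

```python
import math
import random
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

random.seed(42)
# Invented overlap records from two instruments at the same site
old_instr = [random.gauss(20.0, 1.0) for _ in range(60)]
new_instr = [random.gauss(21.0, 1.0) for _ in range(60)]  # exaggerated 1.0 °C step

t = welch_t(new_instr, old_instr)
print(round(t, 2))  # |t| well above ~2 points to a real difference between the records
```

If the null hypothesis is rejected like this, one would then prefer the record from the more reliable instrumentation, as suggested above.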

The surrounding stations would be fine for getting rid of the extremes for a median, but, as with Alice Springs, for a lot of that record there is nothing within 300 km, which means it must stand on its own. You could just lop off the top 10 to 20% and the bottom 10 to 20%, but it is rather crude.
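The "lop off the top and bottom" idea is just a trimmed mean; a sketch with invented readings:

```python
import statistics

def trimmed_mean(values, trim_frac=0.1):
    """Mean after dropping the top and bottom trim_frac of sorted values."""
    s = sorted(values)
    k = int(len(s) * trim_frac)
    return statistics.mean(s[k:len(s) - k] if k else s)

temps = [21.0, 21.5, 22.0, 22.5, 23.0, 23.5, 24.0, 24.5, 35.0]  # 35.0 is a rogue reading
print(round(statistics.mean(temps), 2))    # 24.11: pulled up by the outlier
print(round(trimmed_mean(temps, 0.2), 2))  # 23.0: closer to the bulk of the data
```

Crude, as noted, but it does blunt the effect of a single rogue observation without needing any neighbouring stations.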

The crossover period between the mid-40s and the 50s is all that can be used to work out the differences in the siting. That difference seems to vary between 0 and at least 1.5 degrees over those years, so there must be some fairly messy way they actually pin the two together. You can see from the data that it will not be accurate and should never be claimed to be.
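The simplest way to "pin the two together" over a crossover period is to shift one record by the mean difference during the overlap; a sketch with invented annual means, which also shows why the join inherits the scatter of the overlap years:

```python
import statistics

# Invented annual means (°C) for the overlap years at the two sites
old_site = [28.1, 28.4, 27.9, 28.6, 28.2, 28.0]
new_site = [27.6, 27.9, 27.5, 28.1, 27.7, 27.4]

# Simplest possible joining: shift the old record by the mean overlap difference
diffs = [o - n for o, n in zip(old_site, new_site)]
offset = statistics.mean(diffs)
adjusted_old = [round(o - offset, 2) for o in old_site]

print(round(offset, 2))                   # 0.5: estimated site offset in °C
print(round(statistics.stdev(diffs), 2))  # scatter in the year-to-year differences
```

The scatter in the year-to-year differences is the uncertainty of the join: if it spans a degree or more, as suggested above for Alice Springs, the joined series cannot honestly be quoted to a tenth of a degree.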

There are newer sites, but I would think that satellites are the only information robust enough to attempt results to one decimal place.

That would be where the inhomogeneity adjustments are made. I don't know the fine workings of that but there are advanced statistical techniques used.

So we are dealing with multiple issues: distance between stations, requiring the use of a 'nearest neighbour' or similar technique, different instrumentation and exposure conditions, changes in siting, and changes in siting conditions. And that's probably not an exhaustive list.

The warming trend shown on the graphs there relates to the 1961-1990 average, which is the standard '30-year normal' used all over the world for comparative purposes.

My difficulty with this is that the data come from only 130 stations. But then what does one do: take stations that have interrupted data, with the gaps plugged by statistical inference, or a relatively small set of stations with continuous data recorded under standard high-quality criteria (for screens, exposure, etc.)? Obviously the latter dataset would be much more reliable in itself.

Now, in the video, Peter used 28 urban and 28 rural stations in the US. There's a common factor here: both the BOM graphs and the video study rely on a relatively low number of stations. That, to me, makes his selection process just as valid as any professional analysis. He used only the data that had no gaps. Would his results be any different if he had studied 130 rural stations? If, as has been suggested, he could be criticised for using only US data, what could he (or anyone) do with other worldwide data that has gaps, such as Australia's? It's essential to do 'cherry-picking' of sorts simply to get data that are reliable enough. This has nothing to do with preconceptions, as some like to think. And I doubt anyone would want to go systematically through every single US station and remove from the analysis only the ones that fitted their preconceptions about warming. To maintain the allegation of 'cherry-picking' on the part of sceptics, that is what would have to be done. Otherwise, how could one possibly know which was which?

Maybe what has to be done, as far as Australia's GISS data is concerned, is to study the 130 BOM stations again, only this time taking a longer average, over a period during which all the stations were operational. It would be interesting to see what happens. But the bottom line is that we do not have enough continuous data from all over the world to be making dogmatic statements about warming. We can only look at individual long-running station data and draw a conclusion about that station alone.

I see no list of which BOM sites are used, Keith, nor whether they have been adjusted, nor how many urban and non-urban sites are included. I would like to know which ones are used and whether the original data are available.

And to a long-ago post: Mike, I would believe whoever plots the data correctly, whoever that is. Australian data seems to mirror those USA results re urban and country, from what I looked at for many or most sites. Seeing as many "scientists" alter, adjust and amend data, I would believe satellite data only... although even there one could manipulate the data; one would hope not, though. It is getting harder and harder to believe any data on temps, actually. Give me the original data and I can then make up my own mind; don't give me GISS or any other series that has been adjusted, as no one knows in reality whether their adjustments are correct or subconsciously biased. Reminds me of the independent study done when folk knew which samples belonged to what and all came to a certain "definite" conclusion; then they tested the samples again, not knowing which was which, and came up with a completely different conclusion!

I found a list of sites here. There's more information here. There are links to daily and annual temperature data. A quick look suggests these aren't quite the same as the 130, probably because some might only be rainfall stations, and others only temperatures.

From the PDF information brief accompanying the daily data, 99 of the stations listed are non-urban. Adjustments have been made to facilitate the examination of extremes, so this might not be the best dataset for general purposes. Anomalies are expressed with reference to the 1961-1990 normal; however, it's easy enough to calculate a mean for a different (longer) time frame. Indeed, decadal means might be worthy of comparison (but a lot more work!).
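Changing the base period of an anomaly series only requires subtracting the series' mean over the new reference window; a sketch with invented anomalies:

```python
import statistics

# Invented annual anomalies (°C) relative to the 1961-1990 normal
anoms = [-0.3, -0.1, 0.0, 0.2, 0.1, 0.4, 0.3, 0.5]

# Rebaseline to the first four years: subtract that window's mean anomaly
new_base = statistics.mean(anoms[:4])
rebased = [round(a - new_base, 2) for a in anoms]

print(rebased)  # same shape, shifted so the new window averages to zero
```

The series keeps its shape; only the zero line moves, so trends are unaffected by the choice of base period.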

Also, there appears to be some limit on the time period due to digitisation issues (gosh, if I were dyslexic I'd read that as 'degustation'!).

The data files (.tar format) will require a software package to unpack them; I don't think .tar is supported natively by Windows. I use a program called 7-Zip, a freeware package available from this link.
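Alternatively, Python's standard library can unpack .tar archives without any extra software; a minimal sketch (the archive name here is hypothetical):

```python
import os
import tarfile

ARCHIVE = "bom_daily_temps.tar"  # hypothetical file name

# Extract everything from the .tar into a directory of the same name
if os.path.exists(ARCHIVE):
    with tarfile.open(ARCHIVE) as archive:
        archive.extractall(path="bom_daily_temps")
```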

By the way, I assume that the 'raw' data described as such at the GISS data URL is exactly that: unmodified. If it isn't, it ought to be!

It seems to me that the huge drop in the mean with the pre-1910 equipment over its time would indicate that the pre-1910 temps were above the 'average' (the one established later by the so-called HQ site at the airport between the mid-40s and 60s, where the record is rather flat), not under it. In any case, the early data should not be ignored when it is being relied on so heavily for later trends. You either have to integrate the records properly, or at least attempt to, and then use the whole series as a base, or you must ignore both and put them in the 'not enough data' category.

I have never understood how 30 years could be considered a base period for any calculation when you have cycle frequencies outside its range. In fact, to resolve a cycle you need a record at least twice as long as its period (the same factor of two as the Nyquist rate in sampling):

Which means that if it is possible for an ocean cycle to run longer than 15 years, then you will not have a true understanding of trends using 30 years as a base. Perhaps the 30 years was just established when climate was considered extended weather and the PDO etc. were vague theory. I think it is time to revisit this, and the whole understanding of the climate system, instead of this bunkering down and setting everything in stone.
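The point about windows shorter than the cycles they contain can be illustrated with a toy series: fit a straight line to a 30-year slice of a pure 60-year sine cycle, with no underlying trend at all, and you still get a sizeable apparent "trend". The amplitude and period here are invented:

```python
import math

def linear_slope(ys):
    """Ordinary least-squares slope of ys against 0, 1, 2, ... (per step)."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# A pure 60-year cycle, amplitude 0.3 °C, zero long-term trend
series = [0.3 * math.sin(2 * math.pi * year / 60) for year in range(120)]

# Fit a trend to a 30-year window sitting on the rising limb (years 45-74)
window = series[45:75]
print(round(10 * linear_slope(window), 2))  # apparent "trend" in °C per decade
```

The fitted slope comes out around 0.2 °C per decade even though the true long-term trend is exactly zero; a 30-year window simply cannot separate a trend from a longer cycle.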

"About 2/3 of the nation and 1/2 of North America have snow cover according to the latest satellite analysis from NOAA. Just as impressive is Russia being 90% covered with snow extending into China and west to our friends in Copenhagen attending the one-world government meetings there. Did you hear, there is now talk of shifting the focus to Nitrogen? The fear-mongering never ends...

On the plus side, the arctic ice that Al Gore was so worried about has returned to levels as great as any we have seen since satellites started measuring it in 1979, and we never lost the 40% that he claimed, and that is explained here. http://icecap.us/index.php

Back to the snow cover. While it is covering huge tracts of land, it's not unusual to have that much snow on the ground, although the recent blizzard did set records for cold after snowfalls of 10-20 inches and snow drifts up to 15 feet! The snow may not be terribly unusual, but it is when you consider that we're supposedly seeing global warming (which we're not, and never were) and considering the El Nino going on in the Pacific Ocean. Normally, El Nino patterns lead to milder air across the United States, but this month has been colder in just about every part of the country, except Florida.

A new arctic blast is already developing over western Canada and you can see some of that bitter cold there now. Look at the bitter cold in Russia! http://www.weatherforyou.com/cgi-bin/hw3...as&hwvMapUnits= Depending on when you read this story, the maps will change, but check them through the day and note the readings of 30 to 50 below zero...and remember that we're only in the first month of winter 2009-2010.

The bigger player this year is likely the amazingly weak sun. We just completed a 14-day period without sunspots followed by this weak one today. More amazing is that we should have hit the sunspot minimum in late 2006 or early 2007, yet the solar decline continues.

The new and expanded snow cover across the continent will feed developing storms and make it easier for new arctic air masses to form and move south into the United States, so don't look for a change in the cold pattern anytime soon..." http://www.examiner.com/x-3854-Cincinnati-Weather-Examiner~y2009m12d11-Expanding-snow-cover?cid=examiner-email