Willis: Reply to the Economist

This replaced the previous sticky, and I have this comment about the Economist. Bad form and unprofessional to use the word “denialists”. For WUWT readers who wish to complain: letters@economist.com or use their online form here. – Anthony

AN OPEN LETTER TO THE ECONOMIST

On Dec 11th, the Economist published an unsigned article attacking both me and my work. This open letter is my reply.

But I digress … you begin by quoting from the Australian Bureau of Meteorology, viz:

A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.

The impacts of these changes on the data are often comparable in size to real climate variations, so they need to be removed before long-term trends are investigated.

While this is true, it doesn’t apply. None of the GHCN adjustments come from any of those sources, because the GHCN does not adjust for location moves, for construction of buildings, or for any of the other items listed. So all of that is totally meaningless.

Next, you say the “explanation for the dramatic change in 1941 is simple”:

As previously advised, the main temperature station moved to the radar station at the newly built Darwin airport in January 1941. The temperature station had previously been at the Darwin Post Office in the middle of the CBD, on the cliff above the port. Thus, there is a likely factor of removal of a slight urban heat island effect from 1941 onwards. However, the main factor appears to be a change in screening. The new station located at Darwin airport from January 1941 used a standard Stevenson screen. However, the previous station at Darwin PO did not have a Stevenson screen. Instead, the instrument was mounted on a horizontal enclosure without a back or sides. The postmaster had to move it during the day so that the direct tropical sun didn’t strike it! Obviously, if he forgot or was too busy, the temperature readings were a hell of a lot hotter than it really was.

This might make sense if there were any “dramatic change in 1941”. But as I clearly stated in my article, <b>there is no such dramatic change</b>. The drop in temperature was gradual and lasted from 1936 to 1940. The change from 1940 to 1941 was quite average. So that claim of yours is nonsense as well. In any case, the change in screening did not coincide with the 1941 move. In my article I cited a reference to a picture of a Stevenson Screen in use in Darwin at the turn of the century. Perhaps you didn’t bother to read that.

So, to sum up your first arguments, changes in the Stevenson Screens and other local conditions cannot be the explanation for any of the GHCN adjustments because 1) the GHCN doesn’t use local conditions to make adjustments and 2) the timing of the screen change is wrong. In addition, there was no “dramatic change in 1941”.

Next, you point out two actual mistakes I did make.

First, in my proofreading I did not catch that I had written “the 1941 adjustment” when I meant the 1930 adjustment. That should have been obvious to me, because there is no 1941 GHCN adjustment. My bad.

Second, I had said that the Darwin temperature data couldn’t have been adjusted by using the GHCN method, which requires five neighboring stations to which Darwin can be compared. Why couldn’t the GHCN method be used? I said it was because in the earlier time periods like the 1930s, there were no such stations covering that time period within 500 km of Darwin. I was wrong; in fact, there is one such station.

Neither of these errors of mine affects my point, which is that there are not enough neighboring stations to adjust Darwin using the main GHCN method. The GHCN folks themselves mention this possibility.

Unfortunately, they adjusted Darwin anyway. Consider the GHCN adjustment in 1920. To find five stations around Darwin covering 1920, you have to go out 1,250 km. Nor is there any guarantee that those stations will be suitable. You need to have five stations with an 80% correlation with the Darwin record … I wish you the best of luck finding those five stations.

So while my statement about stations nearer than Daly Waters was wrong as you point out (there is one nearer station that covers the 1930 adjustment), my point was correct – there are not enough neighboring stations to adjust Darwin using the GHCN method. The first GHCN adjustment to Darwin was a single-year adjustment in 1901. To get five “neighboring” stations for that adjustment, you have to go out 1,728 km. You fail to deal with that issue at all. Instead, you say:
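The first step of that neighbor search – screening candidate stations by great-circle distance – is easy to make concrete. Here is a minimal Python sketch; the coordinates below are illustrative placeholders (only Darwin’s is approximately real), not actual GHCN station locations, and the 80% correlation screen would still have to be applied to whatever survives the distance cut:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def neighbors_within(target, stations, radius_km):
    """Return the (name, lat, lon) stations within radius_km of target."""
    return [s for s in stations
            if haversine_km(target[0], target[1], s[1], s[2]) <= radius_km]

darwin = (-12.46, 130.84)          # roughly Darwin
stations = [
    ("A", -14.45, 132.27),         # ~270 km away
    ("B", -16.00, 132.00),         # ~410 km away
    ("C", -23.70, 133.88),         # ~1,290 km away
]
close = neighbors_within(darwin, stations, 500)
print([s[0] for s in close])       # only A and B survive a 500 km cut
```

If fewer than five candidates survive both the distance and the correlation screens, the main adjustment method simply has nothing to work with – which is the point being made above.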

“So is it reasonable, if the GHCN is using complex statistical tools to adjust the temperature readings at Darwin based on surrounding stations, that they might come up with the figures they came up with? Sure. No. Yes. I have no idea. And neither does Mr Eschenbach. Because in order to judge that, you would have to have a graduate-level understanding of statistical modeling. … I don’t understand that formula. I don’t have the math for it.”

And while I am sorry to hear of the lacunae in your math education, please don’t make the foolish assumption that others are similarly limited. I have no problem with the GHCN math. If you truly have no idea on the question as you say … then why are you excoriating my ideas on the question?

Nor is it inherently a complex question. The question is, should temperatures more than a thousand km away from Darwin be used to arbitrarily adjust Darwin’s temperature by a huge amount? You don’t have to be a rocket scientist to figure that out.

Next, as an aside you make the scurrilously false statement that I said that claims of damage due to sea level rise in Tuvalu “stemmed from attempts by locals to blame subsidence problems on the developed world, and cash in on it”. I said no such thing nor do I believe it. The claims stemmed from misguided environmentalists.

Next, you say “He makes it sound as if he’s just happened to stumble across this one site whilst perusing a debate over climate change in northern Australia. But as his link to that conversation from 2000 makes clear, Mr Eschenbach is already aware that climate change denialists have been trumpeting the apparent anomalies at Darwin for nine years.” BZZZZT, poor understanding of the implications of chronology. I looked up the conversation after I stumbled across Darwin. How dare you accuse me of lying? As I said, I went to AIS to see if what Professor Karlen had said was true. I called up a list of all of the stations in Australia that covered 1900-2000. Darwin was the first on the list, so that’s the one I looked at. Try it and see. Your accusation is both wrong and totally unfounded.

You go on to say

“[Climate change denialists] do so because of that errant data at Darwin from before 1941, which makes it look as though there was a cooling trend there. The fact that climate-change researchers have to do a particularly strong correction on the data at Darwin, because they moved their dang instruments from the downtown post office to the airport, makes Darwin a perfect place to look for support if you want to claim that climate-change scientists are cooking the data.”

While a correction in Darwin is perhaps necessary, it cannot be because they “moved their dang instruments” in January of 1941. LOOK AT THE DATA. There is no big change in January of 1941. It occurred gradually over the previous five years. So your theory falls apart upon the simplest examination of the facts.

Next you say: “Average guys with websites can do a lot of amazing things. One thing they cannot do is reveal statistical manipulation in climate-change studies that require a PhD in a related field to understand.”

Your understanding of statistics is as poor as your understanding of chronology. The statistics used by GHCN are average college level tools. You are dazzled by the fact that you don’t understand them, so you make the incredibly foolish assumption that no one without “a PhD in a related field” can understand them either. Some of us actually paid attention in class, you know.

Finally, you use your closing arguments to cheerlead for peer review. Curiously, I agree with you in theory … but the peer review system in climate science is badly broken. First, as the CRU emails clearly show, it has been subjected to enormous “old-boy” pressures to pass through bad studies without a second glance, and to deny opposing papers a fair hearing. How do you think we got the Hockeystick and its cousins? Here’s Phil Jones from the emails, talking about keeping peer-reviewed papers out of the IPCC report:

I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!

And you give your article the URL “trust_scientists”???

Second, the peer-review system is ripe for abuse. This is because the reviewers often know who the author is, while the reviewers (like you) hide in anonymity. This invites malfeasance. The system needs to be changed so that after the review, the reviewers sign their names to the paper as well as the authors. At present, we have no way of knowing whether the paper was seriously reviewed by inquiring scientists, or simply passed through by the author’s friends.

So I, like you, support peer review. I just want a peer review system that works. It must be double blind during the review period, with neither the reviewers nor the author knowing the others’ names. And the reviewers should reveal their names at the end, so that we know it wasn’t just the author’s best mates doing the author a favor.

Since we have an easily manipulated system instead of a real peer review system, I opt for public peer review by putting my work on the web. This lets anyone, even anonymous innumerates like yourself, register their objections.

Finally, the Economist did not contact me before publishing an article full of false accusations, incorrect assumptions and wrong statements … looks like peer review is not the only system in trouble here. I thought journalists were under an obligation to check their facts before making accusations …

487 thoughts on “Willis: Reply to the Economist”

Isn’t there a way to add Facebook shortcut links to your posts/blog?
I know I for sure would be more prone to share your link on the net this way. Sorry, I don’t know if that is possible, just wondered.
HLx

good – but will the author address the criticisms leveled in the comments – The most obvious being the rebuttal to his point about the absence of stations within 500km……. It is a great analysis – but let’s make sure it is right eh?

Do you think the CNN reporters will understand fraud when it is in black and white in front of them? I hope so, but I will not hold my breath.
Steve M is a Canadian genius who has almost single-handedly stopped the biggest scientific lie of our age, but the media are finding it hard to grasp the significance (yet). (Ditto Ottawa, of course.)

That’s right Anthony, I was thinking the same – this brilliant post should not be drowned by the terrible noise from Copenhagen. We know that the warming in New Zealand over the past 156 years is an illusion, as it is for the US:
Journalists: please take your time to watch this video.

How about making a list of important stories – in order of importance – and sticking it on top, so that they don’t get buried.
You may need some help with the order of importance. The one on the Medieval Warm Period, for example; the clearest explanation of Mann’s Hockey Stick; Briffa’s Yamal; the various upside-down ones; Richard Lindzen’s piece on the radiation budget; Lord Monckton? Warren Meyer from Climate Skeptic has given a very good piece on Catastrophe Denied – excellent, if a bit long.

Why does it seem that every time a station’s raw vs. “value-added” data get compared by an independent source, we find unwarranted upward adjustments…? Seems like you can’t swing a dead cat without hitting bad data…
I used to think the problem was Correct Data –> Garbage Code –> Semi-Garbage Out
Now it appears to be Garbage Data –> Garbage Code –> Garbage2 Out
G2O – the new global warming gas …

hi, first timer here, Watts as in the Reagan era, use it up before our dating of all datage the apocallous dote.
‘The’ climate(change)? The world, the supremacy? All hogwash
how bout changing your microclimate one 5 story familytreehousing foundation, fun and funding package at a time (also known as seed, funny moneylike).
i gather you gather a lot of negativity here (from a 40 sec look, connectivity rationed to punish yall for too little support, mind you, have yet to write a will to make up for not dribbling this and thataways already).
anyhoo, just to save myself fruitless searching and frust:
(check the discussion page too)
http://en.wikipedia.org/wiki/Talk:John_D._Hamaker

Forget about Harry: he’s only trying to fix Tim’s code.
“Although I have yet to see any evidence that climate change is a sign of Christ’s imminent return, human pollution is clearly another of the birth pangs of creation, as it eagerly awaits being delivered from the bondage of corruption (Romans. 19-22).”
Tim Mitchell works at the Climatic Research Unit, UEA, Norwich, and is a member of South Park Evangelical Church.
http://www.tickerforum.org/cgi-ticker/akcs-www?post=118625&page=13

I believe what we need to do now is to start a surfacestations-type project that is focused on finding the raw daily data for the actual surface stations around the world. Since we cannot trust the data anymore, and we can’t trust the custodians of the data, let’s build an open-source, station-by-station database of the RAW daily data.
A rebuilding of the US records database with the original data is necessary: a PDF of the original data for each station, with a txt file holding the same raw data and a Google map location for the station.
While some of the stations are available for free, some of them are behind government paywalls. That is not an issue for someone with a .gov email addy. I have one of those available, if we know which stations we want.
This would allow us to build a public wiki-type project around the raw data. If we can get auditors for Australia and N.Z. locations to show what the known fudges are there, we can make this a global audit of a public database of raw temps.
I would suggest that any time a station is moved, we treat it like a new station. That is, each data series run is left to itself and designated as such. This way we can see what the *REAL* raw trend is.
It’s time to FREE the DATA, MAKE it PUBLIC, and then we can argue about WHAT steps are applied to it.
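The “treat each move as a new station” idea is simple enough to sketch. The Python below uses an invented record with a made-up 1.0 °C step at a hypothetical 1941 move; it splits the series at documented move years and fits a trend to each segment separately:

```python
def trend_per_decade(years, temps):
    """Ordinary least-squares slope of temps against years, in degrees per decade."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 10 * num / den

def split_at_moves(record, move_years):
    """Split a [(year, temp), ...] record into segments at documented move years."""
    segments, current = [], []
    for year, temp in record:
        if current and year in move_years:
            segments.append(current)
            current = []
        current.append((year, temp))
    segments.append(current)
    return segments

# Invented record: true trend +0.01 C/yr, plus a spurious -1.0 C step
# at a hypothetical 1941 station move.
record = [(y, 25.0 + 0.01 * (y - 1900) - (1.0 if y >= 1941 else 0.0))
          for y in range(1900, 1961)]
for seg in split_at_moves(record, {1941}):
    ys, ts = zip(*seg)
    print(ys[0], ys[-1], round(trend_per_decade(ys, ts), 3))
```

On this toy record, the spliced series shows a spurious cooling trend, while each segment recovers the true 0.1 °C/decade warming that was built into the data.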

bill (10:19:42) – interesting observation – but I think you will find it is the different scales on the three plots that are misleading you. I’ve printed them out and, taking a sample date series that is displayed on each graph (1970–2000), I reckon they line up pretty well. The compression of the “y” axis is pretty severe and this is hiding some of the smaller peaks and troughs, I think.
But yes a full analysis would need the raw data in a spreadsheet I guess…:(

Shouldn’t homogenization generally enhance a temperature trend?
Trends are known to be much stronger on land masses than on oceans.
In a network of land-based temperature stations, coastal locations are peripheral and inland locations are central.
Shouldn’t that, in general, lead to a higher use of trend-enhancing inland locations to homogenize others compared with coastal locations? That should result in a warming bias.
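That geometric hunch can be checked on a toy network – a line of evenly spaced stations, with the two ends standing in for coastal sites. Counting how often each station is picked as one of the two nearest reference neighbors of the others (a deliberately crude stand-in for the real GHCN neighbor weighting):

```python
from collections import Counter

# Stations spaced evenly along a line; the two ends stand in for coastal
# sites, interior points for inland sites.
positions = list(range(11))

def nearest(others, p, k=2):
    """The k stations closest to position p."""
    return sorted(others, key=lambda q: abs(q - p))[:k]

usage = Counter()
for p in positions:
    others = [q for q in positions if q != p]
    for q in nearest(others, p):
        usage[q] += 1

print(usage[0], usage[5])  # end station used once, interior station twice
```

On this toy layout the end stations are used half as often as interior ones, which is the asymmetry the comment is pointing at; whether that biases real homogenization depends on the actual weighting scheme.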

I’m sure this has already been answered before, so apologies for being a n00b.
Is there some stated algorithmic, reproducible method for deriving an “adjustment curve” from the data to apply to the data, or is it officially acknowledged as being done “by eye”? That is, is it purely judgement calls we’re looking at here, or is there an algorithm that any suitably skilled person could apply to derive an adjustment curve?

I’m wondering if it isn’t recursive homogenization.
A station in question is homogenized using surrounding stations. Are those surrounding stations homogenized beforehand? How? Using surrounding stations, including the one to be homogenized? So once it starts with up’ing one station, its surrounding stations will be up’ed, which will be used to up the first station further, and so on, causing runaway global warming 🙂
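The feedback this comment asks about is really a question of processing order, and that much can be demonstrated with an invented adjustment rule (nudging each station halfway toward the mean of the others – not the actual GHCN reference-series method). Adjusting stations in place, so that later stations see already-adjusted neighbors, gives a different answer than adjusting them all from the original values:

```python
def homogenize_sequential(values, weight=0.5):
    """Adjust each station toward the mean of the others, in place,
    so later stations see already-adjusted neighbors."""
    vals = list(values)
    for i in range(len(vals)):
        others = [v for j, v in enumerate(vals) if j != i]
        vals[i] = (1 - weight) * vals[i] + weight * sum(others) / len(others)
    return vals

def homogenize_simultaneous(values, weight=0.5):
    """Adjust every station using only the original neighbor values."""
    out = []
    for i in range(len(values)):
        others = [v for j, v in enumerate(values) if j != i]
        out.append((1 - weight) * values[i] + weight * sum(others) / len(others))
    return out

raw = [20.0, 21.0, 25.0]  # invented anomaly readings for three stations
print(homogenize_sequential(raw))   # order-dependent result
print(homogenize_simultaneous(raw)) # order-independent result
```

Neither toy run actually “runs away” – the averaging here is contractive – but the sequential version does let one station’s adjustment propagate into the next, which is exactly the dependency one would want documented.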

OT: The Met Office released a subset of the HADCRU3 data at the following link:
http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html
From the Q&A:
“1. Are the data that you are providing the “value-added” or the “underlying” data?
The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non-climatic influences, for example changes in observation methods.
2. What about the underlying data?
Underlying data are held by the national meteorological services and other data providers, and such data have in many cases been released for research purposes under specific licences that govern their usage and distribution.
3. Why is there no comprehensive copy of the underlying data?
The data set of temperatures back to 1850 was largely compiled in the 1980s when it was technically difficult and expensive to keep multiple copies of the database.”
This is not the end – looks like this is only the start of the homogenising debate.

“they’ve been completely above board and objective in what they have carried out…”
“there’s no definitive evidence, is there, for global warming?”
“By the time we get definitive evidence, I think it will be much too late…”
The sleazy politician, pseudo-scientist, Pachauri, gives more than enough cause to defund the IPCC and pursue charges for malfeasance in office.
An example of intellectual stupidity: there is no definitive evidence (scientific fact), but that is beside the point.
Peter Liss competes with Pachauri by saying to the skeptics: prove that global warming/climate change is not caused by man-made carbon dioxide emissions.
These guys are practicing criminal science.
————————————————————-
IPCC chief Dr Rajendra Pachauri tells CNN’s Becky Anderson controversial climate e-mails were just colleagues “letting off steam”
Climate chief dismisses ‘climategate’
http://edition.cnn.com/video/#/video/world/2009/12/07/pachauri.ipcc.anderson.intv.cnn

Ian B (11:49:49) :
I’m sure this has already been answered before, so apologies for being a n00b.
Is there some stated algorithmic, reproducible method for deriving an “adjustment curve” from the data to apply to the data, or is it officially acknowledged as being done “by eye”? That is, is it purely judgement calls we’re looking at here, or is there an algorithm that any suitably skilled person could apply to derive an adjustment curve?
I ran a test series once. Afterward, I sent a critical instrument out for calibration. It was off by a fixed amount. In the test report, I presented the raw data (measurements), the cal lab results, and the adjusted data. That’s the right way to do it.
For the met data in question here, it would involve running the new system/location and the old system/location in parallel to obtain “calibration” data to be used in converting from one data set to the other.
One of the constant complaints about the AGW “scientists” is precisely that they:
1. Conceal/withhold the raw data.
2. Don’t explain/ disclose their conversion methods.
If it’s being “done by eye,” without any disclosure, it’s fraud.
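The parallel-run “calibration” described above amounts to estimating a fixed offset over the overlap period and carrying both raw and adjusted values forward. A minimal sketch with made-up numbers:

```python
def overlap_offset(old_series, new_series):
    """Estimate a fixed offset from a parallel run of the old and new
    sites: the mean difference over the overlap period."""
    diffs = [n - o for o, n in zip(old_series, new_series)]
    return sum(diffs) / len(diffs)

# Hypothetical overlap readings: the new site runs about 0.6 C cooler.
old = [30.1, 29.8, 30.4, 30.0]
new = [29.5, 29.3, 29.7, 29.5]
off = overlap_offset(old, new)
adjusted_old = [o + off for o in old]  # old record expressed in new-site terms
print(round(off, 3), [round(v, 3) for v in adjusted_old])
```

The key point in the comment survives the sketch: publish the raw series, the offset, and the adjusted series side by side, so anyone can redo the conversion.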

Wow, today even the coolest web page with news from the computer HW and SW industry covered ClimateGate. The Inquirer is surely the most witty, well-written and technically superior web page that tells you what’s hot in Silicon Valley. Please read the post by PaulW, “I’ve seen the code”:
http://www.theinquirer.net/inquirer/news/1565081/hackers-target-climate-science-boffins
“You don’t get a hockey stick when you 1) don’t use faked data, 2) include all the data (no cherry picking allowed, don’t use 5 outlier trees as your sole data points for the present, as they did! and they hid it for years!) 3) don’t use some data upside down, which they did, or 4) splice in thermometer data at the end of proxy data. I could go on…”

At last, playing in the right ball-park.
——————————–
What Is — and What Isn’t — Evidence of Global Warming
William M. Briggs
http://pajamasmedia.com/blog/what-is-%e2%80%94-and-what-isnt-%e2%80%94-evidence-of-global-warming/?print=1
“Climategate” has everybody rethinking global warming. Many are wondering — if leading scientists were tempted to finagle their data, is the evidence for catastrophic climate change weaker than previously thought?
Actually, the evidence was never even evidence.
There is a fundamental misunderstanding — shared by nearly everybody — about the nature of anthropogenic global warming theory (AGW): over exactly what constitutes evidence for that theory and what does not.
Remember when we heard that the icebergs were melting, that polar bears were decreasing in number, that some places were drier than usual and that others were wetter, that the ocean was growing saltier here and fresher there, and that hurricanes were becoming more terrifying? Remember the hundreds of reports on what happens when it gets hot outside?
All of those observations might have been true, but absolutely none of them were evidence of AGW.
Diminishing glaciers did not prove AGW; they were instead a verification that ice melts when it gets hot. Fewer polar bears did not count in favor of AGW; it instead perhaps meant that maybe adult bears prefer a chill to get in the mood. People sidling up to microphones and trumpeting “It’s bad out there, worse than we thought!” was not evidence of AGW; it was evidence of how easily certain people could work themselves into a lather.
No observation of what happened to any particular thing when the air was warm was direct evidence of AGW. None of it.
Every breathless report you heard did nothing more than state the obvious: Some creatures and some geophysical processes act or behave differently when it is hot than when it is cold. Only this, and nothing more.
Can you recall where you were when you heard that global warming was going to cause an increase in kidney stones, more suicides in Italy, larger grape harvests in France, and smaller grape harvests in France? How about when you heard that people in one country would grow apathetic, that those in another would grow belligerent, and — my favorite [1] — that prostitutes would be on the rise in the Philippines? That the world would come to a heated end, and that women and minorities would be hardest hit?
Not a single one of these predictions was ever evidence of AGW.
For years, it was as if there was a contest for the most outlandish claim of what might happen if AGW were true. But no statement of what might happen if AGW is true is evidence for AGW. Those prognostications were only evidence of the capacity for fanciful speculation. Merely this and nothing more.
So if observations of what happens when it’s hot outside don’t verify AGW, and if predictions of what might happen given AGW were true do not verify AGW, what does? Why did people get so excited?
In the late 1990s, some places on Earth were hotter than they were in the late 1980s. These observations were indirect — and not direct — evidence of AGW. The Earth’s climate has never been static; temperatures sometimes rise and sometimes fall. So just because we see rising temperatures at one point does not prove AGW is true. After all, temperatures have been falling [2] over the last decade, and AGW supporters still say their theory is true. Rising — or falling — temperatures are thus consistent with many theories of climate, not just AGW.
Climate scientists then built AGW models, incorporating the observed temperatures. They worked hard at fitting those models so that the models could reproduce the rising temperatures of the 1990s, while at the same time fitting the falling temperatures of the 1970s, etc. They had to twist and tweak — and with the CRU emails [3], it now appears they twiddled. They had to cram those observations into the models and, by God, make them fit, like a woman trying on her favorite jeans after Thanksgiving.
They then announced to the world that AGW was true — because their models said it was.
But a model fitting old data is not direct evidence that the theory behind the model is true. Many alternate models can fit that data equally well. It is a necessary requirement for any model, were it true, to fit the data; but the fact that it happens to fit is not proof that the model is valid.
For a model to be believable it must make skillful predictions of independent data. It must, that is, make accurate forecasts of the future. The AGW models have not yet done so. There is, therefore, no direct evidence for AGW.
The models predicted warmer temperatures, but it got cooler. One of the revealed CRU emails found one prominent gentleman saying, “We can’t account for the lack of warming at the moment and it is a travesty that we can’t.”
It is. But only if you were concerned that the AGW theory will be nevermore.

Sorry for the OT. Very interesting, from Newsbusters:
http://newsbusters.org/blogs/terry-trippany/2009/12/08/met-global-warming-u-s-csv-data-dump-erroneous
Major Media Spread Erroneous U.K. Temperature Data
By Terry Trippany (Bio | Archive)
December 8, 2009 – 15:12 ET
If you go to Google today and search on the phrase “warmest decade” you will get a result set with thousands of breathless articles claiming that 2000 to 2009 is the warmest decade on record. This is on the cusp of an announcement from the UN climate talks in Copenhagen where world leaders are desperate to speed past Climategate and refocus the world’s attention on their apocalyptic global warming agenda.
The media that couldn’t bring themselves to report on the growing scandal surrounding falsified data are all on board with reporting this latest news. Yet it is clear that the Huffington Post, CBS News, the New York Times and others didn’t even bother to check the data that was released from the UK MET (UK Government Department of Climate and Weather Change). If they had, they would have immediately discovered what I found: that the US csv (comma-delimited) data dump from 1851 to 2009 is erroneous in its compilation. The January column for each year shows period information instead of temperature records, and the latitude appears transposed as well. It appears that they incorrectly shifted the column headers when compiling the dump. (Load the raw file into Excel and compare it with the UK csv data to see the erroneous data columns side by side. Data provided by the Guardian UK.)
This data was provided as a means to “dampen the row over the hacked climate science emails”.
The Met Office also released the raw data from around 1,500 global monitoring stations in an effort to satisfy critics who have demanded that researchers be more transparent with their data in the wake of the email hacking row at the University of East Anglia. (UK Guardian, Met Office figures confirm noughties as warmest decade in recorded history)
Don’t jump to any conclusions that this is some sort of conspiracy or that the data itself is incorrect. That can not be surmised without extra information. It is just another indication of the desperation by a group of scientists, policy makers and scare mongers that are too sloppy to check their own facts and figures; even before releasing it to the whole world as proof to counter valid questions concerning the validity of their data.
With full unquestioning faith we are expected to buy into this sloppy sort of science as a foundation for making public policy that will affect the nations of the world for generations to come. If they can’t even get the release of the simplest of data sets correct, then what are we to think about the things they got wrong?

Anthony,
On a similar theme, but with a slight digression: I don’t know if you have seen this before, but it seems that you have been concentrating too much on Climategate and may have been pipped at the post by a sixth grader called Peter, who has made a direct comparison of rural sites with urban sites, with very interesting results.
http://www.youtube.com/user/TheseData#p/a/u/1/LcsvaCPYgcI
I picked it up in a thread by gjg on
http://www.foresight.org/nanodot/?p=3553#comment-865483
Unfortunately it seems not to have been “Peer” reviewed but has most definitely been “Pater” reviewed.

Copenhagen climate summit in disarray…
“The UN Copenhagen climate talks are in disarray today after developing countries reacted furiously to leaked documents that show world leaders will next week be asked to sign an agreement that hands more power to rich countries and sidelines the UN’s role in all future climate change negotiations.”
Wait till they figure out the whole reduced food thingy.
http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-summit-disarray-danish-text

Is Google censoring? Now I only show 8 mill hits for climategate. This am it was 34 mill, which is also low in my opinion. Now it auto suggests “climategate-Copenhagen” and the articles in the lead are all pro GW? Seems suspicious to me.

Climategate: Obama’s Science Adviser Confirms the Scandal — Unintentionally
Posted By Myron Ebell On December 5, 2009
http://pajamasmedia.com/blog/climategate-obamas-science-adviser-confirms-the-scandal-%e2%80%94-unintentionally/?print=1
I remember when Copenhagen Diagnosis came out because nearly every major paper ran a story on it. Global warming is happening even faster than predicted, the impacts are even worse than feared, and that sort of thing. I also remembered that the authors of Copenhagen Diagnosis included many of the usual conmen who are at the center of the alarmist scare. So I asked my CEI colleague Julie Walsh to compare the list of authors of Copenhagen Diagnosis with the scientists involved in Climategate.
I’m sure it will come as a shock that the two groups largely overlap. The “small group of scientists” up to their necks in Climategate include 12 of the 26 esteemed scientists who wrote the Copenhagen Diagnosis. Who would have ever guessed that forty-six percent of the authors of Copenhagen Diagnosis [6] belong to the Climategate gang? Small world, isn’t it?
Here’s the list of tippity-top scientists who both wrote the authoritative report that Holdren relied on to support his statements and belong to the “small group of scientists” who are now suspected of scientific fraud:
Nathan Bindoff, also a lead author of the UN Intergovernmental Panel on Climate Change’s 2007 Fourth Assessment Report (hereafter LA-IPCC FAR)
Peter Cox, also LA-IPCC FAR
David Karoly, also LA-IPCC FAR and the Third Assessment Report (TAR)
Georg Kaser, also LA-IPCC FAR
Michael E. Mann, also LA-IPCC TAR (the hockey stick scandal made him too radioactive to participate in writing FAR)
Stefan Rahmstorf, also LA-IPCC FAR
Hans Joachim Schellnhuber, merely “a longstanding member of the IPCC.”
Stephen Schneider, also LA-IPCC FAR, TAR, and the First and Second Assessment Reports (SAR) plus two of the IPCC’s synthesis reports
Steven Sherwood, only a contributing author to IPCC-FAR
Richard C. J. Somerville, co-ordinating LA-IPCC FAR
Eric J. Steig, no connection to IPCC listed
Andrew Weaver, also LA-IPCC FAR, TAR, and SAR
In the interests of space, I’ve left out all of their distinguished positions as professors, editors of academic journals, and heads of institutes. You can search for their Climategate emails here [7].
Then there are those Climategate figures who didn’t help write the Copenhagen Diagnosis, but who have been involved in the IPCC assessment reports. Here are three that come to mind:
Phil Jones, contributing author IPCC TAR
Kevin Trenberth, co-ordinating LA-IPCC FAR and SAR, LA-IPCC TAR, and an author of the summaries for policymakers for FAR, TAR, and SAR
Ben Santer, convening LA-IPCC First Assessment Report
Now, I wouldn’t want to jump to any conclusions here, but it kind of looks to me like the “small group of scientists” caught out by Climategate are pretty much the same people who make up the vast and strong scientific consensus on global warming and write the official reports that the U.S. and other governments rely on to inform their policy decisions. I’m sure Dr. John P. Holdren, President Obama’s science adviser, has a plausible alternative explanation. He always does.

Hysteria (10:15:01) :
good – but will the author address the criticisms leveled in the comments? The most obvious being the rebuttal to his point about the absence of stations within 500 km… It is a great analysis, but let’s make sure it is right, eh?

His article clearly stated that he was referring to other stations in 1941. I checked the Australian Bureau of Meteorology website and all of the stations you listed as being within 500 km have data starting after 1941.
The other issue, that was raised in the other thread, is that the ‘station zero’ data isn’t an actual station, but is the spliced data from two different stations (the Darwin Post Office, which was destroyed in 1941, and the Darwin Airport that has records starting from 1941). That’s not identified in the raw data he’s using.

I repeat (a comment I made on another thread about Darwin), go to Wolfram Alpha
http://www.wolframalpha.com/
Enter the search terms darwin airport uk temperature
Then, from the popdown menu, select “All” to see that Darwin’s avg., temp has been a very constant 80DegC for the last 50 years.
See, also, the lead article here for Greenland’s precipitous and sustained DROP in temperature, that you will not hear about from the warmers.
http://antigreen.blogspot.com/2009/09/melting-greenland-you-would-be-hard-put.html
If a warmer’s lips are moving, he’s almost certainly lying.

poetpiet (10:47:47) :
I can only suggest ventilating the room for a few hours before posting. While I appreciate that there is a correlation between crack use and HadCRUT post 1980, I’m just not convinced of causation.

manfred (11:24:57) : shouldn’t homogenization generally enhance a temperature trend?
Enhancement is the problem. Homogenization [in this sense] should seek only to make a sensible series out of the data available for that station. Enhancement as seen here is otherwise called “fudge”.
As for coastal v inland, I’m rapidly coming to the conclusion that a station can only ever be compared to itself, and even then only if it is the same station. If my coastal station shows a -0.3°C trend and you decide to “enhance” it using a bit of “inland” warming, you have made “liars” of both stations. Try it with your bank balance if you don’t see my point.

Leon Palmer (11:50:17) : I’m wondering if it isn’t recursive homogenization.
Was asking myself that very same question while reading the source article this afternoon. It is obviously a question that will take a lot longer to answer than most. It does seem plausible though. [for Australia] Target Alice Springs… grab 5 stations… adjust Alice Springs. Move in any direction… get a new target… grab 5 stations [including Alice Springs]… adjust… Positive feedback is real.

Ian B (11:49:49) : Is there some stated algorithmic, reproducible method for deriving an “adjustment curve”?
Plenty. And that is the core of the problem. Pick an algorithm (or combination of them) that, when applied in a “one size fits all” fashion to every station on the planet, gives you the answer you expect, then simply quote the source paper. You too are now peer-review proof. Station details? We don’t need no stinkin’ station details.

Jack in Oregon (11:03:34) : I believe what we need to do now is to start a surfacestation type project that is focused on finding the raw daily data for the actual surfacestations
While I agree it would be nice and I would certainly get involved with a kind of “open source” project to do all that, I don’t know that it helps in this instance.
The problem (as I see it) here is that we have a series..
(A) 10,10,10,10,10,10,10,10,10,10,10,10
GISS take that series and come up with …
(B) 9,9.3 ,9.5,9.7,9.9,10,10.25,10.5,10.75,11,11.25,11.5
CRU (will we ever know?) come up with ..
(C) 10,10,10,10,10,10,10,10,10,11,13,15
It really doesn’t matter in this case that (A) is accurate. The real question is how did (B) and (C) using (A) arrive at their respective series?
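That divergence is easy to quantify: fit a least-squares trend to each series. A minimal sketch (the slope routine is just illustrative; only series A, B and C above come from the comment):

```python
# Least-squares slope per time step for each series:
# A = raw, B = GISS-style adjusted, C = CRU-style adjusted.
def slope(y):
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (v - y_mean) for x, v in enumerate(y))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

A = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
B = [9, 9.3, 9.5, 9.7, 9.9, 10, 10.25, 10.5, 10.75, 11, 11.25, 11.5]
C = [10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 13, 15]

for name, s in (("raw A", slope(A)), ("adjusted B", slope(B)), ("adjusted C", slope(C))):
    print(f"{name}: {s:+.3f} per step")
```

The flat raw series has zero trend; both adjusted series show a warming trend of roughly 0.2–0.3 units per step, which is exactly the question: where did that trend come from?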

“Some prefatory comments about adjustments to the temperature records are in order. The aim of adjustments is to make the temperature record more homogeneous,” i.e., a record in which the temperature change is due only to local weather and climate. However, caution is required in making adjustments, as it is possible to make the long-term change less realistic even in the process of eliminating an unrealistic short-term discontinuity.
Indeed, if the objective is to obtain the best estimate of long-term change, it might be argued that in the absence of metadata defining all changes, it is better not to adjust discontinuities.”
A closer look at United States and global surface temperature change
Hansen et al., 2001

Excellent idea of making this thread a sticky. It has been clear to me, recently, that disputing AGW on establishment-chosen battlefields is wasteful at best.
Too many ‘sceptical’ arguments are rapidly diverted and thus diluted by misdirection.
Take the recent televised SMc debate in which the other ‘sceptical’ contributor allowed himself to be diverted by his opponent.
That simply muddied the debate in favour of the official line.
Meanwhile, SMc stuck to the subject at hand. This is the strategy that will reap the greatest dividends. Chip, chip and chip away at the weakest points. Identify these weak points and focus on them and them alone.
This thread may turn out to be a weak point.
If so, pursue it, publicise it and push it to the public. Sow the seeds of doubt in one area, make sure they germinate and grow then move on.
If this study does, indeed, demonstrate a distortion of reported facts then curiosity will seek other examples.
AGW is a mighty fortress. It cannot be brought down at all if its foundations are ‘robust’. It may not be brought down, even if its foundations are shaky, if its besiegers spread their efforts too thin.
Chip, chip and chip away but only at the weakest points.

Your “GISS” needs to be identified as to “which GISS”… It looks to me like the “as combined” but “before homogenization”, but you would need to go to their web site to sort it out. They offer 3 different levels of graph corresponding to the output of STEP0 (glue in Antarctica and USHCN), STEP1 (merge disjoint records, throw some out), and STEP2 (final homogenizing and UHI “adjustment”, and throw out some more).
Oh, and they periodically change the thing, so you need a datestamp provenance as well. For example, a few weeks ago they finally put the USHCN Version 2 records in, so the USA from 2007 to date will all have changed.
So you have to watch both hands at all times to see which data ended up where… Care to play again? Ok, the data is going under this walnut shell in the middle, now, watch closely and place your bets….
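E.M. Smith’s provenance point can be sketched in a few lines: tag every dataset with the processing step that produced it and the date it was pulled, so “the GISS data” is never ambiguous. The STEP names follow his description; the tag format and sample values are invented for illustration:

```python
from datetime import date

# Tag a dataset with the step that produced it and the retrieval date,
# so a graph or comparison always carries its provenance with it.
def tag(values, step):
    return {"step": step, "retrieved": date.today().isoformat(), "values": values}

raw = tag([30.1, 30.3, 30.2], "STEP0: glue in Antarctica and USHCN")
merged = tag(raw["values"], "STEP1: merge disjoint records")
final = tag(merged["values"], "STEP2: homogenize and UHI-adjust")

print(final["step"], "as of", final["retrieved"])
```

Anyone quoting “GISS data” could then say which of the three outputs it is and when it was downloaded, which matters when the series itself is periodically revised.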

“”” View from the Solent (11:04:32) :
Perhaps related, perhaps not. ‘.. something odd about Australia and New Zealand. …’ “””
Watch your language there; we Colonials from the crusty side of the pizza, take a dim view of being described as odd; specially by folks on the cheesy side of the pizza. Yes we pull each other’s legs a lot; but we stick together when either one is being attacked. So we don’t have as many thermometers as you folks up topside have; but then we don’t care about it as much as you do. All we have to do is go outside to see that the climate is just fine and dandy.
And yes Wellington does put Chicago to shame when it comes to wind; I was there for Christmas 2006 (we don’t have a winter holiday at year’s end like Y’alls does up here) so we call it Christmas instead. Our rental car almost got blown from the downtown motel parking lot onto the Ferry boat. Funny thing was once we got out of the harbour, Cook Strait was as calm as a millpond; which can’t be relied on to happen a whole lot.

australia features strongly in the report instantly covered by all MSM to show decade warmest ever and this year 5th warmest:
Decade shapes up as the world’s warmest ever
As the La Nina weather event, linked to cooler and wetter conditions in Australia, turned in June to El Nino, associated with hot and dry weather, Australia was one of the regions to experience “more frequent and intense extreme warm events”, the report said.
http://www.theaustralian.com.au/news/nation/decade-shapes-up-as-the-worlds-warmest-ever/story-e6frg6nf-1225808422879
in the previous ‘darwin’ thread, chris gilham posted something very interesting, which should be looked at by Willis Eschenbach and other capable analysts to see if the ‘value-added’ temps for august in west australia could have provided the basis for the claims 2009 could be one of the warmest on record.
check gilham’s post for all the figures, but this was the reply he got from the met office when he pointed out the august temps had all changed in an upward direction:
‘I’ve questioned the BoM on what happened and received this reply …
“Thanks for pointing this problem out to us. Yes, there was a bug in the Daily Weather Observations (DWO) on the web, when the updated version replaced the old one around mid November. The program rounded temperatures to the nearest degree, resulting in mean maximum/minimum temperature being higher. The bug has been fixed since and the means for August 2009 on the web are corrected.”
I’m still scratching my head, partly because the bug only affected August, not any other month including September or October. There’s been no change to the August data on the BoM website since I pointed out the problem and they’re still the higher temps.’
please look into this, as australian data has seemingly played a part in today’s alarmist alarm!

further to my previous post re gilham’s west australian met office figures for august, i just found that august is one of the months WMO is using for australia:
ABC: UN report singles out Australian weather
The WMO report said the heatwaves happened in January/February, when the hot weather contributed to the disastrous Victorian bushfires, in August and again in November.

The report is based on data from NASA and the US Government’s National Oceanic and Atmospheric Administration, along with data from the UK’s Hadley Centre and the University of East Anglia, the university at the centre of a scandal involving leaked emails from climate researchers.

The data is preliminary as 2009 has not yet ended. It will be revised early next year.

Mr Jarraud said the WMO issued its 2009 summary early so it could be factored into discussions at Copenhagen on global warming.
http://news.smh.com.au/breaking-news-world/un-report-singles-out-australian-weather-20091208-khtd.html

Ok, so I have some questions. We talk about temperature, raw, adjusted or homogenized. When a temperature is recorded from a thermometer at an airport or wherever, what is it? I am a BSME so my experience with thermodynamics relates to HVAC and combustion.
From a heating, ventilation and air-conditioning viewpoint, temperature consists of two components, sensible heat and latent heat. We are generally concerned with changing a condition from one state to another, and we use a change in an intrinsic property called enthalpy to do so. Enthalpy is basically the internal energy plus a pressure-volume term of a system, whose units are kJ/kg. The zero point for enthalpy is arbitrary, but a change in enthalpy is the change in internal energy (heat, because we neglect work).
To change the temperature in a system we must reduce the sensible heat, which is measured by the temperature of the dry air mass. This is easy to do because it only requires a change of about 1 kJ/kg per degree C to do so; however, we cannot reduce the sensible heat until we reduce the latent heat. The latent heat is the energy tied up in the water vapor in the dry air and changing that can require several times the energy of dry air depending on the relative humidity.
For example, roughly estimating from a psychrometric chart, the enthalpy of 40 degree C air at 20% relative humidity is the same as 25 degree C air at 80% RH.
So my question is: are the temperatures recorded adjusted for RH and barometric pressure? If not, has anyone done so? It is a rather simple calculation. My final question: am I totally missing something where none of this matters?
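robr’s equal-enthalpy example checks out numerically. A minimal sketch, assuming sea-level pressure, the Magnus approximation for saturation vapour pressure, and textbook moist-air constants (none of this comes from any temperature database):

```python
import math

P_TOTAL = 101.325  # total pressure, kPa (sea level assumed)

def enthalpy(t_c, rh):
    """Moist-air specific enthalpy, kJ per kg of dry air."""
    # Magnus approximation for saturation vapour pressure over water, kPa
    p_sat = 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))
    p_v = rh * p_sat                   # actual vapour pressure, kPa
    w = 0.622 * p_v / (P_TOTAL - p_v)  # humidity ratio, kg water per kg dry air
    # sensible term + latent term
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

h_hot_dry = enthalpy(40.0, 0.20)     # 40 C at 20% RH
h_warm_humid = enthalpy(25.0, 0.80)  # 25 C at 80% RH
print(f"{h_hot_dry:.1f} vs {h_warm_humid:.1f} kJ/kg")
```

Both come out around 64–66 kJ/kg, i.e. within a couple of kJ/kg of each other despite a 15 °C difference in dry-bulb temperature, which is exactly the point being made.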

I’ve often wondered whether I am worthy to post on your blogs, Anthony, with my limited knowledge of the science, and for that reason I have too often shirked away. Sadly, from my perspective, I am not highly educated in this field, but I have known for a long time that there is definitely something fishy going on, and Climategate just confirms this. I notice you are far more willing recently (unsurprisingly) to put yourself in the conspiracy camp following the recent developments. Let me openly say that AGW is not the only conspiracy I believe in, and when will we have the courage to admit to ourselves, and to others, that this misinformation is taking place across all aspects of the mainstream media? It was only a few months ago that you were mocking comments made on a subject to do with contrails/chemtrails. Let’s think about this: with all the billions thrown left, right and centre at proving unprecedented manmade warming, when there frankly isn’t any, what’s to stop the orchestrators of this conspiracy from investigating highly sophisticated and covert means of manipulating the atmosphere to achieve the warming they so crave? This is really the greatest of all ironies: there’s nothing the alarmists want more than the reality of a runaway global warming catastrophe! What if military aviation programs are seeding cirrus clouds to trap heat, particularly to melt the Arctic ice? Chemtrails cannot be ignored, nor can the many other conspiracies! Copenhagen will go ahead as planned and there’s nothing we can do about it. Starting to have doubts yet?

Man is, perhaps, unique of all the Carbon-Based lifeforms on Planet Gaia by having the ability to Lie!
AFAIK, dogs, cats, begonias and even trees are stupidly honest. They can’t even spell the word, far less comprehend its meaning. Fair dinkum, even the stupidest mutt may eventually work out that you’re not going to throw that stick. If you don’t know what that means then you’ve never had a dog as a pet.
But, that iota of canine understanding still precludes old four-legs from coming up with a Ponzi plan!
I think I’m getting a possible scenario for the anger shown by Prof Jones and his buddies. It was born of a frustration that emanated from the swinging sixties!
Trees are notoriously truthful. To my knowledge no one has ever accused any of our arboreal cousins of untruthful utterances. Apart from George III, that is, and mayhaps some of his double-helixed descendants.
Perhaps the humble tree may yet turn out to be the ultimate, a-grade thermometric device. The reported UEA divergence that highlighted the schism between human thermometry and tree-ring density et al characteristics may have an unexpected explanation.
It wasn’t that the trees stopped being accurate proxies for temperature from the roaring forties until 1960; on the contrary. They diverged simply because the trees couldn’t keep up with the mathematical techniques and tricks needed to be part of the peer-reviewed literature.
Briffa and buddies were right all along but they couldn’t tell anyone. Now that’s frustrating!

yonason (15:23:02) : I repeat (a comment I made on another thread about Darwin), go to Wolfram Alpha
http://www.wolframalpha.com/
Enter the search terms darwin airport uk temperature
Then, from the popdown menu, select “All” to see that Darwin’s avg., temp has been a very constant 80DegC for the last 50 years.
well, it’s damn hot, but 80C? I hope that is 80F or they’re going to need a lot more beer…..

robr (17:09:03) :To change the temperature in a system we must reduce the sensible heat, which is measured by the temperature of the dry air mass. This is easy to do because it only requires a change of about 1 kJ/kg per degree C to do so; however, we cannot reduce the sensible heat until we reduce the latent heat. The latent heat is the energy tied up in the water vapor in the dry air and changing that can require several times the energy of dry air depending on the relative humidity.
For example, roughly estimating from a psychrometric chart, the enthalpy of 40 degree C air at 20% relative humidity is the same as 25 degree C air at 80% RH.
So my question is: are the temperatures recorded adjusted for RH and barometric pressure? If not, has anyone done so? It is a rather simple calculation. My final question: am I totally missing something where none of this matters?
Very sensible point, and yes – it all matters a lot. I shall have to add it to the list of things that make temperature readings difficult to use. Having said that, we are after trends, so it may not matter much, but it is certainly a factor.
For example, I have long been wary of averages – if the temp is 10C all day apart from 1 hour in the afternoon when the rain stops, sun shines, and it reaches 20C, does that make the average for the day 15C? Of course not. Also, the trend we are measuring is way lower than our error band anyway – how can that be accurate enough?
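JER0ME’s averaging worry is easy to make concrete: a one-hour spike barely moves the true time-weighted mean but dominates a (min+max)/2 estimate. A sketch with the hypothetical day he describes:

```python
# 23 hypothetical hours at 10 C, one sunny hour at 20 C
hourly = [10.0] * 23 + [20.0]

true_mean = sum(hourly) / len(hourly)           # time-weighted daily mean
min_max_mean = (min(hourly) + max(hourly)) / 2  # what a (Tmin+Tmax)/2 record reports

print(f"time-weighted: {true_mean:.2f} C, min/max: {min_max_mean:.2f} C")
```

The time-weighted mean is about 10.4 °C while the min/max estimate says 15 °C, a 4.6 °C discrepancy from a single sunny hour — far larger than the trends being measured.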

Can anyone point me to an instance where raw data was homogenized downward to account for increased urbanization at a site? I would have thought that would have been the most common form of homogenization process, but I’ve never actually seen one in action.
If so, we could check to see if any “steps to hell” emerge that would be the mirror of the “pyramid to heaven.” If they dont appear I’d think that would be pretty conclusive proof that what was going on was fraud, and not just an innocent result of regular corrections.

robr,
The actual completely raw readings from the observer’s forms have enough information to do the corrections you mention – which happen immediately and are part of the “raw” records before they are entered into the databases.
If dealing with wet bulb and dry bulb temperatures were the extent of the adjustments, I think I can safely say we’d all be dancing in delight at this point.

For example, if 55% of the value-added data is shifted up vs. 45% down, would that not prove that there was a bias in the resulting conclusions?
Well, I can say that US HCN1 adjustments were two thirds up and one third down.

JER0ME (17:35:13) :Having said that, we are after trends, so it may not matter much, but is is certainly a factor.
If the temperature trends up and the humidity trends down so the enthalpy stays the same, is there a heating trend at all?

Alan S. Blue (17:41:33) :
Thanks, I sort of expected that to be the case. I’ve been reading here and other places for about a year or so and never read it, so it has just been fermenting in my brain until I had to ask.

Look for Cop15 to come down to little more than transferring cash. The agreement will involve the US shifting enormous amounts of money to China and India among others. The EU will also be ponying up an enormous amount of cash. This is no more than a trade fair. India and China will agree to “cuts” in return for transfer payments.
It’s already done.

Make sure you’re comparing the same things. The BOM site has separate charts for mean maximum and mean minimum temperatures. The GISS site is graphing mean temperature, which I take to be the average of the maximum and minimum.
I took the two sets of data from the BOM site, averaged the annual figures and plotted a graph. It matched the GISS graph pretty well for most of it, but for the last decade or so the GISS data varied slightly. Most of the time it was less than the BOM data. That could be the quick and dirty way I did the calculation, though.

Alan S. Blue (17:41:33) :The actual completely raw readings from the observer’s forms have enough information to do the corrections you mention – which happen immediately and are part of the “raw” records before they are entered into the databases.
This just seems a little strange. Is there a link to the method whereby the raw “raw” data becomes “raw” data? Thanks.

Graeme W (18:30:40) :
I did the average max/min and I also saw that GISS was less than BOM. The really odd thing to do is plot JunBOM vs JunGISS and compare with DecBOM vs DecGISS
June gives me 0.8578x + 1.1415 and Dec gives me 0.6259x + 10.712.
I swear that the winter months have a different processing than the summer ones.
The GISS has about twice the temperature increase as the BOM; 0.0149x vs. 0.0086x.
Very very odd.
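The per-month fits quoted above are ordinary least-squares regressions of GISS values against BOM values, done separately for each calendar month. A sketch of the calculation with placeholder data (the arrays below are invented, not the actual Darwin series):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    a = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / sum(
        (x - xm) ** 2 for x in xs
    )
    return a, ym - a * xm

# Placeholder annual values for one calendar month (invented, not Darwin data)
bom = [30.1, 30.4, 29.8, 30.9, 31.2, 30.6]
giss = [x - 3.2 for x in bom]  # pretend GISS is BOM minus a constant offset

a, b = fit_line(bom, giss)
print(f"GISS = {a:.4f} * BOM + {b:.4f}")
```

If GISS were simply BOM shifted by a constant, every month would fit with slope 1; slopes of 0.86 in June but 0.63 in December, as reported above, are what suggest season-dependent processing.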

I’m now watching this thread as well, and will gladly answer questions. The question about humidity is a very good one. Basically, heat in the air exists in two forms — sensible and latent.
Sensible is heat we can sense. Latent is energy in the water vapor, which is released when it condenses.
I know of no one who has allowed for this in a global temperature database. It’s tricky because humidity records are much scarcer than temperature records.
This is another reason why atmospheric temps are a terrible measure of the state of the system. Ocean temps would be better … but we have what we have.
w.

I believe what we need to do now is to start a surfacestation-type project that is focused on finding the raw daily data for the actual surface stations around the world. Since we cannot trust the data anymore, and we can’t trust the custodians of the data, let’s build an open-source, station-by-station database of the RAW daily data.
A rebuilding of the US records database with the original data is necessary: a PDF of the original data for each station, with a txt file holding the same raw data and a Google map location for the station.
While some of the stations are available for free, some of them are behind government paywalls. That is not an issue for someone with a .gov email addy. I have one of those available, if we know which stations we want.
This would allow us to build a public wiki-type project around the raw data. If we can get auditors for Australia and N.Z. locations to show what the known fudges are there, we can make this a global audit of a public database of raw temps.
I would suggest that any time a station is moved, we treat it like a new station. That is, each data series run is left to itself and designated as such. This way we can see what the *REAL* raw trend is.
It’s time to FREE the DATA, MAKE it PUBLIC, and then we can argue about WHAT steps are applied to it.

Already started, as of last night. Email me at willis [at] surfacetemps.org if you want to volunteer.
w.

Gore Effect
weather by seablogger
Though there are disagreements, many models suggest polar easterlies will continue to move frigid air-masses from Greenland and the high Arctic down the middle of North America for weeks to come. As these Arctic impulses phase with a moist southern jet, they could cause more major winter storms for much of the US, especially the Midwest and Northeast.
Meanwhile a similar pattern is expected to settle into Europe during the next few weeks. Polar easterlies could bring Siberian air as far west as Spain, and the models of pre-Christmas snow cover for the continent are showing some remarkable depths. Nearly all of Europe could see a white Christmas.
East Asia has already experienced polar bouts, with unprecedented early snows in Beijing last month. The entire northern hemisphere is caught in atmospheric upheaval. Yesterday a low in the North Atlantic dug to a central pressure of 944 mb — equivalent to a major hurricane, but covering a much larger area. A similar North Pacific storm has generated the biggest swell in 11 years for Hawaii.
All in all, the world is seeing a real weather crisis — the worst case of Gore Effect yet.
http://www.seablogger.com/?p=18283&cpage=1#comment-164275

Willis Eschenbach (19:17:39) :
Jack in Oregon (11:03:34) :
“Since we can not trust the data anymore, and we cant trust the custodians of the data, lets build an open source, station by station database of the RAW daily data.”
This gives me a creepy sick feeling.
Thanks for your post Willis, very well done.

“Willis Eschenbach (19:17:39) :
Already started, as of last night. Email me at willis [at] surfacetemps.org if you want to volunteer.
w.”
I think I’d like to volunteer also for the Sydney, Australia region, and help out as much as I can. I will e-mail you Willis.
Cheers.

Mr. Willis Eschenbach
I want to let you know I thank you for putting in the time and effort to get the actual temps from Darwin and show how the data has been changed.
Have you contacted the Group that put in the changes to learn why they did this?
If you go on RealClimate, Gavin Schmidt does give some explanations. When I complained about temperature adjustments he sent me to a link explaining why they did this in the U.S. I did not find the reasons valid and it does not seem there are valid reasons for what you find.
But when I read RealClimate and the posts it does not seem like the players are involved in a deliberate hoax. From my own experience on this web page, it seems everyone is quite serious and actually afraid of the runaway Greenhouse that will kill off most species currently alive.

Hysteria (10:15:01)
See the answer on the first post.
Willis Eschenbach (19:16:09)
Turboblocke (04:33:51)
Someone else asked why start with Darwin? I guess all the ‘flat earther’ comments (including from our Dear Leader Macavity) made it a natural.
Really excellent work by Willis Eschenbach, thank you.

The 3 stations in northern Australia are Darwin, Alice Springs, and “Yamba”. I just checked the BOM file of weather stations in operation more than 50 years and found that Yamba (Pilot Station) at 29.4333 S 153.36 E started operating in 1944.
That’s right: 1944.
So they really only have 2 stations. Why do they lie?
There are many sites that have been settled for 150 years; do they have no records from them? E.g. Cloncurry recorded Australia’s highest ever temperature in 1889, and it would be 2000 km closer to Darwin than Yamba is.
I would gladly help with auditing Australia- central coastal Queensland.

a small victory perhaps! cap’n’tax may be off the menu:
UK Telegraph: Copenhagen climate summit: UN pleads for investment deals to be done
The United Nations executive secretary has begged companies to “do some deals” behind the scenes at the Copenhagen climate change conference to encourage market investment, as the US leaned towards more regulation on industry emissions.
Speaking about the US plans, Mr de Boer said: “We all know taxes and regulation tend to be a lot less efficient and much more expensive than market based approaches.”
“Please please, please, if you are a business man, do a deal in Copenhagen and please, please, please make it market-based. Because if we fail to get a market-based agreement, we will be forced to turn to tax and regulation.”
However, his defence of market-based solutions to funding climate change measures came as the UN admitted its administration of global carbon offsetting, the Clean Development Mechanism, had lost its way.
The UN, which unveiled an independent review of the system by consultants McKinsey, said they needed to get the system “back to its original intent” and acknowledged it suffered long delays, poor documentation and staff shortages.
http://www.telegraph.co.uk/earth/copenhagen-climate-change-confe/6762729/Copenhagen-climate-summit-UN-pleads-for-investment-deals-to-be-done.html

I am seeing this correction more and more.
It starts as level, followed by a small drop and then level again, then a sharp rise. When I wrote the bit on the AGW Virus I did not expect that it would actually appear to be so prevalent world-wide. Seriously, many sites appear to have applied the same trick so that their corrections would all be in line with each other. It is a virus that has infected climate science.
The question in my mind is how much of climate science, and how many research papers can actually be traced back to those three lines of code.
Anthropogenic Global Warming Virus Alert.
http://www.thespoof.com/news/spoof.cfm?headline=s5i64103

John Simpson (20:34:10) :
Yes, it is interesting. On the other thread, I put up this plot of my own of the CRU data as posted vs. the GISS data, raw. In the overlap interval, they superimpose almost exactly. There’s no evidence of big adjustment there. Nor is there with GISS – from this GISS site here are the raw and homogeneity-adjusted plots. There’s no evidence that GISS or CRU adjustments make any big change. So the interesting question is, what’s different about GHCN/NOAA? Something to do with the fact that it’s a subset of the Darwin data?

[REPLY – To the best of my knowledge, there IS no “raw” NASA/GISS data. GISS starts with already-adjusted NOAA/NCDC/HCN data, “unadjusts” it and “readjusts” the shattered remains. So if GISS “raw” data looks like CRU adjusted data, that would come as no surprise. ~ Evan]

You know what is funny?
In McIntyre’s Appeal of his FOI rejection, He asked CRU if they could at least release the information that was NON Confidential.
On Nov 13th, in rejecting this appeal, CRU argued that they could not segregate confidential data from non-confidential data. It would take too long.
QUOTE:
Regulation 12(11) requires that we provide as much requested information as is possible outside the coverage of any applicable exception. After consultation with Phil Jones and other relevant staff in regards the nature and composition of the requested dataset, I have concluded that the data is organised in such a way as to make it extremely difficult and time-consuming to segregate the data in the manner that you suggest and would indeed, in our view, amount to an unreasonable diversion of resources from the provision of services for which we, as an institution, are mandated. Further, we would maintain that where no such segregation has, or will occur, we should not release any of the data for fear of breaching such restrictions as do exist.
OH REALLLLY. Let’s see: on November 13th, prior to the leak, it was too time-consuming to segregate the data for Steve McIntyre. But after Nov 19th, when they get hit with a crushing PR blow, they suddenly find the time to segregate the data.
I’ve got two more FOIAs in to these guys.
Maybe I will add another FOIA. Did Jones tell the truth when he represented that it would take too long? Did he merely say that to thwart access to the data?

The 3 stations in northern Australia are Darwin, Alice Springs, and “Yamba”. I just checked the BOM file of weather stations in operation more than 50 years and found that Yamba (Pilot Station) at 29.4333 S 153.36 E started operating in 1944.
That’s right: 1944.
So they really only have 2 stations. Why do they lie?
There are many sites that have been settled for 150 years; do they have no records from them? E.g. Cloncurry recorded Australia’s highest ever temperature in 1889, and it would be 2000 km closer to Darwin than Yamba is.
I would gladly help with auditing Australia- central coastal Queensland.

This is a recurring problem. The data at Yamba goes back to 1880, see here for the raw data. However, for their own reasons the BOM cuts off the early part. One of the many reasons I want a public examination of all of this.
w.

Thanks for the question. You have downloaded the adjusted GISS file, which is little different from the GHCN data, except that it doesn't go back further than 1963. I got my data where you got yours. Next, try comparing GHCN raw to GHCN adjusted …
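For anyone who wants to repeat that raw-versus-adjusted comparison, here is a minimal Python sketch. The fixed-width layout it assumes (an 11-character station ID plus a duplicate digit, a 4-digit year, then twelve 5-character monthly values in tenths of a degree C, with -9999 for missing) is my reading of the v2.mean format and should be checked against NOAA's GHCN v2 README before trusting any output.

```python
# Sketch: parse GHCN v2.mean-style lines and compare raw vs adjusted
# annual means. The fixed-width layout assumed here (11-char station ID,
# 1-char duplicate number, 4-char year, twelve 5-char monthly values in
# tenths of a degree C, -9999 = missing) is an assumption to verify
# against NOAA's own format documentation.

def parse_v2_line(line):
    """Return (station+duplicate code, year, [monthly temps in deg C or None])."""
    stn = line[0:12]
    year = int(line[12:16])
    vals = []
    for i in range(12):
        raw = int(line[16 + 5 * i: 21 + 5 * i])
        vals.append(None if raw == -9999 else raw / 10.0)
    return stn, year, vals

def annual_means(lines):
    """Annual mean per (station, year), using only complete years."""
    out = {}
    for line in lines:
        stn, year, vals = parse_v2_line(line)
        if all(v is not None for v in vals):
            out[(stn, year)] = sum(vals) / 12.0
    return out

def adjustment_series(raw_lines, adj_lines):
    """Adjusted-minus-raw annual means for years present in both files."""
    raw, adj = annual_means(raw_lines), annual_means(adj_lines)
    return {k: adj[k] - raw[k] for k in raw if k in adj}
```

Feeding it the matching lines from v2.mean and v2.mean_adj for one station gives the adjusted-minus-raw series directly, which is exactly the quantity argued over in the rest of this thread.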

E.M.Smith (16:20:23) “So you have to watch both hands at all times to see which data ended up where… Care to play again? Ok, the data is going under this walnut shell in the middle, now, watch closely and place your bets….”
The homogenization I’ve looked at is a serious mess. Sorting it out will be very time-consuming.

Willis Eschenbach (21:58:01) notes that Yamba only started in 1944. The other interesting thing (as an Australian) is that the three stations are thousands of kilometres apart. Darwin is tropical, Alice Springs is desert and Yamba is temperate. How you can even average these disparate locations and come up with a single figure that represents Australia is beyond me. Darwin’s records run from 10.4C to 38.9C (28.5C range), Alice Springs’ from -7.5C to 45.2C (53C range).
Also, when you look at the GHCN categorisation for each of these locations, they are pretty much meaningless. I assume that these land types are fed into their temperature adjustment generators. Darwin is listed as Sea, whatever that means. It is in fact one of the biggest (in land area) military/civilian airports in Australia.

Maybe this is not related, but maybe it is. Why not just take the daily temperature range (Tmax, Tmin) and plot that instead? Then you have a range instead of a point, which would be more useful. Wouldn’t the effect of CO2 be much more pronounced on Tmin than Tmax?

Slightly off topic; Australian Government Year Book 1980 edition shows average temperature for Adelaide (Capital of South Australia) as 17.2C between 1910-1940, which covers much of the period for the previous positive PDO. Average temperature for period 1977-2009 is also 17.2C.
If temperature data for Adelaide Airport (9.8 km west of Adelaide) is preferred, the average for the period 1955–2009 is 16.4C! Nothing unusual about recent temperatures here in Adelaide.

David (22:32:35) :Maybe this is not related, but maybe it is. Why not just take the daily temperature range (Tmax, Tmin) and plot that instead? Then you have a range instead of a point, which would be more useful. Wouldn’t the effect of CO2 be much more pronounced on Tmin than Tmax?
These, and other similar analyses, show that there is no meaning in a global temperature or in a global temperature anomaly.
The physics is shaky all over, and according to George E. Smith the statistics are absolutely inadequate too (Nyquist and so on).
Temperature measurements are inextricably attached to their location through the gray-body radiation, which varies from geographical spot to geographical spot. Temperature changes because of radiation, i.e. changes in energy input and output (and convection and evaporation and precipitation), but there is no algorithm to go simply from temperature changes to energy changes and vice versa, and it is the energy that is important, a constant of the equations, and that can be summed and averaged the world over.
The only valid use is as maps of temperature and maps of anomalies, which show heating and cooling quantitatively. Integrating over the globe is nonsense.
The effect of greenhouse gases and clouds shows at the low temperatures. Humid nights are warmer than dry ones. It would be interesting to see the highs and lows of dry deserts. That is where the effect of CO2 could be untangled from the effect of water vapor, but I do not know of temperature sensors in deserts. Maybe it could be done with the satellite records.

Nick Stokes (21:20:44) :
[REPLY – To the best of my knowledge, there IS no “raw” NASA/GISS data. GISS starts with already-adjusted NOAA/NCDC/HCN data, “unadjusts” it and “readjusts” the shattered remains. So if GISS “raw” data looks like CRU adjusted data, that would come as no surprise. ~ Evan]
Evan, I don’t think that is true. As I understand it, Gistemp works from v2.mean, which is where Willis took his unadjusted data from.
Anyway, the key complaint of this post is that Darwin has been adjusted to show a rapid rise in GHCN’s v2.mean_adj. However, CRU’s data, and GISS’s, show no rapid rise, but rather a downtrend, corresponding to the raw data in Willis’ Fig 7.
So why is GHCN “Darwin 0” different? I’m inclined to think now that it is relevant that “Darwin 0” is one of 5 duplicate files. As Willis says, the others do not show the same adjustment. At an early stage in processing these duplicates get merged, so the question is: how does the adjustment in Fig 7 come out after the merge with the other duplicates?

As a follow-up to my message on the first thread re the Bureau of Met programming bug affecting how their website presented the August 2009 min and max mean temps in Western Australia…
On November 17, the BoM website had an approximate .5 degree C upward adjustment of August 2009 data that had first been posted on September 1 for Western Australia recording sites.
I asked the bureau why and received this response:“Thanks for pointing this problem out to us. Yes, there was a bug in the Daily Weather Observations (DWO) on the web, when the updated version replaced the old one around mid November. The program rounded temperatures to the nearest degree, resulting in mean maximum/minimum temperature being higher. The bug has been fixed since and the means for August 2009 on the web are corrected.”
The BoM has replied to my email in which I sought clarification as to whether the temps on their website before or after November 17 are the official temps.
Reply: “The bottom ones are correct. The bug only affected averages for August 2009.”
The answer relates to how I put my question and means the higher temps on the BoM website since Nov 17 are officially correct.
It also confirms an upward adjustment was only made for the August 2009 data, so researchers of Western Australia temperatures should make sure their databases are accurate if recorded from the BoM website before November 17. I don’t know if the bug affected only WA figures or all Australian figures.

This doesn’t have anything to do with Darwin but I have noticed what I believe are two major errors with NOAA in their “all time record highs” over the past few days.
First they report Pavillion, WY had a record high of 61 degrees on 2 December when I can’t find anything in the area above 20F. This looks like a possible transposition of 16 to 61 as Lander had a high of 18.
Then there is Samburg, TN which reported a record high of 78 degrees on 5 December when I can’t find anything anywhere nearby with a temperature above 40 for that day. And that can’t be a simple transposition of digits.
Looks like NOAA might be manufacturing Global Warming the “old fashioned way” … simply making temperatures up.
What catches my eye is when I see a map like the HAMweather records page and see a “record high” surrounded by “record low high” temperatures or “record low temperatures” in the same area.

Having seen the current edition of the Economist (“Stopping climate change”), I sent this letter :
“Sir,
A front page headline which reminds us of King Canute.
A leader, a 14-page special report. Several articles: Asia: back to basics, emissions trading row; Science and technology: what lies beneath, more climate change; Business and technology: Chinese wind, carbon management software, shale gas.
In all of this, no analysis of the CRU/Penn State etc. revelations. No analysis of the FOIA.zip file. No analysis of the e-mails (the “tiny scraps”). No analysis of the Fortran code (“fudge factor”??). No analysis of the data. No mention of the “Harry_read_me.txt” file.
No journalism. No more “severe contest”. Just “unworthy, timid ignorance”.
Shame on you.
Richard Barnes
Luxembourg”
I received the following reply :
“Dear Mr Barnes,
We ran a leader and a lead piece in our Science section on the CRU story the week before my special report. Nothing further had happened on the story, other than Rajendra Pachauri saying that the IPCC took it seriously, so there was no reason to return to it.
Best wishes,
Emma Duncan
Deputy Editor
The Economist
25 St James’s Street
London SW1A 1HG
020 7830 7046”
How reassuring to know that Mr Rajendra Pachauri is taking this “seriously”. His speech on the 7th of December suggests that Ms Duncan’s remark is correct.
Note that Ms Duncan says “my special report.” The Economist is put to bed on a Thursday, so the dates between which “nothing further had happened” are the 26th November to the 3rd December.
Please feel free to contact Ms Duncan on the number provided, or by email ( emmaduncan@economist.com ) if you are aware of any issues that arose between those dates.

Willis, here’s an odd thing. For Darwin 0, I subtracted the annual averages of v2.mean from v2.mean_adj. I did not try to calculate anomalies. Like you, I got a sequence rising by about 2.5C, although the adjustments in number terms are big (negative) at the start, down to about zero at the present. But instead of a long rise from about 1940 to 1980, I got four very distinct step rises (and one drop), each about the same amount. The graph is here.

Another oddity. If you look at a monthly plot of corrections (v2.mean_adj-v2.mean), there’s a marked seasonal oscillation in the corrections. This has uniform amplitude between the step changes – increasing as you go back in time. It’s not a perfect sinusoid; there can be two minima per summer. It’s as if they are correcting for an error depending on strength of sunlight, or maybe the wet season. Plot here.
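The monthly-correction check described above can be sketched in a few lines of Python. The record layout here (year, month, raw, adjusted) is illustrative rather than GHCN's own; the point is only the arithmetic: form the correction series (adjusted minus raw) and average it by calendar month, so a flat profile means purely step-like adjustments, while a peaked profile is the seasonal oscillation being reported.

```python
# Sketch of the monthly-correction check: given aligned monthly raw and
# adjusted values, form the correction series (adjusted minus raw) and
# average it by calendar month. The data layout -- a list of
# (year, month, raw, adjusted) tuples -- is illustrative, not GHCN's.

def monthly_correction_profile(records):
    """Mean correction (adjusted - raw) for each calendar month 1..12."""
    sums = {m: 0.0 for m in range(1, 13)}
    counts = {m: 0 for m in range(1, 13)}
    for year, month, raw, adjusted in records:
        sums[month] += adjusted - raw
        counts[month] += 1
    return {m: (sums[m] / counts[m] if counts[m] else None)
            for m in range(1, 13)}

def seasonal_amplitude(profile):
    """Peak-to-trough size of the seasonal component of the corrections."""
    vals = [v for v in profile.values() if v is not None]
    return max(vals) - min(vals)
```

Run over each segment between the step changes separately, this would also show whether the oscillation's amplitude really grows going back in time.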

Norman (19:52:56) :
“But when I read RealClimate and the posts it does not seem like the players are involved in a deliberate hoax. From my own experience on this web page, it seems everyone is quite serious and actually afraid of the runaway Greenhouse that will kill off most species currently alive.”
That’s how all good conmen operate. They seem just so sincere.
Once you can fake sincerity you’ve got it made.

Have already sent links to the BBC’s Newsnight programme pointing to the smoking gun, have also successfully had a letter published in the local paper, have complained to the PCC regarding the Sun newspaper’s use of disinformation (a polar bear with a dead, half-eaten cub in its mouth), and have requested the raw data from the Met Office, amongst other things.
I advise all others to do the same and start to apply pressure; otherwise they’ll think they can just sweep this under the carpet.

Charles. U. Farley (04:22:39) : the Times runs with the polar cannibalism story too.
Do these people know or check nothing? By this argument, my cat, who ate a dead kitten, did so because I don’t feed her enough.

I took a look at the data for Ross-on-Wye from the GHCN database. I took this one because Ross is an old town with long records, untroubled by very heavy traffic, and therefore any Stevenson screen there is unlikely to have been tainted by “adjustments”. This was the first station I looked at – NO GLOBAL WARMING!
I would give you a link, but the GHCN database is down. What a curious coincidence….
Anyway, I’m looking in detail at the statistics for Stornoway now – a small island in the far north. I had hoped that the data would be untainted, but it seems the Stevenson screen is next to an airport – bet that wasn’t there in the ’30s! Oh, and it now has electronic thermometers for remote reading – bet they didn’t have those in the ’30s either!
Anyway, the means of the annual distribution show a 0.35 Celsius increase in the last 10 years compared to the first 10 years. The problem is that the monthly averages show a variation ranging from -0.03 to +1.0 Celsius over the same period. That tells you that you can’t rely on the annual means, because they don’t necessarily reflect the means of the underlying distributions. Furthermore, even in the monthly averages you can see a variation of 6 Celsius across the Januaries from 1931 to 2008. This is due to what is commonly referred to as weather. So the climate change signal is 0.35 Celsius hidden in a weather signal of 6 Celsius. Is that statistically significant? I don’t think so.

The BIG business behind global warming/climate change is this:
1. They (the beneficiaries of this swindle) will pay, in carbon credits, US$3 per hectare of third-world forest.
2. As each hectare of forest captures 5,500 tons of CO2…
3. They will sell each forest hectare to carbon polluters at US$137,500 (US$25/ton of CO2, the current EU reference price).
- They, of course, won’t care about the conservation or otherwise of the forests.
- The big “spread” makes it possible to bribe away any “objections” in between.
- It’s the most perfect financial bubble ever conceived.
Now… my bet is that this “business” will fail before it really begins.
The best advice is investment in land. That makes people independent of the financial system, i.e. of them.

anna v (23:15:14) : The effect of greenhouse gases and clouds shows at the low temperatures. Humid nights are warmer than dry ones. It would be interesting to see the highs and lows of dry deserts. That is where the effect of CO2 could be untangled from the effect of water vapor, but I do not know of temperature sensors in deserts. Maybe it could be done with the satellite records.
I shall be looking at the Oz datasets soon (see my comment on the original post regarding how easily it can be done). I think Oz has more desert, or at least very-low-humidity, weather stations than any other country, even the US. I may be wrong, but as about 75% of Oz is very, very dry, I suspect not.

Shouldn’t homogenization generally enhance a temperature trend?
Trends are known to be much stronger over land masses than over oceans.
In a network of land-based temperature stations, coastal locations are peripheral and inland locations are central.
Shouldn’t that, in general, lead to heavier use of trend-enhancing inland locations to homogenize others, compared with coastal locations? That should result in a warming bias.
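That hypothesis can be put to a toy test. The sketch below is deliberately crude – a simple blend of a station's trend toward the mean of its reference neighbours, which is not any real GHCN algorithm – but it does show the claimed direction of the effect: if coastal stations' neighbours are disproportionately inland, blending drags the network-wide trend upward.

```python
# Toy model of the coastal/inland hypothesis: inland stations trend
# faster than coastal ones, and coastal stations sit on the edge of the
# network, so their reference neighbours are disproportionately inland.
# Blending each station toward its neighbours then biases the network
# trend upward. A crude sketch, not any actual homogenisation method.

def homogenised_trend(own_trend, neighbour_trends, weight=0.5):
    """Blend a station's trend toward the mean of its reference neighbours."""
    ref = sum(neighbour_trends) / len(neighbour_trends)
    return (1 - weight) * own_trend + weight * ref

def network_bias(coastal, inland, coastal_neighbours, inland_neighbours):
    """Change in the two-station network trend after neighbour blending."""
    before = (coastal + inland) / 2.0
    after = (homogenised_trend(coastal, coastal_neighbours)
             + homogenised_trend(inland, inland_neighbours)) / 2.0
    return after - before
```

With a coastal trend of 0.5, an inland trend of 1.5, and mostly-inland neighbours for the coastal station, the bias comes out positive, as the comment predicts; whether real homogenisation behaves this way is of course the open question.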

Incredibly OT but I just went back over the code I was writing this morning (php, so not exactly rocket science) and noticed that I’d written a comment over a particularly hacky bit of code that said “Harry thinks this is a TERRIBLE HACK”. I don’t work with anyone called Harry – or anyone at all since I’m freelance. I think I may have been tired.
Makes me wonder if other developers have started referring to Harry in their code.

Here’s a letter to the editor from the Calgary’s The Globe and Mail, Tuesday, Dec 8, 2009. I have not edited this in any way.
As a scientist, I have never doubted that the carefully contrived natural balance developed over millenniums is being seriously impacted by the ceaseless activity of humans.
Exponential population growth is leading to an impasse and each individual’s footprint, be it carbon or waste, is depleting the delicate buffers of nature’s systems. Watching the debate, I am reminded of the fatal human flaw, the power of denial.
– Carolyn Inch, Ottawa

Cold air drifting down from the north – that can’t be accurate, surely? The ice is melting at the poles as well as on Greenland, so surely that must be warm air drifting down now, causing the recent snowstorms in the western and central US of A?
I am confused.

On the basis of these data, we’re supposed to reduce our food intake and our consumption (and thus our carbon footprint) to the point where our caloric intake drops below 1,100 calories a day; we’d have to sit in the dark, without heat; we would not be able to travel by any means save on foot; we could not eat meat, and most plants would be off the menu; all while reducing our population growth and the existing population…
On the grounds that millions yet unborn may be harmed?
So we’re supposed to kill millions (or billions) currently living to save millions yet unborn who *may* be harmed in 50 years’ time, while reducing the population so that they are never born anyway?
Makes perfect sense to this total idiot…

A total idiot (08:22:45) :
Don’t worry, they won’t do that; it would diminish global markets (their golden calf). It is all about global power in their hands, a global economy. Fossil fuel consumption will not change; the price of fossil fuels will increase, of course. Life as we know it will not change a bit… the owners will change, and freedom… forget it!
However… they do not govern the universe, though they may think they do.
There is always a beginning and an end to everything… you know… what goes up must come down…
He who laughs last laughs best!…☺

. . . This scam was uncovered by John Daly in 1989 (follow the “Still Waiting for Greenhouse” link).
Nine years ago, John exposed the same Darwin data that WUWT posted yesterday!
gh on December 9, 2009 at 8:55 AM

– Mr Lynn
REPLY: And the article on WUWT in fact credits Mr. Daly. Some people just can’t read. – A

I have never heard of any “proof” that homogenization can be done accurately. Couldn’t a reasonable test be devised for the purpose?
I would think that just plain common sense would lead one to assume that any temperature “adjustments” that don’t take into account such things for example as frontal passages, down-slope winds, precipitation, clouds and a few other things are absolutely ridiculous.

Anthony, Willis,
A Météo France statistician, Mr. Olivier Mestre, justifies extracting a climate signal from temperature series here (in English): http://www.math.univ-toulouse.fr/~prieur/meteo.pdf
His comment appears at http://skyfal.free.fr/?p=446 (in French), but the Google translation can provide the gist of his rebuttal to the Eschenbach piece, which suggests it is misleading to use raw data.
Can Willis, or others knowledgeable in stats, comment whenever you have a chance?
Thanks

I have always been amazed at how the temperature adjustments always increase the century-long rate of increase (if any). This goes against all common sense. The UHI effect should have caused false increases in recent temperatures, so recent temperature corrections should, averaged over all stations, show a decrease from the raw temperature numbers.
It also seems that the surface station gallery shows mostly things that would give higher temperature readings, such as pavement, air-conditioning outlets, etc. The only thing I can think of that would cause reduced temperature readings is trees growing up and shading the thermometers.
Is there a reference that I could use to judge the direction of the temperature error from the surface station gallery? The network rating guide does not seem to identify the direction of the error, only the magnitude. I would like to do a random survey of the surface station gallery to see whether the likely error is generally positive or negative. One calculation would be: Sum[(station error sign) x (station error rating)] / (number of stations). Since the pictures are recent, this number should reflect the recent adjustment to temperatures. If, for example, the number comes out as +2.8°C, then the average recent station adjustment should be negative.
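The proposed survey statistic is simple enough to write down. In this sketch each surveyed station is a (sign, rating) pair – sign +1 if the siting defect would warm the reading (pavement, A/C exhaust), -1 if it would cool it (shading) – and the rating is the magnitude in degrees; all values here are invented for illustration.

```python
# Sketch of the survey statistic proposed above: the mean signed siting
# error over a sample of stations. Each station is a (sign, rating)
# pair: sign +1 for a warming defect, -1 for a cooling one; rating is
# the magnitude in deg C. Sample values would come from the surface
# station gallery; nothing here is real data.

def mean_signed_siting_error(stations):
    """Sum of sign * rating over stations, divided by the station count."""
    return sum(sign * rating for sign, rating in stations) / len(stations)
```

A strongly positive result would support the commenter's expectation that recent adjustments ought, on average, to be downward.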

anna v (23:15:14) : “[…] Humid nights are warmer than dry ones. It would be interesting to see the highs and lows of dry deserts. That is where the effect of CO2 could be untangled from the effect of water vapor […]”
Please let us know if you find anything further regarding this approach to untangling (i.e. for non-UHI sites!)

Nick Stokes (02:17:28) :
The only adjustment I know of that takes into account the position of the sun is the TOBS adjustment, which to my knowledge only gets applied to US stations (I could definitely be wrong about this, as other countries may also adjust for TOBS, which was validated on a US dataset): http://ams.allenpress.com/archive/1520-0450/25/2/pdf/i1520-0450-25-2-145.pdf

I’ve been suggesting a transactional form of record for temperatures, following well-established accounting-system structures, to make the components of every temperature value (raw data; UHI/location/altitude/etc. adjustments; overall adjustments; and so on) visible.
But reading Steve Mosher’s post above, a bleedingly obvious thought has occurred. What if the major temperature data series are already stored in this form?
What’s made public is the single figure for Tmin and Tmax per day. Could it be that the components of each are actually available anyway?
After all, a lot of the confusion about the temperature data is caused because that single figure is what is ‘disclosed’. But if, under that single figure, lies the sort of transactional series that I’m urging for any open-source replacement, we could all save a great deal of work.
Because one of the amazing things to be deduced from the harry_readme file is that the CRU simply have no clue about data archiving or versioning, source code control, or in fact any of the things that capability maturity models have been banging on about for over a decade.
Harry was working with a plethora of text outputs, and this is what’s publicly disclosed every time an FOIA or similar request is lodged. But that doesn’t mean that a proper transactional data structure isn’t tucked away right at the base of the entire edifice.
It seems to me that a bit of digging under the FOIA or equivalent, to establish the core data structures used by the major series, would be very worthwhile indeed.
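For concreteness, here is a minimal sketch of the transactional record structure being proposed: a raw observation plus an append-only ledger of named adjustments, with the published figure always derived rather than stored. The field names are hypothetical; nothing here claims to describe how CRU or GHCN actually store their data.

```python
# Minimal sketch of a "transactional" temperature record, borrowed from
# double-entry accounting: the published value is never stored directly,
# only derived as raw observation plus a ledger of named adjustments,
# so every component stays auditable. Field names are hypothetical.

class TemperatureRecord:
    def __init__(self, station, date, raw_value):
        self.station = station
        self.date = date
        self.raw_value = raw_value
        self.adjustments = []        # append-only list of (reason, delta)

    def adjust(self, reason, delta):
        """Append an adjustment; never overwrite raw data or prior entries."""
        self.adjustments.append((reason, delta))

    def published_value(self):
        """Derived figure: raw plus every adjustment, in order."""
        return self.raw_value + sum(d for _, d in self.adjustments)

    def audit_trail(self):
        """Every component of the published figure, for disclosure."""
        return [("raw", self.raw_value)] + list(self.adjustments)
```

Under a structure like this, an FOIA response could hand over the audit trail itself rather than a single opaque number, which is the whole point of the proposal.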

I posted the following text yesterday, believing it to be quite newsworthy for this site. I think the data had been published on the web only a few hours before. But I see no mention of it here or on CA. Or have I missed something?
Having looked briefly at it, I don’t know which is “true” raw data and which has been derived (using data which is now reported as lost). I think it will be quite important to provide clear information telling users which is which.
Here is my post:
“The Met Office has released a subset of the HADCRUT3 data at the following link: http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html
From the Q&A:
“1. Are the data that you are providing the “value-added” or the “underlying” data?
The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.
2. What about the underlying data?
Underlying data are held by the national meteorological services and other data providers, and such data have in many cases been released for research purposes under specific licences that govern their usage and distribution.
3. Why is there no comprehensive copy of the underlying data?
The data set of temperatures back to 1850 was largely compiled in the 1980s when it was technically difficult and expensive to keep multiple copies of the database.”
This is not the end – looks like this is only the start of the homogenising debate.”

steven mosher (11:34:00) :
This looks to me like a correction based on the equipment, which is indeed not a normal GHCN issue – they say they don’t use the historical metadata. But the seasonal oscillation (2:17:28) of varying amplitude looks to me as if someone is correcting for imperfect shading, which underwent sudden improvements with equipment upgrades.

Lest we forget, the EPA has its own whistleblower, Alan Carlin.
————————————————————-
EPA whistleblower’s office on the chopping block
By Michelle Malkin • August 26, 2009
http://michellemalkin.com/2009/08/26/war-on-watchdogs-epa-whistleblowers-office-on-the-chopping-block/
Following a whistleblower report that criticized a global warming rule, the Environmental Protection Agency is reportedly considering shutting down the agency office in which the critical report originated.
Dr. Alan Carlin, the senior analyst whose report EPA unsuccessfully tried to bury, worked in EPA’s National Center for Environmental Economics (NCEE). According to a story in last Friday’s Inside EPA, the agency is now considering shutting that office down.
CEI General Counsel Sam Kazman was sharply critical of the proposed EPA move.
“Economists are the most likely professionals within EPA to examine the real-world effects of its policies,” said Kazman. “For this reason, the NCEE is a restraining force on the agency’s out-of-this-world regulatory ambitions. EPA would love to get that office out of the way, especially since it has within it civil servants like Dr. Carlin, who are willing to expose the truth about EPA’s plan to restrict energy use in the name of global warming.”
Carlin’s study found that EPA failed to consider recent science data showing that global warming is not the problem the Administration claims. For example, the study found that ocean cycles, rather than anthropogenic carbon dioxide, appear to be the single best explanation of global temperature variations.
————————————————————-
The EPA Silences a Climate Skeptic
The professional penalty for offering a contrary view to elites like Al Gore is a smear campaign.
http://online.wsj.com/article/SB124657655235589119.html

Great piece of work, Willis, and now at last it seems we’re getting down to the nuts and bolts of why CRU/GISS/IPCC took great pains to stop access to the raw data. This deception has been exposed in all its despicable glory.
3×2 (15:35:53) :
“…Enhancement is the problem. Homogenization [in this sense] should seek only to make a sensible series out of the data available for that station. Enhancement as seen here is otherwise called “fudge”.
As for coastal v inland, I’m rapidly coming to the conclusion that a station can only ever be compared to itself and even then only if it is the same station. If my coastal station shows a -0.3°C trend and you decide to “enhance” it using a bit of “inland” warming, you have made “liars” of both stations. Try it with your bank balance if you don’t see my point.”
anna v (23:15:14) :
“…These, and other similar analyses, show that there is no meaning in a global temperature or in a global temperature anomaly.
The physics is shaky all over, and according to George E. Smith the statistics are absolutely inadequate too (Nyquist and so on).
Temperature measurements are inextricably attached to their location through the gray-body radiation, which varies from geographical spot to geographical spot. Temperature changes because of radiation, i.e. changes in energy input and output (and convection and evaporation and precipitation), but there is no algorithm to go simply from temperature changes to energy changes and vice versa, and it is the energy that is important, a constant of the equations, and that can be summed and averaged the world over.”
Good points from both. Our chaotic climate is fractal, which means a measurement carries information only about the exact point at which it is taken. Move a few hundred yards away and you will see a different result.
When the supposedly best climate scientists in the world cannot see this, it just shows how bogus the whole charade is. Only by summing the observed energy across the billions of three-dimensional geographic micro-climate cells in real time could an accurate climate diagnosis be made. Looking at trends based on the low-granularity, poor-quality raw temperature data is a complete and utter waste of time.

This is well worth a look: http://www.youtube.com/watch?v=zZW-BF70TsI
“Criminal Fraudsters” … they can’t “hide the decline” any longer, they either have to put up their data to real peer review in front of a jury, or shut up and effectively admit they were committing fraud!
And that will bring down the whole Global Warming sham!

Tenuc (13:42:05) :
Great piece of work, Willis, and now at last it seems we’re getting down to the nuts and bolts of why CRU/GISS/IPCC took great pains to stop access to the raw data. This deception has been exposed in all it’s despicable glory.
This is totally muddled. Willis has queried GHCN adjustments to Darwin’s raw data (which have been available for years). CRU and GISS have published their data for Darwin, and it shows no significant adjustment to the raw GHCN data that Willis used. The IPCC used CRU data, not GHCN adjusted.

Professor Bob Carter applies the scientific method to the popular theory linking global warming to CO2 levels. He examines the hypothesis and it fails the test. Would An Inconvenient Truth author Al Gore find his presentation contradicted by this one? Will Kyoto’s greenhouse-reduction goals be in vain?
AGW-RELIGION RULE ONE
NEVER discuss the science.
Attack the man.
Repeat the mantra
Climate Change – Is CO2 the cause? – Pt 1 of 4
Climate change – Is CO2 the cause? – Pt 2 of 4
Climate Change – Is CO2 the cause? – pt 3 of 4
Climate Change – Is CO2 the cause?- pt 4 of 4

re: 3×2 (15:35:53) : et al
Before we try our hand at the same pseudo-science (even if with more honorable motives), we should each try the following physics quiz!
1. Consider a perfect-insulator enclosing two chambers. Each chamber contains “air” and the two chambers are separated by a perfectly-insulating removable partition of negligible thickness. If the two gas filled chambers initially have the following parameters:
Chamber-1 Chamber-2
Volume V1 V2
Pressure P1 P2
Temperature T1 T2
with each chamber in a state of equilibrium, then, when we open the partition, what are the correct algebraic expressions for
a. Total Volume (TV) { = (V1+V2) will be accepted. }
b. Total Pressure (TP)
c. Total Temperature (TT)
2. Assume the same conditions as question 1 above, with the added condition that chambers 1 and 2 each contain volumes of water W1 and W2 respectively, and the air above the water has humidity H1 and H2 respectively. Again give correct algebraic expressions for TV, TP, and TT. Don’t forget to account for latent heat due to evaporation, condensation, freezing, thawing, or sublimation.
Note: You may assume P1 and P2 are in the interval (0.9, 1.1) atm, T1 and T2 are in the interval (-25, +50) C, and W1 and W2 are sufficient to ensure a liquid (or solid?) water reservoir remains after the mixing.
3. In light of your answers to questions 1 and 2 above, justify the assertion that the “daily average” air temperature computed as follows Tavg = (Tmin + Tmax)/2 is a meaningful metric for the purpose of determining the global atmospheric temperature of “Water World” Earth.
————————————–
If we go to yesterday’s financial page in the local paper, we can add together all the closing currency exchange rates and divide by the number of currencies listed and get a number, but the number cannot be understood as an “average” exchange rate, it has no definable meaning.
robr (17:09:03) : is saying the same thing from an HVAC point of view.
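For what it's worth, question 1 above can be answered in closed form for a single ideal gas with constant heat capacity (the dry-air case; question 2's latent-heat bookkeeping is not attempted here). With the whole box rigid and insulated, internal energy is conserved when the partition opens, so the final temperature is the mole-weighted mean of T1 and T2 – not (T1 + T2)/2 – which is exactly the quiz's point about averaging temperatures. A sketch under those assumptions:

```python
# Removing the partition between two insulated ideal-gas chambers:
# volumes add, internal energy is conserved, so (for one gas with equal
# constant Cv) the final temperature is the mole-weighted mean of T1
# and T2, and the final pressure follows from the ideal-gas law.
# SI units throughout; dry air only, per the assumptions stated above.

R = 8.314  # J / (mol K), universal gas constant

def mix_chambers(V1, P1, T1, V2, P2, T2):
    """Final (V, P, T) after the partition is removed."""
    n1 = P1 * V1 / (R * T1)               # moles in each chamber, PV = nRT
    n2 = P2 * V2 / (R * T2)
    V = V1 + V2                           # rigid box: volumes simply add
    T = (n1 * T1 + n2 * T2) / (n1 + n2)   # energy conservation, equal Cv
    P = (n1 + n2) * R * T / V             # ideal-gas law for the mixed state
    return V, P, T
```

With equal volumes and pressures but unequal temperatures, the result is the harmonic mean of T1 and T2, weighted toward the colder, denser chamber – a concrete illustration that air temperatures are not simply additive.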

Nick Stokes (14:16:26) :
[“Tenuc (13:42:05) :
Great piece of work, Willis, and now at last it seems we’re getting down to the nuts and bolts of why CRU/GISS/IPCC took great pains to stop access to the raw data. This deception has been exposed in all it’s despicable glory.”]
“This is totally muddled. Willis has queried GHCN adjustments to Darwin’s raw data (which have been available for years). CRU and GISS have published their data for Darwin, and it shows no significant adjustment to the raw GHCN data that Willis used. The IPCC used CRU data, not GHCN adjusted.”
CRU/GISS/IPCC scientists are the ones who led the deception – I’m sure GHCN had to fall into line and ‘follow the consensus’.

So what about nearby stations in the NT? I did a test on a block of stations in the v2.temperature.inv listing which are in the north of the NT. I noted that wherever there was an adjustment, the most recent reading was unchanged. So I listed the adjustment (down) that was made to the first (oldest) reading in the sequence. Many stations, with shorter records, did not appear in the _adj file – no adjustment had been calculated. That is indicated by “None” in the list, as opposed to a calculated 0.0. Darwin’s 2.9 is certainly the exception.
In this listing, the station number is followed by the name, and the adjustment.
50194117000 MANGO FARM None
50194119000 GARDEN POINT None
50194120000 DARWIN AIRPOR 2.9
50194124000 MIDDLE POINT 0.0
50194132000 KATHERINE AER 0.0
50194137000 JABIRU AIRPOR None
50194138000 GUNBALUNYA None
50194139000 WARRUWI None
50194140000 MILINGIMBI AW None
50194142000 MANINGRIDA None
50194144000 ROPER BAR STO None
50194146000 ELCHO ISLAND 0.7
50194150000 GOVE AIRPORT None
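For anyone who wants to reproduce this kind of scan, here is a rough Python sketch. It assumes the GHCN v2 fixed-width layout (12-character station/duplicate ID, 4-digit year, twelve 5-character monthly means in tenths of a degree C, -9999 for missing) — treat that layout as an assumption, and note the station ID and values in the example are made up; the real v2.mean and v2.mean_adj files would be read from disk instead:

```python
# Sketch of the "oldest-reading adjustment" scan described above.

def parse_v2_line(line):
    """Return (station_id, year, [monthly temps in C, or None if missing])."""
    sid = line[:12]
    year = int(line[12:16])
    months = []
    for i in range(12):
        v = int(line[16 + 5 * i : 21 + 5 * i])
        months.append(None if v == -9999 else v / 10.0)
    return sid, year, months

def first_reading_adjustment(raw_lines, adj_lines):
    """Adjustment (raw minus adjusted) applied to the oldest reading,
    or None if the station has no adjusted record at all."""
    def first_value(lines):
        for sid, year, months in sorted(map(parse_v2_line, lines),
                                        key=lambda r: r[1]):
            for m in months:
                if m is not None:
                    return m
        return None
    if not adj_lines:
        return None
    return round(first_value(raw_lines) - first_value(adj_lines), 1)

def make_line(sid, year, months):
    """Build a fixed-width v2-style record (helper for the example)."""
    return sid + str(year) + "".join("%5d" % m for m in months)

# Hypothetical one-record station: raw 29.5 C, adjusted 26.6 C.
raw = [make_line("501941200000", 1941, [295] + [-9999] * 11)]
adj = [make_line("501941200000", 1941, [266] + [-9999] * 11)]
print(first_reading_adjustment(raw, adj))  # 2.9
print(first_reading_adjustment(raw, []))   # None
```

Stations absent from the adjusted file come back as None, matching the “None” entries in the list above.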


Janama, Willis:
Thank you, my apologies.
So the BOM list of stations has its own gaps, anomalies, and adjustments. Therefore in Australia if we want the correct info on weather stations we have to go to GISS?
OT, last year I asked BOM local office re the CSIRO/BOM report on worsening drought due to climate change. Their own maps and time series graphs show that the bulk of Australia is in fact slightly wetter than it was 100 years ago. The reply was that if you look at the trend since 1970 there is a clear decline. Of course, 50’s and 70’s were unusually wet decades. Trust us, we’re good at cherry picking.
BOM, CSIRO, and WMO all belong in the same bin. Show us the data!

dadgervais (15:04:33) : re: 3×2 (15:35:53) : et al
Before we try our hand at the same pseudo-science (though with more honorable motives) we should each try the following physics quiz!
(…)
If we go to yesterday’s financial page in the local paper, we can add together all the closing currency exchange rates and divide by the number of currencies listed and get a number, but the number cannot be understood as an “average” exchange rate, it has no definable meaning.
robr (17:09:03) : is saying the same thing from a HVAC point of view.
robr (17:09:03) : For example, I have long been wary of averages – if the temp is 10C all day apart from 1 hour in the afternoon when the rain stops, the sun shines, and it reaches 20C, does that make the average for the day 15C? Of course not. Also, the trend we are measuring is way lower than our error band anyway – how can that be accurate enough?
It has taken a while to unravel all that and it would have been a much easier job had you simply quoted the relevant bit of whatever I said earlier and then made your point. I think I see your point though I’m still not sure how it relates to anything I had written at (15:35:53). help me out with a simple…
3×2 : (15:35:53)
(….whatever I wrote…..)
Followed by your comment/point/observation
Ta

As I predicted (and probably most here did), this ain’t gonna go away. It’s what we call positive feedback. I’d venture to say that in 2 weeks’ time AGW might in fact be stone dead… or at least that the 36% in the USA who believe it’s caused by humans will have dropped to maybe 15%, and so on and on. At that stage the politicians will give it up by simply “forgettin’ ’bout it”, LOL – so will the newspapers, and maybe even this site will slowly fade away as it will no longer be a subject of interest… AGW, that is…

Willis – looking at the unadjusted plots leads me to suspect that there are 2 major changes in measurement methods/location.
These occur in January 1941 and June 1994. The 1941 change is well known (the PO-to-airport move); I can find no connection for the 1994 shift.
These plots show the 2 periods, each giving a shift of 0.8C: http://img33.imageshack.us/img33/2505/darwincorrectionpoints.png
The red line shows the effect of a suggested correction.
This plot compares the GHCN corrected curve (green) to that suggested by me (red).
The difference between the 2 is approx 1C, compared to the 2.5C you quote as the “cheat”: http://img37.imageshack.us/img37/4617/ghcnsuggestedcorrection.png
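The suggested correction can be sketched in a few lines of Python. The breakpoints (January 1941 and June 1994, simplified here to whole years) and the 0.8 C magnitudes are the commenter's estimates rather than official adjustments, and the annual values below are invented for illustration:

```python
# Sketch: apply step corrections at suspected breakpoints by lowering
# all readings before each break (cumulatively, if a reading predates
# more than one break). Breakpoints and shifts are the comment's guess.

def apply_step_corrections(series, breaks):
    """series: list of (year, temp); breaks: list of (break_year, shift).
    Every reading before break_year is lowered by shift."""
    corrected = []
    for year, temp in series:
        offset = sum(shift for b_year, shift in breaks if year < b_year)
        corrected.append((year, round(temp - offset, 1)))
    return corrected

series = [(1940, 29.0), (1950, 28.2), (1995, 28.5)]   # hypothetical annual means
breaks = [(1941, 0.8), (1994, 0.8)]                    # suggested step changes

print(apply_step_corrections(series, breaks))
# [(1940, 27.4), (1950, 27.4), (1995, 28.5)]
```

Readings before 1941 get both shifts (1.6 C total), readings between the breaks get one, and post-1994 data is untouched — the shape of the red curve described above.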

Why the mean anyway? Why not study the trend in the mode? Since you have a range large enough to go from freezing to scorching, why not simply study the trend in the mode? Makes sense in my head, is there a good reason not to do this?
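For what it’s worth, a modal temperature is easy to compute. A minimal Python sketch — the bin width and the sample day are invented for illustration:

```python
# Sketch: a "modal temperature" for a day, per the comment's suggestion.
# Bin readings to the nearest degree and take the most frequent bin.
from collections import Counter

def modal_temp(readings, bin_width=1.0):
    bins = Counter(round(t / bin_width) * bin_width for t in readings)
    return bins.most_common(1)[0][0]

# Hypothetical day: mostly 10 C, with a single 20 C hour.
hourly = [10.0] * 23 + [20.0]
print(modal_temp(hourly))  # 10.0
```

Unlike (Tmin + Tmax)/2, the mode ignores a brief spike entirely — which is both its appeal and, arguably, a reason it discards information the mean keeps.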

Just did a major scan of all MSM in NA and am finding very few prominent stories on COP15. Any theories on this lack of coverage of what is supposed to be a major conference? Maybe I am missing something?

http://en.wikipedia.org/wiki/File:MonthlyMeanT.gif
Note how insanely, bitterly cold the Antarctic is, yet the oceans play the damper role well during the SH winter. During the NH winter, the landmasses in the NH bear the brunt of absorbing the cold. This shows that atmospheric physics plays a smaller than believed role in the climatic processes of Earth. The oceans are a very effective damper on temperature, because the oceans radiate heat much more slowly than the landmasses.
So, I really think that analyzing Tmax/Tmin would be helpful to debunking AGW. It seems that the time of day when energy escapes the system would be the more important way to look into the problem. Unfortunately, I believe that the best way to analyze this would be to take an entire years worth of data in a desert scenario, where there is very little water vapor, and track the hourly temperature to note the daily energy absorption and energy release phases. This should also be compared to jungle settings where the water vapor plays a dominant role. The changes have to happen on a daily basis because the system fluctuates between absorb and release on a daily basis. If radiative physics are the crux of CO2 causing global warming, then you have to observe each process to note what is different, right? This stuff about averaging Earth’s wildly variable temperature is nonsense as far as proving anything goes.
Why not study the absorption/release rates? Energy (temperature as a proxy here) should release more slowly and absorb more quickly if CO2 is preventing radiation from escaping, right? Why not study that for the so-called fingerprint? I am confused as to the general methodology, I guess. I don’t understand how averaging temperature over tons of different actual physical differences (jungles and plateaus of Tibet for starters) can be at all helpful. Why hasn’t this been done yet?
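A crude version of the proposed absorption/release comparison, as a hedged Python sketch — the hourly series below is invented to look desert-like (fast morning warm-up, slow overnight cooling), not real station data:

```python
# Sketch: split hour-to-hour temperature changes into a mean warming
# (absorption) rate and a mean cooling (release) rate, as proposed.

def absorption_release_rates(hourly):
    """Return (mean warming rate, mean cooling rate) in C per hour."""
    deltas = [b - a for a, b in zip(hourly, hourly[1:])]
    warm = [d for d in deltas if d > 0]
    cool = [-d for d in deltas if d < 0]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(warm), mean(cool)

# Hypothetical desert day (24 hourly readings in C):
day = [15, 14, 13, 13, 16, 22, 28, 32, 34, 35, 34, 32,
       29, 26, 24, 22, 21, 20, 19, 18, 17, 17, 16, 15]
warm_rate, cool_rate = absorption_release_rates(day)
print(round(warm_rate, 2), round(cool_rate, 2))  # 3.67 1.47
```

If greenhouse forcing really slows release relative to absorption, the idea is that this warm/cool asymmetry, tracked over years, is where a fingerprint would show up.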

Sorry, I meant to delete ‘an entire’ in front of ‘an entire years’. I mean that it would take loads of actual hourly observations, noting all circumstances relevant to radiative forcing, to really determine whether or not radiative forcing plays an integral part in climate fluctuation. I may be wrong, but I would like to know how.

re previous SH v NH ice (should have added temps as well, sorry). I have observed this time map (days): http://www.esrl.noaa.gov/psd/map/images/fnl/sfctmpmer_01a.fnl.anim.html
regularly, and have noticed that invariably/repeatedly, when it is “cold at the SH pole” and “hot at the NH pole”, a flip/flop then occurs: “hot at the SH pole and cold at the NH pole”. Has anyone else noticed this? Probably described already… Is this a mechanism of earth temperature control to within ~14°C?

VG (22:40:41) :
I posted that link because it illustrates the silliness of averaging temperature. The Tibetan plateau is a great example. Note when it goes blue. What I think that illustrates is Milankovitch cycles and the tendency of water to damp temperature changes drastically. Notice that because the SH is mostly water, Antarctica is bitterly cold in the winter, but the Arctic is not. This is because of the landmass on the South Pole. There is no land mass on the North, but the landmasses in the NH lose their temperature much further south in the winter. I felt this illustrates how 1) it is silly to average temperature that includes SSTs and 2) that average temperature is essentially meaningless when you consider that temperature changes based on physical properties of the point in question. How can you add it all together without accounting for the physical differences of any given point?

Da Capo: “Is there a meaning to temperature measurements and predictions?”
Absolutely yes, for the locality. From airport conditions to planting conditions to dressing conditions, our lives improve by knowing the temperature, wind, humidity etc., and the predictions for the locality. When I go to my vacation cottage I do not look at the weather in New York, but at Corinth, Greece.
“Is there a meaning in trends of temperature?”
Yes, except that they are not reliable for more than a few days at present. Almanacs (distilled folk wisdom from previous experience?) have a use though, and farmers seem to rely on them.
“Is there a meaning in averaging all temperature trends globally and projecting for a century?”
A qualitative yes. Qualitative because we do not know the underlying physics/mathematics/statistics to the detail needed for a rigorous summing and averaging to get one number. Qualitative also because the meaning is fuzzy: it might tell us that a little ice age is arriving, it might tell us a medieval maximum is coming, but not more.
There is no way to get from these numbers to watts/meter^2 in a one-to-one fashion with reasonable errors. The equations needed are enormous in number and depend on constants that have not been measured, like the gray-body constants that differ in microlocalities and make the microclimate, the wind turbulences, the evaporation/sublimation, photosynthesis water use, etc. This makes the problem intractable by even the computer power envisaged for the future.
What the GCMs do is whitewash over the physics problems with approximations and assumptions that cannot hold water in a rigorous physics/statistics/mathematics analysis. This is evident from the weather predictions, that are given by programs with similar mathematics/assumptions/logic. The predictions fail after a few days.
At best, GCMs are models of an ideal world with ideal black body behavior that allows them to connect temperatures to watts/meter^2. This cannot apply to the real world, to the accuracy of 1watt/meter^2 that is claimed to boot. It is a video game after a few iterations.
In this whole mess how can the CO2 contribution be untangled except with handwaving by using the GCM virtual world as if it represents the real world? And the western world is asked to commit hara kiri on the basis of these video games.
This is an inherent problem that cannot be overcome with more computing power, as others have analyzed. The system is chaotic, it has deterministic equations but they have to be applied in too many three dimensional and fractal ( think of forests, think of craggy mountains, wavy oceans) localities and times to be able to process in a digital computer.
The only future for climate modeling lies in chaos and complexity techniques, which is a vigorous new field in all sciences growing by leaps and bounds. Tsonis et al have made a start, but it is only a start.

Nick Stokes (14:16:26) :
Tenuc (13:42:05) :
Great piece of work, Willis, and now at last it seems we’re getting down to the nuts and bolts of why CRU/GISS/IPCC took great pains to stop access to the raw data. This deception has been exposed in all its despicable glory.
This is totally muddled. Willis has queried GHCN adjustments to Darwin’s raw data (which have been available for years). CRU and GISS have published their data for Darwin, and it shows no significant adjustment to the raw GHCN data that Willis used. The IPCC used CRU data, not GHCN adjusted.
Nick, part of the issue, if you read CRU’s responses to our FOI requests, was that they didn’t have to make their data public because IT WAS ALREADY AT GHCN!
Except for the bit held under confidentiality agreements.
At some point in this whole MESS, somebody needs to draw a diagram of what data goes into what index – what is raw, what is adjusted.
For example, GISS say they use USHCN raw, but “raw” means after USHCN adjustments.
At least with GISS I can look at the CODE; I can see what data file they ingest and I can then hunt that puppy down. Sadly, with CRU we don’t have that yet.
In the end, when it’s all in the open, these issues will become easier to sort out.

David, thanks for the explanation on SH/NH ice etc. The point in general, too, is that I think no one here is/was particularly interested in denying AGW. I certainly believed it when I saw the old HadCRUT graphs on Wikipedia etc. After looking into it more carefully – i.e. Svensmark’s theory, SM’s analysis of tree-ring data, etc. – we started to dig, and even looking at the cooked data it does not stand up. This climategate affair has now confirmed it further. I think this will sink in for the majority of people and scientists as they realize that since their childhood the climate has not changed.

I am starting to think that this kind of blog science is actually destructive. My guess is that a year from now hundreds of posts will have been produced alleging fraud and incompetence, and thousands of cheering comments will have been written praising the free-thinking scientists at WUWT, but in the end the sum of the nitpicks (some legit and some not) will probably have an impact on the global trend that is a full order of magnitude less than the decadal trends in the official record; that is, instead of being a nail in the coffin for the official record, it will be an affirmation of its robustness. The problem is that no one will take a step back and contemplate this fact. The lasting impression will be that climate scientists are all frauds, while the actual results of the blog posts add up to confirm the exact opposite. It’s kind of absurd and, in my opinion, destructive.

@ Fredrik
“The lasting impression will be that climate scientists are all frauds…”
No, you’ve missed the point. It’s not about the scientists as much as it is about the science. Most people here want the raw data, the adjustments made to that raw data, the reasoning behind those adjustments and the computer code making those adjustments.
If those requests are not met, then of course there will be accusations of fraud, because there cannot be any other reason for withholding it. And if they continue to withhold, of course people are going to go on a quest to prove the fraud. It’s logical.
So the destructiveness that you notice is a direct consequence of the continued withholding of data and methods. And to be honest, anyone claiming to be a scientist who withholds data and/or methods is not a scientist in my book, hence worthy of being destroyed. (Not literally, of course – let’s not resort to ‘Santer-ising’.)

Fredrik (01:06:15),
Then what is your suggestion? What should we do?
If you haven’t noticed, things are improving. The very last thing these crooked CRU climatologists want to do is explain. Slowly but surely, they’re being forced to start explaining.
It’s entertaining to hear them try to explain that the emails were ‘taken out of context’. Here’s one example [out of many], from the Harry_read_me file: “The 1990 – 2003 period is missing… what the hell is supposed to happen here? Oh yeah – there is no ‘supposed’, I can make it up. So I have.”
That is an admission of thirteen years of fabricated data. And that’s not all.
There are over a thousand emails, plus plenty of code comments and code showing they did ‘make it up’. They fabricated the record to show artificial warming. That has caused others to look at the record of rural stations unaffected by UHI, which show little to no warming.
The entire mainstream media is run by people who have a Leftist bias. They would prefer to never mention climategate again, but the very obvious fraud involved is forcing them to begin to cover it. They will print and broadcast one-sided propaganda when they can, but their bottom line is too important: if they don’t cover the story, their competition will — and the public is beginning to wake up and demand answers. When they see that the CRU data was fabricated, and that data is being used as the pretext to jack up everyone’s taxes, they will tune in to the news programs and papers that report the story.
The same thing happened in Watergate. At first, it was dismissed as “a third rate burglary attempt.” No major papers wanted to connect the dots, because President Nixon had just been re-elected by a huge majority. But eventually they had to cover the story. Best sellers appeared, like All the President’s Men. Watergate was lucrative for lots of people, and it brought down the President.
The same thing happened in the Monica Lewinsky story. President Clinton had just been re-elected, and who cared about rumors of his philandering? But then Drudge was tipped off about the blue dress, and Clinton’s carefully constructed edifice started to crumble, until the President was finally impeached.
You are correct in saying that this will taint honest scientists; the ones who don’t get million-dollar grants – er, bribes – to ‘study global warming’, and who aren’t accorded rock-star status in return for lying about the global warming edifice they have constructed out of their fictional climate data. As you say, that is destructive: honest scientists in other fields have already started to speak out. More will follow, because their fields have been starved by the $Billions that have been diverted to ‘study global warming’.
The CRU emails are the 2009 version of Monica’s blue dress: proof that a relatively small clique of climate scientists, who wield enormous influence over the mainstream climate peer review process, are corrupt. And the U.S. ringleader of the Hockey Team is a 30-something upstart who is arrogant and steps on a lot of toes. Surely other scientists with sore toes won’t mind the opportunity to give a little payback.
When Clinton tried to explain Lewinsky’s accusations, he looked into the camera and said, “I did not have sex with that woman, Miss Lewinsky… never, not a single time… and I never asked anyone to lie… .” That’s the stage we’re at now with Michael Mann, Phil Jones and some others. Climategate will play out more or less according to the same pattern. Human nature doesn’t change, and some enterprising reporter will get one of the players to spill the beans. Matt Drudge was a nobody before he scooped every major newspaper on Monicagate. Now he’s rich and famous.
The first reporter who gets a solid, revealing interview from one of the Team will get a big payoff, and so will the paper that prints it. Scooping the competition on a really big story like this, involving $Trillions and self-serving countries and politicians around the world is the dream of every reporter. Especially at a time when newspapers desperately need new readers. And if the mainstream media won’t report it, others will. Bishop Hill is already working on a book exposing the fraud. And the market will be hungry for more climategate tell-all books.
If you think this “blog science” is destructive, ask yourself: who is going to be destroyed? And who will turn state’s evidence? As Will S said, all the world’s a stage, and each must play his part.

Analysis of Some California Stations’ Adjustments – Forcing the Fit
What I did:
• Went to GISS.
• Picked out 10 mid-northern California stations plus Reno, Nevada. All had 100+ year records.
• Had GISS output the station data as text and copied it to Excel.
• Used the Data->Text to Columns menu to convert.
• Did this for raw and homogenized data for each station.
What I did #2:
• Calculated the adjustment made to homogenize each station’s annual averages.
• Realized it appeared to be very manual, systematic, and in 0.1-degree increments.
• Tested and confirmed it was the same adjustment for monthly as for yearly.
• Noticed it was wildly different for each station, but the dates and changes were not at all consistent with NCDC station change histories. So this was excluded as the reason.
• The changes in the cities were not consistent with discounting an urban heat island effect; in fact most were the opposite, with earlier data being changed more and recent data changed less.
What I did #3:
• Plotted for each station:
– Raw data
– Homogenized data
– Adjustments made to homogenize
• Some adjustments were positive, some were negative; some were big, some were small. There was no obvious formula or logic.
• They were not hiding a decline, as some stations were up long term, some were down.
• At some stations they enhanced a warming trend; at other stations they reduced it.
• What were they trying to do? The changes in all cases were modest incremental steps over time, just very different for each city.
• I played with plotting various averages.
What I saw #1:
• The San Francisco chart really had me perplexed. The raw data showed a strong warming trend and they were seemingly reducing it significantly. Not for urban heat island, because they were increasing the past, not decreasing the present. Again, the changes to the temps moderated over time and varied from early positive adjustments to recent negative adjustments, to very recent positive adjustments.
• Then I realized what the adjustments were doing…
• They were forcing all stations to conform to a similar curve… and that curve told the story they wanted to tell.
• I can’t yet figure out the exact formula used, but I’m getting close with a 6th-order polynomial fit from Excel. The left scale is the annual temperature average.
– The right scale is the homogenizing temperature adjustment.
– The raw and homogenized temps are plotted.
– The curves are Excel 6th-order polynomial trend lines.
– The GREEN curve is the resulting polynomial trend after homogenizing.
• See what you think.
NOTE: I can’t upload the graphs to this comment thread. Anthony, let me know how and I will. They really show what “homogenization” means in practice, at least here in the US.
What I saw #2:
• The story of the curve they seem to be forcing all stations to is:
– The 30s were warm, but not as warm as today.
– The past was more stable; the 90s climb is steeper and unusual.
– There is a consensus story amongst the stations, rural or city.
• The raw data does not support their story; the “homogenized” data does.
– If GISS releases the reasons for every single adjustment, there could be other explanations.
– Adherence to the scientific method demands an explanation.
– Otherwise, Occam’s Razor suggests this was done to force all reviewed stations to a similar set of characteristics over time, regardless of the raw data.
• There is a trend to zero adjustment in the late 90s/early 2000s for all cities. It is the past that is being adjusted, making the current first- and second-order temperature trends be interpreted differently in comparison.
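The core of the check described in these steps — per-year adjustment = homogenized minus raw, the 0.1-degree stepping, and the taper to zero in recent years — can be sketched as follows. The station values here are hypothetical, and the 6th-order polynomial fit mentioned above could be reproduced with e.g. numpy.polyfit(years, adj, 6):

```python
# Sketch of the adjustment analysis described above (values invented).

def adjustments(raw, homog):
    """Per-year adjustment series (homogenized - raw), rounded to 0.1 C."""
    return [round(h - r, 1) for r, h in zip(raw, homog)]

def is_tenth_degree_steps(adj):
    """True if every adjustment is a whole multiple of 0.1 C."""
    return all(abs(a * 10 - round(a * 10)) < 1e-6 for a in adj)

# Hypothetical station: the past is raised, recent years are untouched.
raw   = [14.2, 14.0, 14.3, 14.5, 14.9, 15.1]
homog = [14.7, 14.4, 14.6, 14.7, 14.9, 15.1]

adj = adjustments(raw, homog)
print(adj)                         # [0.5, 0.4, 0.3, 0.2, 0.0, 0.0]
print(is_tenth_degree_steps(adj))  # True
```

Plotting such an adjustment series per station, as the commenter describes, is what reveals whether the steps correlate with documented station changes or with some target curve.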

“The lasting impression will be that climate scientists are all frauds”: although not the end I hope for, it would certainly be one acceptable solution to this manufactured crisis. The people who have hidden the data and squashed all criticism have only themselves to thank if this happens.
Anything that serves to destroy the justification for further government action is a consummation devoutly to be wished.

You need a sticky for the one by the sixth-grader, too, just in case any talking heads or info-babes do a drive by of this site. It’s probably still way over their heads, but they might be able to get something out of it.

Great work Anthony. I’m seeing links to your research all over the web.
I’m also noticing folk talking more about CO2’s effect on ‘ocean acidification’, shifting the pH down from around 8.2. Lots of discussion of effects on plankton, shellfish, the food chain, etc. My initial research seems to indicate that ocean pH was probably about the same many years ago, when CO2 concentrations were much higher.
The key concern seems to be (relatively) big pH changes in the last 30 years (naturally, caused by man-made CO2) that may not allow sea life to adapt quickly enough. I get that life ALWAYS adapts, but perhaps not a great idea if we lose a big chunk of our food supply in the transition period.
As always, I’m skeptical of big headline threats and would appreciate if anyone could point me in the direction of the scientific equivalent of ‘non-homogenised’ ocean CO2 research.

WAG (07:48:36) :
“Do none of you get the hypocrisy of criticizing the RAW data for siting errors while simultaneously saying that adjusting for those errors constitutes fraud?”
Unlike WAG, I understand the meaning of conflate.
The siting issue is completely different from the raw data issue.

WAG,
Good engineering demands independent verification of the details. When I did my check of 10 California stations, such as others did for Australia, there were no adjustments made when the sites had documented changes. Such changes would be warranted. There was zero correlation.
The 6th grader’s work on UHI states the obvious that IPCC seems to deny.
The adjustments in CA, like in Australia were very stepwise and incremental over time. They defy any common sense for UHI or any other known effect.
Siting change adjustments would be one time events. What I saw was clearly curve shaping where the raw data of all 10 stations were being reshaped to fit a model curve.
You don’t change the data to fit your model, you go back to find out why your model is broken. Scientists publishing papers may get away with this for a while. Engineers that must build stuff that actually works quickly lose their jobs.
A detailed review and criticism is standard procedure. If they can justify all these adjustments in Darwin, CA, Nordic, Alaska, and otherwise, then they should. In detail.
Hand waving and generalities don’t cut it when we want to re-engineer the world economy. I’ll be green without being extreme.

“Do none of you get the hypocrisy of criticizing the RAW data for siting errors while simultaneously saying that adjusting for those errors constitutes fraud?”
Do none of you get any red flags from warming adjustments being applied to stations whose bad siting calls for cooling adjustments?

Fredrik,
Your logic escapes me completely; if blogs like this did not exist there would be far less awareness of the fraud and fakery within the climate science cabal.
It’s as if you are trying to say that the victims of the fraud are to blame for exposing the fraudsters?
Perhaps you are saying that the minority of fraudsters who have brought shame and dishonour to the whole scientific community should be protected in order to save the reputations of the majority of scientists who did not partake of fraud/rent seeking/manipulation/fakery and greasy pole climbing but stayed silent to protect their own positions? All it takes for evil to triumph is for good men to do nothing.

You need only one to wreck everything!
from the Times
“One scientist told The Times he felt under pressure to sign. “The Met Office is a major employer of scientists and has long had a policy of only appointing and working with those who subscribe to their views on man-made global warming,” he said. “

Like reports of Mark Twain’s death, predictions of the imminent demise of AGW are premature. The debate is only beginning.
Those of you making this claim are no different from those predicting a 3-foot rise in sea levels by the end of the century. The truth needs to be discovered. You can’t just throw mud at a theory and expect it to be disproved. This is something for the scientists to hash out. The good news is that the reins of the debate are now being freed from the internal control of the IPCC, CRU, the Met Office, GISS, etc.
Now, on the subject of coercive implementation of policy, that is worth fighting tooth and nail against. No one here is more outraged than I by the recent EPA finding. Until the science is truly settled, nothing should be done to harm economies by imposing draconian solutions.

Cassandra King:
“Your logic escapes me completely, if blogs like this did not exist there would be far less awareness of the fraud and fakery within the climate science cabal.”
There is a post right now about Peter and his dad; they’ve found a huge UHI signal. This signal is due to a bug in Peter and his dad’s spreadsheet. When this bug is fixed, the result from the spreadsheet will probably be in line with the scientific consensus: UHI has a minor impact. I mean, think about it: such a huge signal would have been picked up a long time ago by the math-savvy people at Climate Audit, if nothing else.
The problem is that an endless stream of accusations of fraud will be produced here about the official temp record. My guess is that the sum of the legit points will have about 0 impact on the actual records, and the majority of the accusations will have as much merit as the YouTube film by Peter and his dad. So we will have 100 accusations of fraud and 0 impact on the official record. What would be a sensible conclusion? Well, maybe that there was no fraud to begin with, but it seems that no one here will ever reach that conclusion; instead they will look at the 100 accusations and conclude that the official record is falsified 100 times, while the truth of the matter is that the official record has been vindicated 100 times. That is a huge problem.

Fredrik wrote: “There is a post right now about Peter and his dad; they’ve found a huge UHI signal. This signal is due to a bug in Peter and his dad’s spreadsheet. When this bug is fixed, the result from the spreadsheet will probably be in line with the scientific consensus: UHI has a minor impact. I mean, think about it: such a huge signal would have been picked up a long time ago by the math-savvy people at Climate Audit, if nothing else.”
——————————————
If you chose a specific location, say, a street corner in London, and had a temperature reading from say, Dec. 10th, 1880 and another one from this morning, how would you correlate them? How can you possibly say that conditions today would require a 1 or 2 or 3 degree adjustment with any level of certainty? And the same goes for the countryside where forests may have been cleared for development.
How can you possibly do this with any sense of certitude? Given that uncertainty, how can you possibly predict what temperature will be in fifty or a hundred years with any confidence whatsoever?
And, on the subject of “bugs,” what about the “bugs” noted by hapless Harry in CRU’s code?
As for your “guess,” until all these issues are adequately explained, it’s as worthless as any 100 year prediction of imminent catastrophe.
Has temperature risen? Probably, although not even that is adequately proven given all the secrecy surrounding the data and code. Has it risen because of CO2 emissions? Correlation is not causation.
Given some of the monstrous solutions to solve a problem that has not adequately been identified, skepticism is the only wholly defensible position at this point in the debate.

Fredrik (10:07:39) :
Cassandra King:
“Your logic escapes me completely, if blogs like this did not exist there would be far less awareness of the fraud and fakery within the climate science cabal.”
There is a post right now about Peter and his dad; they’ve found a huge UHI signal. This signal is due to a bug in Peter and his dad’s spreadsheet. When this bug is fixed, the result from the spreadsheet will probably be in line with the scientific consensus: UHI has a minor impact. I mean, think about it: such a huge signal would have been picked up a long time ago by the math-savvy people at Climate Audit, if nothing else.
________________________________________________________________________
And it was noticed a LONG time ago. Go look at the surface stations pages. The issues addressed in siting weather stations are a serious part of UHI (pavement, a/c systems, cars, buildings, land use).

I can see why there are reasons to adjust the temperature readings downwards (urban developments and proximity to other heat inducing constructions). I can also see that the use of lower level stations would indicate a heat reduction. What changes to station circumstances would make an upward adjustment in temperature plausible?

Fredrik,
Thanks for taking the time to reply to my post, you seem to think that the leaked emails and documents contain no proof of fraud and fakery yet the ‘Harry read me’ file is explicit and damning alone, the refusal to publish raw data for years and the collusion between US GISS and CRU is a fact.
The historical temperature record has yet to be verified. The raw data has been subjected to manipulation; it has become a synthesised product built from raw data that isn’t available, using methods of manipulation that are not available. In effect the CRU temperature series is a finished product using unknown ingredients and unknown manufacturing methods; this is a known fact BTW. Are we to blindly trust what feels like a scam, looks like a scam and uses the methods of a scam, simply because those with the most to gain and the most to hide state that their data is correct? And are we to trust the unverified word of those who have done so much to bring doubt and suspicion on their own heads?
If the CRU had nothing to hide then why on earth did they move heaven and earth to hide so much? If the CRU had nothing to fear then why did they try so hard to mislead and lie?
The huge and lasting damage to the scientific community is being caused by a minority of bad apples within a corrupted system NOT by those who seek the truth, most here only want the truth to emerge, we all want to trust scientists and we all want to know that the facts will be laid out for all to see, sunlight is the best disinfectant.
Frankly I would be happy if there were no fraud and fakery involved but until the day comes when we can know for sure then I continue to have my doubts.
You confidently state that there will be zero impact on the official record, I wish I had your confidence my friend, we neither of us know what the outcome will be do we? I suspect that if the MWP can be erased then there is more to this particular pudding than meets the eye, time will tell all.

Your article on the Darwin station makes great reading. How possible would it be to simplify this so that a number of volunteers could do similar work to look at other station data? At least to the point where we could generate a graph showing the raw data, the adjusted data and the adjustment amount?

Philip Madams (11:53:42) :
I can see why there are reasons to adjust the temperature readings downwards (urban developments and proximity to other heat inducing constructions). I can also see that the use of lower level stations would indicate a heat reduction. What changes to station circumstances would make an upward adjustment in temperature plausible?
———————————————————–
A related question would be: why would they adjust early temperature readings downward, which I’ve heard is also an adjustment device that they use? Like you, if I might presume to know the thinking behind your question, to me it seems counter-intuitive to adjust the present data upwards and the historic data downwards.

My Christmas present this year is Climategate. I don’t need or want anything else. I want to give as a present a DVD titled “ClimateGate: Everything They Didn’t Want You to Know” to everyone I know. When they ask, “who are they?”, I’ll say: watch the movie.
I need you to help me with this. Can we put together a Climategate video we can burn to DVD from a torrent file of the information we have so far? The movie can always be revised in the future. Lots of snow blizzard scenes and stuff about the Solar Minimum in it. I need your feedback on this. Please address this comment directly with your feedback. Can we do this?
Thank You
MJN

Philip,
It is easy to check a station and determine what changes they made, but not why they did it.
First, go to the GISS web site: http://data.giss.nasa.gov/gistemp/station_data/
First get the raw data by selecting the “after combining data at same location” option.
Then select an area on the map. I chose the area I have lived in for a very long time.
Pick the station you want to analyze… click on it.
Near the bottom left is an option to “Download monthly data as text”.
Click on this; it will display the data on screen in monthly and annual summary format.
Select all the data with CTRL-A on your keyboard. Copy it with CTRL-C.
Open Excel. Paste the data into Excel with CTRL-V.
In Excel, select TEXT TO COLUMNS from the DATA menu. Just follow the defaults… you now have the data in Excel.
Now go back to the first GISS page with the map, select the “after homogeneity adjustment” option, click on the map, find your station, repeat “Download monthly data as text” and copy to Excel.
Then if you subtract the raw temps from the homogenized temps and plot, you get exactly the adjustments, either annual or monthly.
You can use the Excel charting tool to plot the raw vs adjusted data and put in trend lines to see how they changed it or not. Linear trend for general direction, polynomial for curve fitting.
The other thing you can do is look at NCDC for each station (use the id to make sure it is exactly the same one.) You can then see if the station history has any reason for the adjustments. I found zero correlations so far.
In my case, for California, they took what was a mixed bag of up and down trends (no surprise: the growing cities were up and rural was flat to slightly down). The end result was that all stations suddenly conformed to a curve that looks suspiciously like the main global trend curve.
They are certainly “homogenizing” in my area… all the curves are being driven to the same story. That story is the main theme on the overall GISS pages. Then they go back and say all the data agrees with the summary. Nice circular confirmation.
I’m hoping others will be encouraged to do the same so we can compare. I was expecting “hide the decline”; I found instead “we are the borg and your output will conform” …:-).
When someone puts in unexplained data adjustments as large as or larger than the change they are claiming to detect, and those changes cause the data to conform better to their predictions when before it did not fully conform… it is not a very convincing argument.
Good luck.
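For anyone who would rather script the subtract-and-compare step than do it in Excel, here is a minimal Python sketch of the same idea. It assumes (my simplification, not the actual GISS layout) that each downloaded text page has been reduced to plain YEAR VALUE columns; the real GISS pages carry header lines and monthly columns, which this parser simply skips.

```python
# Sketch of the raw-vs-homogenized comparison described above.
# Assumes two local files with simplified "YEAR VALUE" lines, one for
# the "after combining" (raw) download and one for the "after
# homogeneity adjustment" download.

def parse_annual(path):
    """Read 'YEAR VALUE' lines into a {year: temp} dict, skipping junk."""
    temps = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                try:
                    year, value = int(parts[0]), float(parts[1])
                except ValueError:
                    continue  # header or otherwise non-numeric line
                temps[year] = value
    return temps

def adjustments(raw, homogenized):
    """Homogenized minus raw, for years present in both series."""
    common = sorted(set(raw) & set(homogenized))
    return {y: homogenized[y] - raw[y] for y in common}
```

The dictionary returned by `adjustments()` is exactly the “subtract the raw temps from the homogenized temps” series described above, ready to chart annually (the same approach works per month if you parse monthly columns instead).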

“Ripper (20:16:06) :
I will do the mean minimums later on when I get time”
Is there any possibility you could look for a discordance with respect to June and December in the two data sets? A truly sick individual like myself would ‘homogenize’ June differently from December to generate the largest max/min while leaving the smallest footprints. Indeed, who is going to look for slight differences between months?
So pretty please; compare June and December; as well as the yearly averages.

Ian,
Graph the change values they made to the raw data over time for the station you are interested in. My interest now is whether they are forcing all raw series to a rough model curve. What has shown up for curve modification is a stepwise adjustment to stations that does not fit any UHI or station-change reasoning that I can find.
That has generally meant at least in my region:
Make the 1900 to mid 60s area flatter. Sometimes they must push up sometimes they must push down. Make sure the 1930s are not well above the 90s. Ensure the late 20th century has a steep ramp. I want to know how widespread this theme is.
My point is… hide the decline for overall temps is not the point; that was the divergence in the backward-looking reconstructions. The question on the table is what the “homogenizations” are and the logic behind them. One theory evolving is that they modify to fit the narrative. Other theories are that these are necessary and innocent adjustments to meet changing conditions at each station, but so far no one has been able to produce the documentation for this. Good science demands the ability to reconstruct the decisions. The facts will set us free…

–bill (02:16:14) :
steven mosher (23:49:41) :
I think if you look back at the FOI requests CRU very early on said that the free data was already available at GISS / GHCN. It’s just that no one believed them–
That’s not accurate. A bunch of free data has always been “available”. What has never been available, among other things, is what data out of that pile CRU actually used.

an B (16:40:28) :
–bill (02:16:14) :
steven mosher (23:49:41) :
I think if you look back at the FOI requests CRU very early on said that the free data was already available at GISS / GHCN. It’s just that no one believed them–
That’s not accurate. A bunch of free data has always been “available”. What has never been available, among other things, is what data out of that pile CRU actually used.

They gave McIntyre (or his group) a list of stations used some time ago. It is very easy to subtract the GISS/GHCN stations from this list. Those left are CRU-only stations.
In fact McIntyre has done it already in October 2007:
Argentina: CRU had quite a few stations not at GHCN.
Australia: nearly all match, but two CRU stations didn’t match GHCN – Maryborough and Brisbane Airport. Why these?
Austria – quite a few stations not at GHCN
Bolivia – a couple didn’t match
Brazil – a couple didn’t match
Canada – quite a few stations not at GHCN. I noticed a duplicate GHCN for Parry Sound, which is near Toronto and which occurs in two alter egos in GHCN.
Chile – a couple didn’t match. A couple were called “UNKNOWN” in the CRU list. Perhaps they are connected to the UCAR “Bogus Stations”.
China – quite a few stations not at GHCN
Denmark – a couple didn’t match
Dominica – a couple didn’t match
Germany – one didn’t match
Finland – one (Kuopio) didn’t match
Greenland – possibly a couple didn’t match
Guinea – one didn’t match
Iran – one didn’t match
Ireland – one may not match (Phoenix Park)
Israel – a few don’t match
Italy – a couple don’t match
Kyrgyz republic – one doesn’t match
Netherlands – a couple don’t match
Norway – a few don’t match
Oceania — a few don’t match
Peru – one doesn’t match
Sweden – a few don’t match
Syria – several don’t match
Taiwan – quite a few don’t match
UK – a couple don’t match (Kirkwall, Wick)
USA – about 25 don’t match e.g. Moroni, Lahontan
Russia – a couple may not match
http://climateaudit.org/2007/10/03/a-first-look-at-the-cru-station-list/
So what is your beef now!

an B (16:40:28) :
–bill (02:16:14) :
steven mosher (23:49:41) :
I think if you look back at the FOI requests CRU very early on said that the free data was already available at GISS / GHCN. It’s just that no one believed them–
That’s not accurate. A bunch of free data has always been “available”. What has never been available, among other things, is what data out of that pile CRU actually used.
They gave McIntyre (or his group) a list of stations used some time ago. It is very easy to subtract the GISS/GHCN stations from this list. Those left are CRU-only stations.
In fact McIntyre has done it already in October 2007:
….
So what is your beef now!

Don’t know what their beef is, but I can tell you what mine is. I had asked in the FOI request for a list of the stations used, and which data for each station they had used. They replied with a list of the stations but didn’t say where they got the data. Was it from the Aussies? From GHCN? From the WMO? Half an answer is no answer at all …

bill (02:16:14) :
Bill, as one of the people who wrote an FOIA request to CRU, I will tell you that I asked for the data AS USED. Not a representation that they used GHCN, but rather the ACTUAL DATA AS USED. You see, they could point me to GHCN, but I still need to verify that they used it, that they downloaded it properly, that they ingested it properly into their code.
See Willis’ responses as well. Basically I had seen GISS screw up data ingesting from USHCN.
Try this. Next time you get stopped by a cop and he asks for your license, give him the number of the DMV and tell him you got your license there.

We have 3 different sites for Darwin (oops, make that Wellington, NZ!). So why not simply take the three relevant intervals (excluding overlaps) and treat them as if they were three different stations entirely?
Look at the three different raw trends and combine them (on a pro-rated basis). It seems to me that combining the graphs first and then trying to dope out the trend is an utterly wrong way to go about it.
First figure the trend (as outlined above). Once one has figured out the composite trend, then, and only then, combine the graphs (color coding them for station moves) using the correct overall trend as a guide.
The result is a graph for data adjusted only for station moves. (Other adjustments might be called for.)
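The approach above — fit each interval on its own, then combine the slopes on a pro-rated basis — can be sketched in a few lines. The segment data and the length-weighting scheme below are illustrative assumptions for the method as described, not a claim about how any official product combines records:

```python
# Sketch of the "treat each interval as its own station" idea:
# fit a trend to each raw segment separately, then combine the slopes
# weighted by segment length (the "pro-rated basis"). Pure stdlib.

def ols_slope(years, temps):
    """Ordinary least-squares slope of temps vs years (deg per year)."""
    n = len(years)
    ybar = sum(years) / n
    tbar = sum(temps) / n
    num = sum((y - ybar) * (t - tbar) for y, t in zip(years, temps))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

def combined_trend(segments):
    """Length-weighted mean of per-segment slopes.

    segments: list of (years, temps) pairs for non-overlapping intervals.
    """
    total = sum(len(years) for years, _ in segments)
    return sum(ols_slope(y, t) * len(y) for y, t in segments) / total
```

Only after computing the composite trend this way would the segments be plotted together, color-coded for station moves, as suggested above.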

Nick from NYC,
I’ll reply more fully tomorrow. Did a first-pass look. Not totally sure about the data sources for the AIS data you linked and how they relate to the GISS data; that could be really important to resolving your question.
I’ll look at it some more before a better reply. My first impression is they are almost inverse between GISS and the AIS source. Interesting. Let me poke some more.
When I plot the GISS “raw” vs “homogenized” I again got data nudged towards a common curve type. But your AIS sources may say otherwise…the joy of open review!

Green RD Manager (15:12:22) :
Ian,
Graph the change values they made to the raw over time for the station you are interested in. My interest now is are they forcing all raws to a rough model curve. What has shown up for curve modification is a stepwise adjustment to stations that does not fit any UHI or station change reasoning that I can find.
That has generally meant at least in my region:
Make the 1900 to mid 60s area flatter. Sometimes they must push up sometimes they must push down. Make sure the 1930s are not well above the 90s. Ensure the late 20th century has a steep ramp. I want to know how widespread this theme is.
Chalk up another for the flattening-the-past theme. Here’s what Calgary Int’l looks like: http://tinypic.com/r/2isd3c0/6
I added a 15 year moving average to both the raw (in blue) and adjusted (in red) data and what they’ve done is quite striking. The warm “blip” between roughly 1915-40 has been adjusted down by as much as a full degree C while leaving the cold years in the late 1800’s and warm years post 1970’s more or less alone. Niiiiiiice.
I haven’t had any luck digging out metadata on the site or even spotting the weather station on GoogleEarth for that matter. The coordinates are listed as 51° 6.600′ N, 114° 1.200′ W according to Canada’s National Climate Data and Information Archive if someone else wants to give it a go. This plots smack dab on the road that loops right past the current terminal building. It’s a veritable sea of concrete and asphalt. Perfect site for a weather station. /sarc
The adjustment curve itself is an interesting shape.
• The early years are a constant -0.1C.
• 1902 is the start of the steep decline
• 1912-1919 – the greatest negative correction to the data set bottoms out at -1C over this period
• 1920-1985 – slowly bringing the adjustment back up to 0C
• 1986+ requires no adjustment
So UHI (or something) was apparently just dreadful in the early part of last century, but then whatever the problem was… well it seems to have gone away and requires no correction by the end of the (warming) data series?! Maybe GISS has a perfectly valid reason to do this, but it certainly makes me suspicious when basically *just* the inconvenient warm years require such a large downward adjustment and the endpoints aren’t shifted much. Meanwhile, the proliferation of airport tarmac through the 70’s and 80’s doesn’t require a correction? Go figure.
If you add up enough of these squirrely “homogenized” data series from around the world where the past has been artificially lowered, then no wonder we’re talking about tipping points, melting ice caps and crippling taxation. Talk about gaming the system. Oh and this station also apparently falls into the ‘dropout’ category since NCDC/GISS only have data up to 1990 while the weather station is indeed still up and running… It’s snowing. 😉
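For what it’s worth, the piecewise adjustment shape described above can be reconstructed as a small function. The breakpoint years and values below are simply read off the comment, and linear interpolation between them is my assumption, not anything documented by GISS:

```python
# Hypothetical reconstruction of the Calgary adjustment curve shape
# described above. Breakpoints are (year, adjustment in degrees C),
# taken from the comment; values between breakpoints are linearly
# interpolated.

BREAKPOINTS = [
    (1880, -0.1),  # early years: a constant -0.1
    (1902, -0.1),  # start of the steep decline
    (1912, -1.0),  # greatest negative correction bottoms out...
    (1919, -1.0),  # ...and holds through 1919
    (1985,  0.0),  # slow recovery back up to 0
    (2020,  0.0),  # 1986 onward: no adjustment
]

def adjustment(year):
    """Piecewise-linear adjustment at a given year."""
    if year <= BREAKPOINTS[0][0]:
        return BREAKPOINTS[0][1]
    for (y0, a0), (y1, a1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if y0 <= year <= y1:
            return a0 + (a1 - a0) * (year - y0) / (y1 - y0)
    return BREAKPOINTS[-1][1]
```

Plotting this function alongside the raw series makes it easy to see the point being made: the large negative corrections land on the warm 1910s–1930s while the endpoints are barely touched.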

Green RD manager
California station adjustments
Good work. Those AGW hacks who protest that, when all the dust settles from accusations of fraudulent climate adjustment, the climate record will be little changed, are missing or obscuring the subtle nature of the fraud that really is happening. The most dangerous lies are those close to the truth. The ocean-driven multidecadal cycles of climate – that AGW-ers contemptuously and dismissively refer to as ‘natural’ – have always caused oscillations in global temperature. Last century, roughly speaking, temps rose up to 1940, then fell till about 1970, then rose again till a little after 2000. This recent cyclical rise is being presented as CO2-related AGW.
It is important to understand that the massaging of both the temp data and – equally importantly – the setting of the goal posts and the temporal frame of reference of the debate, needed by the AGW-ers to argue their case, is limited and subtle. They ride on the 1970–2000 Atlantic multidecadal upswing. The 40s–70s decline does require a little hiding, not too much of course, just enough to make it look more like a plateau. The highs of the 30s – in reality close to our current ‘warmest’ decade – need a little downward pressure. But only a little; this is the important point. And the frame of debate must be post-1900 of course; they really do need, as they so charmingly stated themselves, to ‘lose the MWP’. That part is easy – it is natural for attention to focus on living memory and the recent period with the technology available for accurate temp measurement.
Thus the importance of the careful detailed observations of Green RD manager and California station adjustments. The adjustments observed are not a crude blatant twist to hotter now, colder then. Just clever massaging to manufacture simple conformity to a plausible ‘scientific’ story – if not a ‘party line’ then a ‘party curve’.
To be honest, my instinct from the biomedical sciences, where I work, is to be appalled by any sort of quasi-arbitrary data adjustments. In the drug-testing clinical/pharmaceutical field it is the sort of thing that can land you in prison. Data is sacrosanct, and even if it is clear that a bias or problem exists, data are NEVER altered. If they are, they are no longer data.

Nick From NYC,
I dug some more on AIS. They claim to be using GHDC and HadCrut…;-). Selecting the GHDC data set is what I assume you did. GHDC claims to be an aggregate of numerous data sets (they had a long list on the website). Much of it appears to be “value add” but the list is too summary to state definitively for any given set.
When I follow your links and use AIS myself, I get what you sent in the links. So I can reproduce what you talked about. However I am unable to trace the data set down to the station and back to reconcile with GISS output.
So, while the AIS tool says raw vs adjusted GHDC, I cannot reconcile AIS to station data via GISS. Nor can I understand what is their basis for raw vs adjusted. I am not in any way claiming the tool is bad. I’m just saying I cannot follow the trail from claimed data set to their output to try to reconcile what I see using the GISS server.
One idea could be we are seeing homogenization and then further “value add”. Another idea is garbage in – garbage out. Or it could be I just don’t get it…:-).
I did plot that station raw vs adjusted on GISS. I again got a similar stepwise adjustment over time and the same polynomial curve shape. I’ve added it to my growing collection. The plots you sent also had the same final shape, but were overall higher. The huge difference was that the raw data for AIS already had the “standard curve”, so it did not make the dramatic transformation that GISS raw (basically a straight line) vs GISS homogenized (standard curve) did. Hence my (unconfirmed) thought that GHDC via AIS may already have some value add in their raw.
Sorry I could not do better. Maybe someone can help us on how AIS, GHDC and GISS sit in relation to each other, and on their adjustments. We really should know how many value adds are being put in, and what they are. Open processes demand this.

Deb,
I followed the link. Looks about the same!
I’m trying to get a standardized plot for apples to apples.
In Excel, I select the custom charts. I used two lines with two axes: the left axis is the temps, the right axis is the adjustment. For the trend lines, I put in a degree-6 polynomial curve. I put the raw in red for the curve and the adjusted in green. For the temps I try to keep the range to about 5 degrees so the amount of adjustment can be seen…
This may change, as we want a useful presentation that does not distort anything.
Let’s see what others find.
Importantly, Nick from NYC highlights that the full linkage from raw to adjusted to IPCC output needs to be understood. But it seems some good folks have been trying to figure that out via FOIA for years. A fully open process where the key players were not making it deliberately hard to figure out (as the CRU emails showed they were) would answer this….:-).
If you do more sites or adjust that one, let us know.

The “Frontline World” advertisement on the front page must be removed at once. This is a link to a hysterical green-globalist site. Exactly the opposite of everything this blog is about. I am flabbergasted. It has to be removed quickly!!

Algore famously had a questioner’s microphone shut off to avoid embarrassment.
Now, Journalist Phelim McAleer asks Prof Stephen Schneider from Stanford University an Inconvenient Question about ‘Climategate’ emails. McAleer is interrupted twice by Prof Schneider’s assistant and UN staff and then told to stop filming by an armed UN security guard.
Compare with the honesty and integrity of the prominent skeptics, who do not avoid confrontation.
How do we make it short? Bring in the muscle.
Armed Response to ‘Climategate’ question

Here is a simple way of framing the discussion on data corruption. Show two graphs from Willis’ example above.
1) Here is a graph of the raw temperature data.
2) Here is a graph of the same temperature data on drugs.
Just say no to drugs!
Shiny
Edward

Ed Scott: I was just about to post a link on Prof. Schneider’s conduct. You beat me to it. This is disturbing, if not at all surprising.
Please excuse the political reference. Green is but a means to an end. This walks like a fascist duck and quacks like a fascist duck (yes, the term is oft misused, but this is accurate by definition).
1] Exalts nation [insert Green ideology for nation] above the individual,
2] Uses violence [ELF and ALF for example] and modern techniques of propaganda [Prophet Gore et al] and censorship [Denier!] to forcibly suppress political opposition,
3] Engages in severe economic [Cap & Trade] and social regimentation [EPA CO2 ruling, etc],
4] Engages in corporatism, [what corporation is not ‘going green’?].

Willis Eschenbach caught lying about temperature trends
Posted on: December 9, 2009 2:21 PM, by Tim Lambert
Remember how the New Zealand Climate Science Coalition made the warming trend in New Zealand go away by treating measurements from different sites as if they came from the same site? Well, Willis Eschenbach has followed in their footsteps by using the same scam on Australian data. . .

If, as was the case in the 1970s, the scientific consensus was that we were going to freeze to death, rather than fry, would we be talking about “hide the incline”?
I kind of suspect that it would be.
I also suspect that Carbon Dioxide would still be Public Enemy Number One because (a) it’s taxable, (b) the climate models would incorporate a high negative feedback wrt water vapour, (c) the physics would be unarguable and the peer-reviewed literature irrefutably robust, (d) lowering the temperature of the planet would depress the ability of the biosphere to sequester carbon, thereby increasing the atmospheric concentration of this “harmful pollutant” refrigerator gas.
There’s a saying in Scotland that he who pays the piper calls the tune. In other words, provide the carrot of funding to prove a point, and that point will be proven.
No need for conspiracy, the carrot is enough for the willing. The stick is only required for those who refuse to toe the party line!

As a bonus, the UHI effect would give a sensible rationale to reducing recent temperatures while increasing earlier readings.
Unlike the current orthodoxy, which does the opposite and seems to think that upside down is the same as right way up – e.g. Tiljander, anyone!

bill (17:38:01) :
Willis you seem to have ignored my analysis so I will repost it. Please comment.
Yes, Bill, Willis has been rather quiet. Some cumulative questions I’ve posted:
1. (16:48:04) 12/8 – What is the relation between the IPCC pictures posted and the GHCN corrections to Darwin? Do you agree that the IPCC is based on CRU, and that the CRU set shows no such corrections to Darwin?
2. Sticky thread (15:53:24) 12/9, and here (05:11:33) 12/11 – Any comment on the observation that the nearby stations do not seem to have big corrections?
3. (04:34:46) 12/11 – Any comment on Blair’s specific information on the Darwin site history, and particularly on the explanation for a big change pre-1941?
And perhaps to add one – any response to the rather detailed analysis of The Economist?

On the face of it, some of the temp graphs posted here suggest that you don’t even have to use the “urban heat island effect” as an argument (of course there is a UHI effect, but even without it the data seem to show a cooling at many stations). That is, the temps from 1880 onward seem to have been much higher and to have been deleted so as to show only a more recent warming?

I am curious as to why Anthony would keep this posting at the top for so long, as this has not been done before. To me it means that he is convinced of it and that it is very important. As a meteorologist, I assume that he has really thought hard and analysed the data on this one. I still would like to see a detailed response to this: http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists
and I am pretty sure there will be.

VG (02:43:01) :
“I still would like to see a detailed response to this: http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists and I am pretty sure there will be.”
I’m trying to be as objective as possible about the issue of the adjustments to Darwin Zero, and the more I look into it the more confused I become. I followed the link provided by VG above (thank you, VG) and read the article, which is critical of Willis Eschenbach’s post. This part caught my eye.
“Mr Eschenbach complains about the GHCN’s adjustments, saying that there are too few nearby stations to make an adjustment: “The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period.” He’s talking about the Daly Waters Pub, but he’s inexplicably wrong. The GISS website he used for his own data shows two closer stations operating in 1941, Katherine Aer (272 km) and Wyndham Port (454 km). Both had temperature data series that ran for a long time.”
OK, so let’s agree then that the Katherine Aer station at 272 km is the closest. The author of the article makes some play about the intellectual authority of PhDs using supermaths to be able to make adjustments based on locality, and he even cites a formula, which he admits he does not understand, to explain how clever these people are, even though the methodology behind the formula is rejected.
As a simple check on what adjustments might be made to Darwin Zero as a consequence of the proximity of Katherine Aer, I plotted the raw data for the two stations together, between the years 1941 and 1984 (the full length of the Katherine Aer data):
http://tinypic.com/usermedia.php?uo=t2QD%2BVcjymV7QaPNV9kaVoh4l5k2TGxc
Series 1 is Darwin, Series 2 is Katherine Aer
Why am I confused?
Well take a look and will someone please tell me what it is about the raw Katherine Aer data that could justify any change to the raw Darwin data. Or is it a case that the raw Wyndham Port Data is used to justify changes to the Katherine Aer data which has a “knock on” effect to the Darwin Zero data.

VG (02:43:01) :
‘I am curious to why Anthony would keep this posting on the top for so long as this has not been done before.’
“This isn’t from the GHCN, or from the East Anglia Climate Research Unit. It’s from the Australian Bureau of Meteorology (BOM). They conduct their own “homogenisation” of Australian temperature data sites, independent of the GHCN. And here’s the temperature plot that the BOM came up with for the Darwin site:” From the “The Economist”
My question would be: has the Australian Bureau of Meteorology’s adjustment method been V&V’d (verified and validated)? Or are they using an offshoot of the URC adjustment?

We do indeed need a detailed response to http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists
REPLY: I agree. Oddly, there is no author name attached to the Economist piece, last time I looked, which is strange and, in my opinion, poor form. It appears that it was written mostly by Tim Lambert. Lambert, of course, deserves no attention, because he’s shown himself to be juvenile and rude by immediately claiming “lying” in his title. He works at a university; I suggest he look at their publishing standards.
The Economist piece, however, makes no such juvenile claims and gets a wider audience, and is well worth rebuttal. Perhaps Willis will rebut; I’ll suggest it. -Anthony

“Yes, Bill, Willis has been rather quiet. Some cumulative questions I’ve posted:”
Since you’re mentioning it, I’m still waiting for Mann/Jones to answer some cumulative questions posted all over the intarnet…

RoHa (07:13:26) :
We do indeed need a detailed response to http://www.economist.com/blogs/democracyinamerica/2009/12/trust
OK, riddle me this, because I’m still confused about the rationale behind the arguments in the Economist’s article.
I am sure that the station change to Darwin airport in 1941 and the adjustments applied as a result, are a “red herring”. Although suspicious, it clouds an issue which I believe has a more significant impact on the reliability of adjusting temperature data.
In the article, three stations are quoted to refute Willis Eschenbach’s post by suggesting that the data adjustments applied to Darwin can be justified on the grounds that nearby raw station data (Katherine Aer-272 km ,Wyndham Port-454 km and Kalumburu-approx 500 km), demonstrates a warming trend that is not apparent in Darwin.
I would dispute this. For as long as the raw data are available, Darwin has an almost identical trend to both Katherine (1941 to 1980) and Kalumburu (1945 to 1992):
http://i45.tinypic.com/30bg4xz.jpg
(series 1 is Katherine, series 2 is Kalumburu and series 3 is Darwin)
The odd one out appears to be Wyndham Port, which does display a divergence of approx 0.4 of a degree between 1941 and 1963. However, during this time the data adjustments applied to Darwin by GHCN increase the temperature by a whopping 0.6 of a degree. This excludes any possible issue relating to 1941, and shows that either the Wyndham Port data has a totally disproportionate influence on the temperature metric for the area or… I’m missing something obvious!
What’s up with that? You tell me!

I would love to see how the NIWA data has been “value-added.” I understand that the raw data in NZ shows no real trend, but when adjusted, suddenly the trend appears. I would love to know what percentage of data points have been adjusted to increase the slope vs those that decrease the slope.

I might add that if you want the data that Torok used, and his methodology, you can find it all here: ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/
“The files in this subdirectory are associated with the Australian
High Quality Temperature Data Set
The directory should contain these files
UNIX FORMAT
readme ‘This file’
method.utx ‘Outline of the method used to prepare the data sets.’
alladj.utx.Z ‘List of adjustments made to the data
and reasons for adjustment.’
finaln.utx.Z ‘Data file of minimum temperatures’
fianlx.utx.Z ‘Data file of maximum temperatures’
Files ending in .Z have been compressed using the unix compress
command. To uncompress them type uncompress FILENAME at the
unix promt once you have transferred the file to your system.
“

Is it just my imagination, or are warmists actually saying that no matter what the emails say, the North Pole is still losing ice and therefore AGW is still true, yet when ice cores show the MWP and warmer periods far into the past, that is just regional weather and not climate?

So 0.6 and 1.0 are within the margin of error, as BOM sees it?
End of discussion, then, I’d say.
Even if Eschenbach is completely wrong and the adjustments are totally aboveboard and well thought-out (a big if), the output is hardly usable to look for a half-degree signal.
I believe the Economist guy when he says he’s bad with numbers.

Back to work trying to finish off the basic analysis and graphing of 190 years of high/low temp data for Minneapolis/St. Paul.
General feel to this point: NO SIGNIFICANT CHANGES.
Waiting to statistically prove that. (Or demonstrate it.)
This AVERAGE TEMPERATURE NONSENSE drives me nuts! It is UNREAL AND REPRESENTS USELESS DATA.
YOU CANNOT “AVERAGE” TEMPERATURES.
“Hottest October” in Darwin. SO WHAT. That proves NOTHING. It’s called WEATHER.
Max

Does this help? Quote from article.
‘The new station located at Darwin airport from January 1941 used a standard Stevenson screen. However, the previous station at Darwin PO did not have a Stevenson screen. Instead, the instrument was mounted on a horizontal enclosure without a back or sides.’
Incorrect. Go to: http://www.warwickhughes.com/blog/?p=302
Scroll down to see a picture of a Stevenson screen at Darwin PO in 1890. I don’t know about the back, but it does have one side.

I just arrived in this thread and haven’t scanned it fully, but surely someone made the point to The Economist that any corporation whose accountants revised long-past revenue records downwards to create the false impression of impressive growth to boost its current stock price would find itself in deep, deep trouble.
All this makes me wonder whether weather station volunteers will be in increasingly short supply since their diligent efforts at accurately recording data each and every day produce a record that’s given less respect than toilet paper. If the data is going to be thrown away or distorted beyond recognition, why not just flip to the Weather Channel, pop open a beer, make a guess about local conditions, and write it down instead of getting up off the couch and heading out to the calibrated thermometer?

janama (15:16:22) :
I think it is most unlikely that GHCN uses BoM adjusted data. There’s no evidence that they do, and it wouldn’t be consistent with their methodology. And you can check the case of Darwin – BoM adjusted, like GHCN adjusted, has an uptrend – GHCN raw has a downtrend.

Will,
I normally just lurk and don’t post much because I have nothing of value to add in the science area, but I do have something of value to say about your rebuttal. First, well done on the merits of the argument. But second, your response is probably more likely to receive a favorable hearing and thus publication if you take out the emotionally loaded terms like scurrilous and the disparaging remarks about the writer’s math skills. Just rebut the arguments on the merits and leave the drama out. They won’t be able to hear what you are saying through the anger. Just a thought. Get a good night’s sleep and do a rewrite.

I actually think that the “emotionally loaded” language is somewhat appropriate, as the tone of the Economist article is condescending and the anonymous author, despite his lack of maths skills, bemoans the idea that he’ll have to personally debunk, at the cost of many hours’ work, future articles that reveal the bias of station temperature adjustments. Of course, he has no intention of doing this. I think it is fair to draw readers’ attention to his hypocrisy in slagging Eschenbach’s honest work, all the while appealing to the authority of established climate scientists and their peer reviewed work, due to his own lack of credentials in either stats or climatology. I am way out of practice, but do have the memory of the general content of a first-year science stats course, and agree that there’s no need for a Ph.D to do this stuff.

Mr. Cowardly Anonymous says:
“Average guys with websites can do a lot of amazing things. One thing they cannot do is reveal statistical manipulation in climate-change studies that require a PhD in a related field to understand.”
I would suppose that depends on the average guy. Regardless, there have been a lot of people with degrees in “related fields,” if not the field itself, uncovering a lot of “statistical manipulation in climate change studies.”
This we know for a fact.

Willis,
On your argument about five nearby stations – as I read the overview doc, they describe three techniques for diminishing inhomogeneities in the reference series. The second and third use five neighbors (the third selects three), but it doesn’t seem that you have to be able to use all three techniques.
In the 1941 change, there are more details in my post (05:11:33) 12/11 on the other thread. Apparently there were problems at the PO prior to the 1941 move with trees shadowing the measurement site, which caused cooling and accentuated the discontinuity of the move. The description there by the Australian scientist (Blair) is helpful.
As you say, GHCN does not correct on the basis of known events. But it tries to detect and estimate these events, and so one hopes they will show up in the corrections. The year may be wrong, but the general effect should correspond. As Ripper (21:00:43) pointed out on the other thread, BoM does correct using metadata events, which he listed. I commented further at (03:38:08) 12/12. The general pattern of their corrections is somewhat like GHCN inferred, although the years are fairly different.
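To make the detect-and-estimate idea concrete, here is a toy sketch (emphatically not GHCN’s actual algorithm, just the general idea): difference a candidate station against the mean of its neighbors, then find the split year that maximizes the jump between the before/after means of the difference series:

```python
# Toy changepoint detection on a candidate-minus-reference difference series.
# Not the GHCN method; just the general detect-and-estimate idea.
def detect_step(candidate, neighbors):
    ref = [sum(vals) / len(vals) for vals in zip(*neighbors)]
    diff = [c - r for c, r in zip(candidate, ref)]
    best_i, best_jump = None, 0.0
    for i in range(1, len(diff)):
        before = sum(diff[:i]) / i
        after = sum(diff[i:]) / (len(diff) - i)
        if abs(after - before) > best_jump:
            best_i, best_jump = i, abs(after - before)
    return best_i, best_jump

# Synthetic data: a shared regional signal, plus a 0.8 C drop in the
# candidate starting at index 10 (e.g. a station move).
signal = [0.1 * (i % 5) for i in range(20)]
neighbors = [signal, [s + 0.2 for s in signal]]
candidate = [s + (0.0 if i < 10 else -0.8) for i, s in enumerate(signal)]

idx, jump = detect_step(candidate, neighbors)  # finds the break at index 10
```

The shared regional wiggle cancels in the difference series, so the artificial step stands out cleanly; with real, noisy series the detected year can easily differ from the metadata date, which is the point above about the years being “fairly different.”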
But of course, the big issue is the analysis of Giorgio Gilestro.

The Economist has a running debate series. Anyone can participate. This week’s debate question: “This house believes that governments should play a stronger role in guiding food and nutrition choices”
Other debate questions are equally lame or ridiculous. Compared with the commenters on WUWT, the folks voting on the question and the posted comments seem to have been educated on television and not much else. It’s a shame, because the Economist used to be a very free market, libertarian-type of publication. A good antidote to rags like Newsweek. But then they became infected with the AGW virus, and it’s been downhill ever since.
Here is the comment that currently has the highest number of votes in this Economist debate:

Mogumbo Gono wrote:
Dear Sir,
This proposal does not go far enough. But it is a necessary start. And food calories are only half of the equation. The other half is the necessity of daily exercise. Fit people are happy people.
The government has to become involved, or some people will be tempted to back slide. The first step must be the creation of a Department of Obesity, which will add many thousands of high paying new jobs, thus also striking a great blow against unemployment.
The federal Department of Obesity should be a model for the States. Devolution of government power over obesity to the States is essential to combating local obesity, and also to provide additional public employment at the State level. City Councils should weigh in by forming obesity committees and sustainable local outreach programmes led by community organizers.
More employment will be created by hiring additional government workers to police highways, bridges and borders, to spot check vehicles in an effort to control the inevitable smuggling of contraband, such as ice cream sandwiches, mayonnaise [except for the light kind], and dark chocolate. Food addicts will be displeased, but they will eventually see that the government’s actions were put in place for their own benefit. Is it not the government’s duty to care for all citizens, and make them healthy? I say yes.
Of course farmers, who comprise only two per cent of the population, will be negatively impacted due to less food being consumed. Therefore, government aid to the affected farmers must be taken into account.
The payments to farmers for not growing food would easily be offset by the additional obesity employment at all government levels, as those new employees would pay new taxes. No one has such a job now, so the additional tax revenue from their salaries would make government food regulation a net benefit to the government, helping to reduce our national debt, while at the same time reducing our waistlines.
The question of daily exercise must also be addressed. I would propose that every healthy person under age 30 must run five miles in forty minutes each day, at least five days per week. Those between 31 – 40 must run four miles each day, and so on. Once the federal government takes over everyone’s medical care, government doctors should be authorized to grant exercise exceptions if warranted. Of course, care must be taken that those contributing to election campaigns are not given undeserved exercise excuses. Perhaps decals to use carpool lanes, but nothing more.
Finally, the government must eliminate all foods containing carbon, because the EPA has judged carbon to be a pollutant. The sooner carbon is eliminated from our environment, the sooner we will have defeated runaway global warming and climate catastrophe. Win-win!
With the benevolent government guiding our food and nutrition choices, and by following these recommendations, our great new society will achieve zero unemployment, high tax revenues, and a very healthy populace. Everybody will be happy, this I promise you.

“Because in order to judge that, you would have to have a graduate-level understanding of statistical modeling. … I don’t understand that formula. I don’t have the math for it.”
Sounds like Gavin’s work then.

This might make sense if there were any “dramatic change in 1941”. But as I clearly stated in my article, there is no such dramatic change. The drop in temperature was gradual and lasted from 1936 to 1940. The change from 1940 to 1941 was quite average. So that claim of yours is nonsense as well. In any case, the change in screening did not coincide with the 1941 move. In my article I cited a reference to a picture of a Stevenson Screen in use in Darwin at the turn of the century. Perhaps you didn’t bother to read that.
Are you sure you are not looking at yearly, filtered data when you state there is no change in 1941 and that there was a slow change over a number of years?
My plots clearly show a January 1941 drop. There is also a June-July 1994 drop of similar size:
http://img37.imageshack.us/img37/9677/darwingissghcn.png
http://img33.imageshack.us/img33/2505/darwincorrectionpoints.png
You claim that the Stevenson screen was in place in the 1880s;
however, John Daly’s page http://www.john-daly.com/darwin.htm
says that the screen was installed in 1941.
Compare this photo on Warwick Hughes’ page http://www.warwickhughes.com/agri/darwin1890.jpg
to http://img696.imageshack.us/img696/218/darwinpostofficebombed.png
War Correspondent Robert Sherrod, of Time magazine, in front
of the remains of the Darwin Post Office in June 1942: http://www.sandgate.net/~dunn/darwin02.htm
These do not look like the same building construction;
the 1880s building looks more modern than the 1942 one.
The 1880s building is single storey; the 1942 building has at least one first-floor room.
The 1942 building has solid stone walls. The 1880s building has more glass!
Do you still think these are the same?

pby (A Catalina?): temperature does not equal heat. To me, you would need a graph of temperature plotted versus time for a 24-hour day, and the area under the curve would give an accurate value of what is really happening.
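The area-under-the-curve idea is just a time integral of temperature. A sketch with made-up hourly readings, using the trapezoid rule, shows how a time-weighted daily mean can differ from the usual (Tmax + Tmin)/2:

```python
import math

# Invented diurnal cycle: flat 20 C at night, a sin^2 bump peaking at 28 C
# between 06:00 and 20:00. Hourly readings at h = 0..24.
hours = range(25)
temps = [20 + 8 * math.sin(math.pi * (h - 6) / 14) ** 2 if 6 <= h <= 20 else 20
         for h in hours]

# Trapezoid-rule integral over 24 hours, divided by 24 = time-weighted mean.
area = sum((temps[i] + temps[i + 1]) / 2 for i in range(24))
true_mean = area / 24
naive_mean = (max(temps) + min(temps)) / 2  # the usual midrange "average"
```

With this skewed cycle the midrange value (24 C) overstates the time-weighted mean (about 22.3 C) by more than a degree and a half; which one counts as “the” daily temperature is a convention, not a physical given.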
I come from an engineering background and just want to roll my eyes at what they’ve been trying to get away with, from simplifying the shallow water equations, which don’t even include a vertical dimension and violate even Newtonian mechanics (and which were written by a guy who wore a powdered wig and tube socks), to using the Navier-Stokes equations to model climate, even though Navier-Stokes is completely invalid when condensation or evaporation are occurring.
Then there’s the calculation of the Earth’s radiative balance, in which they crunch through Stefan-Boltzmann with the Earth’s average temperature of 287.15 Kelvin to get 385.5 W/m^2, whereas I can posit a planet with twice the sunlit temperature and absolute zero on the dark side, keeping the average temperature exactly the same, and get 700% more radiation output (eight times as much), assuming of course that both planets are blackbodies, which is itself invalid.
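The T^4 nonlinearity behind that argument is easy to check numerically with the figures quoted above: same mean temperature, eight times the emission (blackbody assumed throughout):

```python
# Stefan-Boltzmann nonlinearity: two planets with the same mean temperature
# radiate very different amounts (both treated as blackbodies).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T = 287.15  # K, the quoted mean surface temperature
uniform = SIGMA * T ** 4  # ~385.5 W/m^2, matching the figure above

# Half the surface at 2T, half at 0 K: mean temperature is still T,
# but mean emission is 0.5 * sigma * (2T)^4 = 8 * sigma * T^4.
two_sided = 0.5 * SIGMA * (2 * T) ** 4
ratio = two_sided / uniform  # exactly 8, i.e. 700% more
```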
Then there’s the almost complete ignorance of what goes on in the thermosphere, which shoots up from four to six Earth radii throughout the day, falling back down at night, which indicates a massive amount of radiative output above the altitude of low-Earth satellites. Upper atmospheric lightning questions (sprites), the question of what the South Atlantic Anomaly (where the Van Allen belt touches the atmosphere) does to the charge distribution of the atmosphere and how that might feed back into cloud formation (and which should vary over the same timescale as the solar wind superimposed over the wandering of the magnetic poles).
There are so many fundamental questions in climate science that have been tossed aside to give the appearance of certainty about CO2 that the field is suffering.

Willis,
Don’t get all worked up about the “anonymous” author in the Economist. They are all anonymous and always have been. The economist never publishes the author’s name for normal articles. I suspect many are not written by an individual anyway.
Concentrate on the science involved. And the lack of willingness of the economist to use someone with even halfway decent maths to cover a science story. But not the name, as that is a non-issue.

Maybe it’s just me, but gg’s analysis looks more interesting if you blow it up a bit: http://img191.imageshack.us/img191/448/histogram.jpg
This has been batted around a little elsewhere (I think Ryan O first brought it up at the Air Vent, but RomanM also mentions it). A 0.017 deg/decade adjustment bias is not insignificant.
As I pointed out at gg’s site, HadCRUT3 has a 1900-2000 trend of ~0.65 deg/century.
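Putting the two figures in the same units makes the comparison explicit:

```python
# Compare the adjustment bias to the century-scale trend in common units.
bias_per_decade = 0.017  # C/decade, from gg's histogram analysis
century_trend = 0.65     # C/century, the quoted HadCRUT3 1900-2000 trend

bias_per_century = bias_per_decade * 10      # 0.17 C/century
fraction = bias_per_century / century_trend  # roughly a quarter of the trend
```

So on these numbers the adjustment bias amounts to roughly 26% of the century trend, which is why it is “not insignificant.”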

I have pretty much given up on any science reporting in any of the MSM, but I’m a little shocked that even the vaunted Economist has come to this.
Willis suggests for the peer review: “So I, like you, support peer review. I just want a peer review system that works. It must be double blind during the review period, with neither the reviewers nor the author knowing the others’ names.”
This won’t help. In 4 out of 5 cases, I can guess who reviewed my paper, and I’m sure I could guess the authors of a “blindly” submitted paper with even higher accuracy.

Willis: “I just want a peer review system that works. It must be double blind during the review period, with neither the reviewers nor the author knowing the others’ names.”
Double-blind peer review has been experimented with in my field. To submit a paper we had to replace all real names of authors, projects and institutions with pseudonyms. Even so, any member of the field could still easily recognize the source of the paper and the true names. The cites at the end are usually a dead giveaway.
In an ideal Utopia, peer reviews would be double blind. Perhaps in large enough fields it works, but in a clubby area like climate it is just not practical to hide the identities of the authors from the reviewers.

Willis,
Coming in late to this conversation, so I have not read all of it, but I note that you quote the Australian BoM:
“…The new station located at Darwin airport from January 1941 used a standard Stevenson screen. However, the previous station at Darwin PO did not have a Stevenson screen. Instead, the instrument was mounted on a horizontal enclosure without a back or sides. …”
Here is a link to a picture, dated circa 1890, of the Darwin PO quite clearly showing a Stevenson screen. It may not be in the same position, but… http://www.warwickhughes.com/agri/darwin1890s.jpg
(Courtesy of Warwick Hughes: http://www.warwickhughes.com/blog/?p=302)

Bonnie: OK, who’s going to debunk this? http://www.startribune.com/science/79122057.html
The press is just waking up. Right now most are still in the phase of “Daily Planet’s in depth investigation finds that estranged ex-boyfriend is innocent of triple homicide based on our interview with estranged ex-boyfriend, who is amazingly cute, by the way.”

John M (18:47:25) :
0.017 C/decade is not so different from what was published. From the other thread: carrot eater (14:26:43):
There is a such a comparison for max/min temp data, if not the mean, in Peterson and Easterling, “The effect of artificial discontinuities on recent trends in minimum and maximum temperatures”, Atmospheric Research 37 (1995) 19-26.
For the entire Northern Hemisphere, for max temps, the raw data give a trend of +0.73 C/century; the adjusted data give +0.92 C/century. So there is some difference in the max temps. But for the temperature minima, raw and adjusted are essentially the same, at +2 C/century.

So GG says 0.017 C/decade; P&E say 0.019 for max and 0 for min, average 0.0095. Not huge, but not nothing.

Ah, the Economist. It seems the financial elite who run this paper don’t want to expose the carbon credit/global tax plans, because many will lose billions in their environmental investments that rape countries globally.

Bill
The references you cite, specifically the letters to Daly, also blame the 1941 change in temperatures on all the smoke and dust thrown up during the Japanese bombing of Darwin. I’m still trying to wrap my brain around that one.
You posted a photo of the station from the 1880s which shows a Stevenson screen, yet you don’t mention it, instead mentioning another e-mail John Daly got saying the screen was installed in 1941. How did a screen installed in 1941 end up in a photo from the 1880s? If all of the e-mail received by Daly is gospel, what about the idea that the Japanese bombing shifted the temperatures?
The e-mail that says the change was due to the screen not being at the old, bombed post office also says that this change was responsible for the cooling from 1939 to 1941. How does installing (or re-installing) a screen after a Japanese bombing in 1941 change the temperature in 1939?
Maybe I’m skeptical. Maybe I’m just not a complete idiot. Regardless of whether you think scientific certainty comes from scientific consensus, surely to God you don’t think it comes from a consensus of contradictory and factually inexplicable e-mails received by scientists.

Bill, I am dealing with GHCN data and adjustments. The GHCN adjustments do not match the GISS adjustments.
Regarding the war and the Stevenson Screen, both are red herrings. First, the decline in temperature started five years prior to the move, and six years prior to the bombing.
More to the point, GHCN does not use local changes in their adjustment of the temperature.
w.

*Sigh* I started reading The Economist in 1987 and I had not missed a single issue until a few years ago when they went through a metamorphosis of sorts. Haven’t bought another issue, even to read on the plane since then.
They are just being true to their new form. As an economist, I no longer think they deserve the name.

Sent to the Economist: “Why are Sceptics Demonized?”
Dear Editor,
Regarding your Dec. 11th piece Scepticism’s limits, I am compelled to write in complaint.
As I work in the field of scientific research directly, have read many, many of the “Climategate” emails in question, and have become quite familiar with the issue of temperature stations, whether MMTS or Stevenson screen, along with their respective siting and data management, I was quite disappointed in your piece.
I can’t imagine that you actually conversed with Willis Eschenbach prior to publication, given your commentary. He clearly made some small errors that amount to typographic minutiae when judged in the whole, and I’m sure he would have promptly and gladly corrected them.
Your commentary, however, is biased to the unacceptable. To the uninitiated, you would appear quite knowledgeable, and you clearly have a respectable level of comprehension, but that is exactly what makes your piece so unacceptable. You have not really done your homework. The factual basis for Eschenbach’s claims goes far beyond the few weather stations discussed here. Even a weather station that I happen to live fairly close to and am familiar with, sited correctly and with a long and stable history, has had its data manipulated in a fraudulent manner just recently. Orland, California, USA, to be specific, and I have the original raw data to reference.
If I can’t trust you to report fairly and honestly, with proper journalistic due diligence on a topic I know, then I surely can’t trust to read reliable journalism on anything else you publish.
Not being able to trust science was bad enough, and now this.
As was your editorial,
Unsigned

John M (19:18:43) :
On NH/global, I’m just showing that GG’s figure isn’t out of line with what has been published.
Remember, Willis’ claim, underlined, was: “Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming.”
A finding of a 0.017C/decade rise is noticeable, but won’t sustain that sort of charge. Actually, the shift to automated stations is probably sufficient on its own to account for the rise.

Japanese bombing of the PO was in 1942; the temp dip is January 1941.
The 1880s photo of Warwick Hughes looks more like Government House, not the “Old PO”.
The bombed-out photo shows signage on the wall and looks like a less up-market building. There obviously could have been reconstruction of the PO between the 1880s and 1942, but would they have rebuilt it so solidly?

Willis,
You’re missing Bill’s bigger point. If the Japanese bombed Darwin with a retro-temporal climate bomb that not only affected temperature throughout 1941 and back to 1939, the blast effects could well have knocked back temperatures five and six years prior, given my understanding of retro-temporal blast physics. The big question is why they didn’t use the weapon on Pearl Harbor and damage the US fleet back to the mid-30s.
Remember, we’re not dealing with real science. We’re dealing with AGW warming science, which is kind of like Star Trek, Star Wars, or Stargate science without all the layers of fan-boy continuity checks, reality checks, and review.

What if an Internet tool could be created to break the barrier between divided Internet blogs talking about the same subject such as that of climate? That day has arrived.
Details here;
Tool to Break the Divide of the Left Right Paridigm.http://www.dailypaul.com/node/118836

Great response and original article, Willis.
The Economist has made their position clear so many times, that if AGW is not true, they have an awful lot of egg on their face. They really, really, want skeptics to just go away. Their attack does not seem at all surprising to me.

“I thought journalists were under an obligation to check their facts before making accusations …”
Since when are journalists supposed to accuse? If they were doing their basic journalistic job, they would at least, as a minimum, REPORT the news, nothing less and nothing more.

Interesting: Slashdot has been picking up on it. http://science.slashdot.org/story/09/12/12/2246208/The-Limits-To-Skepticism
While looking through the thread you may notice a lot of bad posts and moderating down of skeptics. It’s actually a positive if you look at it from a relative perspective. Just a year ago you wouldn’t have seen so many skeptics speaking up.
I think the main effect we are seeing post CRU e-mails is that the many skeptics who were already there are emboldened to speak up when previously they were not; it’s a paradigm shift.
And as always, great work Willis.

I subscribe to The Economist. In a few years their writing has degenerated from excellent economic and market analysis to a bizarre blend of pure economic fascism and climate panic. Measurement and data mean nothing to the writers. Frequently misinformation, particularly about America, is passed on as fact. In particular, every topic concerning capitalism, gun ownership, free speech, and climate seems to bring out delusions and plain falsehoods. There is hardly a world pathology that is not America’s fault. Climate change is number one. Imagine that. Rather than The Economist, they should retitle the magazine The Elitist. And being wrong has no place in the vocabulary of these dullards.

As for the instant article, I commented previously. It is clear the data was tampered with because it did not fit the meme. It is simply way too scary for these “scientists” to deliver actual data and footnote the observational imperfections. Instead they corrupt the entire field.
In order to save the village we must burn it. Apocalypse Now.

George Turner (19:04:04) :
“The press is just waking up. Right now most are still in the phase of ‘Daily Planet’s in depth investigation finds that estranged ex-boyfriend is innocent of triple homicide based on our interview with estranged ex-boyfriend, who is amazingly cute, by the way.'”
I hope you stick around here, George.
CodeTech (18:37:15) :
“Smokey, if that article you quoted is anything other than sarcasm, then I think it’s time to openly panic in the streets!!!!”
The point was that it received the most votes of any Economist comment! Hard to explain, but there it is.
And you’re right, it had to be sarcasm. Otherwise, panic is surely on the way.

The system needs to be changed so that after the review, the [authors] sign their names to the paper as well as reviewers.
I agree completely, and in addition, the review(s) should also be published as supplementary electronic information.
The ‘double-blind’ requirement will not work, because in 90% of the cases the reviewer can tell who the author is, and in 60% of the cases, the author can guess who the reviewers were. Numbers reflect my own experience.

Find out who owns this publication: Pearson PLC, which also owns the Financial Times and is based in the UK. Bet there is a friend of a friend who has interests in green stuff. Apparently they never sign their articles because the reputation of the paper should suffice. Well, it doesn’t: who wrote this garbage?

Like posters above, I used to subscribe to The Economist, but no more. It seems to be increasingly written by newly minted BA grads with all the right credentials in conventional wisdom and political correctness but not a dollop of critical and original thinking among them.
It’s not much better than Newsweek.

Hey, I just congratulated the Economist for showing how Science does and should work!
Willis published openly, on a blog no less, an Anonymous reviewer [= anyone] responded in a non-Climate Science publication, and Willis has now rebutted that, the merits of each case standing completely on their own.
In contrast to the way the ipcc, its elite Climate Scientists, and certain other Climate Science publications have tried to operate and to argue science works, which is false, and which was one of the main criticisms of AGW Climate Science resulting in its problems to date.
The Economist most likely just inadvertently broke the whole thing wide open. I told them to “keep up the good work”.

bill (18:28:19):
You claim that the Stevenson screen was in place in the 1880s;
however, John Daly’s page http://www.john-daly.com/darwin.htm
says that the screen was installed in 1941.
Compare this photo on Warwick Hughes’ page http://www.warwickhughes.com/agri/darwin1890.jpg
to http://img696.imageshack.us/img696/218/darwinpostofficebombed.png
War Correspondent Robert Sherrod, of Time magazine, in front
of the remains of the Darwin Post Office in June 1942
bill (19:54:43) :
Japanese bombing of the PO was in 1942; the temp dip is January 1941.
The 1880s photo of Warwick Hughes looks more like Government House, not the “Old PO”.
The bombed-out photo shows signage on the wall and looks like a less up-market building. There obviously could have been reconstruction of the PO between the 1880s and 1942, but would they have rebuilt it so solidly?
It’s obviously the same building in both photos. The only difference is that the photo with the journalist is shot from around the corner from the 1890 shot. If you zoom in on the area of wall between the veranda roof and the building roof in the Warwick Hughes photo, you will observe the same stone masonry construction as in the 1942 photo. This photo from the 30s, posted by Corey above, http://www.territorystories.nt.gov.au/handle/10070/6065
shows the Stevenson screen having apparently migrated somewhat, but also the row of small rectangular windows above the wraparound veranda, which evidently got obliterated in the bombing.

The mathematical skills of many of those scientists responsible for historic data records have already been assessed independently by a high-profile team of statisticians (the Wegman commission) a few years ago: http://climateaudit.org/2007/11/06/the-wegman-and-north-reports-for-newbies/
Referring to the criticism by McIntyre and McKitrick of hockey-stick reconstructions, those professional scientists “tended to dismiss their results as being developed by biased amateurs”.
The Wegman commission, however, concluded differently:
“While the work of Michael Mann and colleagues presents what appears to be compelling evidence of global temperature change, the criticisms of McIntyre and McKitrick, as well as those of other authors mentioned are indeed valid.”
Their judgement about the quality of the professional scientists’ work was quite revealing:
“The papers of Mann et al. in themselves are written in a confusing manner, making it difficult for the reader to discern the actual methodology and what uncertainty is actually associated with these reconstructions.
It is not clear that Dr. Mann and his associates even realized that their methodology was faulty at the time of writing the [Mann] paper.”
“I am baffled by the claim that the incorrect method doesn’t matter because the answer is correct anyway.
Method Wrong + Answer Correct = Bad Science.”
“It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community.”
“A cardinal rule of statistical inference is that the method of analysis must be decided before looking at the data. The rules and strategy of analysis cannot be changed in order to obtain the desired result. Such a strategy carries no statistical integrity and cannot be used as a basis for drawing sound inferential conclusions.”
If you still think that was just an “exception”, and that at least their algorithms to compute the global temperature data have somehow been elaborated and implemented much more skillfully and carefully, have a look at their computer code and particularly the inline comments: http://wattsupwiththat.com/2009/11/25/climategate-hide-the-decline-codified/
“OH FUCK THIS. It’s Sunday evening, I’ve worked all weekend, and just when I thought it was done I’m
hitting yet another problem that’s based on the hopeless state of our databases. There is no uniform
data integrity, it’s just a catalogue of issues that continues to grow as they’re found.”
“Here, the expected 1990-2003 period is MISSING – so the correlations aren’t so hot! Yet
the WMO codes and station names /locations are identical (or close). What the hell is
supposed to happen here? Oh yeah – there is no ‘supposed’, I can make it up. So I have :-)”
“getting seriously fed up with the state of the Australian data. so many new stations have been
introduced, so many false references.. so many changes that aren’t documented. Every time a
cloud forms I’m presented with a bewildering selection of similar-sounding sites, some with
references, some with WMO codes, and some with both. And if I look up the station metadata with
one of the local references, chances are the WMO code will be wrong (another station will have
it) and the lat/lon will be wrong too.”
“I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as
Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO
and one with, usually overlapping and with the same station name and very similar coordinates. I
know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh!
There truly is no end in sight.”
“Wrote ‘makedtr.for’ to tackle the thorny problem of the tmin and tmax databases not
being kept in step. Sounds familiar, if worrying. am I the first person to attempt
to get the CRU databases in working order?!!”
and many more…

Bryn (14:03:11) : To see Darwin in a regional perspective, visit JoNova for additional data across tropical northern Australia. It only reinforces Willis’s thesis.
——————————————-
No, no, no. This whole thing is being taken out of context. It’s simply scientists using the data within approved scientific methods and coming to conclusions that you’ll need higher learning to understand. Just trust them. There’s a consensus.
Global Warming is doubtless happening. And it’s all our fault. You must send a good portion of your money to the victims of climate injustice.😉

My email to the Economist:
Anonymous Critique of Eschenbach Darwin Analysis Demonstrates the Scientific Method
[Or isn’t the Economist a serious Publication?]
Editors:
It has been clear for many years, without any assistance from the leaked CRU emails and files, that the ipcc and its elite Climate Scientists are simply not doing Science. The most obvious evidence is the failure of certain critical papers, necessary to establishing the Hockey Stick representation of “Global Mean” temperatures extending back to A.D. 1000, to have been presented with the “materials and methods” supporting their results archived in an easily accessible way, and reasonably contemporaneously, so that anyone interested can get them.
And, obviously, this failure of the ipcc and the elite Climate Scientists to follow the Scientific Method is demonstrated quite graphically by what was discovered when an official ipcc Reviewer, Steve McIntyre, was finally able to get the materials and methods upon which Michael Mann’s and Keith Briffa’s proxy reconstructions were based – in Briffa’s case this occurred 10 years after Briffa had first used the data in a publication, and only when the Royal Society made Briffa obey its correct scientific rules for publishing a paper in its own publication.
McIntyre found that Briffa’s method was quite inadequate to establish what Briffa thought he was establishing. Briffa’s analysis was even critically dependent on only one tree.
Without such a presentation of “materials and methods”, there are in fact no scientific results or conclusions of papers to be promoted, defended, analyzed, or criticised to begin with. Nor has “peer review”, limited to the review by a publishing entity’s chosen “peers”, ever been warranted to result in the “given truth” as per a paper’s results or conclusions. Never. For good reason.
No, the materials and methods must be presented as above, because the real “peer review” is known to occur only after a paper’s publication, which just happened right here within the Economist itself.
Because, somewhat ironically, the Economist’s Anonymous critique of Willis Eschenbach’s presentation turned on this exact point: whether a paper, no matter where published, even has scientific results to promote, defend, or critique in the first place. And it showed that the real peer review starts after a paper is published, and that it can be done by anyone.
Willis Eschenbach included the “materials and methods” which supported his product. Or, if he didn’t include enough, there would be no need to even “peer review” it in the first place, as the Economist’s Anonymous reviewer did, because, once again, there would be no truly scientific results or conclusions reported to accept, promote, defend or critique.
But the Anonymous reviewer apparently thought there was something scientific to critique.
Again, the Economist’s completely Anonymous review shows that anyone whosoever can review a paper properly supported, because the merit of the review stands or falls upon the actual analysis itself, to which Willis Eschenbach has now responded.
This is how real Science actually takes place.
Keep up the good work!

Can someone help me with this conundrum pls?
I’m a layman in every sense of the word as regards temperature data. But I think I understand the gist of raw vs homogenised data. My problem is this: thousands of stations around the world, thousands of problems with their accuracy, regardless of disagreements about specific problems at specific stations.
We are told global temperature increases in the order of 0.7 °C over the century. Let’s say 1 °C for the sake of simplicity. Am I to believe data can be adjusted so well, so accurately, from so many stations located in so many different regimes (old Soviet era, China, Africa etc.) that such a precise figure of 1 °C can be confidently arrived at, thence extrapolated 100 years into the future?
And all this covering 30% of the globe. The other 70% has worse problems with accuracy.
How confident should we be about historic data sets? How confident can we be with proxy data when real data is such a dog’s breakfast?
I’m not even sure there is a problem (rising temps.) let alone the cause of the said problem.

Willis,
You assert:
“So while my statement about stations nearer than Daly Waters was wrong as you point out (there is one nearer station that covers the 1930 adjustment), my point was correct – there are not enough neighboring stations to adjust Darwin using the GHCN method. The first GHCN adjustment to Darwin was a single year adjustment in 1901.”
The paper by Peterson and Easterling (“Creation of Homogeneous Climatological Reference Series”) says the following on p. 673:
“We decided to use five stations for the reference series estimate for any
given year. …. Often, however, there are fewer than five appropriate
reference stations available for a given year. In such cases, we use all
available stations, to a minimum of two. We need at least two, or a
discontinuity in a reference station would have the same effect as a
discontinuity in the candidate station.”
There do appear to be at least two candidates for reference stations
available over part of the period in question: Wyndham Point (1898-1966) and Daly Waters Pub (1926-1986). So it seems that at least from 1926,
onward, there were enough stations for them to do what they say they
did. I haven’t checked whether there are any stations to replace Daly
Waters in the earlier period.
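The quoted Peterson-Easterling rule is easy to sketch. The following is a toy Python illustration of the five-station / two-station-minimum rule only, not GHCN’s actual code (which also selects the reference stations by correlation, as discussed below):

```python
def reference_value(year, neighbors, max_use=5, min_use=2):
    """Toy version of the Peterson-Easterling rule quoted above.

    neighbors: dict mapping station name -> {year: temperature}.
    Returns the mean over up to max_use stations reporting in `year`,
    or None if fewer than min_use report -- with only one reference
    station, a discontinuity in it would look just like a discontinuity
    in the candidate station.
    """
    available = [series[year] for series in neighbors.values() if year in series]
    if len(available) < min_use:
        return None
    chosen = available[:max_use]
    return sum(chosen) / len(chosen)
```

In the real method the five stations are the best-correlated candidates, not an arbitrary five; the point here is only the two-station floor, which is what the Darwin argument turns on.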

Reading further in the Peterson-Easterling paper, it seems that the choice of reference stations is considerably more complex than just choosing stations reasonably close to each other. Distance between stations is actually a relatively minor criterion in their approach: the only way it enters is that they restrict their search for reference stations to “the nearest 100 stations to each candidate station.” I haven’t yet absorbed how the method works after that point, but this seems to show that an argument of the form “there are only X stations within Y kilometers of the candidate station” can’t be used to show that the GHCN authors couldn’t have done what they claim. I recall from the comments to the original Smoking Gun post that there’s evidence in the literature that stations as far apart as 1200 km can be well correlated.
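The “nearest 100 stations” restriction is simple to make concrete. Here is a hedged Python sketch using the standard haversine great-circle formula; the station coordinates in the usage note are approximate and GHCN’s own code may compute distance differently:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_stations(candidate, stations, n=100):
    """stations: list of (name, lat, lon); returns the n nearest to candidate."""
    _, clat, clon = candidate
    return sorted(stations, key=lambda s: haversine_km(clat, clon, s[1], s[2]))[:n]
```

With Darwin at roughly (-12.46, 130.84), Daly Waters at roughly (-16.25, 133.37) comes out around 500 km away, comfortably inside the 1200 km correlation radius mentioned above.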

Dave Wendt (21:31:37) : http://www.territorystories.nt.gov.au/handle/10070/6065
Good find – I agree your photo is of the same building
Corey (20:08:51) :
Description: Back view, Telegraph inspector’s residence also Post Office and observatory, Port Darwin. Shows buildings obscured by trees connected by covered walkway to stone building with louvre verandah. Stevenson screen (louvred box on stand) for weather recordings on left.
The visible building is not the PO (perhaps the telegraph inspector’s residence?). I cannot reconcile the two photos at all! (Note the different overhang on the roofs.)
The PO could be behind the trees on the left. Note that the roof structure shown in the two photos is different: the PO has a flat apex; the 1890s building has a single-point apex.
However, from the 30’s photo it is interesting to see that the Stevenson screen has either dark louvres (why?) or no panels on at least 2 sides.
If they went to white paint or added louvres in 1941 this could account for the temp difference.

Nice response by Willis Eschenbach. The Economist sneering at oiky builders was particularly galling. This from a magazine that thinks the sun comes up in the west. Best to stay away from lawyers though I think.

This thread is getting bloated, but I have to repeat my nit pick to Willis on his phrase:
“…so that after the review, the reviewers sign their names to the paper as well as reviewers.”
Doesn’t that statement just go round in a loop?

Willis, I’ve done interviews with the Economist. The guy I spoke with said anonymity was a policy. He argued that it gave them unique freedoms: consulting with others, changing their minds later. It’s possible, in fact, that an article is the result of more than one person working on it.

The Economist, which has been around for well over 100 years, has a policy of anonymity, and always has as far as I know. Whether the journalists and bloggers are happy with this or would prefer to write under their own names is irrelevant; the choice has been made for them. I also doubt the ire over the URL is warranted, as I expect someone other than the journalist / blogger made that selection. Your article would have been stronger if you had refrained from these pointless criticisms.
Anyway, let’s see what emerges once the picture is clearer.

I have verified Giorgio’s calculations of the distribution of trend variations going from GHCN to GHCN adjusted. The histogram is here. I get the same mean, 0.0175 deg C/decade, and standard deviation, 0.189 deg C/decade. I have tried to post the R script as a comment on his site; it hasn’t yet appeared. If it doesn’t, I’ll post it here.
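For readers who want to try the same kind of check while waiting for the R script, here is a rough Python equivalent of the procedure (synthetic data, purely to show the computation, not Giorgio’s or Nick’s actual script): fit a least-squares trend to each station’s raw and adjusted series and collect the differences in deg C per decade.

```python
import numpy as np

def trend_per_decade(years, temps):
    """OLS slope of temps vs years, converted to deg C per decade."""
    slope = np.polyfit(years, temps, 1)[0]  # deg C per year
    return slope * 10.0

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
diffs = []
for _ in range(50):  # 50 fake stations
    raw = 25 + 0.005 * (years - 1900) + rng.normal(0, 0.5, years.size)
    adjustment = 0.002 * (years - 1900)   # a warming-tilted linear adjustment
    adjusted = raw + adjustment
    diffs.append(trend_per_decade(years, adjusted) - trend_per_decade(years, raw))

# Because OLS is linear in the data, each difference equals the trend of the
# adjustment itself: 0.002 deg C/yr = 0.02 deg C/decade.
print(round(float(np.mean(diffs)), 3))  # -> 0.02
```

On real GHCN data the per-station differences would scatter, and the question Nick is answering is what their mean and standard deviation are.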

bill (00:45:43)
The visible building is not the PO (perhaps the telegraph inspector’s residence?). I cannot reconcile the two photos at all! (Note the different overhang on the roofs.)
The PO could be behind the trees on the left. Note that the roof structure shown in the two photos is different: the PO has a flat apex; the 1890s building has a single-point apex.
I think all three photos are of the same building. If, for the sake of discussion, we assume that the Hughes photo faces due North the SW corner of the PO is visible at the right side of the photo. In the 30s photo we see the SE corner. The flat roof top and added dormer windows suggest a renovation to add space on the upper level. The SS seems to have been moved about 75′ to 100′ [pure eyeball est.] to the NE and replaced by a different structure[cross braces on legs, top different]. I do think I see louvres, but they appear unpainted. The 1942 photo shows just the damaged East side of the building.

The Economist USED to be a good newspaper (as they like to refer to themselves).
Over their extremely slipshod coverage of the climate change issue I have already pulled my subscription, and I no longer read it.

Dear Willis,
Interesting article and follow up response.
Please continue as it is worthwhile reading.
Can you research and address the comments at http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists?page=3
(1)
“thefordprefect on Dec 13th 2009 2:42 GMT”
You claim that the Stevenson screen was in place in the 1880s ….. that the screen was installed in 1941.
(2)
And the response to your response by the original author of the piece
“sparkleby on Dec 13th 2009 4:57 GMT “
Thanks

Good work! A public debate about adjusting temperature data series. This will help to improve general awareness and understanding. Keep it going.
Although the unknown author of the Economist article is mightily peeved about being asked to justify the adjustments to all comers, that just happens to be the way science has to work – at least until, as you point out, we have a peer review system that is respected and trusted on all sides.
But what the author is really saying is that s/he doesn’t mind “having a swipe” outside peer review when it suits; s/he will choose when to have a swipe.
I’d take issue with the closing remark in the Economist article:
“But if we want to get any information with regard to the climate over the long term, we have to make the most of what data we have.”
In fact we have two options.
First, where we can justify adjustments from known events and with reasoned corrections, we can adjust the data. The adjustments should be accompanied by reasoned increases to the error bars of the individually adjusted series, recognising the additional uncertainties.
Secondly, if there is nothing to support the adjustments, we can always discard the data as unreliable. There are good examples of this discussed above, where data in early periods has been discarded.
Either way, the error bars on the reconstructed global temperature series will widen. Perhaps even to the point where past and current temperatures are not statistically discernible (another way of saying “we really don’t know, and we need more information”).
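The widening the commenter describes is ordinary error propagation. Assuming the adjustment error is independent of the measurement error (an assumption for this sketch, not something the GHCN documentation states), the combined uncertainty adds in quadrature:

```python
import math

def adjusted_uncertainty(sigma_raw, sigma_adjustment):
    """Combined 1-sigma uncertainty of an adjusted value, assuming the
    measurement error and the adjustment error are independent."""
    return math.sqrt(sigma_raw ** 2 + sigma_adjustment ** 2)

# e.g. a 0.3 deg C measurement uncertainty plus a 0.4 deg C adjustment
# uncertainty combines to 0.5 deg C: the error bar can only widen.
```

The key point is the sign: adding an adjustment never shrinks the error bar of the adjusted series; it can only leave it unchanged (a perfectly known adjustment) or widen it.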

Just to clarify one subject: you should drop the “author who doesn’t sign” issue.
Articles at the Economist are NEVER signed;
that way authors are free from peer pressure (the big issue in Climategate).
As for the position of the Economist: yes, I’m afraid they’ve taken an AGW stance, but you must remember it’s an economics magazine; they balance risks, costs and credibility. Even lukewarming has been so persecuted that it’s unfortunately understandable that anyone outside the area will accept there is a real, strong risk. That is enough for a position (especially when the subject has major impact and cannot be ignored in their analysis).

My email to Economist:
Dear Editor
I am disturbed by your bizarre article in “Democracy in America” of Dec 11th. It should be retracted, and the author should identify himself and apologize.
Your anonymous author viciously attacks a blogger on wattsupwiththat.com over a critique of temperature adjustments by GHCN, using arguments from authority, unrelated ad hominem attacks, self-confessed ignorance of basic mathematics and, as it turns out, incorrect data.
I have tolerated your uncritical propaganda of Anthropogenic Global Warming as running with the herd, but this shrill article is something else, something desperate, sinister and ironic, considering “Democracy in America” is denying a citizen the right to considered free speech to disagree with authority.
For the record, I am a medical research doctor / device manufacturer, and I am astounded by the poor quality, absent traceability, corruption of peer review, lack of adversarial review, groupthink, and finally the convenient “loss” of data evident in climate research. This long preceded Climategate and is frequently uncovered by outsider bloggers, whose careers do not depend on AGW. If the IPCC were making an aspirin, its data would not get past the FDA’s screening stage, so it is staggering that we are contemplating changing the world on its say-so.
I suggest an explanation is in order and I look forward to a modicum of critical thought on this topic from your otherwise good publication, so I don’t have to wonder if I should renew my subscription.
Yours sincerely

This must be an absolute record number of comments. Indicative of the outrage which many of us feel, I’d imagine.
Personally, as a lawyer, I believe that Willis should sue them – I believe that he would win without a shadow of a doubt, and that he would almost certainly be entitled to know the identity of his defamer.
How many more once-great publications will get dragged through the cesspits of cynical pseudo-science, before the revulsion is too much?

Many of you have rightly pointed out that the Economist has a policy of anonymity in relation to its articles. That is largely true although I do remember a large country survey by Norman Macrae which was attributed. The key point remains – don’t be surprised by reading an article in the Economist which has an anonymous author.

It appears from this that what we actually need is to get our hands on the ACTUAL PAPER documents, before any data munging has taken place. This will be the only way to get to the bottom of it and generate the correct data.

“I have verified Giorgio’s calculations of the distribution of trend variations going from GHCN to GHCN adjusted.” – Nick Stokes
Have you tried weighting the adjustments by year, ” taking moments about the mid-point” as we would say in engineering? He seems to be using “trend” to mean simply temperature change divided by total period i.e. the change to the average temperature over the period, not the change to the slope of the line of best fit of adjusted annual temperatures.
If we pick a mid-point year and multiply each adjustment by the number of years from the mid-point (i.e. negative for earlier years), we’ll get a positive number if there’s an overall warming trend adjustment and a negative one if the overall adjustment is cooling.
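That weighting can be sketched in a few lines of Python (my own illustration of the commenter’s suggestion, not anything from Giorgio’s analysis):

```python
def adjustment_moment(adjustments):
    """Sum of each year's adjustment weighted by its offset from the
    mid-point year ("taking moments about the mid-point").

    adjustments: dict mapping year -> adjustment in deg C.
    Positive result: the adjustments tilt the record toward warming;
    negative: toward cooling; zero: no net tilt.
    """
    years = sorted(adjustments)
    mid = (years[0] + years[-1]) / 2.0
    return sum((year - mid) * adj for year, adj in adjustments.items())

# Cooling the early record and warming the late record both contribute a
# positive (warming-tilted) moment:
# adjustment_moment({1900: -0.5, 2000: +0.5}) -> 50.0
```

Note that a uniform adjustment applied across the whole record contributes nothing, which is exactly the property the commenter wants: only trend-changing adjustments show up.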

Yep. A “global mean temperature” figure, by definition due to geography, requires deleting and sifting and adjusting and weighting….. You climatologist CHUMPS are just now beginning to see the overall conceptual problem: the utter futility and stupidity of arriving at a single “global mean temperature” figure that means anything at all. Do any of you have any control at all over how gullible you have allowed yourselves to become, if you still believe this can be logically achieved?

BradH (04:46:45),
As I understand it, in the US the burden of proof is on the one supposedly being libeled. In the UK it is reversed; the one making the libelous statement must prove that it wasn’t.
So the Economist has the burden of proving that what its blogger wrote was entirely factual; to prove that Willis lied. Willis writes: “How dare you accuse me of lying? As I said, I went to AIS to see if what Professor Karlen had said was true. I called up a list of all of the stations in Australia that covered 1900-2000. Darwin was the first on the list, so that’s the one I looked at. Try it and see. Your accusation is both wrong and totally unfounded.”
It appears that the Economist’s blogger has deliberately libeled Willis. Forcing them to issue a written apology along with disclosing the writer’s identity could probably be achieved by simply sending a letter to the Economist from a UK based law firm.
But perhaps a public trial would be better, since many of the actors in the AGW issue could be deposed under oath. Not being a lawyer, no doubt I am unaware of many of the legal technicalities. But I do know that UK law is extremely friendly to plaintiffs, making Great Britain the location of choice for “libel tourism.” Neither the plaintiff nor the defendant need reside in the country. All that is necessary is that the publication is circulated in the UK. The Economist certainly qualifies.
As we’ve seen, nothing much can be accomplished in the general AGW debate without the credible threat of the decision being taken out of the hands of the alarmists and their pet media, and being publicly decided by a final authority.
In Willis’ place I would certainly prefer to stick to the science, and forget about legal recourse. The legal profession in general has become a pestilence on society. But the Economist did publicly accuse Willis of lying in their internationally circulated blog, based on nothing more than their blogger’s feelings and false assumptions.

TBH, I’m really surprised and appalled at the conduct of the Economist – to publish this and make no contact with the original author is stunning.
They’ve parked their opinion all over Willis’ reputation without giving him a rebuttal.
I’m not one of their readers but I ALWAYS held their journalism in the highest regard based on years of reputation for integrity. Clearly that was naive of me.

Can anybody point me towards an explanation of which temperature datasets are which, and how they relate to each other?
It seems that the data takes a rather tortuous path before being used.
It seems like there are at least 3 different versions of data at GHCN.
GISS (I think) uses one of these 3 versions, and then adds additional corrections.
CRU uses data from many locations, but a main source is one of the GHCN versions.
The Met Office has now posted a subset of the CRU data online, but they themselves say it is not clear what adjustments have been made.
The truly raw files — PDFs of the COOP station reports for some US reporting stations — are available online somewhere.
===========================
I’m truly confused and would greatly appreciate it if someone that has already figured this out would come forward and put together a tutorial.

Dear Willis,
You said:
“The real problem is the 1920 GHCN adjustment. To cover that a station had to start by 1915. Here’s what I get, comma delimited format. As you can see, the second station on the list is already a thousand km from Darwin.”
As I mentioned above (Kevin (23:08:19)), the Peterson-Easterling paper indicates that the only way in which distance between stations enters into the choice of reference stations is by restricting the search to “the nearest 100 stations to each candidate station.” As mentioned in the comments to the original Smoking Gun post, the 1987 Hansen-Lebedeff paper reaches the conclusion that there can be reasonably strong correlations between stations as far as 1200 km apart; I think you indicated your agreement with that conclusion. It seems that Wyndham Port, Derby, Burketown and Camooweal all fall within 1200 km of Darwin and are available from 1915 onward, with many others just a bit farther. I don’t see that you’ve eliminated the possibility that there are at least two reference stations available to explain the 1920 adjustment.
Thanks,
Kevin

who cares (03:23:39):
“as for the position of “the economist” yes, I’m afraid they’ve taken an AGW stance, but you must remember it’s an economics magazine, they balance risks, costs and credibility.”
If the Economist were so expert at Bayesian decision making, why did that organ spend the 1980s telling us that Mrs Thatcher’s remaking of the UK economy was doomed to fail?
I think we should be told (preferably without misplaced apostrophes).

Phil A,
I had the same question when I looked at the GG post: does the statistical analysis take into account when adjustments are made? As pointed out, station drop-out is another issue altogether, as is UHI… then add to that the uptrend in solar intensity (you can see it on NOAA’s new climate.gov site, where they say it can only explain 10% of observed warming) and I’d be very curious whether the unexplained warming is even within the margin of error of the instruments.
CodeTech and Smokey,
Whoever that poster on the Economist was with the Department of Obesity stuff, it was sarcastic. In the comments section of the article we’re discussing, the same guy posted a remarkably eloquent skeptical statement. We should get him to come visit here.

“Jack in Oregon (11:03:34) :
I believe what we need to do now is to start a surfacestations-type project that is focused on finding the raw daily data for the actual surface stations around the world. Since we can not trust the data anymore, and we can’t trust the custodians of the data, let’s build an open source, station-by-station database of the RAW daily data.
A rebuilding of the US records database with the original data is necessary: a PDF of the original data for each station, with a txt file holding the same raw data, with a google map location for the station….
This would allow us to build a public wiki-type project around the raw data. If we can get auditors for Australia and N.Z. locations to show what the known fudges are there, we can make this a global audit of a public database of raw temps….
It’s time to FREE the DATA, MAKE it PUBLIC, and then we can argue about WHAT steps are applied to it.”
Brilliant idea.

>>The Economist, which has been around for well over 100 years, has a policy of anonymity, and always has as far as I know. Whether the journalists and bloggers are happy with this or would prefer to write under their own names is irrelevant; the choice has been made for them. I also doubt the ire over the URL is warranted, as I expect someone other than the journalist / blogger made that selection. Your article would have been stronger if you had refrained from these pointless criticisms.
Nonsense. If you are going to publish an article in a widely distributed paper accusing another of being ignorant and deceitful, then you should at least have the stones to put your name to the piece. Hiding behind a convenient “policy” is no defense, and Mr. Eschenbach’s fact-filled rebuttal loses nothing with the addition of some well placed barbs.
As Mr. Eschenbach points out, the pro-warming crowd has been hiding their data, their techniques and their review process for years. Were this just an academic argument, I and probably most wouldn’t care. But they are using their claims in an attempt to alter the world economy in mind-boggling ways, affecting all of us. A little righteous anger, hell, a lot of righteous anger at this entire process is way overdue.

Manfred (21:43:57) :
“The mathematical skills of many of those scientists responsible for historic data records have already been assessed independently by a high-profile team of statisticians (the Wegman commission) a few years ago: http://climateaudit.org/2007/11/06/the-wegman-and-north-reports-for-newbies/
Referring to the criticism of McIntyre and McKitrick of hockey-stick reconstructions, those professional scientists “tended to dismiss their results as being developed by biased amateurs”.
The Wegman commission, however, concluded differently:
“While the work of Michael Mann and colleagues presents what appears to be compelling evidence of global temperature change, the criticisms of McIntyre and McKitrick, as well as those of other authors mentioned are indeed valid.”
Their judgement about the quality of the professional scientists’ work was quite revealing:
“The papers of Mann et al. in themselves are written in a confusing manner, making it difficult for the reader to discern the actual methodology and what uncertainty is actually associated with these reconstructions.
It is not clear that Dr. Mann and his associates even realized that their methodology was faulty at the time of writing the [Mann] paper.”
“I am baffled by the claim that the incorrect method doesn’t matter because the answer is correct anyway.
Method Wrong + Answer Correct = Bad Science.”
“It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community.”
“A cardinal rule of statistical inference is that the method of analysis must be decided before looking at the data. The rules and strategy of analysis cannot be changed in order to obtain the desired result. Such a strategy carries no statistical integrity and cannot be used as a basis for drawing sound inferential conclusions.”
If you still think that was just an “exception”, and that at least their algorithms to compute the global temperature data have been elaborated and implemented much more skillfully and carefully, have a look at their computer code and particularly the inline comments: http://wattsupwiththat.com/2009/11/25/climategate-hide-the-decline-codified/
“OH FUCK THIS. It’s Sunday evening, I’ve worked all weekend, and just when I thought it was done I’m
hitting yet another problem that’s based on the hopeless state of our databases. There is no uniform
data integrity, it’s just a catalogue of issues that continues to grow as they’re found.”
“Here, the expected 1990-2003 period is MISSING – so the correlations aren’t so hot! Yet
the WMO codes and station names /locations are identical (or close). What the hell is
supposed to happen here? Oh yeah – there is no ’supposed’, I can make it up. So I have :-)”
“getting seriously fed up with the state of the Australian data. so many new stations have been
introduced, so many false references.. so many changes that aren’t documented. Every time a
cloud forms I’m presented with a bewildering selection of similar-sounding sites, some with
references, some with WMO codes, and some with both. And if I look up the station metadata with
one of the local references, chances are the WMO code will be wrong (another station will have
it) and the lat/lon will be wrong too.”
“I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as
Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO
and one with, usually overlapping and with the same station name and very similar coordinates. I
know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh!
There truly is no end in sight.”
“Wrote ‘makedtr.for’ to tackle the thorny problem of the tmin and tmax databases not
being kept in step. Sounds familiar, if worrying. am I the first person to attempt
to get the CRU databases in working order?!!”
and many more…”
Thanks Manfred – lest we forget…

Also, I think it reveals the depths of your disingenuousness (is that a word?) when you accuse the Economist of publishing an “unsigned article.” All articles in the Economist are unsigned. You know this of course, but are simply using inflammatory rhetoric to make the magazine seem cowardly. REPLY: Newsflash there, anonymous “WAG”: it IS cowardly to publish public opinion unsigned. Words and actions have consequences. Just saying it is “policy” to write this way is no defense for ducking responsibility.
At least your cowardice is slightly less than that of the Economist; you have a handle, so regular readers at least know what your body of commentary sums up to. Economist readers have no such ability to evaluate the writers. – Anthony Watts

The Economist! What do they know about economics? Where were they when they were needed? Of course, they predicted through thorough investigative journalism the demise of the sub-prime market and the global banking crisis?
The Economist! What do they know about climate? I suppose some of their journos get wet in some places and hot in other places.

WAG,
How would you feel if a major international publication had named you personally and accused you of being a liar, for something you wrote not in the Economist, but on a website in another country? Would you just ignore it, thus appearing to confirm their false and widely publicized accusation that you are dishonest? Would you not mind that hit to your reputation?
This isn’t about WUWT, this is about deliberate mendacity. Why are you being the Economist’s apologist? Do you condone their actions?

“First, in my proofreading I did not catch that I had written “the 1941 adjustment” when I meant the 1930 adjustment. That should have been obvious to me, because there is no 1941 GHCN adjustment. My bad.”
Sorry the original graph shows a 1941 adjustment.
Why does the graph on this page show a different set of adjustments?

Willis Eschenbach (01:19:41) :
That’s outstanding, janama, a key piece. Where did you get it?
Willis this is the data that Torok used back in 1999 and it states his methodology.
you can find it all here: ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/
“The files in this subdirectory are associated with the Australian
High Quality Temperature Data Set
The directory should contain these files
UNIX FORMAT
readme ‘This file’
method.utx ‘Outline of the method used to prepare the data sets.’
alladj.utx.Z ‘List of adjustments made to the data
and reasons for adjustment.’
finaln.utx.Z ‘Data file of minimum temperatures’
fianlx.utx.Z ‘Data file of maximum temperatures’
Files ending in .Z have been compressed using the unix compress
command. To uncompress them type uncompress FILENAME at the
unix promt once you have transferred the file to your system.
All the data goes back to the late 1800s and up to 1993.
Here’s the Darwin data taken from the files: http://users.tpg.com.au/johnsay1/Stuff/Darwin_T.png
“

One of the commenters mentioned a “most important posts” sticky. You could probably post that in the right sidebar somewhere. Maybe an “Are you new to this blog? Check here to get up to speed” section, or one for the layman: “If you’re new to all this, start here:”
As for the unsigned article, well, it was probably a staff article and was, therefore, probably more or less official Economist position on the issue. You’ll note that in your local newspaper those aren’t signed either. Only the guest articles/editorials are signed.

Also, I think it reveals the depths of your disingenuousness (is that a word?) when you accuse the Economist of publishing an “unsigned article.” All articles in the Economist are unsigned. You know this of course, but are simply using inflammatory rhetoric to make the magazine seem cowardly.

In addition to Anthony’s comments on your posting, I have one myself. I am astounded at how many people out there claim to have insight into my mental processes. No, I don’t “know this of course”, that is your perturbed fantasy about my lack of knowledge. I have read the Economist for years, but never noticed that the articles were unsigned … so sue me. When you assume that you have god-like omniscience regarding what others know and don’t know, you just make yourself look foolish.
In any case, my issue is not cowardice. It is the lack of fact checking in the article. A simple email to me would have cleared the air immensely.

“First, in my proofreading I did not catch that I had written “the 1941 adjustment” when I meant the 1930 adjustment. That should have been obvious to me, because there is no 1941 GHCN adjustment. My bad.”
Sorry the original graph shows a 1941 adjustment.
Why does the graph on this page show a different set of adjustments?

The graph above is Figure 9 from the original article. Not sure which “original graph” you are talking about.
w.

Regarding Giorgio’s histogram, why is there this implicit assumption that a number near zero somehow validates the GHCN adjustments?
Personally, I would have thought that the UHI (or Airport HI as per Chiefio) should mean we should be getting a negative number per decade.
Anyway, regardless of my own views, the question remains. Why and on what grounds does an average of zero provide evidence that there is no bias or error in the adjustments?

Phil A (05:23:59) :
Have you tried weighting the adjustments by year,
What I did, and what I believe Giorgio did, was linear regression, which really does just that. This histogram is a diagram of the slopes.
The R code has now been posted at Giorgio’s site.
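For readers who want to see the mechanics, here is a minimal sketch of what that regression does — fit a straight line to one station’s adjustment series (adjusted minus raw) and express the slope per decade; the histogram then collects one such slope per station. This is illustrative only, with hypothetical data; it is not Nick’s or Giorgio’s actual R code.

```python
def adjustment_slope(years, adjustment):
    """Least-squares slope of a station's adjustment series
    (adjusted minus raw temperatures), in deg C per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(adjustment) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, adjustment))
    den = sum((x - mx) ** 2 for x in years)
    return 10.0 * num / den  # slope per year -> per decade

# Hypothetical station: a +0.5 C step adjustment introduced in 1941
years = list(range(1920, 1961))
delta = [0.5 if y >= 1941 else 0.0 for y in years]
print(round(adjustment_slope(years, delta), 3))  # a positive trend, ~0.18 C/decade
```

Note that even a pure step change shows up as a positive slope, which is why step-style adjustments still contribute to the histogram’s trend distribution.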

Robinson (15:51:31) : “The mentalists have taken over Richard Dawkin’s forum.”
True. The volume being posted is astonishing, and has completely drowned out any sane ideas.
I used to post there (as “egrey”), but recognise that it is now pretty much a waste of time. Robinson has been trying valiantly to inject some sanity but the AGWers simply will not engage with the scientific arguments.
Robinson (16:04:46) : “Sorry mods, can you edit my post above with the new Richard Dawkins link: ….. ”
The volume of posting is so great that the link Robinson provided is already out of date (every 40 pages, they start a new thread). I doubt there’s much point in my providing the new link; it will probably be out of date before anyone uses it!

Willis Eschenbach (12:09:58) :
The graph above is Figure 9 from the original article. Not sure which “original graph” you are talking about.
Well, it’s different to the one on both the Economist and Wattsup. To be honest with you, the most obvious difference by some margin is the 1941 change on the blue graph. It stands out by a mile. The other graph with the ’30s adjustment looks like ‘tortured data’ to me.
But I don’t really swallow the Wattsup house position, whatever it is this week. I am sure you all know you are RIGHT and everyone else wrong. 😉
There was no change in 1941; it’s all an illusion.

dorlomin (13:13:53) :
You’re looking at different graphs… I’ve had a good look, and this one is the same (the Darwin Zero temp homogeneity adjustment… on this one the adjustments immediately underlay the temp adjustments; on the original there was a larger margin, but that is all). You’re talking about the raw temp airport graph from the other article… with the 39-40 step adjustment?

I sent this (in a drunken fit last night!)
“I used to subscribe to your magazine, but gave it up because you obviously knew f*ck all about economics, having singularly failed to predict the current predicament we are all in. Now it turns out that Global Warming is possibly the greatest hoax of all time and yet you stand idly by. Shame on you and all newspapers of your ilk. When the truth comes out, we will remember who stood up for truth and who signed up for green totalitarianism.”

Willis Eschenbach (12:09:58) :
The graph above is Figure 9 from the original article. Not sure which “original graph” you are talking about.
I think dorlomin has a point. There doesn’t appear to be a figure 9 in the original post and the graph at the top of this post has the Amount of Adjustment line shifted 1 degree upward from figure 8, the last one in the original.

Relevant to the Darwin finding, there was a challenge authored by Giorgio Gilestro (http://www.gilestro.tk/2009/lots-of-smoke-hardly-any-gun-do-climatologists-falsify-data/) to Willis’s premise that adjustments could be biasing the temperature record.
Using all GHCN data and adjustments from 1700-2000, he did an analysis showing only a +0.017 C/decade effect from all adjustments in GHCN. His analysis is technically correct, but obscures an important fact.
Roman M’s analysis (http://statpad.wordpress.com/) shows that the pattern of adjustments over time creates a downward-sloping adjustment line from 1700(?) to about 1910 and then an upward-sloping adjustment line from 1910 to 2000.
Since AGW theory is based strongly on showing increases from about 1940 to 2000, it would be instructive to know the deg C/decade increase in that portion of the adjustment curve. The question that needs to be answered, and has not yet been answered directly by GG or Roman M, is whether the adjustments from 1940-2000 could explain a significant amount of the warming seen in the final GHCN product used by research, the MSM, and the IPCC.
FWIW, the adjustments between 1940-1990 appear to be almost dead-flat linear (with an upward-sloping trend), with some strange “wiggles” between 1990-2000 — hard to believe those adjustments represent a process of station-by-station adjustment-for-cause.

My, this debate has gotten fiery. It is always exciting when scientific debates get fiery… but, then again, excitement is not always the best attribute for objectivity. Thus, I am fascinated by both the science and the emotion here, though in principle I should just be fascinated by the science.
And to get to that: the big overarching problem I have with this debate between Willis and the Economist blogger is that the whole thing could erupt into a strawman debate, as many of the debates around climate change tend to. Just cause some emails show some bad data does not mean all the majority of climate change research is false, nor does one controversial station at Darwin. Thus, it is good to debate the Darwin data, and perhaps many other scientists involved may contribute. But the only thing that can be resolved from this is the Darwin data. And, it will take a lot more than one data point and the East Anglia emails to refute anthropogenic climate change is not real.
These debates have been going on amongst scientists for decades now. But now these debates have gotten so popular that journalists and bloggers are getting in the game. There are several peer-review feedbacks where many such points have been raised. Seeing this, I consider that it could be beneficial to have the peer-review process open to the public after the article has been accepted! That way, the Economist and Willis could refer to earlier anonymous scientific discussions about this.

Ah, I love the rampant paranoia here. Not one of your anti climate change points hold wateer, you are just a bunch of ignormuses trying desperately to discredit the real experts. The emails mean nothing,
one notes only less than 5% that support the denialsts got leaked, odd eh? Not really.
Ignorant oiks like you should avoid revealing your ignorance and listen to the real experts, not a bunch of fright wing fruit loops.

As an ignoramus, I’m forced to ask how and why an adjustment because of the lack of a screen or other such thing would be applied to a station’s data, since some of us are more concerned with plotted night-time temperatures, lows being a bit less flaky than high temperatures.
If a station change is short and local, not involving changes in altitude or local climate, the slope of the lows should be unaffected, and the spread between highs and lows should show a slight change, restored to what one would expect.
Basically, there’s no valid reason to adjust a station’s lows unless the move was very significant, so in most cases applying such an adjustment to the lows instead of just the highs is just trashing the data. I’m wondering if an analysis of the spread will reveal a fingerprint that can more easily flag bogus adjustments.
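As a sketch of that fingerprint idea (hypothetical data, not any official QC procedure): compute the diurnal spread and look for a step in it at a candidate changepoint while the lows stay flat — an exposure or screen change should show up in the spread, a real climate shift should not.

```python
def spread_step(tmax, tmin, k):
    """Change in mean diurnal range (tmax - tmin) across a candidate
    breakpoint at index k. A jump here, with the lows unchanged, is
    one possible fingerprint of an exposure change rather than a
    real climate shift."""
    spread = [hi - lo for hi, lo in zip(tmax, tmin)]
    before = sum(spread[:k]) / k
    after = sum(spread[k:]) / (len(spread) - k)
    return after - before

# Hypothetical series: highs drop 1.0 C after a screen change, lows unchanged
tmin = [20.0] * 20
tmax = [33.0] * 10 + [32.0] * 10
print(spread_step(tmax, tmin, 10))  # spread narrows by 1.0 C
```

In real data one would test every candidate breakpoint and compare the step against year-to-year noise, but the principle is as simple as this.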

“Phil A (05:23:59) :
Have you tried weighting the adjustments by year,
What I did, and what I believe Giorgio did, was linear regression, which really does just that. This histogram is a diagram of the slopes” – Nick Stokes
If that were so, then how do you reconcile that with the distribution of adjustments calculated by Roman M? Or does the regression only measure the magnitude of the slope, not where it was applied? Because I can’t see how you can apply a v-shaped adjustment pattern like that without creating a warming trend in the 20th century (and suppressing one in the 19th). And if the analysis is not showing that then his method is either flawed or deliberately irrelevant or both.

closet_economist (14:59:46):
“Just cause some emails show some bad data does not mean all the majority of climate change research is false…”
Question for you: forgetting the emails for a moment, do you have any empirical evidence that anthropogenic global warming even exists? Any at all?
I’m not saying that AGW doesn’t exist. It may, or it may not, or it may be so insignificant that it can be completely disregarded. But as of now, there is no credible real world evidence that AGW exists.
Computer models are cited constantly as evidence. But GCMs are not evidence. They are simply a tool, and not a very good one at that since they can not even predict today’s climate by hindcasting using all known historical climate data.
And the papers that are hand-waved through the climate peer review process by pet referees with a wink and a nod, so long as they posit AGW in some form [and where the referees and the writers switch roles in a mutually beneficial circular grant gravy train dance], well, those are not empirical evidence, either.
Actual, real time, temperature readings using a calibrated thermometer are an example of empirical evidence. But as we’ve seen, the papers alleging warming don’t seem to use the numbers from, for example, the original signed and dated B-91 forms that recorded the actual past temperatures; instead, they all use heavily massaged numbers. Again, those peer reviewed papers are not empirical evidence of anything except a key to the grant trough.
And when you dismiss the reams of “bad data”, and then say that “it will take a lot more than one data point and the East Anglia emails to refute anthropogenic climate change is not real”, you fall into the widespread but false conclusion that skeptical scientists must refute AGW.
That is wrong. The scientific method requires that the proposers of a new hypothesis such as AGW must provide full transparency of the data and methods they used to arrive at their conclusions, so that skeptical scientists can replicate and test their experiments and validate their assumptions. If a hypothesis can withstand all attacks by skeptics, it is [to employ an overused term] robust.
The fact that the AGW proponents refuse to cooperate with requests for that data and the specific methodologies used not only voids the scientific method, but it leads skeptics to conclude that the AGW believers have plenty to hide.
So prove me wrong: provide solid, empirical [real world], measurable and testable evidence showing that X amount of human-emitted CO2 causes Y amount of global warming.
But no such evidence has been produced, and because the purveyors of AGW hide their data and methods, their hypothesis becomes simply a conjecture; an opinion based on observation, and nothing more. And the observation? It is that CO2 has risen coincidentally and temporarily along with temperature. But so have other things.
When actual evidence of AGW is provided, get back to us.

Willis Eschenbach (12:09:58) :
The graph above is Figure 9 from the original article. Not sure which “original graph” you are talking about.

I think dorlomin has a point. There doesn’t appear to be a figure 9 in the original post and the graph at the top of this post has the Amount of Adjustment line shifted 1 degree upward from figure 8, the last one in the original.

You are right, it is figure 8 in the original. I changed the scale, but in either figure you can see that there is no adjustment in 1941 to the Darwin Zero data.

JP Miller (14:36:54) :
I answered that question on Giorgio’s thread. If you exclude all data pre-1941, and then exclude fragments with less than ten years of data, you are left with 6552 stations, and the net adjustment rise is 0.0238 C/decade.
I think one reason for the positive bias is the changeover to automated stations in the early 90’s. This led to a positive slope adjustment for almost all stations. Restricting to post-1940 data amplifies it a little.
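A minimal sketch of the filtering step Nick describes — drop pre-1941 data, then drop any station fragment with fewer than ten years remaining — using hypothetical data and field names (the actual analysis code is on Giorgio’s thread):

```python
def usable_fragments(stations, start_year=1941, min_years=10):
    """Keep only post-start_year records with at least min_years of
    data, the filtering described above, applied before fitting a
    slope to each station's adjustment series."""
    kept = {}
    for name, series in stations.items():  # series: year -> adjustment (C)
        trimmed = {yr: t for yr, t in series.items() if yr >= start_year}
        if len(trimmed) >= min_years:
            kept[name] = trimmed
    return kept

# Hypothetical input: station A survives the cut, B is too short
stations = {
    "A": {yr: 0.1 for yr in range(1935, 1960)},  # 19 post-1941 years: kept
    "B": {yr: 0.1 for yr in range(1990, 1995)},  # only 5 years: dropped
}
print(sorted(usable_fragments(stations)))  # ['A']
```

The minimum-length cut matters because a slope fitted to a five-year fragment is mostly noise and would widen the histogram’s tails.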

dorlomin (17:18:43) :
Isn’t the black line the adjustment?
What adjustment are you referring to? All I see are some wiggles in the black line in the early 40s.
Is it possible you’re referring to one of the other figures in the original?

Deep thought (15:28:56) :
I have to congratulate you on the magnificent irony of your choice of pseudonym. I would suggest, though, that if in the future you plan to declaim on other people being “just a bunch of ignormuses” whose ideas don’t “hold wateer”, you might want to spellcheck your comments before hitting the submit button. The rest of your comment was so inarticulate that I don’t believe there’s a software program available that would be of any help.
The emails mean nothing,
one notes only less than 5% that support the denialsts got leaked, odd eh? Not really.
Huh?
As to deferring to the “real experts”, I would like to suggest you review this work from some “real experts” in Canada: http://ams.confex.com/ams/Annual2006/techprogram/paper_100737.htm
Evans and Puckrin 2006 Measurements of the Radiative Surface Forcing of Climate
The link is to the abstract, you need to click the Extended Abstract link to access the pdf of the whole paper.
The experiment in the paper utilized the spectral analysis technique to measure the downward LW radiative flux to the surface from the various GHGs. The authors of the paper were so proud of their work that they chose to end the abstract of their paper with the following:
In comparison, an ensemble summary of our measurements indicates that an energy flux imbalance of 3.5 W/m2 has been created by anthropogenic emissions of greenhouse gases since 1850. This experimental data should effectively end the argument by skeptics that no experimental evidence exists for the connection between greenhouse gas increases in the atmosphere and global warming
If you bother to look at the data they actually collected, a different picture emerges, particularly their Tables 3a and 3b, which list respectively their seasonal observations for winter and summer. The readings for the cold dry air of winter show downward LW flux to the surface from CO2 at 30-35W/m2 and from H2O at 95-125W/m2, in line with the approx. 25% contribution of CO2 to the greenhouse effect. What’s shown in the summer readings is what has had me thinking. The H2O numbers went up to 178-256W/m2 in the warm humid air of summer, but the CO2 numbers went down, not just in relative terms but in absolute terms, to 10.5W/m2, a third of the winter rate, bringing the contribution of CO2 to the total GE to 3-4%, a much smaller value than I’ve usually seen quoted. This phenomenon was so obvious that even the clearly warmist authors commented on the higher H2O flux suppressing the flux from the other GHGs. Since this study was done in Canada, and most heating of the planet occurs in the Tropic and Subtropic latitudes, which are presumably warmer and more humid than even the summer in Canada and would probably have even higher levels of H2O flux, if this phenomenon is real and consistent it would seem to me to indicate that the contribution of CO2 at those latitudes would be an even smaller percent.
Since downwelling LW is pretty much the heart and soul of the supposed “greenhouse effect”, it seems to me that if this experimental technique is valid and if it could be broadly applied, especially in the 40N to 40S latitudes, we would have a fairly definitive measure of the contributions of the various atmospheric components to global warming. Also, since most estimates of DLW in tropic and subtropic latitudes are at least 100W/m2 higher than those measured by the Canadians, if the suppressive effect of H2O was demonstrated to be real, it would make Plimer’s much maligned assertion that H2O accounts for 98% of the greenhouse effect look quite reasonable.
The logical implications of their measurements leapt off the page, even for a pure amateur such as myself, but the “real experts” who produced this study were so focused on supporting AGW orthodoxy that they decided that the 3.5W/m2 difference between their modeled numbers and their measured observations, all of which occurred in the dead of winter and would not seem to be bad news for Canadians, was the most significant aspect demonstrated by their work. I can’t place a lot of confidence in that kind of thinking.
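To make the percentages in the comment above concrete, here is the arithmetic, using mid-range values from the seasonal tables as quoted and ignoring the minor GHGs (so the shares are approximate, not the paper’s own figures):

```python
def co2_share(co2_flux, h2o_flux, other_flux=0.0):
    """CO2's percentage share of the measured downward LW flux,
    relative to the listed components only."""
    total = co2_flux + h2o_flux + other_flux
    return 100.0 * co2_flux / total

# Mid-range values from the quoted winter (CO2 30-35, H2O 95-125 W/m2)
# and summer (CO2 10.5, H2O 178-256 W/m2) observations
winter = co2_share(32.5, 110.0)
summer = co2_share(10.5, 217.0)
print(round(winter, 1), round(summer, 1))  # roughly 23% in winter, under 5% in summer
```

These back-of-envelope shares line up with the “approx. 25%” winter and “3-4%” summer figures cited above; the exact numbers shift a little depending on where in each quoted range you sample and whether the minor GHGs are included.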

*************
SidViscous (20:12:58) :
Interesting, Slashdot has been picking up on it: http://science.slashdot.org/story/09/12/12/2246208/The-Limits-To-Skepticism
While looking through the thread you may say that there is a lot of bad posts and moderating down of skeptics. It’s actually a positive if you look at it from a relative perspective. Just a year ago you wouldn’t have seen so many skeptics speaking up.
I think the main effect we are seeing post CRU e-mails is that the many skeptics that were already there are emboldened to speak up when previously they were not, it’s a paradigm shift.
And as always, great work Willis.
*************
I agree, great work Willis. The people at Slashdot seem to be way more on the warmist side. They, and the Economist, seem to also have a “smug” problem: http://www.southparkstudios.com/clips/104282/?tag=Smug

Ooops. Jim Hansen can provide ‘robust’ correlation from 1200 km away. The man’s got a gift, I tell you.
“Our rationale was that the number of Southern Hemisphere stations was sufficient for a meaningful estimate of global temperature change, because temperature anomalies and trends are highly correlated over substantial geographical distances….The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.”
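For reference, the distance weighting in the Hansen and Lebedeff (1987) reference-station scheme quoted above is a simple linear taper, decreasing from 1 at the station to 0 at the 1200 km correlation limit. A one-function sketch:

```python
def hansen_weight(distance_km, limit_km=1200.0):
    """Station weight in the Hansen-Lebedeff reference-station
    scheme: 1 at zero distance, falling linearly to 0 at the
    correlation limit, and 0 beyond it."""
    return max(0.0, 1.0 - distance_km / limit_km)

print(hansen_weight(0))     # 1.0
print(hansen_weight(600))   # 0.5
print(hansen_weight(1500))  # 0.0
```

So a station 600 km away gets half the weight of a co-located one, and anything beyond 1200 km contributes nothing, which is the sense in which the 1200 km figure is a hard cutoff rather than a claim of uniform correlation.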

George Turner (20:11:06) :
David Wendt,
That’s an interesting paper. It says the downward greenhouse flux is 150 W/m^2. Since it’s thermal emission, it means solar cells should work pretty well at night, too. 🙂
To me what it shows is that the whole notion of CO2 being the prime driver of climate change is a beautiful hybridization of a crocodile with an abalone, i.e. a crocabalone. The “greenhouse effect” contribution of marginal changes in the non-H2O GHGs, and often their total contributions, are swamped by random or non-random changes in the H2O numbers, just as many skeptics have argued from the beginning. The limited spatial and temporal range of the study means it doesn’t yet rise to the level of “smoking gun” proof, but I suspect if you put together a grant proposal to do as I suggested and replicate this experiment broadly across the Tropics, you wouldn’t get a warm reception from the Climate establishment, even though it would seem to offer the best opportunity I’ve seen to resolve definitively the question of CO2’s role in the climate.

Goddard studies show that reference series can be constructed using distances as large as 1250 km. I don’t know where you got the 500 km number but it seems to me that you chose it arbitrarily. It certainly doesn’t cohere with the available literature.

Really stupid question, but since we are attempting to measure the global temperature anomaly, why do we even need to adjust when moving stations? Couldn’t we just take the stations in time period x and the stations in time period x+1 that are the same and produce an anomaly from this? I guess one problem is we lose any change in the area when a station moves, but if we have lots of stations this shouldn’t be significant…
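What the commenter describes is close to the “common stations” (first-difference) approach: compare periods only through stations that report in both, so no move adjustment is needed. A toy sketch with hypothetical numbers:

```python
def common_station_anomaly(period_a, period_b):
    """Mean temperature change computed only over stations reporting
    in both periods (dicts map station name -> mean temp), so moved
    or new stations simply drop out instead of being adjusted."""
    common = set(period_a) & set(period_b)
    if not common:
        return None
    return sum(period_b[s] - period_a[s] for s in common) / len(common)

# Hypothetical data: station C moved (renamed D), so it only appears once
a = {"A": 14.0, "B": 20.0, "C": 25.0}
b = {"A": 14.3, "B": 20.1, "D": 18.0}
print(common_station_anomaly(a, b))  # ~0.2, mean of +0.3 and +0.1
```

The trade-off is the one the commenter anticipates: each move shrinks the common set, so with sparse networks the method loses coverage exactly where adjustments would otherwise be applied.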

David Wendt,
I agree with you there. The results might make a few people uncomfortable, as neglecting the effects of water vapor seems to be routine, with the argument that man can’t affect water vapor and in any case its half-life in the atmosphere is too short. I argue that without changes in water vapor, the climate can’t actually change, since the weather is being held static.
But what shocked me about your link is their citation of an apparently accepted value of 150 W/m^2 due to the greenhouse effect. That’s insane. If true, solar cells would still put out serious amounts of power (about a fourth of their daylight average) when Jay Leno is on.
I don’t think they were measuring what they thought they were measuring. I wonder if, during their carefully prepared experiments, they noticed a giant ball of infrared emitting thermo-nuclear fire slowly moving across the sky.
*guffaw*
To beat it all, somehow their paper passed multiple layers of “peer-review.”
Good thing they didn’t measure the Earth’s re-emission in the blue end of the spectrum or they’d have really calculated a bizarre radiation budget. It’s like the whole daytime sky is lit up with high-energy photons, obviously emitted by superheated commercial jet exhaust, or something. Hrm – or perhaps that thermo-nuclear fireball thingy in the sky.
All I can say is, people who can’t read a thermometer the same way three days in a row should not be given access to sophisticated optical equipment.

Zosima,
Here’s another one to ponder. If station A can have its measured temperatures shifted according to readings at station B, 1250 km away, and station B can have its measured temperatures shifted according to readings at station C, 1250 km further distant, and station C can have its readings scaled to station D, 1250 km more, etc, then in 16 steps we can readjust all the world’s stations to agree with the measurements at Greenwich, England, greatly simplifying this whole endeavor.

You make a big deal out of the unsigned blog entry on The Economist. There is a history of why The Economist’s articles and blog entries are unsigned:
You can read it all here: http://www.economist.com/help/DisplayHelp.cfm?folder=663377#About_The_Economist. Below are the relevant bits regarding the signing of the articles:
Articles in The Economist are not signed, but they are not all the work of the editor alone. Initially, the paper was written largely in London, with reports from merchants abroad. Over the years, these gave way to stringers who sent their stories by sea or air mail, and then by telex and cable. Nowadays, in addition to a worldwide network of stringers, the paper has about 20 staff correspondents abroad. Contributors have ranged from Kim Philby, who spied for the Soviet Union, to H.H. Asquith, the paper’s chief leader writer before he became Britain’s prime minister, Garret FitzGerald, who became Ireland’s, and Luigi Einaudi, president of Italy from 1948 to 1955. Even the most illustrious of its staff, however, write anonymously: only special reports, the longish supplements published about 20 times a year on various issues or countries, are signed. In May 2001, a redesign introduced more navigational information for readers and full colour on all editorial pages.

It’s disheartening to see so many people so deeply in denial. I live in a mountainous country where the glaciers have been receding and disappearing in the past 15 years. This is happening all over the globe, faster than anyone believed possible.
You can ride on the Darwin data all you want. Climate change is here, whether you want to believe in it or not.

Source:http://www.economist.com/help/DisplayHelp.cfm?folder=663377#About_The_Economist
More relevant bits on the reason for anonymity of the articles on The Economist:
“Why is it anonymous? Many hands write The Economist, but it speaks with a collective voice. Leaders are discussed, often disputed, each week in meetings that are open to all members of the editorial staff. Journalists often co-operate on articles. And some articles are heavily edited. The main reason for anonymity, however, is a belief that what is written is more important than who writes it. As Geoffrey Crowther, editor from 1938 to 1956, put it, anonymity keeps the editor “not the master but the servant of something far greater than himself. You can call that ancestor-worship if you wish, but it gives to the paper an astonishing momentum of thought and principle.”

“At least your cowardice is slightly less than that of the Economist, you have a handle, so regular reader at least know what your body of commentary sums up to. Economist readers have no such ability to evaluate the writers. – Anthony Watts”
Uh, I think regular readers of The Economist are quite aware of what its “body of commentary” sums up to. It has been a steady, reasoned, pro-market, and (mostly) politically conservative voice for over 150 years. And its no-byline policy is one reason for its uniformly high quality, eliminating the incentive for individual reporters to build their personal brand by bucking The Economist’s consistent authorial voice. All of which is to say, Mr. Watts, that you were smacked down by The Economist itself, not by a single journalist.
REPLY: That’s B.S. Read the first line: “I woke up this morning…” Note “I”, not “WE”. Cowardice, Ross. I don’t tolerate it – A


Why is The Economist article not signed?
We all understand the concept of unsigned newspaper editorials but this is clearly not one. It starts on a very personal note: “I woke up the other morning to find that I would have to confront yet another headache …”
Obviously, it’s not The Economist who woke up—not that it would be a bad thing—but rather some guy who is annoyed that he has to spend time doing things that seem to be part of his job. He’s not the only person in the world with such feelings but why does he not tell us his name? He goes on with:
“I’ve now spent many hours over the course of two days …”
“I don’t understand that formula. I don’t have the math for it.”
“… I can dismiss Mr Eschenbach”
“But what am I supposed to do the next time I wake up …”
I think The Economist needs to decide. If their writers want to discuss their work problems, lack of education and other gossip, they should sign it so we know who they are and make arrangements next time we encounter their product. If the writers were in fact asked to create an unsigned editorial representing the paper, then the editors should instruct them to leave out their personal problems and adopt the appropriate “we believe” editorial style.

Sam (22:11:39) :
“You can ride on the Darwin data all you want. Climate change is here, whether you want to believe in it or not.”
Sam, you’ll find that climate change has always been here. On glaciers, Alpine glacier retreat today is uncovering plenty of evidence that those glaciers went through a similar retreat in the not-so-distant past.
In case you have forgotten, the MMGW question is: whether our CO2 emissions are causing WARMING, and whether the WARMING will cause a catastrophe. (I emphasise warming, because that is all that radiative forcing can do.)
The reason why people here are talking about Darwin and other sites is because they are part of the body of evidence that has been put forward to claim impending catastrophe. Surely that has to be worthy of scrutiny – no?
“It’s disheartening to see so many people so deeply in denial. ”
Ugh – nobody is in denial of anything. In my experience, most people here are in search of the truth. The fact that you may disagree with them or their findings does not justify the word denial.
On the question of Darwin and other sites, I have been coming to the view that the right thing to do is to discard ALL unreliable data. What better measure of unreliability than arbitrary adjustments which turn an apparent cooling trend into an apparent warming trend?
Discard the data I say. If that means we need to wait for better data, then so be it.

Nick Stokes (23:16:31) :
Nick Stokes (21:20:44) :
[REPLY – To the best of my knowledge, there IS no “raw” NASA/GISS data. GISS starts with already-adjusted NOAA/NCDC/HCN data, “unadjusts” it and “re-adjusts” the shattered remains. So if GISS “raw” data looks like CRU adjusted data, that would come as no surprise. ~ Evan]
Evan, I don’t think that is true. As I understand it, Gistemp works from v2.mean, which is where Willis took his unadjusted data from.
GAK! I’m in the horrid position of needing to say you are both right and both wrong!
GIStemp takes in the GHCN “unadjusted” (that may or may not have significant adjustments in it, but being a monthly average is certainly not “raw”) and it takes in the USHCN with adjustments. FOR THE USA ONLY it does the “unadjust USHCN” but via a somewhat broken method, then blends that with the GHCN “unadjusted” that isn’t raw… THEN all this gets sent onward to be adjusted (in the case of Rest Of World) or ‘re-adjusted’ in the case of USA and in any case comes out maladjusted. So the Darwin data, being non-USA, ought to skip the ‘unadjust’ phase… but still be maladjusted in the end. (What a terrible place to be maladjusted 😉

What is it with these conspiracy theories?
It’s easy to just eyeball that raw data and see that the change around 1941 looks far more plausibly a step change from moving the instrument, plus the normal variation seen between 1936 and 1941. That the author tries to pin his argument on the lack of a change between 1940 and 1941 specifically, when the difference is well within annual variance, suggests that his admission of amateur-level statistical ability may not be far from the truth.
And while GHCN may struggle to be accurate given the distances, non-GHCN adjusted data is meaningless, for all the reasons the Economist points out. So if you reject GHCN you have to reject the data.
To be honest, though, a psychological investigation of the denialists would be more interesting, as debunking their carefully data-mined facts, as the Economist points out, is just too time-consuming when it never leads to anything. If there were a smoking gun, it would most certainly have been found by now, without a global conspiracy between all the world’s scientists and Icke’s reptilian humanoids that rule the world.

What I don’t understand is…
One would think, with ongoing enhancements in technology, that if sites are going to be moved or otherwise altered in terms of shielding or recording devices, the most accurate, calibrated devices available would be used, possibly in duplicate, in accordance with current standards. I would expect that there would be documented standards for elevation, calibration and shielding, etc., together with standard validation procedures to ensure that the collected data is as representative as possible. We should expect, for our taxpayer dollar, that temperature readings taken now can be considered truly representative of the location without the need for any adjustment or ‘homogenization’.
Given the expectation that the most recent data should be likely to be the most accurate, why is it that the largest adjustments are being made to the most recent data and not the hundred-year-old data, collected with instruments hardly comparable to modern devices in terms of accuracy or precision?

As a faithful reader of the Economist for 10 years, I am disgusted by their biased coverage of this entire debacle. The following comment exemplifies this bias to the fullest:
“Your understanding of statistics is as poor as your understanding of chronology. The statistics used by GHCN are average college level tools. You are dazzled by the fact that you don’t understand them, so you make the incredibly foolish assumption that no one without “a PhD in a related field” can understand them either. Some of us actually paid attention in class, you know.”
Since we are on the topic of statistics, perhaps these people should actually ask the opinion of statisticians on not only the data adjustments, but on the whole methodology of the IPCC’s temperature forecasts.
It is however clear why they don’t, because real statisticians will invariably dismiss their ‘models’ and ‘predictions’ as pseudo-science.
Here’s an abstract of an article, published in 2007, doing just that. I read it, from a statistician’s point of view, and their assessment matches with what I already suspected: the IPCC’s forecasts are JUNK.
(I left this abstract on the Economist’s blog as well)
I invite anybody who has any faith in statistics to read not only this abstract, but also the entire article. It pretty much gives a qualified statistician’s take on the whole matter.
————————-
http://www.ingentaconnect.com/content/mscp/ene/2007/00000018/F0020007/art00009
Energy & Environment, Volume 18, Numbers 7-8, December 2007 , pp. 997-1021(25).
Global Warming: Forecasts by Scientists Versus Scientific Forecasts
Authors: Green, Kesten C.; Armstrong, J. Scott
————————-
Abstract:
In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide we asked: Are these forecasts a good basis for developing public policy? Our answer is “no”.
To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, and (3) the effects of feasible alternative policies. Proper forecasts of all three are necessary for rational policy making.
The IPCC WG1 Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references in the 1056-page Report to the primary sources of information on forecasting methods despite the fact these are conveniently available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical.
The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
————————-

I attach the text of a letter to The Economist:-
“Your recent, unsigned, attack on Willis Eschenbach seems poorly researched. His own rebuttal (see http://www.wattsupwiththat.com, Dec 13) makes that clear. I would like only to add that the continual use of ‘scientists’ when referring to global warming enthusiasts, but of ‘sceptics’ or ‘contrarians’ for those opposed, is odious. Both sides of the dispute have their share of scientists (and, of course, unhinged nutters). Particularly beyond the pale is the use of ‘deniers’, echoing as it does the totally unrelated subject of Holocaust denial. Some GW fans openly call for GW denial to be a crime, as in Germany for Holocaust denial. Prominent GW spokesmen, such as Dr Jim Hansen of NASA, refer to trains carrying coal as ‘death trains’, deliberately sending the same message.
It is beneath contempt for The Economist to stoop to that level, especially in unsigned attacks on individuals.”
Tim Johnson

Phil A (16:21:04) :
The analyses of Giorgio and Romanm are consistent – in fact, just two different ways of presenting the same information. There is a set of adjustments over years and stations. One analysis, Roman’s, averages over stations, and presents the distribution over years. Giorgio’s calculates the trend over years, and shows the distribution over stations.
Giorgio finds a mean adjustment trend over stations of 0.0175 C/decade. That’s close to the slope of the rising section of Roman’s graph. In fact, if you compute the regression slope of Roman’s whole graph weighted by the number of stations in each year, the result is 0.0170 C/decade – extremely good agreement.
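The cross-check Nick describes can be sketched in a few lines: fit a regression slope to a per-year mean-adjustment series, weighting each year by its station count. The numbers below are synthetic illustrations, not the actual GHCN, Roman or Giorgio data.

```python
def weighted_slope(years, mean_adj, n_stations):
    """Weighted least-squares slope of mean_adj vs years (deg C per year),
    weighting each year by its station count."""
    total_w = sum(n_stations)
    xbar = sum(w * x for w, x in zip(n_stations, years)) / total_w
    ybar = sum(w * y for w, y in zip(n_stations, mean_adj)) / total_w
    num = sum(w * (x - xbar) * (y - ybar)
              for w, x, y in zip(n_stations, years, mean_adj))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(n_stations, years))
    return num / den

# Synthetic example: a mean adjustment ramping up by 0.002 C per year,
# with more stations in later decades.
years = list(range(1900, 2000))
mean_adj = [0.002 * (y - 1900) for y in years]
n_stations = [50 + (y - 1900) for y in years]

slope_per_decade = weighted_slope(years, mean_adj, n_stations) * 10
```

With an exactly linear ramp the weighted slope recovers 0.02 C/decade regardless of the weights; with real, noisy data the weighting is what reconciles the two presentations.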

djrabbit (22:18:03)
“All of which is to say, Mr. Watts, that you were smacked down by The Economist itself, not by a single journalist.”
Willis Eschenbach’s all too easy rebuttal (demolition) of the Economist’s woefully ill-advised venture into climate science revealed the magazine’s criticism of Willis’ research on the Darwin temperature record to be flawed, lazy and superficial, a mere half-hearted veneer over underlying political prejudice. It’s rather obvious you have not read the rebuttal which is the subject of this thread.
To use a classic House-of-Commons quotation from the British politician Denis Healey when responding to an “attack” from Geoffrey Howe, a scientific attack from the Economist is rather like “being savaged by a dead sheep”.

“On Dec 11th, the Economist published an unsigned article…”
And also on Dec 10th, 9th, 8th… oh dear, it’s been publishing virtually all its articles without bylines for the past 160 years.
At least you can rest assured that you haven’t been singled out.

aidlife (00:15:07) : What is it with these conspiracy theories?
You don’t need conspiracy theories. The billions of dollars being poured into man-made Global Warming grants is enough. This provides the motivation for a scientific conclusion, which looks like collusion to a bystander.
“To be honest though, a psychological investigation of the denialists would be more interesting, as debunking their carefully data-mined facts, as the Economist points out, is just too time-consuming when it never leads to anything. If there was a smoking gun, it would most certainly have been found by now, without a global conspiracy between all the world’s scientists and Icke’s reptilian humanoids that rule the world.”
Dozens of guns smoke away as the warmist spin machine rapid-fires its alarmist myths. It’s like fighting a religion. In Indonesia natural disasters were attributed to humans being punished by god for their immorality. The warmists also attribute hurricanes and droughts to humans and their immorality in driving cars and using electricity in their homes. The religious myth is only slightly less ridiculous than the warmist one.
The warmists: we can’t think of any other explanation for the warming from 1950, so it must be AGW.
Here are a few alternatives for you to chew over:
1. The same causes that produced the near-identical warming of the previous 50 years?
2. The same causes that caused the warming of the Medieval Warm Period? (Oh sorry, that doesn’t exist – so say Mann’s hockey sticks, and Briffa’s private admission that it did cannot be taken as evidence.) Or the Roman WP, the WP of 3,000 years ago, the Holocene optimum? Optimum? How can warm be optimum?
3. Solar, AMO, PDO, Niniya, Niniyo, Niniyum – take your pick.
There are theories other than AGW which explain the temperature variations better. Here is mine:
Global temperatures from 1998 will be the same as 1998 +/-0.3C
Compare this with the AGW theory of IPCC: Global temperatures from 1998 will increase by 0.035C to 0.06C every year (+/- 0.01C)
Admittedly the IPCC AGW theory has been worked out on supercomputers using complex GCMs, while mine came off the top of my head, but mine beats the IPCC’s hands down so far.
Now check back after 5 years to see my theory still winning (and hopefully by then some climate scientists being prosecuted for fraud).

Richard (03:53:38) :
Excellent, excellent!
Do I sense a Paul Ehrlich–John Holdren–John Harte vs. Julian Simon moment??
So, what is the game and what are the stakes? The price of a carbon credit? I’ll be making book on side bets….

Yes, yet another anonymous coward, but I explained that issue earlier regarding The Economist; in the AGW debate I seldom post, mostly read and study.
Mr. Watts, maybe you won’t like how I see things, but I’m betting that you’ll take any criticism presented as proof of my sincerity. I do think you do a useful job, and in the current context maybe you’re a much-needed counterpart to the likes of RC (who in my humble opinion are far more biased and present far more “trash” than you do, i.e. shock pieces and caricature).
I usually come here to find leads for my own research and opinion on the climate issue (BTW, my personal opinion is that we are within normal climate for the millennium but may have 0.2 °C of human impact), and I read from both sides, be it CA or RC, the Air Vent or tamino. That usually gets me labelled a “denier” in most discussions in the real world.
I just wanted to share my complaint to The Economist (copied from the complaint submitted through their site).
Best regards
I subscribed to The Economist for many years and continued reading it on your site (but still bought most issues at the newsstand more often than not).
Through personal curiosity, though my degrees are in economics and an MBA, I took an interest in the AGW issue. After reviewing the data and studies I have come to be what is normally called a “lukewarmer”, and was not particularly pleased to see how strong a stance The Economist was taking in the debate.
Nevertheless it was a position I could understand, given the uncertainty and zeitgeist involved.
A completely different issue is joining the disinformation and ad hominem attacks on skeptics (even those who have a less than pristine selection of “news” often point to useful information and many times raise valid criticisms).
WUWT may have certain shortcomings, but a potentially libelous, and certainly biased and in my opinion wrong, article published without due diligence is not what The Economist has led me to expect.
Best Regards

aidlife (00:15:07) :
“To be honest though, a psychological investigation of the denialists would be more interesting, ”
Err, no thanks. I prefer evidence-based enquiry. Analysis of motives is for those who have run out of ideas.
“debunking … as the Economist points out, is just too time-consuming when it never leads to anything.”
Which only goes to show that the Economist (and you) have no concept of the scientific method. If you are too impatient to allow scientific enquiry to progress, your propositions will be continually dragged down by their uncertainties. And that will just lead to public demonstrations of your frustration. Just like that shown in your post.
“So if you reject GHCN you have to reject the data. ”
Now you are starting to make more sense. If the underlying data series need to have cooling trends arbitrarily converted into warming trends, then I say reject the data and start again. Next time do it better.

***********
Ricky Oberoi (22:12:55) :
“Why is it anonymous? Many hands write The Economist, but it speaks with a collective voice. Leaders are discussed, often disputed, each week in meetings that are open to all members of the editorial staff. Journalists often co-operate on articles. And some articles are heavily edited.”
****************
It’s scary to think that a group of people, the ones who wrote the article (or maybe it was just one – how could we know?), could be that ignorant!!

@ Green RD Manager (13:20:56)
Just took a look at the data for Central Park in NYC. In 1895 they adjusted the temperature down by 1.5 degrees. By 1998 the adjustment was reduced to 0 degrees.
So let’s see if we can understand their methodology here:
– UHI increases as the size of the city increases
– UHI will mean a station will show higher temps as the area around it becomes developed
– To adjust for the effect of UHI, show that the older temps were cooler than measured
– To further adjust for UHI, newer temps should be shown to be equal to or higher than the measured temps
I might not be the brightest guy, but that just seems backward to me. If you know the current temperature is reading higher due to known UHI effects, wouldn’t you correct the newer temperatures down while not adjusting the older temperatures?
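A toy numeric check of the pattern described above. All numbers except the 1.5-degree 1895 adjustment are hypothetical; the flat raw series is chosen deliberately, so that any trend in the adjusted series comes entirely from the adjustment ramp.

```python
def simple_slope(xs, ys):
    """Ordinary least-squares slope of ys vs xs."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

years = list(range(1895, 1999))
raw = [12.0] * len(years)  # flat raw record (hypothetical)

# Adjustment of -1.5 C in 1895, shrinking linearly to 0 by 1998,
# as in the Central Park example above.
adjustment = [-1.5 * (1998 - y) / (1998 - 1895) for y in years]
adjusted = [t + a for t, a in zip(raw, adjustment)]

raw_trend = simple_slope(years, raw)       # zero: the raw record is flat
adj_trend = simple_slope(years, adjusted)  # positive: the ramp alone adds warming
```

The adjusted series gains a warming trend of 1.5/103 ≈ 0.015 C per year even though the underlying record has none, which is the backwardness Charlie K is pointing at.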
– Charlie K

@ djrabbit (22:18:03) :
“All of which is to say, Mr. Watts, that you were smacked down by The Economist itself, not by a single journalist.”
—
Yes, Sir. Many of us commenting on the sneaky nature of the attack understand it. We are smacking down not just this anonymous fellow and his emotional rant. We are smacking down The Economist itself.
I started reading The Economist about 22 years ago and I know their policy. I am ashamed to admit that in my youthful vanity I was then much impressed by their pompous style and it took me quite a while to figure out what they were doing. I say now that while their style was always peculiar it has become outright ridiculous in the digital age. How do I respond to them? “SIR: I hear that you woke up the other morning to find that you would have to confront yet another headache …” or “SIR: May I suggest what you are, like, supposed to do the next time you wake up in an irritable disposition …”?
Michael Lewis said that The Economist “is written by young people pretending to be old people. If American readers got a look at the pimply complexions of their economic gurus, they would cancel their subscriptions in droves.”
Well, they aren’t even pretending to be old people anymore, are they? At least some of them are just arrogant, overbearing kids who not only know how to speak the part but are encouraged to show it.
The Economist has beclowned itself again but I don’t expect them to change their policy as long as it sells subscriptions. It’s part of the racket.
Reference: http://jamesfallows.theatlantic.com/archives/1991/10/the_economics_of_the_colonial_cringe_about_the_economist_magazine_washington_post_1991.php

Willis Eschenbach
“Finally, the Economist did not contact me before publishing an article full of false accusations, incorrect assumptions and wrong statements … looks like peer review is not the only system in trouble here. I thought journalists were under an obligation to check their facts before making accusations …”
Hmmm, just rereading the original smoking-gun story on WUWT. I am missing the bit where Willis Eschenbach contacted the relevant people at GHCN, Darwin or the Australian Met Office.
I’d appreciate it if someone could cut and paste that bit of the article, as I really can’t find it.

Dear Willis,
Like Zosima (20:58:47), I am puzzled about your use of the 500 km limit. I can’t find that in the GHCN papers; in fact, my reading of those papers indicates no such limit. Can you point to a reference for this?
Thanks,
Kevin

dorlomin (10:57:25) : Hmmm … you may have a point there. But Willis Eschenbach has answered the criticism. That’s the nub, and GHCN, Darwin or the Australian Met Office should answer his too. Don’t you think?

@dorlomin (17:23:42) :
There are several records of adjusted temperature data for Darwin station in GHCN data set. The earliest or “zero record” stops at 1991. From the initial post with my emphasis:
“Intrigued by the curious shape of the AVERAGE of the homogenized Darwin RECORDS, I then went to see how they had homogenized EACH of the INDIVIDUAL STATION RECORDS. What made up that strange average shown in Fig. 7? I started at ZERO with the EARLIEST RECORD.”

djrabbit (22:18:03) :All of which is to say, Mr. Watts, that you were smacked down by The Economist itself, not by a single journalist.
Mr. Watts was smacked down, and by the Economist itself? I don’t think that happened, djrabbit. Are you sure you read anything Anonymous wrote? I read it all, but didn’t see that. Also, “sparkleby” took credit for the critique in the comments, though I didn’t read all of what sparkleby said there.
I thought the Economist did a better job of engaging Willis’ science by way of the Scientific Method than the IPCC and its elite Climate Scientists have done, certainly in some critical cases where they didn’t engage the Scientific Method at all.

Richard (14:25:35) :
But Willis Eschenbach has answered the criticism.
I’d like to see an answer to Giorgio’s demonstration that the GHCN adjustments, of which Darwin’s is just one item, have a distribution of trends which is almost symmetric, with adjustments almost as frequently trending down as up. Darwin is in the upper trend tail.

dorlomin (10:57:25) :
“I am missing the bit where Willis Eschenback contacted the relevant people at GHCN, Darwin or the Autralian Met Office.”
This may be a fair point, but it cuts both ways. Steve McIntyre, for example, can’t be faulted for any lack of willingness to contact or engage with climate scientists and institutions. Yet climate scientists have been slow to commend him for this; there has even been some suggestion that his approach was unwelcome or in some way inappropriate…

anonymist (23:18:13) :
Nick Stokes (22:07:48) :
Roman M. has produced a response to this. I can’t vouch for it myself.
Yes, it’s a response. But if you read through the comments, I think you’ll see it agreed that it is another way of looking at the same thing, and not a refutation. My version of that is upthread (01:49:34).

I was very impressed by the Darwin post until I read the Economist counterpoint that “He makes it sound as if he’s just happened to stumble across this one site whilst perusing a debate over climate change in northern Australia. But as his link to that conversation from 2000 makes clear, Mr Eschenbach is already aware that climate change denialists have been trumpeting the apparent anomalies at Darwin for nine years.”
Of course – I thought – Darwin must be an outlier, maybe the most extreme, so I’ve just checked using the GHCN v2 data available via RealClimate.
I ranked the stations by the root-mean-square annual difference between raw and adjusted data: sqrt(sum((raw − adjusted)²) / years of data).
Darwin is 6th out of ~2,600, of which about 1,440 have some adjustment. Darwin has an average adjustment of 1.6 degrees (the median among all adjusted stations is 0.3), but another 10 stations have average adjustments of more than 1.5 degrees. Among these 10, only 1 – CHUL’MAN, RUSSIA – shows an upwards adjustment.
The same signature profile of adjustment as Darwin – with the downward adjustment gradually decreasing up to about 1980 – can be clearly seen in the data for Poona, Sulina and Lankaran. Apart from Chul’man and Haripur, the others also all show heavier downward adjustment in the earlier data series, reducing over time. These are:
JAKOBSHAVN
PRINCE ALBERT A SASK. CANADA
KRASNOVODSK
VESTMANNAEYJAR ICELAND
TANANA
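The ranking metric quoted above can be sketched as follows. The station names and temperature values here are invented for illustration; the real exercise runs over the GHCN v2 raw and adjusted files.

```python
import math

def rms_adjustment(raw, adjusted):
    """Root-mean-square of the annual raw-minus-adjusted differences."""
    diffs = [r - a for r, a in zip(raw, adjusted)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical station adjusted down by 1.6 degrees every year for 4 years:
raw = [28.0, 28.2, 27.9, 28.1]
adjusted = [t - 1.6 for t in raw]
score = rms_adjustment(raw, adjusted)

# Rank stations by the metric, largest adjustment first:
stations = {"EXAMPLE_A": score, "EXAMPLE_B": 0.3}
ranked = sorted(stations, key=stations.get, reverse=True)
```

A constant 1.6-degree offset scores exactly 1.6 under this metric; a station whose adjustment varies over time scores the RMS of the yearly offsets.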

I went to check the Economist’s answer, and it is the same answer as Peterson and Vose…
Let’s just see if we can make it clear: IT’S THE ADJUSTMENTS, STUPID.
As another paper sums things up:
Substantial effort has been devoted to developing improved global temperature datasets from surface observations (Hansen et al. 1999; Jones et al. 1999; Peterson and Vose 1997; Vinnikov et al. 1990; Jones et al. 2001; Folland et al. 2001); rocketsondes (Keckhut et al. 1999); and the microwave sounding unit (MSU) on the National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites (Christy et al. 2000). These efforts focus on adjusting, or homogenizing, the data to remove both gradual and abrupt artificial temperature changes that might result from station moves, instrument and procedural changes, and urbanization (in the case of surface observations), or changing orbital configuration, instrument drift, and differences between instruments on different platforms in the case of satellites.
Anyone notice the usual suspects?
BTW, this is taken from an article where the author, probably a newbie, is too candid about the radiosonde adjustments. I think it’s worth reading, and it can then be used to explain WHY the data sets agree… high-temperature pasteurization:
http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2F1520-0442(2003)016%3C0224%3ATHOMRT%3E2.0.CO%3B2&ct=1

Willis,
An aspect of data capture which has not been commented on is the move to real-time computerised temperature collection. Compared to hourly recordings, continuous measurement must also contribute to higher temps: hourly readings have a high chance of missing the peak, which makes an appreciable difference, especially for the max temp. I have seen up to 1.5 C difference between hourly figures and the max from continuous measurement.
I have plots of 2 stations in Australia, Sydney (Observatory Hill) and Newcastle (Nobbys). They are about 100 km apart, but Nobbys is on a small peninsula away from the city of Newcastle. Up to about the sixties the difference was always negative. It grew positive during the 60s–70s, indicating a change taking place either in the data or in Sydney’s UHI. But it may also coincide with the move to computerisation and real-time data.
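The sampling effect described above can be illustrated with a toy diurnal curve: hourly samples straddle the true peak, while near-continuous sampling catches it. The curve shape and peak time below are invented for illustration, not real station data.

```python
import math

def temp(hour):
    """Idealised diurnal temperature cycle peaking mid-afternoon (hour 14.5)."""
    return 25.0 + 8.0 * math.sin(math.pi * (hour - 8.5) / 12.0)

# Hourly sampling: readings on the hour only.
hourly_max = max(temp(h) for h in range(24))

# Near-continuous sampling: one reading per minute.
continuous_max = max(temp(m / 60.0) for m in range(24 * 60))

# The hourly series misses the true peak between the 14:00 and 15:00 readings.
gap = continuous_max - hourly_max
```

For this smooth curve the gap is small; the larger hourly-vs-continuous differences reported above would come from sharper, spikier real-world peaks, but the direction of the bias is the same.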

Nick Stokes (22:07:48) :
Richard (14:25:35) :
But Willis Eschenbach has answered the criticism.
I’d like to see an answer to Giorgio’s demonstration that the GHCN adjustments, of which Darwin’s is just one item, have a distribution of trends which is almost symmetric, with adjustments almost as frequently trending down as up. Darwin is in the upper trend tail.
Nick Stokes, Giorgio says the GHCN adjustment trends have a roughly normal distribution, so the negative and the positive should balance each other out. RealClimate says QED, end of story. Not so.
This is the claim made of all adjustments, homogenisation, pasteurisation of the records.
Bollocks! It only applies if the adjustments are random.
In the case of Darwin – they are not!
Have a look here: the fallacy of “Giorgio’s demonstration” clearly shows up when you plot the adjustments OVER TIME. A clear trend appears.
http://statpad.wordpress.com/2009/12/12/ghcn-and-adjustment-trends/

Richard (14:38:53) :
Richard, read the comments on that thread. You’ll see that there’s no claim that Giorgio’s result is a fallacy. Both approaches show that the average effect of adjustment is a mean rise of about 0.0175 C/decade (GG’s figure). The max slope of Romanm’s curve is about 0.023 C/dec. Noticeable, but nothing like Willis’ figure for Darwin, which is in the tail of the distribution.

“Finally, the Economist did not contact me before publishing an article full of false accusations, incorrect assumptions and wrong statements … looks like peer review is not the only system in trouble here. I thought journalists were under an obligation to check their facts before making accusations …”

Hmmm, just rereading the original smoking-gun story on WUWT. I am missing the bit where Willis Eschenbach contacted the relevant people at GHCN, Darwin or the Australian Met Office.
I’d appreciate it if someone could cut and paste that bit of the article, as I really can’t find it.

I made no attempt to contact them. Science is built on and depends on transparency. I am tired of being turned down, or pointed in the wrong direction, or put off for another day, or flat out lied to when I ask climate scientists to live up to that requirement. See my story here. So instead, I analyze the scientist’s results to see if they contain errors.
An international and respected news magazine such as the Economist, however, has a long standing and well-respected requirement to verify its facts. As such, accusing me in public of being a liar, without contacting me, is a breach of journalistic ethics. People believe what they read in The Economist, because they believe that The Economist journalists have checked their story and made sure it is true.
But in my case, The Economist journalist did not check their story. They simply lied about what I have said and done. That is unconscionable and despicable.
PS – Why would I contact the Australians or the met office folks in Darwin? My article said nothing about what they’ve done to the Darwin record, that’s a whole other subject entirely.

Excuse the length of this post, but perhaps the suggestion will be of value going forward.
During MIT’s “The Great ClimateGate Debate” (http://mitworld.mit.edu/video/730), Lindzen said he was very surprised by the recent interest in science from the general public, and even named Watts himself and his temperature-station project.
I think that with projects like Watts’s ground-station inventory, McIntyre and McKitrick’s double-checking of fishy peer-reviewed work, and Willis, RomanM and others double-checking the raw temperature data, it is clear that amateur climate science has reached a new maturity that calls for a formal pro-am collaboration forum – a “Pro-Am Climate Science Institute”.
For those who think this is a radical idea, it is not new, and below are examples from two sciences where I know there is extensive and valuable pro-am collaboration. It is an obvious solution for those sciences that study parts of the natural world where the sheer volume and/or vast geographic distribution is just too great for professionals to handle alone. Climate science is such an area. Just the projects above have enough work to keep an army of volunteer amateurs involved indefinitely, along with new projects (like perhaps establishing a network of amateur temperature instruments).
Having a formal pro-am collaboration forum will also help neutralize critics who ignore people like Watts himself, by giving volunteers credibility as they contribute as equals. Jones wanted to “redefine what peer-reviewed literature was” (e-mail 1089318616.txt), so perhaps we can do it for him (he seems a bit busy right now). A pro-am forum also helps legitimate scientists in the field (i.e. the non-warmists) tap into an army of helpers when needed. And it will probably bring many other benefits. No doubt the idea will be scoffed at by the warmists, but I think those who are already doing this level of pro-am work should consider forging ahead, as over time it will be accepted.
————————————————–
Amateur astronomers have many formal pro-am collaboration relationships and organizations that are valued and embraced by professionals. Amateurs have even detected planets orbiting other stars, a feat that became possible for professionals only a decade or so ago. Here are a few examples of the sophisticated level of collaboration taking place. In the early years it was tough going, as the pros were reluctant partners, but now, about 5 years later, it’s a 180-degree turnaround.
Pro-Am collaboration to unveil the atmosphere of Venus
http://www.europlanet-eu.org/demo/index.php?option=com_content&task=view&id=46&Itemid=41
Results from an ongoing collaboration between amateur astronomers and the European Space Agency to support the Venus Express mission will be presented at the European Planetary Science Congress in Potsdam on Wednesday 22nd August. Silvia Kowollik, from the Zollern-Alb Observatory in Germany and one of the participants in the project, said, “This is the first time there’s been a European collaboration between amateur astronomers and scientists. In the United States, they have a long tradition and a lot of experience in this kind of work. In Europe we are just starting.” “There have been huge advances in relatively cheaply available equipment, which means that amateurs can take images in wavelengths from infrared through to ultraviolet with impressive accuracy and content. These amateur observations are the last link in a chain that starts with Venus Express and continues with the professional ground-based activities. When joined together, all these observations will help to peel back the atmosphere of Venus and reveal her mysteries.”
Amateur Discovers Gamma-Ray Burst Afterglow
http://www.skyandtelescope.com/news/3307961.html?page=1&c=y
Last July 25th Berto Monard of Pretoria, South Africa, became the first amateur astronomer to discover a gamma-ray burst’s fading visual afterglow. Dubbed GRB 030725, the burst was first spotted by the NASA/Massachusetts Institute of Technology High Energy Transient Explorer-2 (HETE-2) spacecraft, which immediately relayed its approximate coordinates to astronomers worldwide.
Gamma-Ray Bursts monitored by Amateurs
http://www.pbs.org/seeinginthedark/astronomy-topics/gamma-ray-bursts.html
The afterglow starts to fade right away, so the earlier you catch the explosion, the more likely you are to be able to photograph the afterglow. Astronomers have sent x-ray telescopes into orbit to scan the sky, to catch the first x-ray afterglow of a burst as soon as possible. The latest of these satellites, called Swift, has been making a slew of discoveries since its launch in November 2004. True to its name, the Swift satellite instantly dispatches e-mail alerts to amateur and professional astronomers around the world. Many observers alerted by Swift will be on the wrong side of the world or under cloudy skies, but a few have a chance to catch the visible afterglow. As there are many more amateur than professional astronomers, and many have the kinds of telescopes and CCD cameras needed to measure the fading brilliance of a gamma-ray burst, the odds are that the discovery will be made by an amateur. As described in our film, the amateur astronomer Michael Koppelman captured just such an image, and it turned out to be one of the most distant gamma-ray bursts ever observed—its distance a staggering 11 billion light years. One day when the different kinds of gamma-ray bursts are better understood, it is likely that the amateur astronomy community will have played a significant part in exposing the secrets of these violent events.
Astronomers Launch Pro-Am “Registry” (check out the amateur’s observatory)
http://www.skyandtelescope.com/news/3309186.html?page=1&c=y
The past decade has seen an explosion in the number of backyard observers using high-end equipment and sophisticated software to record faint asteroids, discover supernovae, and even detect extrasolar planets. So it’s not surprising that many accomplished amateurs yearn to contribute directly to scientific research. Among the dozens of practicing astronomers attending the AAS special session was Alan Harris (Space Science Institute), who points out that amateur observers have determined about a third of the 1,500 known rotation periods for asteroids. “Amateurs are always thanking us for the interest we professionals show in their work, but I think it’s going in the wrong direction,” Harris comments. “It’s we who should be thanking them for their involvement.”
—————————————————
Another example comes from ornithology:
Association of Field Ornithologists
http://www.afonet.org/about/index.html
The Association of Field Ornithologists (AFO) is a membership organization dedicated to the study and conservation of birds and their natural habitats. The AFO prides itself as serving as a bridge between the professional and the amateur ornithologist. The organization’s membership and governing council consist of both amateur and professional ornithologists, in recognition of the contributions that both make to ornithology.

If transparency from climate scientists is expected … ???
The fact that there is an “if” in that sentence is very discouraging. It is absolutely expected that climate scientists will reveal their data and methods, as it is in all parts of science. This is neither novel nor restricted to a particular branch of science. It is fundamental to science itself.
In the current case, I have already given the names and locations of the files in the other thread. However, despite your accusatory tone, I’m happy to give them again.
The files are from the GHCN website. The current versions are available at
http://www1.ncdc.noaa.gov/pub/data/ghcn/v2/
They are named
v2.mean.Z
and
v2.mean_adj.Z
To make Figure 8, I extracted the Darwin Zero record from the adjusted and the unadjusted files, and subtracted one from the other.
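For readers who want to try this themselves, here is a minimal Python sketch of that subtraction step. It assumes the documented GHCN v2 fixed-width layout (a 12-character station id, a 4-digit year, then twelve 5-character monthly means in tenths of a degree C, with -9999 marking missing months); the station id used in the usage note below is a placeholder, not necessarily Darwin Zero's actual GHCN id.

```python
# Sketch: per-year mean of (adjusted - raw) for one GHCN v2 station,
# i.e. the adjustment signal plotted in Figure 8.

MISSING = -9999  # GHCN v2 sentinel for a missing monthly value


def parse_v2_line(line):
    """Parse one GHCN v2 record: (station_id, year, 12 temps in deg C or None)."""
    station = line[0:12]
    year = int(line[12:16])
    temps = []
    for i in range(12):
        raw = int(line[16 + 5 * i: 21 + 5 * i])
        temps.append(None if raw == MISSING else raw / 10.0)
    return station, year, temps


def station_series(lines, station_id):
    """Map year -> list of 12 monthly temps for a single station."""
    return {year: temps
            for stn, year, temps in map(parse_v2_line, lines)
            if stn == station_id}


def adjustment_series(raw_lines, adj_lines, station_id):
    """Yearly mean of (adjusted - raw), using months present in both files."""
    raw = station_series(raw_lines, station_id)
    adj = station_series(adj_lines, station_id)
    result = {}
    for year in sorted(set(raw) & set(adj)):
        diffs = [a - r for a, r in zip(adj[year], raw[year])
                 if a is not None and r is not None]
        if diffs:
            result[year] = sum(diffs) / len(diffs)
    return result
```

After uncompressing v2.mean.Z and v2.mean_adj.Z, you would pass each file's lines plus the station's 12-character id, e.g. `adjustment_series(open("v2.mean").readlines(), open("v2.mean_adj").readlines(), "501941200000")` (placeholder id), and plot the resulting year-to-adjustment dictionary.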
w.

I do hope you will offer a response to the Economist’s counter response:
http://www.economist.com/blogs/democracyinamerica/2009/12/climate_manipulation_gun_still
Even with a Ph.D. in physics, it is still too difficult (or tedious) to figure these things out for myself. I am grateful for the work you do, but have been frustrated by a lack of honest debates between qualified parties. I’d pay good money for a ring-side seat to McIntyre vs. Mann, but that isn’t going to happen.
The Economist seems willing to engage in such a debate by posting a response to your response. I would love to see this play out further. I learned a great deal watching the old Firing Line debates on PBS. I hope something similar might play out here.

I have written a reply to The Economist which will appear very soon here on WUWT. Thanks for the encouragement.
w.

Willis,
Making the point that weather stations are important for aviation and that aviation sources can be of help, here is a 2006 air photo of the SE part of Darwin airport, annotated to show a station shift, of unknown date, of 800 m. (Other lats and longs put the temperature station at the white dot about 100 m to the NE of the end of the green arrow, which is where it plots on the current Google Earth photo. But I do not know the date these lats and longs were officially recorded; this was their position in the April 2007 BOM records, updated ???)
I’m trying to get more info on station shifts. This photo is evidence that at least one happened. It is less clear whether the different locations needed, or had, adjustment, and if so, why, when and in which direction.
Will keep you posted. Happy Christmas. Geoff.
http://i260.photobucket.com/albums/ii14/sherro_2008/DarwinAirportpost2006a.jpg?t=1261621917
