Saturday, August 11, 2007

Hansen On McIntyre

James Hansen is the man who, arguably, sounded the alarm on AGW way back in the 1980s. This makes him Public Enemy #2 in the Climate Change Denialist community, Al Gore currently holding the #1 spot. He has been the target of much blogosphere venom after S. McIntyre uncovered a minor error in the GISTEMP temperatures for the U.S.A., the care and upkeep of GISTEMP being one of Hansen's responsibilities. Here is his response to the latest controversy. My favorite bit is the conclusion:

The flaw did have a noticeable effect on mean U.S. temperature anomalies, as much as 0.15°C, as shown in Figure 1 below (for years 2001 and later, and 5 year mean for 1999 and later). The effect on global temperature (Figure 2) was of order one-thousandth of a degree, so the corrected and uncorrected curves are indistinguishable.

Contrary to some of the statements flying around the Internet, there is no effect on the rankings of global temperature. Also our prior analysis had 1934 as the warmest year in the U.S. (see the 2001 paper above), and it continues to be the warmest year, both before and after the correction to post 2000 temperatures. However, as we note in that paper, the 1934 and 1998 temperatures are practically the same, the difference being much smaller than the uncertainty.

Somehow the flaw in 2001-2007 U.S. data was advertised on the Internet and for two days I have been besieged by rants that I have wronged the President, that I must “step down”, or that I must “vanish”. Hmm, I am not very good at magic tricks.

Other responses to the GISTEMP glitch can be found here, here, and here, where McIntyre tries to cool the rhetoric and again invokes some of the weird, rural good/urban evil (Red State vs. Blue State?) political symbolism that has been creeping into Denialist rhetoric over the past several months.
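Hansen's point about the U.S./global scale difference can be checked with back-of-envelope arithmetic. The figures below are illustrative assumptions (the contiguous U.S. is commonly put at roughly 2% of Earth's surface), not numbers from the GISS analysis:

```python
# Back-of-envelope scale check: an error confined to the U.S. gets
# diluted by the U.S. share of global surface area.  Both numbers
# here are rough illustrative assumptions.
us_error_c = 0.15          # size of the U.S. anomaly error, in deg C
us_area_fraction = 0.02    # approximate U.S. fraction of the globe

global_effect = us_error_c * us_area_fraction
print(f"{global_effect:.4f} deg C")  # on the order of a few thousandths
```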

43 comments:

TCO
said...

Hansen just needs to disclose his methods. No vanishing need. No drama. No wrapping himself in the Daily Kos flag. No crying about how he's being silenced (as he does 40+ interviews in a total cult of personality).

Suppose Hansen did write out his methods in words of one syllable. Would Stephen McIntyre, the former mining executive, understand them? Does he have the training and experience to know what to do with them?

Do you, tco, or are you just a true believer mindlessly repeating McIntyre's mantras?

And what's this about Hansen wrapping himself in the Daily Kos flag? The man is a top scientist; he doesn't need a blog/forum to give him credibility - just the respect of his peers, of which you ain't one, tco.

What is it exactly that Hansen is supposed to be concealing? "The code" is a bit vague to me. The realclimate people essentially say everything Steve needs is pretty much out in the open, and he does have a habit of seeing a scandal whenever people don't answer his emails in good time.

Good grief - never mind the scientists, the deniers, and the endless arguments and attacks; just take a look at the world today (weather-wise). It doesn't take rocket science to see something is happening.

Look up the phrase "intellectual integrity". I know it's a difficult concept for you denialists to get your heads around, sort of like Voldemort trying to figure out what "love" means. But try, anyway.

I know what Steve said he had to do, "reverse engineer" and so on. I know what the term means in its normal context. But look at how Steve uses the term "Y2K bug". Can you tell me that he is not using the former term in some weird private sense as well?

The methods are not all that obscure. If anything, they're beyond the ability of *Hansen* to evaluate! Steve M is more qualified than Hansen to evaluate the statistics involved. That's something you, and many others, fail to acknowledge.

People act as if multidisciplinary expertise is useless or worse -- as if it takes a climate scientist to understand every aspect of the work involved. Even though much of the work is statistics, computer engineering, etc etc. How about letting statisticians evaluate the statistics, computer engineers evaluate the software, and so forth? No dice up to now.

Here's a little truth that we use in the world of software development: if your code is so complex that others can't understand it... then YOU won't be able to understand and debug it a year from now.

The analysis methods should be clear, simple, transparent. They should follow proven statistical methodologies. They should be checked and verified by (skeptical!) outsiders. That's what science is all about.

Nobody is trying to "get" anyone. The science is simply useless unless and until it has been proven in the crucible of attempts to falsify the claims.

MrPete: "The methods are not all that obscure. If anything, they're beyond the ability of *Hansen* to evaluate!" Those two sentences together make no sense. They are contradictory.

"Steve M is more qualified than Hansen to evaluate the statistics involved." That is simply a lie. What degrees has McIntyre earned in any branch of science? What experience does he have? It's not all that easy matching number crunching to reality, especially when reality is as complicated as the earth's climate system. It's not as easy as playing with your software.

What is important is for climate scientists to evaluate the work of other climate scientists. They have done so and have found that Hansen's work stands up.

Gavin is busy saying that NASA doesn't need to turn over the code. That everything one would need to know is published. And when challenged where, he gives a laundry list of papers. Shyeah...roight! What a goober.

Any data set large enough and complex enough can be made to reach any conclusion about a target variable. This is especially the case when bad data is included, data is not uniformly corrected/re-coded, and when modelers with an agenda are not allowed to be cross-examined (by those provided with a full explanation of their modeling process). The significance of McIntyre's finding is in the indication that Hansen's team was not at all careful with the quality of their data. Essentially, I am implying that far more problems will be found if they are looked for.
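The commenter's claim about large data sets is a standard statistical caution. A minimal sketch (pure toy data, nothing to do with GISTEMP) shows how easily chance patterns appear when many variables are screened against one target:

```python
import numpy as np

# Illustrative toy example: with enough random predictors, at least
# one will correlate noticeably with a random target purely by chance.
rng = np.random.default_rng(0)
n_obs, n_predictors = 100, 1000
target = rng.standard_normal(n_obs)
predictors = rng.standard_normal((n_predictors, n_obs))

# Pearson correlation of each predictor with the target
corrs = np.array([np.corrcoef(p, target)[0, 1] for p in predictors])
best = np.abs(corrs).max()
print(f"best |r| among {n_predictors} random predictors: {best:.2f}")
```

Even though every series here is pure noise, the best-looking predictor typically shows a correlation well above what any single honest comparison would produce.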

Unrelated: Does anybody talk about the lack of predictive power in any /all climate models? I can build a model with a ("pseudo") r^2 of .04, and it may be useful if no prior model existed... but I wouldn't trust the damned thing to be at all close to reality. If I am wrong on this, or if there is a site/blog where this is deeply discussed, I am kindly requesting a link.

I spent a few minutes noodling around the net on McIntyre, and it turns out that a) his retirement in 2002 was perhaps less than voluntary and b) "failed mining speculator" might be a more accurate appellation than "retired mining executive." See here, here and here for details. (I suspect these documents may age off the Dumont Mining site after five years, i.e. very soon, so make copies now if you want them for reference in the future.) Anyway, them were some dry holes back in ought 2! This bit of history does make me wonder a bit more actively precisely what his source of income has been these last five years. He *says* he's not being paid to be a denialist, but who has audited the auditor? Also, at some point in the past apparently he worked as an advisor to both the federal and provincial (I assume Ontario) governments. Doing what, do you suppose, and for which regimes? Say, bcl, you're there in Toronto...

You're confused, anon 3:16. The temp data doesn't involve a model. I suggest you read Gavin's RC comments about science replication as she is done south of the border. The "auditors" have access to the data along with a clear description of what was done to it. They can work up their own results and compare. Only if there's a significant discrepancy is it time to start looking over code details. IOW, it's time for the "auditors" to do some science. No fair skipping the hard part.

Bloom, Hansen's papers, let's take Hansen 2001 for example, do not contain the information one needs to check the work. On page 7, Hansen writes:

"The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior to 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted)"

First, my point here is about method, documentation and reproducibility, NOT about the truth of AGW.

1. What kind of homogeneity test was performed? Where are the statistical results?
2. This "small region" stretches from Sacramento to Oregon. A list of the actual stations used to compare against these five is lacking. For example, Electra is 400 miles from Crater Lake, Oregon, yet both are on the list of five. The simple question: Hansen said he examined all the sites from "this region". Did he use sites north of Crater Lake? How far north? South of Electra, CA? How far south?

Again, the issue is the lack of transparency.

Finally, even if he did describe the method in the text, one still needs to verify that the code matches the text. For example, Hansen very nicely describes that they "ingest" USHCN data. And he describes the QC tests. But, as we all now know, something was amiss in that process.
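For a sense of what a documented neighbor-comparison homogeneity check might look like, here is a toy sketch. This is NOT the GISS method; the stations, break year, and noise levels are all invented for illustration:

```python
import numpy as np

# Toy neighbor-difference homogeneity check (invented setup, not the
# GISS procedure).  A persistent shift in (candidate - neighbor mean)
# suggests an inhomogeneity in the candidate station's record.
rng = np.random.default_rng(1)
years = np.arange(1900, 1960)
neighbors = 10.0 + rng.normal(0, 0.1, (5, years.size))  # 5 nearby stations
candidate = neighbors.mean(axis=0) + rng.normal(0, 0.1, years.size)
candidate[years < 1927] -= 1.0  # injected break, a la "data prior to 1927"

diff = candidate - neighbors.mean(axis=0)
# Compare the mean difference before/after each candidate breakpoint
shifts = [abs(diff[:i].mean() - diff[i:].mean())
          for i in range(5, years.size - 5)]
break_year = years[5 + int(np.argmax(shifts))]
print(break_year)
```

Publishing the test statistic, the neighbor list, and the detected breakpoints alongside the adjustments is the kind of documentation the commenter is asking for.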

Crater Lake IS colder than the surrounding area. It gets huge amounts of snow. So Hansen throws out the data from this rural cold area because it happens to be cold and assumes something is wrong with the station? Sounds like cherry picking.

That's interesting, thanks. That Nickel company looks like a "shell". Stock promoters have a bad habit of wanting to recycle their fun and games using these shells. I don't know that much about Canadian mining penny stocks, but they have long had a reputation for being dicey both in risk and in shenanigans. It's always bugged me that Steve held out business audits and practices as a model for science, given the decades-long reputation that Canadian stocks have. (But maybe I'm out of date. Certainly that is how they were perceived 30 years ago.)

Actually, I'd think that McIntyre's departure from the company (which seemed dormant) is probably a good thing. Unless it was a falling out of miscreants.

Oh...all that juicy speculation said, Steve has always seemed to be a gentleman in the interactions I've had with him.

He found a discrepancy, wondered about the way the data was being filtered before and after the discrepancy (and wondered why NASA didn't specify), asked NASA, and NASA identified and corrected a bug. Since the gap was not a result of different filters being applied, but a goof, there is nothing that they were withholding IN THIS CASE from him.

Of course I can't get to CA again to confirm this, but giving it a quick read through last night, that was what I concluded.

NASA was able to find the exact source of the problem, once a problem had been shown by the individual station data analysis (the big jumps at 2000). Since Steve did not have the exact code/algorithm, it was a bit hard for him to figure out exactly how NASA had goofed! But he was able to deftly find the goof (which existed undetected for many years) and after that it was easy for NASA to re-examine their formula and find the source of the flaw (in days).

I think NASA is not being open by not sharing the exact code/algorithm. I don't think they are hiding goofs or deliberately creating goofs. Just that they don't want to show exactly what they are doing. Lots of reasons for that (competition, stubbornness, not liking having their work checked). But it's NOT OPEN!

"Dear Sirs,

In your calculation of the GISS “raw” version of USHCN series, it appears to me that, for series after January 2000, you use the USHCN raw version whereas in the immediately prior period you used USHCN time-of-observation or adjusted version. In some cases, this introduces a seemingly unjustified step in January 2000.

I am unaware of any mention of this change in procedure in any published methodological descriptions and am puzzled as to its rationale. Can you clarify this for me?"
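The splice McIntyre describes is easy to illustrate. In this hypothetical sketch (the 0.1°C offset and the trend are made up), joining an adjusted series to a raw series at January 2000 produces a step that the year-to-year differences expose:

```python
import numpy as np

# Hypothetical splice: an adjusted series is used before 2000 and a
# raw series after, leaving a step equal to the adjustment offset.
# All numbers here are invented for illustration.
years = np.arange(1990, 2008)
true_anomaly = 0.02 * (years - 1990)   # smooth underlying trend
adjustment = 0.1                       # constant TOB-style offset (made up)

adjusted = true_anomaly + adjustment   # version used before 2000
raw = true_anomaly                     # version used from 2000 on
spliced = np.where(years < 2000, adjusted, raw)

# Simple diagnostic: the largest downward year-to-year jump flags the splice
jumps = np.diff(spliced)
step_year = years[1:][np.argmin(jumps)]
print(step_year)  # the splice shows up at 2000
```

Checking individual station series for exactly this kind of discontinuity is how the flaw was spotted in the first place.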

I have read your defenses of Hansen's refusal to release his data and computer code. It appears that you are missing the point.

The point is that substantial and material adjustments are being made to the temperature “record” relied on by climate scientists, the adjustments appear to support the hypothesis of AGW, and the adjustment team doing the adjustments refuses to disclose the data and calculations which underlay and allegedly justify the team’s adjustments, essentially declaring, we’re experts, trust us.

In law, this would not be allowed. Let me quote from Florida Rule of Evidence 90.705 (actually Section 90.705, Florida Statutes):

90.705 Disclosure of facts or data underlying expert opinion.–

(1) Unless otherwise required by the court, an expert may testify in terms of opinion or inferences and give reasons without prior disclosure of the underlying facts or data. On cross-examination the expert shall be required to specify the facts or data.

(2) Prior to the witness giving the opinion, a party against whom the opinion or inference is offered may conduct a voir dire examination of the witness directed to the underlying facts or data for the witness’s opinion. If the party establishes prima facie evidence that the expert does not have a sufficient basis for the opinion, the opinions and inferences of the expert are inadmissible unless the party offering the testimony establishes the underlying facts or data.

The Federal Rule is similar. In other words, Hansen’s “adjusted” temperature “record” would be inadmissible in a court of law almost anywhere in the United States, as would any evidence based on his temperature “record”, unless and until the data, and the computer code used to process the data, were fully disclosed.

BCL, the problem with Hansen is that he will not disclose his work. Let's ignore his current mistakes and take a look at an even bigger one that has a much larger impact.

Hansen et al (2001) is the source of all the off-sets used to adjust individual stations prior to processing them for GISTEMP. It has been shown by surfacestations.org that many stations do not meet siting guidelines. Now, what does that do to Hansen's work? Well, Hansen says in (2001):

One reason to be cautious about the inferred urban warming is the possibility that it could be, at least in part, an artifact of inhomogeneities in the station records. Our present analysis is dependent on the validity of the temperature records and station history adjustments at the unlit stations.

Now it appears he is right about this. Why, because he further said:

We are implicitly assuming that urban (local human induced) warming at the unlit stations is negligible. We argue that this warming can be, at most, a few hundredths of a degree Celsius over the past 100 years.

However, NOAA/CRN says:

not be subject to local microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks.

The CRN site rating system shows 1 to 5 degree C variability based on siting. This far exceeds the few hundredths of a degree C over the past 100 years that Hansen assumes for his UHI off-set work (2001).

Now this is about the difference between the rural (Hansen's lights = 0) stations and urban stations. It is not about trends: if the actual temperature readings are wrong, then the UHI off-set is wrong.

We already know from NOAA/CRN's work that it takes 300 stations within the USA to give a 95 percent level of confidence. With Hansen only using ~250 stations, the confidence is lower, which means that even if the stations are right, more error, not less, is being injected.
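The station-count argument rests on the usual square-root law for averaging independent errors. A minimal illustration, assuming a common per-station error spread sigma (the value is arbitrary and the independence assumption is itself an idealization):

```python
import math

# If station errors were independent with a common spread sigma, the
# standard error of the network mean would shrink like sigma/sqrt(n).
# sigma = 1.0 deg C is an arbitrary illustrative value.
sigma = 1.0

for n in (250, 300):
    se = sigma / math.sqrt(n)
    print(f"n={n}: standard error ~ {se:.3f} deg C")
```

The difference between 250 and 300 stations is modest under this idealization; correlated siting biases, which do not average out, are the larger worry the commenter raises.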

If Hansen would release his GISTEMP process it would go a long way to making AGW more transparent. He does not, and problems like this keep popping up.

Oh and if you did not get it, the UHI off-set is applied to every station prior to processing. This means that a bias is being injected into data that cannot be corrected for.

Just wondering, why is it that if you post something that shows everything is not wonderful with the CO2 AGW theory, it does not get posted?

Like the fact that all the claimed over-sampling is a myth? The facts are that all stations that are not rural (Hansen (2001) lights=0) have their trends adjusted to match the lights = 0 stations. There are only ~250 stations used for this in the USA. Per NOAA/CRN, it takes 300 correctly sited stations to reach 95 percent confidence; 250 stations that have problems meeting the site guidance are not enough to over-sample. And that is in the USA, where most of the 'rural' stations exist. South America, for example, has only six Hansen rural stations, half of which are on islands, which are used to adjust the trends of all the stations in SA. It is not any better with the rest of the world. Actually, this raises the question: are there enough stations for minimal sampling, much less over-sampling?