Warmest in a Millll-yun Years

We’re going to be hearing more about this – see the press release here, for example. I’ll add headnotes to this later, but for now I’ll post some URLs that some of you may find handy.

The underlying article by Hansen et al. is at PNAS here. They thank Ralph Cicerone for his review comments. The article itself is a bizarre and undisciplined hodgepodge in which they discuss Hansen’s congressional testimony in 1988 for a while, then an ocean sediment record in the Western Equatorial Pacific warm pool, then sea levels and species extinctions, musing on Dangerous Anthropogenic Interference and the Framework Convention – which Stephen Schneider set out as an objective some time ago. (Ross twigged to the increasing mentions of Dangerous Anthropogenic Interference – the trigger phrase for the Framework Convention, to which the U.S. is a party.)

171 Comments

The graph of Western Equatorial Pacific temperatures on page 14291 is interesting in that the x-axis is logarithmic; however, it does not use the convention I have always seen in log graphs, whereby the factors of ten are shown on the axis. Surely they did not choose their time intervals for emphasis, but when I see unconventional methods it does draw my attention.

I also find it interesting that no uncertainty is shown on this graph. One would expect temperatures based on million-year-old proxies to have significantly greater uncertainty than recent instrument records. It would be encouraging to see uncertainties plotted on this graph.

“Bizarre and undisciplined hodgepodge,” eh? That’s an interesting attack on a paper written by one of the leading scientists of our time and personally reviewed by the head of the National Academy of Sciences. Either they’re both idiots or you’re missing something. The evidence would seem to favor the latter.

It was only a month ago that the NAS was saying uncertainties were such that it was only possible to say that temperatures were the warmest in 400 years and that the evidence was not strong enough to claim that temperatures were the warmest in 1000 years. Yet now, just by asserting it, we are the warmest in 12,000 years – maybe even 1,000,000 years. Why was this apparently conclusive evidence not presented to the NAS last month?

Re #3: The Hansen et al. paper states that we are “approximately as warm now as at the Holocene maximum” and within 1 °C of the maximum temperature of any interglacial. Strictly speaking, this doesn’t contradict a claim that the MWP may have included some years that were as warm as or even slightly warmer than the present. What it does do is make clear that such short-term temperature comparisons are unimportant relative to the larger issue of us being headed toward temperatures that are incompatible with the climate regime of the last million years, and probably the last several million years.

I don’t know a whole lot about the fineness of the foraminifer paleo record, but note that something like the Holocene maximum (which lasted many centuries) would be far more obvious in the record than the much shorter MWP.

Making the claim that we are 1 °C from the maximum of any interglacial implies that we can use proxies to estimate interglacial temperatures precisely enough to support it. The NAS panel had difficulty finding anyone (except Mann) who would agree that proxies could be this precise 1,000 years ago. Here we are a few weeks later, with Hansen claiming confidence that proxies going back 1 million years are more precise than the participants in the NAS panel believed 1,000-year-old proxies to be (excepting Dr. Mann, of course).

An absolute fiasco. We have hundreds of proxies showing the MWP was likely 1 to 1½ °C warmer in Europe than the current warm period: Sweden, Norway, the Faroe Islands, the UK, Finland, Germany, France, the Alps, Spain and Portugal. We also have, as Jan Esper said in Finland last week, western Himalaya proxies giving the same kind of results. And we have Loehle’s proxies using the data of Keigwin & Holmgren (Sargasso & S. Africa).

This is a common logical error, so common that it has a name. It’s called “the fallacy of the excluded middle”. From the land of Wiki:

The logical fallacy of false dilemma (also known as falsified dilemma, fallacy of the excluded middle, black-and-white thinking, false dichotomy, false correlative, either/or dilemma or bifurcation) involves a situation in which two alternative points of view are held to be the only options, when in reality there exist one or more other options which have not been considered.

Earth has been warming at a rate of 0.36 degree Fahrenheit a decade for the last 30 years, according to the research team, led by James Hansen of NASA’s Goddard Institute for Space Studies in New York.
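For readers comparing units: the 0.36 °F/decade figure quoted here matches the ~0.2 °C/decade observational rates cited later in the thread. A minimal sanity-check sketch (the function name is mine, not from any cited source):

```python
def f_rate_to_c(rate_f_per_decade: float) -> float:
    # A trend conversion uses only the scale factor 5/9; the 32-degree
    # offset cancels because a trend is a temperature *difference*.
    return rate_f_per_decade * 5.0 / 9.0

print(round(f_rate_to_c(0.36), 2))  # 0.36 degF/decade -> 0.2 degC/decade
```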

After a recent holiday in Alaska I have no faith at all in tree lines (at least altitudinal ones). On many slopes it is impossible to estimate the tree ‘line’ because there is no such thing, and isolated outlier trees can occur well above the general area of trees.

Re #2
Not sure it is fair to equate publishing an “undisciplined hodgepodge” with being an “idiot”. Most people who can get away with this are fairly skillful & accomplished. That doesn’t mean everything they say is true.

This report (it is not really a study, is it?) is all about “correcting” ocean temperatures. For some time, the ocean temperature record has lagged the increase in the land temperature. The satellite troposphere record lagged the land and ocean temperatures, but they found a way around that by fixing “some” problem, which now makes the satellite temperatures agree. This report is to bring ocean temperatures back up so that all three datasets now agree (and Hansen is one of the maintainers of those datasets).

The report also includes enough cryptic language: “if temperatures increase a further 1 °C, it will be the warmest in a million years.”

Just so that today the media headlines will be “Global warming nears million year high” like the headline in NewScientist this morning.

Hansen has said some pretty interesting things lately. Consider this quote about the climate “skeptics,” from Hansen via Roger Pielke Sr.’s site:

“Some of this noise won’t stop until some of these scientists are dead,” said James Hansen, head of the Goddard Institute for Space Studies in New York City, and among the first to sound the alarm over climate change.

It is really astounding that anyone could claim that it is warmer now than it has been in a million years. On my bookshelf, I have a copy of Croll’s book, Climate and Time, published in 1875. He discusses some of the fossil findings of previous interglacials. Some of his comments:
1. In Cromer near Kessingland, England, fossils of hippopotamus, rhinoceros, and elephants are found. This indicates a much warmer climate than today. Today, the last interglacial is called the Eemian, and in subsequent years similar fossils have been found in Germany.
2. In England, he also reports that Searles Wood found shells that “indicate, as is now generally admitted, a comparatively mild condition of climate.”
3. He points out that fossilized shells of Cyrena fluminalis are in European rivers, whereas today the same species cannot be found only in the Nile and African rivers.
He has more evidence of the same kind and this was all known 130 years ago. It seems to me that Hansen and others are trying to re-write history. They seem to be cherry picking some data that they think supports their viewpoint and ignoring any evidence that they don’t like.

Correction: my last comment should have said:
3. He points out that fossilized shells of Cyrena fluminalis are in European rivers, whereas today the same species CAN be found only in the Nile and African rivers.

I agree with Steve Bloom (post 2), notwithstanding the fact that Willis’s only defense of Steve McIntyre is to resort to the customary (and very tiresome) “logical fallacy” argument (post 11). Having observed climateaudit for a while, I can only say that I am sure psychologists would have a field day analysing the way Steve M interacts with others. On the one hand, Steve M describes the work of scientists whom most people would regard as highly competent as “a bizarre and undisciplined hodgepodge” – and this is but a tiny sample of the vehemence that pours from this site in the direction of other climate scientists (especially if they are anywhere near the “Mann camp”). Is it any wonder that so many scientists want to have nothing to do with him? On the other hand, we have Steve’s descriptions of himself at meetings with other climate scientists – in these word pictures, Steve is an urbane, fair and polite fellow, eager to share a drink with his opponents. Isn’t there a bit of a disconnect here? Something doesn’t quite gel …

And Willis – calling someone’s work “a bizarre and undisciplined hodgepodge” is somewhat akin to implying that the person is an “idiot”. If the work is not in fact “a bizarre and undisciplined hodgepodge” (i.e. the worker is not in this case an “idiot”), then I think that might possibly suggest that Steve is missing something … no, hang on, there is another option – Steve KNOWS that the work is NOT “a bizarre and undisciplined hodgepodge” (so the workers are NOT idiots) – Steve is just trying to deceive us. Which option would you prefer?

This article restates the claim that the GISS instrumental record shows 2005 as the warmest instrumental year; purports to vindicate Hansen’s 1988 congressional testimony while contesting its characterization in State of Fear; launches into a comparison of instrumental data with Mg/Ca sediment data over the Pleistocene; then marches into an invocation of the Framework Convention based on sea levels and species extinctions. What else can you call this article but a hodgepodge? Now, to call it a hodgepodge doesn’t mean that Hansen’s an idiot. It only means that the article’s a hodgepodge.

It would be nice not to get personal here, or even to initiate a flamewar. Steve is certainly not the only one who thinks that the recent paper shows Hansen’s incompetence – at this moment, in this particular question.

The paper only uses the sediment results of others, and the only goal of the present paper is to overhype them. Media success is guaranteed because hundreds of journalists immediately write stories if Hansen publishes a paper, because they create the impression – and they are victims of the impression themselves – that Hansen is one of the few most important scientists in the world, which is of course absurd, despite all the respect we may have for some of his other work.

Hansen et al. can only struggle against one particular calculation of Michael Crichton, who is himself a novelist. Even though he is just a writer of novels, he is the person who is essentially right in this dispute – although his particular quantification of Hansen’s error (300%) was calculated in a biased way. But the qualitative conclusion is clear: Hansen was completely wrong in his 1988 testimony. Moreover, Crichton has offered thousands of such rather strong arguments against this pseudoscience (although most of them were calculated by others), and Hansen et al. just seem far too slow and inefficient to even think about all of them.

Moreover, Hansen uses the space for completely off-topic comments about the value of “engineering fixes” – proposed by the head of the National Academy of Sciences plus another man, a Nobel prize winner – and these criticisms (value judgements) of Hansen’s have nothing to do with the climate reconstructions, and moreover nothing to do with science: he is only playing politics, and naive journalists are eating it up.

It is not true that scientists don’t want to have contact with Steve, and those who drink with him (myself included) are certainly not the only exceptions: he’s just a pleasant guy to be with, after all, it seems. You’re just using the usual trick of trying to isolate someone, but he is in no way isolated.

Note that whenever the discussion starts to redirect away from technical things to personal and political accusations here at climateaudit.org, it is always started by people like you or Steve Bloom who just can’t or don’t want to concentrate on the actual technical questions.

Lubos (29) – you say “whenever the discussion starts to redirect away from technical things to personal and political accusations here at climateaudit.org, it is always started by people like you or Steve Bloom”.

Give us a break – the “personal accusations” came directly from one Steve McIntyre – I was just the messenger who pointed it out.

And who, may I ask, are “people like me”? I take it you know all about me?

And if you are going to resort to the wisdom of Michael Crichton, perhaps you should ask Steve to open a thread/audit on his writings?

A more recent study by Rosenthal et al. 2004 (including two coauthors from Lea’s lab) compares inter-laboratory Mg/Ca replicability and states:

The analysis of foraminifera suggests an interlaboratory variance of about ±8% (%RSD) for Mg/Ca measurements, which translates to a reproducibility of about ±2–3 °C. The relatively large range in the reproducibility of foraminiferal analysis is primarily due to relatively poor intralaboratory repeatability (about ±1–2 °C) and a bias (about 1 °C) due to the application of different cleaning methods by different laboratories. Improving the consistency of cleaning methods among laboratories will, therefore, likely lead to better reproducibility. Even more importantly, the results of this study highlight the need for standards calibration among laboratories as a first step toward improving interlaboratory compatibility.

This took about 5 minutes to locate on the internet, but seems to have eluded the PNAS referees, including Ralph Cicerone.
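As a rough illustration (mine, not from the Rosenthal paper), the sensitivity of a Mg/Ca temperature estimate to measurement error can be sketched from the commonly used exponential calibration Mg/Ca = B·exp(A·T). The value A ≈ 0.09 per °C is an assumed typical sensitivity; published calibrations vary by species, cleaning method and study. Differentiating gives ΔT ≈ (ΔR/R)/A, so a fractional Mg/Ca error maps directly to a temperature error:

```python
A = 0.09  # ASSUMED calibration sensitivity, 1/degC (varies by study/species)

def temp_error(fractional_mgca_error: float, a: float = A) -> float:
    """dT ~= (dR/R) / a, from differentiating R = B * exp(a * T)."""
    return fractional_mgca_error / a

# An 8% interlaboratory Mg/Ca spread alone maps to roughly 0.9 degC with
# this assumed A; Rosenthal et al.'s quoted +/-2-3 degC also folds in
# intra-lab repeatability and a cleaning-method bias, per the excerpt above.
print(round(temp_error(0.08), 2))
```

The point of the sketch is only that analytical scatter propagates multiplicatively through an exponential calibration, which is why interlaboratory standardization matters so much for these reconstructions.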

#33 heh. No Jim Barrett, it’s just funny because this whole thing IS predictable, and Crichton has spoken about it for a long time now. I am sure SteveM doesn’t want this whole debate started again, but I will say this one thing and shut up.

Michael Crichton, just like me, demands full disclosure of the uncertainties in all areas of science (he has a medical degree from Harvard, plus a psychology & anthropology background, by the way, so he’s “amused” as well) and dislikes the hodgepodge of information and fear-based thinking put out by the media, PR or political firms, and the educational materials provided to our kids on many subjects. He says “the greatest threat to mankind” is this very type of thing, and I agree. He feels that bashing industry and progress is ridiculous, and that only people who have never traveled the world would do so. We don’t like any of what is going on right now. So, if that’s what you mean by “so-called wisdom,” fine. We ARE allowed our opinion, whether you agree with it or not.

Yes, at one point he was a productive guy. At some point, he lapsed from being a straight up scientist to being more of a politician. Also, he has some rather frightening tendencies. Recently there was apparently a dust up where he said something to the effect that “the noise” (e.g. honest critique of the “warmer” orthodoxy) would cease when many of us here were all dead. Those are rather totalitarian or even procrustean words, if he actually said them.

Hansen et al.’s listing of three scenarios certainly hedges their bets on climate prediction. This is not uncommon in investment schemes, where several strategies are started in real time and, at some point in the future, the one with the best performance is emphasized and touted. He has also hedged his choice of comparison reference for assessing model prediction performance by showing land/air temperature records, to which the models are purportedly hitched, and station temperatures, which he now claims should be used in conjunction with the land/air temperatures for comparison. If the investment strategy performs better using the S&P than the Dow Index, then by all means find a reason for using the S&P.

Temperature change from climate models, including that reported in 1988 (12), usually refers to temperature of surface air over both land and ocean. Surface air temperature change in a warming climate is slightly larger than the SST change (4), especially in regions of sea ice. Therefore, the best temperature observation for comparison with climate models probably falls between the meteorological station surface air analysis and the land–ocean temperature index.

Hansen et al. give very little information on what went into the 3 scenarios, so that one could compare whether real-world conditions are closer to scenario B or C, and how far they are from scenario A. Evidently the intent of the overall visual effect of Figure 2 in the Hansen paper is to show scenario B and the station temperatures in close agreement. When I see presentations like this one given by stock investment marketers I become very cautious. Hansen et al. should be credited with putting the disclaimer listed below in the main text of the paper and not in smaller print as a footnote.

Warming rates in the model are 0.35, 0.19, and 0.24°C per decade for scenarios A, B, and C, and 0.19 and 0.21°C per decade for the observational analyses. Forcings in scenarios B and C are nearly the same up to 2000, so the different responses provide one measure of unforced variability in the model. Because of this chaotic variability, a 17-year period is too brief for precise assessment of model predictions, but distinction among scenarios and comparison with the real world will become clearer within a decade.

Overall I think reasonable people could consider the material in the paper and walk away with a significantly smaller feeling of impending doom than the editorial part of the paper implies.
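The “unforced variability” point in the quoted disclaimer can be illustrated with a toy calculation: two runs with the same underlying forced trend can yield noticeably different per-decade rates over a 17-year window simply because of internal noise. A minimal sketch with synthetic data (not Hansen’s model output; the numbers are illustrative assumptions):

```python
import random

def trend_per_decade(years, temps):
    """Ordinary least-squares slope of temps vs. years, in degC per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    den = sum((x - mx) ** 2 for x in years)
    return (num / den) * 10.0  # degC/yr -> degC/decade

# Two synthetic "runs" share the same underlying 0.02 degC/yr forced trend;
# different noise realizations (standing in for chaotic variability) give
# different 17-year trend estimates.
random.seed(0)
years = list(range(1988, 2005))  # a 17-year window
for run in (1, 2):
    temps = [0.02 * (y - 1988) + random.gauss(0.0, 0.1) for y in years]
    print(f"run {run}: {trend_per_decade(years, temps):+.2f} degC/decade")
```

This is the sense in which a 17-year period is “too brief for precise assessment”: the noise term alone shifts the fitted decadal rate by amounts comparable to the differences between the scenarios.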

Re #4, Steve Bloom, I found this excerpt from page 50 of the 2004 Arctic Climate Impact Assessment (ACIA 2004).
“Recent studies in Siberia have established conclusively that trees were present across the entire Russian Arctic, all the way to the northernmost shore, during the warm period that occurred about 8000–9000 years ago, a few thousand years after the end of the last ice age. Remains of frozen trees still in place on these lands provide clear evidence that a warmer arctic climate allowed trees to grow much further north than they are now.”
The authors of ACIA 2004 presented a doom and gloom future for the Arctic due to GW, while knowing that the Arctic was warmer 8000-9000 years ago.

The interesting part about “distinction among scenarios and comparison with the real world will become clearer within a decade” is that Hansen and many others have repeatedly stated that we have only a decade to do something before the changes become catastrophic and irreversible. I don’t see how he can make that claim when he himself admits he doesn’t have enough data to make such predictions (17 years being too little).

I think the difference is in the split personalities that one sees in some of these people, and particularly those with more of a policy bent. I see the marketeer/pamphleteer Hansen who wants sincerely to save the world from an impending climatic doom and the scientist Hansen whose published information, minus the editorial, loses its crisis edge and can be interpreted in an entirely different light.

You know, everyone, this complaint about Hansen saying things won’t change until some of the present skeptics are dead is rather unfair to him. Anybody with much knowledge of the philosophy of science should be aware of the quote from someone (I forget exactly who it was just now) to the effect that scientific revolutions often can’t happen until the generation holding on to the old ways dies off. I.e., scientific wars are won by attrition, not persuasion.

I’m quite certain that this was the context in which Hansen was speaking. The trouble is that this cuts both ways. It’s actually the establishment which is pushing AGW. Only when some of these retire or die off will the younger researchers who are afraid to speak up now be able to turn things around.

Us poor Brits tried 8 times to colonise Britain over the past 700,000 years, thwarted mainly by ice-cold temperatures, only to finally succeed 12,000 years ago at the end of the last ice age. I’m only here because of ‘global warming.’

It’s a bit like the “running out of oil” scare. Thirty-three years ago we had the oil crisis and the fearmongering: “no oil after 2000.” Hansen has an ideology in which past millennia were steady, with hardly any changes; then he adds these urban surface temperatures and gets that hockey stick.

Redwood is growing 10 km further north now than in the 1970s in Scandinavia. But in the MWP it was growing 80 km further north than in the late 1990s.

And this is a very important fact, and Hansen can’t deny it. Surely Europe was warmer in the MWP than in the CWP; there are so many documents and proxies. We don’t know the truth about Asia, America and Africa. But the proxies don’t show much evidence of a ‘cooler than today’ MWP. Right?

Re #49: You mean Max Planck, who, finding it impossible to convert people, said: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

During the hypsithermal warming of the early Holocene (~10–6 ka), climatic conditions throughout much of northern North America were warmer and drier than those of the present (Pielou, 1991; Hebda and Whitlock, 1997), largely as a result of increased solar insolation, which peaked between 10 and 9 ka at 65°N (Berger and Loutre, 1991). Temperatures are estimated to have been 2° to 4°C warmer than today for most of this interval, reaching a maximum between ~9 and 7 ka (Hebda and Whitlock, 1997). According to Heusser (1983) and Heusser (1985), rapid warming occurred at ~10 ka in southwestern British Columbia, with summer conditions that were drier and as warm or warmer than today lasting until ~6 ka. Clague and Mathewes (1989) report that treeline elevation in the southeastern coast mountains of British Columbia reached elevations that were between 60 and 130 m higher than today between 9.1 and 8.2 ka. Thompson et al. (1993) argue that the driest conditions (period of maximum summer drought) of the Holocene were reached in western North America at 9 ka. The warmer and drier conditions of this Holocene thermal maximum were gradually replaced by cooler and wetter conditions (Hebda, 1995; Hebda and Whitlock, 1997).

During the mid-Holocene, some 9–6 thousand years ago (ka), the summer in many regions of the Northern Hemisphere was warmer than today. Palaeobotanic data indicate an expansion of boreal forests north of the modern treeline [Tarasov et al., 1998; Texier et al., 1997; Yu and Harrison, 1996]. In North Africa, data reveal a wetter climate [Hoelzmann et al., 1998]. Moreover, it has been found from fossil pollen [Jolly et al., 1998] that the Saharan desert was almost completely covered by annual grasses and low shrubs.

Between 13,500 and 12,500 14C yr BP, the climate at Little Willow Lake was more seasonal, similar to the climates of high elevations within the Great Basin today. Conditions were warmer than today between 9,000-3,100 14C yr BP, with the warmest period between ca. 9,000-7,500 14C yr BP.

Agathis australis was always present and had its maximum abundance from about 3500 to sometime after 3000 yrs B.P. Major changes in the vegetation and within the bog stratigraphy suggest that the climate was wetter and warmer than today before 4000 BP

Radiocarbon-dated pollen, rhizopod, chironomid and total organic carbon (TOC) records from Nikolay Lake (73°20′N, 124°12′E) and a pollen record from a nearby peat sequence are used for a detailed environmental reconstruction of the Holocene in the Lena Delta area. Shrubby Alnus fruticosa and Betula exilis tundra existed during 10,300–4800 cal. yr BP and gradually disappeared after that time. Climate reconstructions based on the pollen and chironomid records suggest that the climate during ca. 10,300–9200 cal. yr BP was up to 2–3 °C warmer than the present day.

The SSTs reach a maximum of 15.6 °C in the early Holocene (~11 to 9 kyr B.P.) and generally decrease thereafter, reaching the modern SST (~14 °C) in the late Holocene (Figure 3b). A warmer and drier-than-today climate over southwestern South America in the early Holocene was also recorded on the adjacent land [e.g., Massaferro and Brooks, 2002; Moreno and León, 2003; Abarzúa et al., 2004], and even in the low latitudes, e.g., in the Huascarán ice core [Thompson et al., 1995].

—

Bush, A.B.G., 2002: A comparison of simulated monsoon circulations and snow accumulation in Asia during the mid-Holocene and at the Last Glacial Maximum. Global and Planetary Change 32, 331–347.

Snowfall during the mid-Holocene, however, is slightly reduced across the entire front range of the Himalaya because JJA temperatures are approximately 1.5–2 °C warmer than today.

It looks like only a few people including Steve M have read the article and the references. Then there are the same few who have nothing better than to attack Steve M for something he never said while reviewing the article.

Re #54
We were set off course quite early on by #2, who chose to focus on the people rather than the science. As we’re here to discuss methods, data and conclusions, I suggest we re-focus on the paper itself. For context, people may want to refer to the thread “Willis E on Hansen and Model Reliability.”

Re #15
The tree line is, as you note, difficult to determine and define. Of course there are two tree lines, one related to latitude and the other to altitude. The latter is usually more clearly defined because the climate transitions are more rapid. It is made indistinct by aspect, that is, the direction the slope is facing, and of course by soils. A major factor in aspect is the accumulation and pattern of snow through the winter. The latitude treeline is generally located in conjunction with the 10°C isotherm, but this also varies with aspect and other factors. The treeline is most distinct and easier to identify on the west side of Hudson Bay than on the east side. Before getting into climate research I flew search and rescue in the subarctic and arctic for 5 years, and through many searches witnessed and used the treeline as a guide, especially in winter. The tree height gradually decreases as you approach the tundra, and in many areas the final transition is surprisingly distinct over a very short distance.
Some argue the treeline is a result of the isotherm, others suggest it may be the cause of the location of the isotherm because of albedo changes and ability to trap snow. Clearly there is some interaction. Despite this relative clarity there are outlying clumps of trees some distance beyond the treeline. These seem to be a function of their size and ability to trap snow. Once they can no longer do that they die. Similarly there are open areas of tundra within the treeline that also seem to be a function of size. As long as the wind can blow the snow clear they remain open.
With regard to warmer temperatures in the past, there is a wonderful photograph taken by Professor Ritchie in Lamb’s Volume 2, Climate: Present, Past and Future, of a large stump of a Picea glauca (white spruce) about 90 km north of the current treeline and radiocarbon dated at 4940 ± 140. The stump looks to be about 75 cm in diameter, which is very large for a tree at the tree line. The legend says, “This tree in what is now tundra shows wider growth rings than the nearest present-day forest 80 – 100 km farther south near Inuvik in the lowest part of the Mackenzie Valley.”

Re: #2
Willis was quite correct to point out the logical fallacy. The assumption that very intelligent people are not capable of putting out sloppy work products is utter nonsense. Being “smart” does not prevent one from possessing the common human flaws. I’ve seen plenty of very “smart” people make honest, simple mistakes. I’ve also observed them doing shoddy work due to sheer laziness. As my dad told me once, “Son, even smart people can do dumb things.” He was right, of course.

Another thing I’ve seen quite a bit of: being critical of someone’s work is equated to an attack on them personally. What rubbish. Anyone whose ego is so invested in their work that their self-esteem rises and falls with it is a fool. As humans we are all flawed, so there’s a good chance our work will be flawed as well. Critical review can identify those flaws so that they can be corrected and our work improved; therefore we should embrace, not shrink from, critical review.

The facts of the report in no way support the conclusion. To try to extrapolate data from proxies and compare that to actual instrumentation records, when you are talking about very small amounts (in degrees), seems disingenuous, to be polite. I know Jim Hansen and he is a nice guy who is very bright, but if you don’t think he has an agenda that he is pushing, then you do not know him at all. He now finds himself in the unenviable position of having overplayed his hand repeatedly and is now forced into tremendous bouts of hyperbole to get attention for his cause. He truly believes in his cause and is sincere in wanting to correct a problem he perceives is facing the world, but he has lost control of his proportionality. He will now have to continue to escalate his “findings” in ever more dramatic fashion, which will ultimately be apparent to all scientists in the climate field and will allow his “facts” to be exposed for the hyperbole they are.


What’s wrong with Branson spending $3 B on alternative energy technology ? It’s good business sense, as long as climate hysteria persists or China keeps developing. I didn’t see anything in the news accounts suggesting that he wouldn’t patent any resulting breakthroughs and capitalize on them, personally. I didn’t see the word “donate.” I saw a great PR move and commitment to use profits from one of his energy-using businesses to get into the energy-producing business. One step up the knighthood ranks sure to follow and good capitalism to boot.

“This may mean that sea level rise has recently shifted from being mostly caused by warming to being dominated by melting. This idea is consistent with recent estimates of ice-mass loss in Antarctica and accelerating ice-mass loss on Greenland.”

(from co-author Josh Willis)
The Lyman et al. paper, along with a conference poster by Willis, both clearly stated there was nothing close to a mass balance between melting and the sea-level change. The wording here almost implies that there is. They say that “this idea is consistent with recent estimates of ice-mass loss” while dodging the fact that the QUANTITY is not. It’s interesting how what I recall being a major caveat of the Lyman et al. paper is being spun this way by a co-author.

and this is but a tiny sample of the vehemence that pours from this site in the direction of other climate scientists (especially if they are anywhere near the “Mann camp”) – is it any wonder that so many scientists want to have nothing to do with him?

If you’ve been around here awhile, you know that the reason this site was started was to respond to mudslinging from other scientists in relatively real-time. If you review the correspondence Ross and Steve had with Mann and Rutherford, they were rather patient and courteous. The response they got in return, however, was not.

And as far as “vehemence”goes, have you not read Mann’s comments in reference to Steve? Did you not read his review comments of the recent Burger submission? But Mann has no trouble finding scientists who will work with him. So that can’t be it.

And Willis – calling someone’s work “a bizarre and undisciplined hodgepodge” is somewhat akin to implying that the person is an “idiot”.

I can refer to some of my work as being “a bizarre and undisciplined hodgepodge,” where I didn’t have the time or resources to do it justice. That’s a far cry, however, from calling myself an “idiot.”

“bizarre and undisciplined hodgepodge” eh? That’s an interesting attack on a paper written by one of the leading scientists of our time and personally reviewed by the head of the National Academy of Science.

Steve, don’t be such a square. Big money is involved. BIG. All forces are needed to stampede a carbon credit trading bill through Congress.

Either they’re both idiots or you’re missing something. The evidence would seem to favor the latter.

[If I understood correctly, Berner (who defended her thesis on September 21st) has reconstructed sea surface temperatures from deep sea sediments (somewhere along the Gulf Stream I suppose) during the last 11000 years and found some naturally occurring regularities in the changes (of around 2-3C) in the Gulf Stream temperatures.]

Re #55: Nice try, bender. It was not I who used that hyperbolic language to describe the Hansen paper. And of course your insertion of the Dr. Evil pic was intended to keep the discussion right on course.

Re #63: The lack of a mass balance refers to the apparent discrepancy between the degree of recent melt inferred (not measured) by Lyman et al and the satellite melt measurements (from GRACE and radar altimetry), which are much increased but nothing like what is needed for the match. As Josh Willis points out, if Lyman et al are proven to be correct (or even partly correct) then we have entered a new and more rapid phase of AGW. If they are wrong then the melt rate is still increasing very rapidly, although perhaps not catastrophically (using that term relative to the risk of rapid ice sheet collapse). The twists and turns of the skeptics/denialists to try to make the Lyman et al results mean something different have been amusing to watch.

Re #65: The quarterly profit reports of the fossil fuel companies should be very easy to follow, and yet you persist in looking in other directions. Consider changing that handle to “Don’t Follow the Money.”

As bender pointed out in his comment #55, the previous Willis E analysis of Hansen’s scenarios was done with the idea that the computer simulations work with incremental values (anomalies), and as such the models can be zeroed together at some point in time with an actual temperature and then evaluated against actual temperatures over time. I think that analysis would be sensitive to the particular year one uses as the starting point.
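The zeroing idea, and its sensitivity to the baseline year, can be sketched in a few lines of Python. The numbers here are made up for illustration; they are not Hansen’s scenarios or the GISS record:

```python
import numpy as np

def rebaseline(series, years, base_year):
    """Shift an anomaly series so that it reads zero at base_year."""
    series = np.asarray(series, dtype=float)
    return series - series[list(years).index(base_year)]

years = [1958, 1963, 1968]
model = [0.10, 0.25, 0.40]   # hypothetical model anomalies (deg C)
obs   = [0.05, 0.10, 0.30]   # hypothetical observed anomalies (deg C)

# Zeroing both series at 1958 versus 1963 changes every
# subsequent model-minus-observation difference:
d58 = rebaseline(model, years, 1958) - rebaseline(obs, years, 1958)
d63 = rebaseline(model, years, 1963) - rebaseline(obs, years, 1963)
```

With these numbers the final-year mismatch is +0.05°C under a 1958 baseline but -0.05°C under a 1963 baseline, which is exactly the sensitivity to the starting year described above.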

Given all the problems that can be pointed out with the three scenario fits to actual temperatures, it is Hansen the scientist, and not the pamphleteer, who points to the “accidental” fit of scenario B to actual data, and does so in the main text and not in a small-print footnote.

In the second paragraph below, he additionally points to a problem with a comparison of observed temperature change against scenario A alone for the period 1988-1997, where that was the only comparison made. My question is: were scenarios B and C developed after this time period?

What I think the Hansen et al paper is saying here is that, in their eyes at least, the fit of scenario B is good despite being accidental. I give them credit for being honest, even though, when taken in total, the statement gives little confidence in the validity of that model.

Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world. Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used (12), 4.2°C for doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3 ± 1°C for doubled CO2, based mainly on paleoclimate data (17).

More complete analyses should include other climate forcings and cover longer periods. Nevertheless, it is apparent that the first transient climate simulations (12) proved to be quite accurate, certainly not “wrong by 300%” (14). The assertion of 300% error may have been based on an earlier arbitrary comparison of 1988–1997 observed temperature change with only scenario A (18). Observed warming was slight in that 9-year period, which is too brief for meaningful comparison.

You attribute all the wrong factors into climate change. You are all blind to the sun in your eyes.

Dr Theodor Landscheidt says it is mostly the sun. Did you ever hear of Solar Torque Cycles or Golden Cycles? Here’s a quote from one of his earlier papers:

“If my El Niño forecast proved correct, this would be the third successful El Niño forecast in a row. The second one had a lead time of 2 years. There are other successful long-range climate forecasts exclusively based on solar activity: End of the Sahelian drought 3 years before the event; the last three extrema in global temperature anomalies; maximum in the Palmer Drought Index around 1999; extreme River Po discharges around 2001.1 etc. (Landscheidt 1983-2001). This is irreconcilable with IPCC’s allegation that it is unlikely that natural forcing can explain the warming in the latter half of the 20th century. In declarations for the public, IPCC representatives stress that taxpayer’s money will be used to develop better forecasts of climate change. What about making use of those that already exist, even if this means to acknowledge that anthropogenic climate forcing is not as potent as alleged.”

This was written before his third correct El Niño forecast, and now it is four. I had been waiting for this fourth El Niño for years. There were many false alarms from NOAA about coming El Niños and La Niñas that made me wonder at times. Alas, he was right again. This is real physics, and I am a physicist.

As bender pointed out in his comment #55, the previous Willis E analysis of Hansen’s scenarios was done with the idea that the computer simulations work with incremental values (anomalies), and as such the models can be zeroed together at some point in time with an actual temperature and then evaluated against actual temperatures over time. I think that analysis would be sensitive to the particular year one uses as the starting point.

I did not do the analysis with that idea. Hansen’s graph started with the models zeroed, to compare the three models, but then compared them to temperature starting at a different point. What’s up with that?

Also, you quote Hansen as saying

Nevertheless, it is apparent that the first transient climate simulations (12) proved to be quite accurate, certainly not “wrong by 300%” (14).

While they were not “wrong by 300%”, as Hansen says, they were also not “quite accurate.”

Re #67
1. Although the text is mine, I didn’t insert the Dr. Evil picture. I was playing tit for tat, but I’m willing to back off if you are. Just admit that hodgepodge is not a sign of idiocy.
2. If you think “hodgepodge” is an unfair term for this paper, then let’s talk about that. I am prepared to be convinced either way. (Who knows – Hansen himself might even agree with that assessment. Sometimes hodgepodge is the only thing policy people will consume.)
3. “You’re trying too hard.”

For what it is worth, in his latest PNAS paper, Hansen claimed that in 1988 he described his scenario B as “the most plausible.” In fact, in 1988 (Hansen et al., 1988, JGR, 93, 9341–9364), he described scenario B as “perhaps the most plausible”. Big difference. I guess including an accurate description of his previous claims would defuse his current point. Thus, I suppose, the PNAS reviewers decided to overlook that bit of mischaracterization.

“Re #65: The quarterly profit reports of the fossil fuel companies should be very easy to follow,”

Rapacious, agreed. But “fossil fuel” is misleading. It’s the oil companies, or those relatively more invested in oil than natural gas. For example, BP put more into NG for years and would have a competitive advantage over competitors more invested in oil. More important to them are the carbon credits…

and yet you persist in looking in other directions. Consider changing that handle to “Don’t Follow the Money.”

Is the direction “oil companies v. truth-telling climate scientists”? I know that one. I know they are sleazy, I know what their interests are; that’s easy. Who is the other side working for, or is it just an intellectual stampede? Those are the more interesting questions. Let’s “Follow” the recent Levin global warming bill: some subsidies here and there, some reports, and… initial enactment of a carbon credit trading scheme.

Put it another way: is it not enough that we get off Saudi oil, clean the air, and do this by making our transportation systems mostly fueled by electricity and hydrogen? We don’t need any stories about carbon, AGW, etc. to reach this goal. Straight up it’s the right thing to do. Bush mentioned “hydrogen economy” years ago, now it’s “addiction to oil,” but doesn’t do anything about it. Silence from Democrats. Nothing. Early in 2004 Kerry, Gephardt started suggesting a “NASA” like program – silence thereafter. There’s no big player money involved in a hydrogen economy – the “market” is against it and pays a lot of money to politicians to not worry their little heads about such things.

The big international players want the carbon credit exchanges. How to get it? Every con needs a positive sounding selling point.

#67 — Steve Bloom says, about Antarctica, “As Josh Willis points out, if Lyman et al are proven to be correct (or even partly correct) then we have entered a new and more rapid phase of AGW.” etc., etc.

But in M. Rowden-Rich (2006), “Antarctic ice & Australian Antarctic science: are they collapsing?”, Energy & Environment 17(1), 37-52, we have this abstract:
“A major glaciological event is underway in Antarctica. The West Antarctic Ice Sheet is collapsing into the sea by the progressive inland movement of the boundary between the slow flowing continental sheet-ice and the fast flowing stream-ice of the major ice streams. The massive changes to ice streams flowing into the Amundsen Sea observed in the last decade include flow-rates that now exceed 1,000 metres per annum. These are internal dynamic flow regime-changes. Surging of the ice sheet is not the result of anthropic (human-caused) global warming but the result of a combination of internal changes combined with the destabilizing effect of past sea level rise on an ice sheet founded on a submerged continental shelf. Antarctic ice-surging will have climatic influences via sea level rise and impacts on oceanic circulation irrespective of mitigation measures proposed. Australian Antarctic science is condemned to irrelevance in the global warming debate. It will be unable to provide objective (disinterested) scientific advice about events in Antarctica until operations are removed from the Department of Environment and Heritage.” (bolding added)

I understand that E&E is one of Steve B.’s most fave journals in all the world, and so he’ll have useful observations to make about the article. But M. Rowden-Rich has a Ph.D. in Antarctic geology from the U. Melbourne Earth Science Dept and may have a better idea about the facts.

Re #72
I took “undisciplined” in the sense of “meandering”, “unpolished”. Neither of which is a sign of idiocy.

I took “bizarre” in the sense of “unusual”. In the sense of “hard to understand its scientific value” because it’s not all that novel and it doesn’t add much to what’s been said already. Neither of which is a sign of idiocy. I don’t know if you read PNAS regularly, but this paper was unusually unfocused for PNAS.

So now maybe you can see where our differences lie. Maybe you over-interpreted what Steve M said, whereas I under-interpreted what he said. I don’t think anybody contributing here needs to be demonized over this.

Speaking of which, the Dr Evil image is not an attempt to demonize Hansen. It is poking fun at all modelers who make large errors and are forced to back-pedal and revise their estimates. If you don’t get the humor, or see how it applies to warmer back-pedaling on “unprecedented-ness”, I can explain that in another post. Even as a modeler, I think it is funny. As a human on planet Earth, I grant it’s more tragicomic than funny.

Too bad the community can’t debate the findings, as E&E isn’t linked on any of the usual scientific search engines (ISI, PubMed, Agricola etc).

That is: I just tried to look it up in my subscription. Nada. My last subscription at Uni didn’t have it either, and it was the largest e-journal subscriber on the west coast. You can’t even look at the abstract on Google Scholar. Talk about backwater.

Does someone have the full citation for [17] Hansen (2005) Am Geophys Union? No volume, no page numbers. Is this it – a non peer-reviewed presentation? It is reference [17] in the paper and he uses it as backing for the statement: “the Earth may be nearing a point of dangerous human-made interference with climate”. That is quite a statement. I can’t imagine it’s provable. It sounds fictitious and alarmist. (FUD: Fear the future. Don’t trust the powerful interests. Doubt Earth’s stability.) He criticizes Michael Crichton, but he may as well have cited Malcolm Gladwell here.

Here’s my guess. He’s come to believe his models are reality. e.g. The models do ‘weird things’ numerically and so he thinks Earth’s climate will do the same. If he were here that’s what I’d ask him to talk about. The factual basis for these Venusian scare scenarios.

Hansen, J. “Is There Still Time to Avoid `Dangerous Anthropogenic Interference’ with Global Climate? The Importance of the Work of Charles David Keeling” J. American Geophysical Union, Fall Meeting 2005, abstract #U23D-01

One minor point on the use of confidence intervals. Since Hansen is a climate soothsayer he should be using prediction intervals, which, dealing with future observations, are quite a bit wider than confidence intervals, which relate to population/ensemble means. I’ve made this point once before so I won’t bother expounding here & now.
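For a simple case (the mean of n normal observations), the distinction is easy to show numerically. This is a generic statistics sketch, not Hansen’s calculation; the t value is hard-coded for 29 degrees of freedom rather than computed:

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(0)
x = rng.normal(loc=0.2, scale=0.15, size=30)  # hypothetical sample
n, s = len(x), x.std(ddof=1)
t = 2.045  # two-sided 95% t critical value for 29 d.o.f.

ci_half = t * s / sqrt(n)          # 95% CI half-width for the mean
pi_half = t * s * sqrt(1 + 1/n)    # 95% PI half-width for one new value

# The prediction interval is wider by a factor of sqrt(n + 1):
ratio = pi_half / ci_half
```

For n = 30 the prediction interval is roughly 5.6 times wider than the confidence interval, which is the point: statements about a population or ensemble mean are much tighter than statements about what will actually be observed.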

We will argue further, consistent with earlier discussion (2, 3), that measurements in the Western Pacific and Indian Oceans provide a good indication of global temperature change.

Umm … no. From the site I listed elsewhere, here’s the correlation between SST and NASA Global Mean Temperature:

I have placed a small, green-filled circle at the location of their Indian and Pacific Ocean sites. While they are right about the Indian Ocean (correlation ~0.7), they are totally wrong about the Pacific, which shows low correlation (~0.4) between global and SST temps.

In addition, they show two Eastern Pacific sites that are about 100 miles apart. However, since the last interglacial, they have diverged by about a degree and a half … does that make for a believable proxy?

Finally, we come to the warming from the 1870–1890 mean to the 2001–2005 mean. We have four proxy sites to choose from. Unfortunately, we are not given error estimates for the proxies … bad scientists. Since the proxy results for the change 1890–2005 vary by 1.2°C, I have assumed that the error in the proxy data is at least that much.

Here are the results, compared to the HadCRUT3 global temperature observations:

A close examination of their Fig. 4 reveals some anomalies. First, the EEP2 (Eastern Equatorial Pacific site 2) proxy is the only area that cooled from the 1870–1890 mean to the 2001–2005 mean … while the other EEP site, a mere 100 miles away, warmed during that period … right. Also, EEP2, the cooling site, is the only ocean site for which they did not show horizontal lines representing the 1870–1890 and 2001–2005 means … coincidence? You be the judge.

Second, the two EEP sites have cooled dramatically in the last five years, by about 0.6 – 0.8°C.

Finally, the Indian Ocean site has an incorrect line for the 2001–2005 mean, which is high by about 0.3°. It cannot be at the peak temperature; that is mathematically impossible, since a mean cannot exceed the largest value it averages. This means that, in agreement with the other three sites, the IO site is not currently warmer than the last interglacial, ~150 kyr ago.
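The “mathematically impossible” point is simple arithmetic: a five-year mean can never sit above the largest of the five values it averages. A toy check, with hypothetical values rather than the actual IO proxy data:

```python
import numpy as np

# Hypothetical 2001-2005 annual values (deg C anomaly):
vals = np.array([0.55, 0.60, 0.72, 0.65, 0.58])

# The mean is bounded above by the maximum, so a plotted mean
# line sitting above the series peak must be drawn in error.
mean_ok = vals.mean() <= vals.max()
```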

Re #72
I took “undisciplined” in the sense of “meandering”, “unpolished”. Neither of which is a sign of idiocy.

snip.

bender, notice how Steve has waved his magic wand and gotten you to debate semantics, rather than substance? Spin. He has no legitimate gripe, and can’t make a scientific case, so he falls back on debating the way you phrase things. Yet, somehow it is bad to point out others’ logical flaws as a “tired” argument? Hypocrisy.

#72. OK, Steve B agrees that the Hansen study is a hodgepodge. Good. I would have thought that any hodgepodge was almost by definition undisciplined – so that calling it an undisciplined hodgepodge may have been redundant, but still correct once Steve B conceded that it was a hodgepodge (as no one has denied). So now that we agree that it was an (undisciplined) hodgepodge, the question is whether it is bizarre or not that PNAS publish an (undisciplined) hodgepodge after being personally reviewed by the President of the National Academy. I thought that this was bizarre; Steve Bloom seems to think otherwise, presumably it is his view that that is the regular course of business and that it is a non-bizarre (undisciplined) hodgepodge. I would have thought that my characterization of the publication was the more generous, but may be prepared to defer to Steve B’s view that the appearance of an undisciplined hodgepodge after personal review by the President of NAS is not bizarre. I’ll reflect on this.

Re #88
Yes, I thought twice before replying on semantics, when I realized that his invitation to clarify actually might serve as a useful prelude to a more substantial discussion on science. The article is a “meandering” “cut and paste job”, which is a bit “unusual” for PNAS. So, semantics put to rest, I think the most critical issue from a public policy perspective is raised in my #81: what about the science behind these runaway Venusian alarm scenarios?

To my biased ear they sound incredibly far-fetched. But I readily admit I lack the necessary background to assess this literature. So I find my opinion is nearly worthless. Which rankles me. Because it goes against my nature to quit reading and quit learning and just place my faith in the hands of a once-upon-a-time scientist who is so clearly agenda-driven.

Why would Hansen actively *avoid* skeptical audiences? What’s the worst-case scenario? Someone called “bender” at CA compares him to a back-pedaling “Dr Evil”? The best scientists actively seek out the most stringent, intelligent criticism they can find, in order to put their hypotheses to the toughest test. That used to be what universities were for. What does it mean when you surround yourself with sycophants and people who think just like you? It rankles my Bayesian priors – makes me prejudicially skeptical of the quality of their science. What’s so wrong with the idea of Hansen coming here to explain himself? If anyone could address the question of propagation of errors through a GCM forecast, surely he could? Look at Judith Curry’s example. She weathered the initial storm at CA and eventually gained a notch of respect and credibility, maybe even a friend or two.

I don’t want to take Hansen down. Lord knows that’s impossible given the inequality of our knowledge & experience. I just want to hear him against the likes of the harshest skeptics here at CA. I want to hear HIM talk about multiproxy studies. Does he understand them? Or is he putting his faith in the paleoclimatologists the same way they are putting their faith in the GCMs? Because you know where that leads …

I am happy to accept tree lines as evidence of warmer climate provided they are based on multiple sites and samples from each site. A single sample from a single site may be just that. Evidence of a single tree. This is much more likely to be the case with altitudinal tree lines, which I trust much less.

Willis: in your post 86, you call Hansen et al. “bad scientists” and yet put up a plot (the first) with the inscrutable reference “from the site I listed elsewhere”! I don’t think that is a very good reference, do you? I’m not sure it is even a RELEVANT reference, given that Hansen et al. were apparently talking about an indicator of “global temperature change” and your plot is some correlation between SEASONAL SST and global temperature. Let’s face it: “auditing” like this is crap. It’s a waste of time for all concerned.

Folland and Parker [1995] developed a model to estimate the amount of cooling of the seawater that occurs in buckets of various types, depending upon the time between sampling and measurement and the ambient weather conditions. Adjustments to the bucket SST values for all years before 1941 are estimated on the basis of assumptions about the sampling time and the types of bucket in use at different times.

(Jones et al. Surface air temperature and its changes over the past 150 years. Rev. Geophys. 37(2): 173-99. 1999)

As Josh Willis points out, if Lyman et al are proven to be correct (or even partly correct) then we have entered a new and more rapid phase of AGW. If they are wrong then the melt rate is still increasing very rapidly, although perhaps not catastrophically (using that term relative to the risk of rapid ice sheet collapse). The twists and turns of the skeptics/denialists to try to make the Lyman et al results mean something different have been amusing to watch.

Lyman et al’s findings were dramatic because they implied yet another phenomenon which conflicted with current GW theory and models.

As I’ve shown in responses to you here and elsewhere, the historical sea level data suggest we’ve had comparable and even substantially higher melt rates in the past, for even longer periods of time. Historically, rates slowed or reversed course altogether. Even if Lyman et al’s results are correct (“or at least partly correct”), they do not necessarily suggest “we have entered a new and more rapid phase of AGW.” That would turn out to be true only if melt rates don’t slow or reverse course in the future as they did in the 20th century.

While I can still get to the Hansen et al paper via Steve’s direct link, I can’t find it listed on the PNAS table of contents page anymore (http://www.pnas.org/papbyrecent.shtml), any idea what may have happened?

The “hockey stick” was completely and thoroughly broken once and for all in 2006. Several years ago, two Canadian researchers tore apart the statistical foundation for the hockey stick. In 2006, both the National Academy of Sciences and an independent researcher further refuted the foundation of the “hockey stick.” http://epw.senate.gov/pressitem.cfm?party=rep&id=257697

I did not do the analysis with that idea. Hansen’s graph started with the models zeroed, to compare the three models, but then compared them to temperature starting at a different point. What’s up with that?

While they were not “wrong by 300%”, as Hansen says, they were also not “quite accurate.”

Willis E, thanks for the clarification on your analysis of Hansen’s three scenarios. Just for my own edification, did you adjust the scenarios to the temperature point nearest their zeroed point? I am assuming that you did not adjust the actual temperatures but used the fact that the scenarios are modeled on incremental changes. I could not find any point where the three scenarios are “zeroed” in Figure 2 of the latest PNAS paper. Where were they zeroed?

I totally agree with what I perceive as your assessment of the three scenario presentations: I see Hansen, the marketer, selling a policy unlike a “good” scientist and dressing up his evidence as a seller of an investment strategy might; and then I see the scientific presentation of evidence by Hansen et al, with all the disclaimers and limitations that, in my view, do not square with the marketing image.

In general, I am surprised by the surprise that the non-AGW oriented people express at presentations like the one under discussion, or at the fact that a prestigious publication/organization accepts them, and by how little the AGW oriented people are bothered by the split personalities of some of these scientists. This approach to the question and study of AGW has apparently become a given and will be used into the foreseeable future. A true scientist, I would think, would have a very difficult time maintaining this split personality without some overwhelming non-scientific motivation.

For my interests, I would prefer digging into these presentations and discussing them like was done with the Emanuel paper to the “who called whom what” renditions into which we seem to slip.

Re #102: From the linked page: “When papers appear in print, they will be removed from this feature and grouped with other papers in an issue.” As it happens Hansen et al “jumped the queue” and went into print immediately, and can be found in the current issue.

Re #98: Michael, I think you remain confused about this. First, the older measurements to which you refer were taken with far less accurate instruments and were of sea level rather than melt rates. I seriously doubt that any of the scientists involved think the older sea level readings are an indication of melt rates anywhere close to the amounts found by GRACE and altimetry, to say nothing of those inferred by Lyman et al. The important point here is that as of the last few years we are in a whole new world of vastly more accurate measurements of both sea level and melt rates (via the ARGOs, GRACE and satellite radar altimetry). For the first time it’s possible to get a more or less accurate mass balance, but as of now the numbers aren’t adding up.

Your point about the inability of the models to reflect abrupt change of this sort is absolutely correct, as Jim Hansen has pointed out in the last year. Similarly, the models have a hard time reflecting short-term sea level fluctuations, but again this is not news and is irrelevant to the current mass balance problem. I would suggest that there are times when RP Sr. tries to fit square pegs into the round hole of his GCM deficiency campaign.

I hope there turns out to be some basis for your confidence that an extreme melt rate on the scale inferred by Lyman et al could be expected to bounce back to a lower level. I’m not so sure, but OTOH I think it’s more likely that in a couple of years it will turn out that the melt hasn’t really been that high.

Re #103: Press release does not equal press coverage, as I think you’ll find in this case.

Re #106: I don’t really know, but I suspect it implies that the paper is considered to be very important.

Re #107: BTW, Steve M., I should clarify that I did not agree with your “hodgepodge” characterization, but rather was pointing out that I wouldn’t have bothered arguing with you about it absent the pejorative (in tone if not content) modifiers you tacked on.

#112: I would tend to agree with this statement. It really doesn’t matter that much, but the actions which need to be taken w/regards to hurricane policy are the same whether we worry about AGW or not.

All these comments yet none states the obvious – the content and timing of this article are almost certainly a function of the election season.

Steve finds the article a “hodge-podge” and “bizarre”. These characterizations are only apt if you consider the article to be a scientific paper. Normally a scientific paper tries to make one point based on some new results. This paper is not much like that. It takes a shotgun approach to a number of loosely connected themes. Some of the citations seem sober and responsible; others seem to be based on poorly supported popular alarmist materials.

This would indeed be a hodge-podge except for the fact that it is not a work of science at all. It is a collection of sound bites designed to further a political agenda. Viewed in this light it has an appropriate structure.

If I were a partisan for Kerry and Gore (like Hansen), and I recognized that the Wegman Congressional testimony had damaged their agenda, and I saw that the momentum of “An Inconvenient Truth” was waning, what would I do? Well, I might fashion a paper that would impact the popular press and the electorate’s mindset just in time for the vital fall elections.

Why should this surprise anyone? The New York Times published some more national security leaks this week. These appear to be harmful to the Bush administration. Do you think the leakers are just disinterested and neutral citizens?

Am I cynical? Should I suspect the motives of people like Hansen? Well, yes. When money or other goodies are at stake, you should be at least a little questioning. Is there anything at stake here? In terms of money, the environmental industry is perhaps a half billion dollars a year and accounts for maybe 50,000 jobs. And of course the Presidency and the control of Congress are also in play.

The Hansen paper is NOT the message. The press release based on the paper is the message. I wouldn’t be surprised if the press release was written first and the paper then composed to support it.

Well, since numbers are useless without confidence intervals, I decided to analyze the Hansen et al. paper and add confidence intervals.

There are several sources of error, both in the paleo data and the modern data.

According to the paper and the underlying references, the 95% confidence interval for the Mg/Ca method of estimating modern SSTs is +/- 1.2°C. However, I have not been able to find any estimate of the confidence intervals for this method using million year old samples … To be conservative, I have used that figure (1.2°) as the error for the full paleo record, although it seems certain to increase with the age of the sample.

In addition, we have the error in the modern gridded SSTs. The paper “Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century” (N. A. Rayner et al.) puts the average 95% confidence interval for the SST grids at 1.6°C in 1870, decreasing to 0.45°C in 1995. Since the modern SST is used to calculate the paleotemperature, this 0.45°C error must be added to the paleo temperature error, and the individual year’s error is used for the modern temperature error.

Finally, we have the splicing error between the modern and the paleo temperatures. This I have estimated as being equal to the 95% confidence interval for the paleo temperatures (without the modern SST error), or 1.2°C. I have added this error to the paleo record.

Here is the result of the analysis:

With these error estimates, we can examine Hansen’s claim that “… the Earth is now within ~1°C of its maximum temperature in the past million years, because recent warming has lifted the current temperature out of the prior Holocene range.”

Clearly, given these errors we cannot say that we are warmer than the Holocene. Remember, to say that two data points are statistically different, it is necessary that their confidence intervals (shown by the error bars in the graph) do not overlap. Thus, in order to make the lesser claim, that we are warmer than the Holocene range, we’d have to warm up by about a degree and a half. And to say that we are warmer than the old record, we’d have to warm by about 2°.

While looking for Hansen’s 2005 UD23D-01 reference, I happened upon a short article he co-authored: J. Hacker, J. Hansen, et al. (2005) “Predictability” BAMS (December issue) 1733-1737. This article addresses error in GCMs and its impact on predictability. It’s very qualitative, and its only reference to an actual published study on error propagation was: “Although model and initial-condition error have been addressed extensively in the literature (e.g., Tribbia and Baumhefner 1988), the synergy between error sources has kept quantification of their respective importance elusive.”

That is, propagation of GCM error has been so thoroughly studied that the best citation one can find in 2005 to cite from in the “extensive” literature is a 1988 paper. The total error is parsed into model error and initial condition (i.e., measurement) error. These plus their interplay enter into “predictability.”

In any case, I found the 1988 paper: J.J. Tribbia and D. P. Baumhefner (1988) “The Reliability of Improvements in Deterministic Short-Range Forecasts in the Presence of Initial State and Modeling Deficiencies” Monthly Weather Review 116, 2276-2288.

This paper is a serious analytical study with plenty of mathematics, including an appendix that shows the derivation of a “Landau Equation for Data Error Growth” that is used to model the error propagation through a calculation.

The abstract: “The reliability of reductions of forecasting error derived from changes in the quality of the initial data or model formulation is considered using a signal-to-noise analysis. Defining the initial data error as the data error source and the model error as the modeling error source, we propose the use of the modeling error as a baseline against which potential reductions in data error may be calibrated. In the reverse sense, the data error can also be used to calibrate the reduction in the modeling error. A simple nonlinear model is used to illustrate examples of the above reliability test. Further applications of this test to actual numerical forecast experiments using analyses from both augmented FGGE database and the operational NMC data base are shown. Forecast comparisons using various suites of physical parameterizations are also presented.”

I haven’t read it thoroughly at all yet, however, at the end Tribbia and Baumhefner wrote: “It is our opinion that research in this area [i.e., the structure of data and modeling error sources — P] is of the utmost importance for we cannot begin to rationally prioritize our research efforts without a better understanding of the global structure of data and modeling errors.”

“Utmost importance” bears repeating. So far as I can tell, from searching the literature, no one has extended Tribbia and Baumhefner’s work. That also appears to be the implicit admission of Hacker and Hansen as well, in that they referenced no one since T&B 1988.

Considering, then, as T&B state, that defining error is necessary to a rational prioritization of climate research, and further that no one appears to have worked since 1988 to define that error (at least not in the theory-based way of T&B 1988), then it appears that climate science is proceeding with not-rational priorities. Is this a surprising discovery?

I have both papers and will deposit them with John A. If there is interest in having them, John A will be glad to accommodate you within the limits of copyright law. Won’t you, John? 🙂

As a final note, let’s recall that in his recent seminar in Texas, Gerry North said that in his considered opinion as a climate physicist, the best evidence for AGW was the physical models. The GCMs, for which the error is not known and not well-studied. And likely to be large.

Now that Willis has provided the perfect contextual background in #120, here is my comment on the Hansen paper. How on earth did it pass review without any discussion of uncertainty around the measurements in the graphs? There is not a single journal I can think of that would ever accept a paper of mine (or yours, dear reader) without some expression of uncertainty. (And, I wouldn’t have it any other way.)

This was my complaint about MBH98. This was my complaint about the BAMS article on hurricane frequency. It is my major beef with climate science. If you want to argue that something is unprecedented, you need a statistical test to prove that. When are climatologists going to accept that uncertainty matters (shades of #122)? When are policy makers going to figure out that they need to know what these uncertainties are if they’re going to formulate effective policies?

Undisciplined hodgepodge. The paper is fatally flawed through lack of rigorous proof and should never have been accepted in its current state. I challenge the reader to try to submit a similar article on a different theme with that quality of analysis and see how far you get. Probably not even past the Associate Editor who decides whether it’s worthy of review. Ok, maybe that far. But not accepted without major revision.

Re #123: Does #122 seem credible to you? Do you think Hansen was just making it up when he said there had been “extensive” other work? A quick glance at the paper titles from a GS search on the cited paper would seem to indicate otherwise. Oh, and if one is citing a body of literature and a (or perhaps the in this case) important early paper remains valid, how is it inappropriate to cite that paper?

Hey, Steve, #126, I did the analysis for 120, and I’m not sure, so why should bender be sure? It’s called science, you do your best and put it out for other people to tear apart. Have you found any flaws in my analysis? There certainly may be some … but have you found any?

In any case, bender, in 124, did not say that he was sure my analysis was correct. He said a paper without uncertainty was not science, and I agree.

Re: #125 — Following Steve B.’s comment I did an ISI SciSearch on Tribbia and Baumhefner’s 1988 paper and found 37 citations since then, including 3 more by the same two authors. Most of those 37 apparently have to do with evaluating models for weather prediction, as opposed to climate prediction. In fact, Tribbia’s and Baumhefner’s 1988 paper also has to do with evaluating models for weather prediction as opposed to climate prediction, and so it’s a little obscure to me why J. Hacker, J. Hansen & company would cite T&B 1988 at all in the context of a paper on GCM predictability.

Of those 37 citations, a few were on evaluating theoretical models for error propagation in complex feedback models, including one or two by C. Nicolis, who is a deep expert on irreversible and chaotic systems. He was a student of Ilya Prigogine, I think.

However, there were none that I could find that attempted a directly relevant propagation of known parameter limits through a GCM to see how the time-wise projections changed. So far as I could see, no one has ever taken specific account of the types of calculations in a GCM with respect to how error propagates through the cumulative operator algebra with which the models are constructed.

That is, GCMs involve chained calculations of first- and second-order differential equations (momentum and acceleration). It must be known how to propagate error through such calculations. Why hasn’t anyone propagated the actual errors?
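For what it’s worth, the kind of exercise I have in mind is easy to sketch. Here is a minimal Monte Carlo propagation of a single parameter uncertainty through a toy energy-balance model; everything here (the model form, the parameter values, the assumed spread) is my own illustrative assumption, not anything taken from an actual GCM:

```python
import random
import statistics

def toy_climate(lam, forcing=3.7, heat_cap=8.0, years=100.0, dt=0.1):
    """Euler integration of a toy energy-balance model:
    C * dT/dt = F - lam * T   (all values illustrative, not from any GCM)."""
    T = 0.0
    for _ in range(int(years / dt)):
        T += dt * (forcing - lam * T) / heat_cap
    return T

random.seed(0)
# Assume the feedback parameter lam is known only to ~N(1.25, 0.25) W/m^2/K.
finals = [toy_climate(random.gauss(1.25, 0.25)) for _ in range(2000)]
mean_T = statistics.mean(finals)   # central projection
spread = statistics.stdev(finals)  # propagated parameter uncertainty
```

Nothing about this is climate science; the point is only that the machinery for turning a parameter uncertainty into a projection uncertainty is standard and cheap to run.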

Honestly, I don’t understand it. In the past, in my experience, when some error has seemed to me to be too obvious to have been neglected, it’s not been neglected. I just didn’t understand the system properly. I certainly don’t understand GCM physics, and would like to know what’s escaping my notice here. I do understand science, however, and know the importance of error propagation and the need for error limits to set reliability bounds on calculated results.

This seems to be missing from GCM projections, no matter Steve B.’s ad hoc dismissals. Re: #123, I, too, would like some expert GCM modeler to arrive here and clarify this mystery. I’d like to know where I’m naively wrong, because otherwise I plain don’t understand how anyone can credit model projections without knowing the error bounds.

A question from statistical ignorance but a pretty good gut-level grasp of numbers…

I don’t see how the splicing uncertainty is based on the paleo uncertainty. The paleo data gets spliced onto the modern data (not vice versa), and is subject to a splice uncertainty that’s related to the modern uncertainty plus the splice-point center difference. Thus, the paleo uncertainty cancels out of the splice uncertainty.

Algebraically, it is (modern uncertainty range) + (modern-paleo splice point offset)… or +1.8/-1.5 in this case. (Eyeballing the splice point offset as 0.1°). I’m just gut-level visualizing here… in words:

To me, there are two separate ideas represented here, and it would be helpful to visualize and describe them separately.

One is the data uncertainty, represented by a (95% confidence) “flex pipe” that encases the (95% confidence) actual data “wire”. Two pipes actually: an old proxy pipe and a new measured pipe.

The other is the splicing uncertainty, which shifts the flex pipe.

At the splice, the old pipe diameter is 2.4° (1.2*2) and the new is 3.2° (1.6 * 2).

Eyeballing, the pipe centers are offset about 0.1° at the splice point. So the old pipe edges are around 28.5 +/- 1.2° = 27.3° to 29.7°; the new pipe edges are around 28.6 +/- 1.6° = 27.0° to 30.2°.

Here’s an obvious question about the splice: the modern SST measurements are of water at the surface, and things like whether the sample was taken in a canvas or wooden bucket are believed to have enough impact on the results to warrant guessing and adjusting for the bucket method.

I haven’t seen anything in the articles to state the effective depth that the Mg/Ca proxy is measuring. For purely paleo comparisons, perhaps this doesn’t matter. But when you splice it to a measurement made in a completely different way, it does matter. What if the plankton is recording temperatures 10 m or 20 m below the surface? Does it make a difference, and how much?

First, the older measurements to which you refer were taken with far less accurate instruments and were of sea level rather than melt rates.

We’ve got past sea level rise data, we know how much of it was roughly due to thermal expansion, and we have the temperature trends. The implication of the combination of the three is that melt rates exceeded current melt rates for periods of time ranging from a few to several years.

So old, less accurate data in this case is readily dismissed, but when it comes to early 20th century temperature data, proxy data from 100-1,000,000 years ago, etc, it’s relatively accurate as long as it supports the idea of AGW (or at least can make scary headlines to help sway people into accepting the idea)? When Spencer and Christy came out with their “more accurate” satellite data (especially after it went through several years of scrutiny and a few corrections), did you exclaim “Oh gosh, warming isn’t as bad as we thought it was,” or did you take a stand against it?

I find it extremely amusing you are willing to make historical judgements about melt rates based on a decade or so of “accurate” sea level change measurements – especially on a thread with a topic such as this, where we only have “accurate” data for maybe 0.01% of the time period in question!

Re #125
Yes, #122 does seem credible (incredible as that may sound). But impressions can change, and so I challenge you to cite a definitive analytical study of propagation of error through a GCM calculation. The question has been asked many times here. You’re a reader, you know that. If #122 is not credible, then where are the answers?

I deliberately underestimated the uncertainty in the splice. The true uncertainty is neither what you say, nor what I used.

The difficulty is that the errors add orthogonally. There is one chance in 20 (95% confidence) that either pipe is at the top or bottom edge of the range. But the chance that both pipes are at the extreme end of the range simultaneously is actually smaller than one in 20. Thus, the errors do not add directly, but at right angles.

Pre-splice, the error in the paleo data is ±2.4. At the start of the modern data, the error in the modern data is ±1.6. The error (including the splice) is thus

combined error = √(2.4² + 1.6²) = 3.2

However, since I wanted to be conservative, I used 2.8 for the paleo error instead of 3.2. I believe this is correct, and I’m sure bender or someone will tell me if it’s not.
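For readers following along, adding errors “at right angles” is just addition in quadrature. A minimal helper (my own sketch, using the thread’s numbers) makes the arithmetic explicit; note that √(2.4² + 1.6²) evaluates to about 2.88:

```python
import math

def quad_sum(*errors):
    """Combine independent error components 'orthogonally' (in quadrature)."""
    return math.sqrt(sum(e * e for e in errors))

ci_combined = quad_sum(2.4, 1.6)  # sqrt(2.4^2 + 1.6^2) ≈ 2.88
```

The same helper handles any number of independent components, e.g. quad_sum(2.4, 0.5, 0.7).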

Re #126
This is what I am sure of: line graphs can not be used as a basis for inference about that which is unusual or “unprecedented” unless *some* accounting is done for measurement error, sampling error, reconstruction error, etc. It doesn’t have to be perfect, but it’s got to be in there. That is precisely why MBH98 HAD to be followed up by MBH99 – because it was obvious to any real scientist that an uncertainty-free hockey stick was garbage – as a scientific statement and as a policy tool.

But MBH98 had what purported to be uncertainty estimates – based on calibration period residuals from an overfitted reconstruction – you can get the same calibration period residuals from red noise. So its uncertainties were worse than useless.

Re #126
#120 is not perfect. What it is is an honest attempt to make a clearer statement about the amount of uncertainty surrounding that line graph. It’s imperfect. But it’s a heck of a lot better than NOTHING – which is what you see on Hansen’s graph. What I am willing to stake my reputation on is the claim that inferences based on uncertainty-free line graphs are not statistically robust. Those graphs are not science; they’re wishful thinking.

Re #128
Thank you for looking. You seem very surprised that no one is seriously studying these models in terms of how error propagates through them. I agree it is a little surprising, but OTOH, if I were a modeler [and I am], why would I spend my time criticizing my baby when my time could be better spent improving it? Why would I release my baby to you so that you could skewer it? If a model is a formalized extension of your brain [and it is] then that kind of skewering could hurt if you’re not prepared for it.

Maybe it’s time for child care services to come and take the baby away from the negligent parents that don’t want to see the baby grow up. Audit the GCMs.

Re #135
Yes, you’re right – my recollection of events is hazy. The uncertainty was there in MBH98. But it was recognized for the garbage it was, thus necessitating MBH99. The uncertainty was missing not from MBH98, but from every other glossy pamphlet derived from it. I stand corrected.

#137. Actually the MBH98 calibration intervals were never recognized in the climate science community as being the garbage that they were. Bengtsson in Stockholm argued against me that MBH98 confidence intervals accommodated the various defects in their reconstruction without realizing that the confidence intervals were worse than useless. The MBH99 intervals are changed but no one knows how they were calculated, so it’s impossible to say whether they are more rational than MBH98. MBH99 purported to allow for the fact that the residuals were autocorrelated and to expand CIs for that. In econometrics, autocorrelated residuals are taken as a sign of misspecification – thus there are no standard texts on the adjustment procedure undertaken in MBH99. Jean S and I batted our heads against the wall trying to figure out what Mann did in MBH99 and were unsuccessful, although we narrowed the field somewhat. Wahl and Ammann’s supposed “replication” of MBH does not attempt to do this – they don’t attempt anything that we had not already more or less pinned down.

Let me put it this way. Surely MBH99 “Inferences, Uncertainties, and Limitations” was an invited follow-up to MBH98? If so, then why was the invitation seen as necessary or worthwhile? Somebody somewhere thought the issue was topical for some reason.

I’m trying to make a point here, not about confidence interval estimation, but about invited submissions that are fast-tracked through, or past, peer review. Which is what Hansen’s “paper” clearly is.

#139. MBH99 was an extension of MBH98 to the year 1000 so that they could deal with the MWP which was the problem area. Jones et al 1998 had gone back to 1000 and raised the bar on them.

The reason for uncertainty calculations was merely to get to being able to say that 1998 was the warmest year of the millennium. They did their 2 sigmas around the proxy reconstruction and said that instrumental 1998 was outside the reconstructed range. If the CIs had been done more appropriately, there would have been a reconstruction like UC’s – essentially a flat line with huge CIs.

The upper few meters (or 10s of meters) is the mixed layer, so I doubt there’d be any difference in Mg/Ca. The forams are going to be in this layer during life, as light diminishes rather rapidly beneath the surface. So I’d say the forams will be a good “proxy” for SST.

Re #140
If Mann, in 1999, recognized the importance of using a confidence interval on historical data in order to determine if an observation in 1998 was unusual, then why, in 2006, is Hansen not using the same standard of evidence? Why are his curves uncertainty-free? Is climate science regressing in its standards for publication?

In comment #22, I mentioned how tropical animals were found in Europe during the last interglacial and this has been known for a long time. Today we have the following news:

“French and Belgian archaeologists say they have proof Neanderthals lived in near-tropical conditions near France’s Channel coast about 125,000 years ago.

In a dig at Caours, near Abbeville, France, archeologists found evidence of a Neanderthal “butcher’s shop” to which animals as large as rhinoceros, elephant and aurochs, the forerunner of the cow, were dragged and butchered, The Independent reported Wednesday.

Jean-Luc Locht, a Belgian expert in prehistory at the French government’s archaeological service, told the newspaper: “This is a very important site, a unique site. It proves Neanderthals thrived in a warm northwest Europe and hunted animals like the rhinoceros and the aurochs, just as they previously, and later, hunted ice-age species like the mammoth and the reindeer.”

Hate to be picky, but I believe the corrected calculation is inaccurate. Plus, I still have a question. You wrote: (hope this works – my first Tex attempt!)

Pre-splice, the error in the paleo data is ±2.4. At the start of the modern data, the error in the modern data is ±1.6. The error (including the splice) is thus

combined error = √(2.4² + 1.6²) = 3.2

However, since I wanted to be conservative, I used 2.8 for the paleo error instead of 3.2. I believe this is correct, and I’m sure bender or someone will tell me if it’s not.

First, my calculator says the above calculation gives 2.88 — as does Google BTW! (did you know you can put sqrt(1.6^2 + 2.4^2) into Google and it will do the calculation!)

Second, I believe you said the unspliced paleo data is ±1.2 not ±2.4, which when plugged into the above gives

combined error (for paleo) = √(1.2² + 1.6²) = 2.0

Finally, my question: my seat-of-the-pants gut feel method says that there’s more to the splice uncertainty than just a combination of “pipe” diameters. If the centers are offset, that adds to the splice uncertainty. Is that wrong?

MrPete, right on both counts. However, regarding the error size of 1.2 vs. 2.4, the problem is in my text regarding the error, not in the number. 1.2°C is the standard error, not the 95% confidence interval (CI) as I mistakenly stated.

Determining the error in the method is fairly complex. The main document regarding errors in Mg/Ca paleothermometry seems to be Core top calibration of Mg/Ca in tropical foraminifera: Refining paleotemperature estimation, Petra S. Dekens et al.

Dekens says that the standard error of the Mg/Ca paleothermometer for G. ruber in the Pacific is 1.2°C, including the depth correction. This makes the 95% CI equal to 2.4, the figure I used. However, this does not include several other errors.

One is the inter-lab errors, which as Steve M. pointed out (above) are on the order of 2-3°C. In addition, there is a long-term analysis reproducibility 95%CI of 0.5°C, and a split-sample reproducibility 95% CI of 0.5-0.7°C.

Ignoring the lab confidence interval, and assuming that the errors are orthogonal, we get √(2.4² + 0.5² + 0.7²) ≈ 2.5 for the combined paleo error, which I have triple-checked as being 2.5. Including the splice error brings the paleo error up to 3.0. I have used 2.8, so my original figure is conservative, as was intended.

w.

PS – Dekens’ final paragraph in the paper says

Taken as a whole, our results suggest that Mg/Ca in G. ruber and G. sacculifer can be used effectively for paleo-SST analysis. Calibrated over a broad scale of water depths, basins and preservation, the standard error of temperature measurements is between ±1.2 and 1.4°C.

Since this makes the 95%CI 2.4 to 2.8, and I have figured it at 2.5 including all errors, looks like we’re in agreement …

PPS – Remember that we have not included the inter-lab error (95%CI ~ 2.3°), nor the post-sedimentation downcore dissolution error over the millennia, which Lea et al. estimate at a 95%CI of ~1°C … so I’ve actually been extremely conservative …

I have no doubt about the conservatism of your error estimates. You’ve now brought in several other factors to additionally demonstrate just how conservative you are being, which is fine by me.

I’m just trying to ensure that I understand what is being said, and that we’re being clear in our assertions.

After all, CA is all about ensuring that good science is done, particularly on the quantitative and reproducibility side of things. (OK, that’s a formal statement. I think Steve M is just having some good intellectual fun and meeting new people and having a few good beers as a bonus :-D)

I always want to avoid situations where we end up with the right answer or even the right numbers, for the wrong reasons. Such calculations never got good grades for me 😉

I’ll make a separate post to clarify a couple of things in the numbers. I’m still sensing some basic looseness in terminology and interpretation that can easily cause confusion (at least to a neophyte like me!)

First, I know that *I* was not keeping my eye on the ball with respect to the difference between std error and CI.

So, first observation: CI is twice the error, correct? I.e., if error is ±1.2, the CI is 2.4.

Second observation: Looks to me like we’ve been mixing CI’s and errors here and there in the error calculations.

The original posting with the graph (#120) says:

1) the 95% confidence interval for the Mg/Ca method of estimating modern SSTs is +/- 1.2°C…
2) I have used that figure (1.2°) as the error for the full paleo record…
3) average 95% confidence interval for the SST grids at 1.6°C in 1870, decreasing to 0.45°C in 1995…
4) this 0.45°C error must be added to the paleo temperature error..
5) splicing error…estimated as being equal to the 95% confidence interval for the paleo temperatures (without the modern SST error), or 1.2°C.

Then, we both were loose on the orthogonal calculation and nobody seems to have caught it to this point. In #133,

Pre-splice, the error in the paleo data is ±2.4. At the start of the modern data, the error in the modern data is ±1.6. The error (including the splice) is thus

combined error = √(2.4² + 1.6²) = 3.2 (SteveM’s server obviously miscalculated ;))

Yet it is the CIs that are 2.4 and 1.6 (the errors are ±1.2 and ±0.8). Correct? And thus, the ‘error graphic’ in that posting has incorrect numbers?

Whew.

So your most recent posting mostly recasts everything in CI terms, which is quite helpful. I obviously need to be careful as I read the conclusion:

Ignoring the lab confidence interval, and assuming that the errors are orthogonal, we get √(2.4² + 0.5² + 0.7²) ≈ 2.5 for the combined paleo error, which I have triple-checked as being 2.5. Including the splice error brings the paleo error up to 3.0. I have used 2.8, so my original figure is conservative, as was intended.

Finally, I’ll repeat the question I asked at the end of #149. Unless I missed it, it has not been addressed:

question: my seat-of-the-pants gut feel method says that there’s more to the splice uncertainty than just a combination of “pipe” diameters. If the centers are offset, that adds to the splice uncertainty. Is that wrong?

In other words: all of the splice uncertainty calculations seem based on separate uncertainties in paleo or modern data. Yet the splice itself (as shown in SteveM’s latest front page posting) plays a humongous part.

Ignoring the small (hah!) issue of deciding the correct splice point, how does off-centered splicing impact these error/CI calculations?

My gut-feel method said that a 0.1 off-centering introduces an imbalance in the standard error (i.e. instead of +/- 1.2 one might get +1.3/-1.1). Presumably, that would not impact the confidence interval (unless one side goes to zero), but seems it would impact the maximum or minimum 95% data range. Since everyone is eyeballing the max/min values, this then becomes more interesting.

So…if a splice is off-center, how does that enter into the calculation?
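If it helps, here is my question in code form (entirely my own sketch of the gut-feel picture, using the thread’s eyeballed numbers): an off-center splice shifts the error band rather than widening it, giving asymmetric bounds around the spliced series.

```python
# A splice-point offset shifts a symmetric error band (illustrative numbers):
err = 1.2     # symmetric error of the paleo segment
offset = 0.1  # eyeballed modern-paleo center offset at the splice point

upper = err + offset     # +1.3
lower = -(err - offset)  # -1.1
```

Whether that shift should instead be folded into the confidence interval itself is exactly what I’m asking.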

“twice” is a rule of thumb for large samples. The z-score for alpha=0.05 (95% confidence) is actually 1.96. With small samples (n < 20) use the t-statistic, not the z-score. The CI will inflate as t rises to levels > 2, and as the standard error, SE = s/sqrt(n), increases when n, the number of observations in the sample, decreases. t-values and z-scores, for varying degrees of freedom and confidence levels, can be looked up in statistical tables in the back of any standard stats text. Excel has a function for calculating “t”, given alpha and n; tinv() it is called. tinv(0.05,20) yields 2.086. So that’s where the “twice” factor comes from.
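For anyone who wants to check these numbers without a stats table, the same lookups take a few lines (a sketch assuming SciPy is available; note that Excel’s TINV takes the two-tailed alpha, while ppf takes the one-tailed probability 1 − α/2):

```python
from scipy.stats import norm, t

# Two-sided 95% interval: put 2.5% in each tail.
z = norm.ppf(0.975)     # ≈ 1.96, the large-sample value
t20 = t.ppf(0.975, 20)  # ≈ 2.086, matching Excel's TINV(0.05, 20)
```

The difference between 1.96 and 2.086 is exactly the small-sample inflation bender describes.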

This is a great example of where we who are not statisticians ONLY learned a “rule of thumb” but it was presented offhand as a simple fact. “Just do this…”

Not knowing to even examine whether there’s a simplification involved, we extend its use inappropriately.

Without having even a cursory understanding of what is involved, it is way too easy to make totally invalid assumptions.

Now where have I heard that one before? 😉

[In my own area of expertise, this reminds me of the time long ago… I’d finished a teaching obligation at a developing world university and offered to help out a bit. I found a grad student writing a paper on an early personal computer. He was very very frustrated with the pain involved. Among other issues, he complained, “every time I add or remove a few lines, I must move all the blank lines at the bottom of every page, that make my dissertation print properly on each sheet!” He had never heard of top and bottom margins…]

MBH98:

The spectra of the calibration residuals for these quantities were, furthermore, found to be approximately “white”, showing little evidence for preferred or deficiently resolved timescales in the calibration process. Having established reasonably unbiased calibration residuals, we were able to calculate uncertainties in the reconstructions by assuming that the unresolved variance is gaussian distributed over time.

MBH99:

In contrast to MBH98 where uncertainties were self-consistently estimated based on the observation of Gaussian residuals, we here take account of the spectrum of unresolved variance, separately treating unresolved components of variance in the secular (longer than the 79 year calibration interval in this case) and higher-frequency bands. To be conservative, we take into account the slight, though statistically insignificant, inflation of unresolved secular variance for the post-AD 1600 reconstructions.

Calibration residuals. That’s not good, but won’t hurt to look into it. They fit proxy records to average temperature, i.e.

T = P·w + e,

where matrix P contains proxies, T is a vector of annual temperatures and e is the noise vector. True (or approximately true) T is known only for the verification and calibration period j = a…b. Calibration residuals are

r(j) = T(j) − (P·w)(j), j = a…b.

In MBH98 they find out that r looks white (uncorrelated over time). This is generalized, i.e. it is assumed that e is white for the whole period. If this holds, sample std of r times two would be a good estimate of 2-sigma values. These grow back in time, as they should, so r is probably computed with sparser P for earlier periods (all in the above are my guesses).
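Checking whether residuals “look white” is easy to do in code. Here is a minimal sketch (my own toy example, not MBH’s actual residuals): the lag-1 autocorrelation of a white sequence sits near zero, while an AR(1) sequence reveals its one-lag autocorrelation:

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation: near 0 for a white sequence."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i - 1] - m) for i in range(1, n))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

random.seed(1)
white = [random.gauss(0, 1) for _ in range(5000)]

red = [0.0]  # AR(1) with one-lag autocorrelation 0.6
for _ in range(5000):
    red.append(0.6 * red[-1] + random.gauss(0, 1))

r_white = lag1_autocorr(white)  # close to 0
r_red = lag1_autocorr(red)      # close to 0.6
```

A full whiteness test would look at the whole spectrum, but even this one number is more than an uncertainty-free line graph offers.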

In MBH99 they find out that the calibration residuals are not white (non-flat spectrum, high low-freq component, Figure 2). In other words, proxy noise is red. I haven’t found supplementary material for MBH99, so the magnitude of redness remains open. MBH99 just states that a ‘five-fold increase in unresolved variance is observed at secular frequencies’; my guess is that it equals a one-lag autocorrelation of 0.5-0.6. But Figure 2 does not say anything about the absolute variance! All we know is that 2-sigmas in MBH98 were about 0.3 and in MBH99 they are about 0.5. I think it is impossible to figure out how they’ve done it without supplementary material. In addition, it would be easier to follow if they had used these definitions when talking about stats:

Gaussian process: A stochastic process is Gaussian if its finite dimensional distributions are Gaussian. The Gaussian density function is

p(x) = (2π)^(−n/2) |Σ|^(−1/2) exp( −(x − μ)ᵀ Σ⁻¹ (x − μ) / 2 )

White random sequence: all the x(t) are mutually independent. As a result, knowing the realization of x up to time t in no way helps in predicting what x will be in the future.

iid: independent, identically distributed. If a sequence is iid it
is white.

Gaussian process is not necessarily white, and white process is not necessarily Gaussian or iid.

MBH99 figure 2 shows only relative variance. And the y-axis scaling is confusing. It is impossible to tell how red those (12-proxy) calibration residuals are. It is impossible to tell what is the total variance of calibration residuals. Thus, it is impossible to figure out how they computed that 0.5 C 2-sigma.

MBH in Nature, 2006:

The subsequent confusion about uncertainties was the result of poor communication by others, who used our temperature reconstruction without the reservations that we had stated clearly.

Now I got it, y-axis scale is logarithmic and distorted by those significance levels. 5-fold increase in normalized variance, that would mean p=0.67 (on the average). That’s red. No wonder they show only relative power spectra of residuals, and no time domain plots.

Relative variance is obtained by removing that S0 term. Now, in MBH99, at zero-frequency we have five times the white noise. White noise p=0 gives S(f)=1, makes sense. Now we need to solve p with S(f)=5, S0=0 and f=0. That is easy:

S(f) = (1 − p²)/(1 + p² − 2p·cos(2πf)), so S(0) = (1 + p)/(1 − p) = 5, giving p = 2/3 ≈ 0.67
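A quick numerical check of that algebra (my own sketch of the variance-normalized AR(1) spectrum):

```python
import math

def ar1_spectrum(p, f):
    """Variance-normalized AR(1) power spectrum; white noise (p=0) gives 1."""
    return (1 - p ** 2) / (1 + p ** 2 - 2 * p * math.cos(2 * math.pi * f))

s0 = ar1_spectrum(2 / 3, 0.0)  # (1+p)/(1-p) = 5 at p = 2/3
p_back = (5 - 1) / (5 + 1)     # inverting S(0) = 5 recovers p = 2/3
```

So a five-fold zero-frequency variance inflation and p = 2/3 are two ways of saying the same thing.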

Thks. Good guess, quite close. Still a student, though (PhD candidate who is spending too much time here? 🙂 ). Linear optimal estimation with Gaussian inputs is fun, but things get tricky in real life when we have non-linear systems with non-Gaussian inputs.

Homework for math geeks: using the first eq in #160, show that 1/(2πf)² is a good approximation of the random walk power spectrum at low frequencies.
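If I read the homework right (my assumption), the claim is that S(f) ≈ 1/(2πf)² at low frequencies for a random walk, i.e. AR(1) with p = 1. A quick numeric check:

```python
import math

def rw_spectrum(f):
    """Power spectrum of a random walk (AR(1) with p = 1), up to normalization:
    1/|1 - exp(-i*2*pi*f)|^2 = 1/(2 - 2*cos(2*pi*f))."""
    return 1.0 / (2 - 2 * math.cos(2 * math.pi * f))

f = 0.01
ratio = rw_spectrum(f) * (2 * math.pi * f) ** 2  # ≈ 1 at low frequency
```

The approximation follows from 2 − 2·cos(θ) ≈ θ² for small θ.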

Re #162, UC, good luck with your PhD studies, I personally use extended Kalman filters extensively, appreciating that they’re suboptimal. I need to run and pack as my wife and I are off to Mexico for a week, but a thought for you. If one uses optimal estimation techniques for a NH temperature reconstruction, assuming that the NH temperature time series is nice (i.e. ergodic, wide-sense stationary, Gaussian), then the optimal estimate for the time series with no or poorly correlated proxies will always have a hockey-stick shape.

If one uses optimal estimation techniques for a NH temperature reconstruction, assuming that the NH temperature time series is nice (i.e. ergodic, wide-sense stationary, Gaussian), then the optimal estimate for the time series with no or poorly correlated proxies will always have a hockey-stick shape.

That’s where we need to think about splicing. But if you assume the average to be the 1890 value or something like that, then yes. Theoretical AR1 wanders around 0; in real life we have an additional random constant to estimate.

#164

If p of the calibration residuals really is around 0.67… That’s something. p of global temp during calibration period is 0.64, proxy noise is not red they say, 12 thermometers couldn’t bring the 2-sigma down to 0.5 C, ozone in the stratosphere affects tree rings, but CO2 doesn’t… Well, then we can just forget MBH99.

re #160/#164: UC, great, I think you have finally figured out the ’99 confidence secret! I knew it had to be something simple. If you have time, check (the 27.4 Update part of) this post (IGNORE these columns), I think that might give the rest of the secret.

Oh, IMHO, UC, as a Ph.D. student, you should not look too closely at Mann & Lees 96, it may give you too many bad ideas 😉

Thanks for the link, the raw data surely helps! But I’m still not sure if we can compute that 0.5 C without seeing the full 12-proxy reconstruction. Weighting by 2/3 is mentioned, but does it have a theoretical basis even if redness is p=2/3? Maybe that is because they use RE and not RMS. RMS would catch that extra variance due to redness without any extra tricks, right? (if the sample window is large enough). Still a bit confused.

Oh, IMHO, UC, as a Ph.D. student, you should not look too closely at Mann & Lees 96, it may give you too many bad ideas 😉

OK ;) Median smoothing of the spectrum will remove that peak right away. So, ‘background noise’ can never be very red.