Calibrating “Dr. Thompson’s Thermometer”

One of the most persuasive images in the global warming debate is a graph that Al Gore describes in his An Inconvenient Truth as “Dr. Thompson’s thermometer.” According to Gore, this graph is based on oxygen isotope ratios from ice cores collected by Lonnie Thompson and his colleagues, and provides “the most definitive” independent confirmation of the Mann, Bradley and Hughes Hockey Stick curve:

[T]he so-called global-warming skeptics often say that global warming is really an illusion reflecting nature’s cyclical fluctuations. To support their view, they frequently refer to the Medieval Warm Period.

But as Dr. Thompson’s thermometer shows, the vaunted Medieval Warm Period (the third little red blip from the left, below) was tiny compared to the enormous increases in temperature of the last half century (the red peaks at the far right of the chart).

As it happens, the graph that Gore presented really was the MBH HS, spliced together with an instrumental record as if they were a single series, and has nothing to do with Thompson’s ice core research. See “Al Gore and ‘Dr. Thompson’s Thermometer’”.

Thompson really did publish a similar graph in 2003 in Climatic Change, shown below as Fig. 1:

Fig. 1. Fig. 7 from Thompson et al., Climatic Change, 2003.

Gore was supposed to have used panel (c) of this figure, which is based on 6 Andean and Himalayan ice core records, but instead used panel (d), which is the 1000-1980 portion of the MBH99 HS, overlain with a CRU temperature index as a separate line. Thompson has confirmed that this substitution was made, but despite being an official member of AIT’s Scientific Advisory Board, has made no effort to publicly correct the error. (See Gore Scientific ‘Advisor’ says that he has no ‘responsibility’ for AIT errors.) With the simple trick of shading the space between the two lines and the horizontal axis with a uniform color, Gore made Thompson’s two lines appear to be a single series.

In fact, Thompson’s panel (c) is not calibrated to temperature, but is simply a composite of Z-scores computed from the underlying oxygen isotope ratio data. It is therefore not a thermometer at all as claimed by Gore, but rather is what might be called a Z-mometer. However, it does turn up sharply in the last century, with the last decade plotted being the highest in the past 1000 years, suggesting that it does accurately measure temperature, and that indeed the 1990s were the warmest in the past millennium.

In an online working paper, I show how Thompson’s Low-Latitude Composite Z-Score (LCZ) index in his panel (c) and the two regional indices in Panels (a) and (b), which I call ACZ and HCZ, were derived from the underlying isotope data, and calibrate LCZ to the CRUTEM3vGL global land air temperature index for 1851-2000.

Fig. 2 below shows my emulation of Thompson’s LCZ series. Note that whereas the decadal averages for 1000-1990 (shown in blue) are based on all 6 cores, the final decade of the 1990s (shown in red) is based on only 4 cores, since two of the Himalayan cores end by 1990. (In order to make the final point show up, and for comparison to the 6-core LCZ, the 4-core LCZ for the 1980s is also computed and plotted in red. The line segment connecting the last two points is also plotted in red.) Furthermore, since Thompson first averages the available cores for each region to obtain ACZ and HCZ, and then averages these two regional indices together to obtain LCZ, the weight on the remaining Himalayan core, Dasuopu, suddenly increases from 1/6 to 1/2 in the last decade. Although Dasuopu was not as high in the 1990s as it was in the 1940s, its Z-score was running higher than the other two Himalayan cores, so that when the other two drop out, HCZ and therefore LCZ suddenly increase to record highs, even though neither ACZ nor any of the individual Himalayan series exhibits this behavior.
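The two-stage averaging, and the weight jump it produces when cores drop out, can be sketched as follows. This is illustrative Python, not the paper's actual (Matlab) code; the Z-score values are invented, and the third core name in each region is a placeholder, but the mechanics follow the description above:

```python
import numpy as np

# Hypothetical decadal Z-scores for six cores (values invented; "Andean3"
# and "Himalayan3" are placeholder names for the unnamed third cores).
andean = {"Quelccaya": 0.5, "Sajama": -0.1, "Andean3": 0.3}
himalayan = {"Dasuopu": 1.0, "Dunde": 0.1, "Himalayan3": -0.2}

def lcz(andean_vals, himalayan_vals):
    """Two-stage average: cores -> regional composite -> mean of the regions."""
    acz = np.mean(list(andean_vals))     # Andean composite Z-score
    hcz = np.mean(list(himalayan_vals))  # Himalayan composite Z-score
    return 0.5 * (acz + hcz)             # low-latitude composite

# Pre-1990: all six cores, so Dasuopu carries weight (1/2)*(1/3) = 1/6.
lcz6 = lcz(andean.values(), himalayan.values())
# 1990s: two Himalayan cores have ended, so Dasuopu alone is HCZ (weight 1/2).
lcz4 = lcz(andean.values(), [himalayan["Dasuopu"]])
print(lcz6, lcz4)  # the composite jumps even though no core set a new high
```

With these made-up numbers, the composite rises sharply in the last "decade" purely because Dasuopu's weight triples, which is exactly the artifact described above.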

Fig. 2

Some of the individual cores, such as Quelccaya and Dasuopu, are strongly correlated with temperature, while others, such as Sajama and Dunde, are not. Therefore there is no universal relationship between Thompson’s oxygen isotope ratios and temperature, and whatever relationship is present must be determined empirically for each site or combination of sites. Since LCZ6 (for 1000-1990) and LCZ4 (for the 1990s) assign different weights to the individual cores, they cannot be expected to have the same relationship to temperature. Accordingly, LCZ6 and LCZ4 must be calibrated separately to temperature, and the 1990s reconstruction computed from the LCZ4 correlation.
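The separate-calibration step can be sketched in a few lines. This is a minimal illustration of classical calibration (regress each index on instrumental temperature over the calibration period, then invert the fitted line for the reconstruction); all numbers below are synthetic, not the actual LCZ or CRUTEM3vGL series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: 15 "decades" of instrumental temperature
# anomalies, and two proxy indices responding with different slopes/noise
# (standing in for LCZ6 and LCZ4, which weight the cores differently).
T = rng.normal(0.0, 0.3, 15)
Z6 = 0.2 + 2.0 * T + rng.normal(0, 0.3, 15)   # six-core index
Z4 = -0.1 + 1.2 * T + rng.normal(0, 0.5, 15)  # four-core index

def calibrate(Z, T):
    """Classical calibration: regress proxy on temperature, then invert."""
    b, a = np.polyfit(T, Z, 1)        # fitted line Z ≈ a + b*T
    return lambda z: (z - a) / b      # inverted point estimate of T

recon6 = calibrate(Z6, T)             # use for 1000-1990 (LCZ6)
recon4 = calibrate(Z4, T)             # use for the 1990s (LCZ4)

# The same proxy value implies different temperatures under the two fits.
print(recon6(1.0), recon4(1.0))
```

Because the two indices have different fitted intercepts and slopes, feeding the 1990s value through the LCZ6 fit would be using the wrong calibration.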

Fig. 3 below shows my calibration of LCZ to decadal averages of CRUTEM3vGL. The long period 1000-1990, shown in blue, is based on the LCZ6 calibration, while the final decade of the 1990s is based on the LCZ4 calibration. (Again, the 1980s are plotted both ways for comparison and in order to make the final point visible. The last two points are connected with a red line segment.)

Fig. 3

Even though Thompson’s LCZ series ends on a dramatic record high in the 1990s, when the series is correctly calibrated to temperature, the point estimate for the 1990s in fact comes in a little cooler than the 1940s.

However, Fig. 3 merely shows point estimates of temperature. Because the underlying regression coefficients are uncertain, the reconstructed temperatures have considerable uncertainty. The working paper linked above provides details of a new method of computing Bayesian confidence intervals under an uninformative prior for past temperatures.

Fig. 4 below shows the reconstruction of Fig. 3, along with a conventional 95% confidence interval computed by this method, plus a 50% “confidence interval” that merely indicates the quartiles of the posterior distribution. Because the calibration posterior distribution, which is based on the ratio of normals distribution, is much heavier-tailed than the normal distribution itself, the 95% CI is very wide in comparison to the 50% CI.
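The heavy tails can be illustrated with a crude Monte Carlo stand-in for the exact ratio-of-normals posterior derived in the working paper: draw the regression coefficients from approximate independent-normal sampling distributions and invert each draw. All the numbers below are invented for illustration, and this sampling shortcut is not the paper's exact Bayesian derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative calibration fit: slope and intercept estimates with standard
# errors (made-up values; a weakly significant slope produces heavy tails).
a_hat, se_a = 0.10, 0.08
b_hat, se_b = 1.50, 0.60
z_obs = 1.2          # proxy value for the decade being reconstructed

# Simulate T = (z - a)/b by drawing the coefficients and inverting each draw.
a = rng.normal(a_hat, se_a, 200_000)
b = rng.normal(b_hat, se_b, 200_000)
T_draws = (z_obs - a) / b

q = np.quantile(T_draws, [0.025, 0.25, 0.5, 0.75, 0.975])
print("50% CI:", q[1], q[3])
print("95% CI:", q[0], q[4])   # far wider, relative to the 50% CI,
                               # than a normal distribution would give
```

For a normal distribution the 95% interval is about 2.9 times as wide as the interquartile range; here the ratio is much larger, which is the heavy-tail effect described above.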

Fig. 4

It may be seen from Fig. 4 that “Dr. Thompson’s Thermometer” is in fact completely uninformative about the existence or absence of a Medieval Warm Period (MWP), Al Gore to the contrary notwithstanding. Temperatures throughout the period 1000-1990 could have been as high as 1.2° C warmer than 1961-90 or as low as 1.8° C colder. The estimates for the 1990s are considerably tighter because of the highly significant slope coefficient for LCZ4, but even that decade has a 95% CI of (-0.32, 1.75) °C.

Details of the calculation are contained in the working paper linked above. Comments here or by e-mail are welcome. The paper is still preliminary and incomplete, but revisions will be indicated here as they are made.

Update 1/12/10. I’ve further updated my online working paper, showing that my approach is in fact a new derivation of a method proposed by Hunter and Lamboy (1981), and that my new derivation overcomes objections that were raised against the HL method by Hill and others when it first appeared.

I’ve also simplified the diagrams, so that the 1980s are simply represented by the 6-core index, and the 1990s by the 4-core index, with a red line connecting the last point to its predecessor.

Update 3/14/10. I’ve again revised my paper, which is linked with data and programs at http://www.econ.ohio-state.edu/jhm/AGW/Thompson6. The new version, dated 3/12/10, corrects a key typo in the new (11), and corrects the treatment of the multiproxy case. The derivation of the latter still needs some work, but this does not affect the rest of the paper.

A greatly shortened version has been submitted to Technometrics, the journal that published the 1981 Hunter & Lamboy article whose approach to calibration CIs I am vindicating.

The conclusions of the paper as reported in the post have not changed. The new revision clarifies some points that were not clear to referees on a previous submission to another journal and on a grant proposal.

Paul — I fixed the link to the working paper, which just had a case conflict.

RE oLD gUY, I’m seeing the pictures OK. The unnumbered first figure with Al Gore is from the old CA, but is on the new server as climateaudit.files.wordpress.com/2007/11/gore_a1.jpg . (You may have to widen your window to keep it from covering the text to its left.)

That’s the problem confronting all agnostics! Perhaps we have some quaint and naive belief in the “truth” of science.

The tricks pulled with various charts and graphs, by shortening or lengthening chronology scales, are closer to a marketing effort than actual scientific endeavour.

What’s really depressing is the number of scientists I’ve seen on TV making weak excuses for the manipulation of datasets exposed by Steve McIntyre and the CRU emails. And their excuses about the withholding or deleting of original raw data used for papers with astounding conclusions re AGW.

You’d think there would be a few honest scientists, with enough of a reputation, able to look into the science of the AGW controversy and report purely on the facts.

Another odd thing to come from all this is that I now feel as if seminal works on non-linear dynamics (chaos) are worthless, because suddenly we are being told the climate system is possible to model for accurate projections/predictions. Sorry, but every book on the subject I have read says that is just plain bogus (in any practical sense). It’s come down to the climate modellers doing semantic acrobatics in order to ignore already established science on the subject. But for some reason they say it does not apply to the climate system.

This is all starting to feel terribly “Invasion of the Body Snatchers”!

What books have you read that suggest this* is bogus? Can you quote some specific passages?
.
* “this” meaning a chaotic weather/climate system that nevertheless responds linearly and predictably to external forcings.
.
Steve M does not like one-paragraph hand-wavey dismissals. If you are going to make an assertion like this, you will need to be prepared to back it up with evidence, preferably derived from IPCC sources.

Thanks, Hu.
This clear presentation shows why you want to see those confidence intervals. The ice core data are in fact too imprecise to constrain our knowledge of past climate.
.
(P.S. Did you see the testimonial from William Schaffer that I posted on unthreaded? I’m asking Gavin Schmidt to respond.)

Lonnie Thompson and colleagues, famously described by Gore (2006: 60-65) as follows:

Lonnie and his team of experts then examine the tiny bubbles of air trapped in the snow in the year that it fell. They can … measure the exact temperature of the atmosphere each year by calculating the ratio of different isotopes of oxygen (oxygen-16 and oxygen-18), which provides an ingenious and highly accurate thermometer. ….

Did Gore really say this? If so then he is wrong. Thompson’s measurements are on the oxygen isotope composition of the water and not of the trapped air in bubbles in the ice.

I left out some equally muddled statements Gore made about how Thompson measured CO2 and how “The correlation between temperature and CO2 concentrations over the last 1,000 years — as measured in the ice core record by Thompson’s team — is striking”. In fact, I don’t think Thompson’s cores were used to measure CO2, since that requires very deep cores in which the “firn” has consolidated enough to seal off the air bubbles. (My first ellipsis in the passage you quote should in fact have been 4 dots, since it left out a full sentence — I’ll correct this in the WP when I expand it.)

It would have been nice if Thompson, as one of Gore’s official scientific advisors on the project, had corrected these little slips via an announcement on his Byrd Center website, not to mention the much bigger error of the true authorship of “Dr. Thompson’s Thermometer”.

I suggest that it is probably always worth telling the new reader that proxy studies are usually trying to predict a GLOBAL average temperature from, say, four or six ice cores, or indeed one tree.

Probably somewhere else is the place to remind them that one of the Global Warmmongers’ attacks on the MWP was to claim that it was restricted to a small area of the earth – just Britain, Iceland and Greenland, say – and therefore unrepresentative – unlike, for instance, that one Siberian larch.

Are ice cores theorized to show an annual average or just an average during a season (like trees rings)?

Interesting to see that huge spike around the 30’s-40’s and the huge downturn afterward to the 70’s low point (tough to eyeball the decades). Much different signature than global instrumental surface stations. Kind of makes me wonder a) if this is a thermometer, and b) if it is way too local to be compared to global temps (see dearieme’s comment).

I’d say it doesn’t back up tree ring proxies at all (nor does it refute them; it doesn’t have enough to do with them, IMO). The only things that can support global temp reconstructions are other independent global temp reconstructions. I don’t see how this is global; it is independent though🙂 Do we have global ice core data and what does it look like (with error please)?

Your question is very relevant and if you read Hu’s paper you will see that he does state that other factors may be responsible for changing isotope compositions including a change in the seasonality of the snowfall.

In general, high latitude ice (e.g. Antarctica and Greenland) has a strong positive correlation with local mean annual air temperature. If you sample at high enough resolution you can also track seasonal temperature changes. However, if the precipitation is heavily biased towards one season then we might expect stronger correlations with seasonal temperature than annual average temperature.

For low latitude regions the link between ice core isotope composition and temperature is not so robust and other effects are seen. These include changes in the source region for the water vapor etc. Thompson’s data is probably affected by some of these issues.

This is very interesting. Here’s an observation, and a question based on it. A critic might make these two points:

1. Although the year-by-year (or decade-by-decade, whatever the time unit is) 95%CI is very wide, broader time averages (say across 25, 50 or 100 year intervals) could have a tighter 95%CI, particularly if the degree of persistence isn’t too great.

2. Moreover, we might say that we are more interested here in changes than levels. This could matter a lot if (say) the ice cores are heterogeneous.

So here is the question. Using your techniques, can you construct a 95%CI for the difference DELTA = LATE – EARLY where (for instance) LATE is the average true temperature over the 100 years of the 20th century and EARLY is the average calibrated temperature over the 100 years of the 11th century?

It seems to me that showing that a DELTA like this one is also highly unreliable would be a much more persuasive demonstration.

If there is something wrong with what I am suggesting, or problematic given your statistical assumptions, let us know so we can all learn.

1. This is a good point — this procedure just calibrates each past decade independently of other decades, and therefore does not take into account the information in the adjacent decades’ reconstruction and/or instrumental data. The “sequential prior” approach I will discuss in Section VI would take this into account, and presumably give smaller CI’s, but goes beyond the state of the art.

2. These CI’s already are for the differences (decade t) – 1961-1990, the latter being the reference period of the temperature index.

Could we have a little bit more information on this statement: “Some of the individual cores, such as Quelccaya and Dasuopu, are strongly correlated with temperature, while others, such as Sajama and Dunde, are not.”

Secondly, I keep feeling that the temperature reconstructions proceed in an odd fashion. This is a set of distinct sites being weighted to generate a reconstruction.

Why does the entire process not proceed down the path of calibrating proxies into the best local long-term temperature record possible by site and then worry about converting individual temperature records into regional or global ones?

If the individual sites are fully calibrated as a first step, then you’ve got a direct measurement of how well that method works in the error bars. And if your weighting uses the errors as part of the averaging, you’ve got a firm grip on at least the expected “instrumental” error prior to adding in the geographical issues.

I see this might be addressed by “d) a more powerful sequential prior approach”, but I’d like to hear more.

Specifically, I keep thinking about how useful the temperature distribution plots – as from modern satellite – would be if extended as well as possible into the other available temperature data. Fully recognizing that the data has issues and is much more sparse, it would still seem more relevant than calibrating to “Northern Hemisphere Average Temperature” or “Global Average Temperature.”

Hu M, I would think it more proper to address your differing approach and what it reveals solely to Dr Thompson. Al Gore appears to be in a muddle over what he is looking at when it comes to things that are the least bit technical. He apparently even has trouble deciphering emails these days.

I think Gore is very relevant to this LCZ series, since it was he who made it famous by claiming that it accurately measured past temperatures and vindicated the HS.

Thompson himself just says that maybe it measures temperature or maybe not, but probably it does because see how much it looks like the HS. So he was actually using the HS to vindicate his LCZ as a (qualitative) thermometer.

I don’t think AR4 ever used this series (or ACZ or HCZ), if only because Thompson never archived the numerical values.
I was able to replicate what he did, but only with a lot of unnecessary work.

I’m assuming that the “v” versions average temperatures by weighted least squares (WLS), giving low weight to cells that have only a few stations and therefore high variance. That sounds reasonable, so I went with this version.
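A sketch of that idea, with the caveat that the weighting scheme below is my assumption about what a variance-adjusted series would do, not CRU's actual algorithm: weight each grid cell inversely to its sampling variance, which for a cell mean scales roughly as 1/(number of stations). All numbers are invented:

```python
import numpy as np

# Invented grid-cell anomalies and station counts for one month.
anom = np.array([0.4, -0.1, 0.7])   # cell mean anomalies, deg C
n_stations = np.array([12, 2, 5])   # stations per cell

# Var(cell mean) ~ sigma^2 / n, so weight ∝ n (assuming equal station variance).
w = n_stations / n_stations.sum()
wls_mean = float(np.sum(w * anom))  # down-weights the sparse, noisy cell
ols_mean = float(anom.mean())       # unweighted mean, for comparison
print(wls_mean, ols_mean)
```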

I’m not particularly wed to CRUTEM3vGL, so if anyone thinks CRUTEM3GL or something else would be better, please let me know.

I did try GISStemp, on the grounds that it doesn’t suffer from CRU’s secret data problem, but it gives much weaker slope coefficients, if only because it has only 12 decades of data before 2000. In order to give Thompson’s index the benefit of the doubt I used a CRU series back to 1850. Also, CRU3 is the successor to the CRU2 Jones(99) series Thompson had alluded to.

It is easy data. I know modelers use it. And as stated in Al Gore’s book excerpt hyperlinked above,

“The correlation between temperature and CO2 concentrations over the last 1000 years – as measured by Thompson’s team – is striking.”

The hyperlinked text shows Gore’s confusion about CO2 concentrations and isotope matters, and how they relate to Dr. Thompson’s Thermometer. Maybe there is a reason he is confused.

Elsewhere, one might also try modern atmospheric CO2 concentration for recent centuries. That qualifies as the proverbial “instrumental climate data” to be used as calibration against alleged temperature proxies.

Looking at Fig. 1, it seems like – taking his words literally – Gore’s caption could be called correct: the curve that appeared in AIT (Mann’s HS) is the only one of the four graphs in Fig. 1 that’s actually a *thermometer*, in that its values are given as degrees Centigrade…! Mann’s “reconstruction” did serve as “Thompson’s thermometer” to which Thompson compared his ice-core data to show the latter is a temperature proxy, because both look hockey-stickish.

I tried using several different colors, but then the legend grew so large there wasn’t much room for the graphs! One alternative is to just fill the 95% CI with a custom pale version of the point estimate color (with an intermediate tone for the 50% CI if plotted), but I didn’t have the patience for that. UC showed me how to do it once using Matlab’s fill command, with a special “trick” if I may use that term, to suppress the edge lines of the polygons.

I’m trying to be expansive in the WP, with the idea that it will become the SI for a future, greatly abridged journal article. While I’m hoping this post elicits technical discussion of the WP, the post itself is directed to a general audience.

Just a few minor points. I assume you are using the greatest lower bound and least upper bound to arrive at the CI of (-1.8, 1.2) °C. With this in mind, shouldn’t the 1990s CI be (-0.32, 0.75) °C? And, after reading your paper, it looks like it should be the CI for 1981-2000. It might also be clearer to say: “Temperatures throughout the period 1000-1990 could have been at least as high as 1.2° C warmer than 1961-90 or at least as low as 1.8° C colder.”

The -1.8 and +1.2 are the approximate LUB and GLB. I should use a more precise value in the revision. But for the 1990s (1991-2000), the (LCZ4) recon and .025, .25, .5, .75, and .975 values are
0.3259 -0.3229 0.1137 0.3230 0.5825 1.7476
so the exact 95% CI is (-0.3229, +1.7476). (Note that the classical inverted reconstruction and the posterior median are very close, but not quite the same thing. I’m going with the classical estimator as my point estimate, since it is a lot easier to compute and explain.)

I didn’t tabulate the plotted LCZ4 values for 1981-1990, since these are supplanted by the LCZ6 recon when available. Perhaps I should just plot a step function in the future to make this more clear.

I was just eyeballing the 0.75 from the graph, but I now realize that the 1990s is just a single data point (vector) as you have shown with (0.3259 -0.3229 0.1137 0.3230 0.5825 1.7476). I was confused by the red lines that included the 1980s data vector that starts at approximately 0.75 for the upper CI. Yes, a step function would be clearer. Thanks for the response.

Kind of off topic, has anyone ever gone to the same ice core site, and taken multiple cores, over multiple years (for say, a 20 year period), to see:

1) what kind of variation we get from different cores sampled in the same year at the same site

2) what kind of variation we get from different cores sampled in different years at the same site (for the same analyzed year) and if there is a trend in variation increasing back in time. Does the core data from 10,000 years ago really have the same CI as the core data from 50 years ago? Is the measured parameter preserved so well that there is no significant long term trend in variation as the layer ages and gets more compressed?

I was just thinking of Gage R&R earlier after reading this article and some of the comments. There is much discussion here about various samples (of whatever material) being used to reconstruct temperature. The discussions get very technical, involve copious amounts of math, and usually end up being challenged in varying degrees from mild criticism to outright rejection. (Whether here or on sites like RC.) I’ve read numerous sources since Climategate broke and I don’t know if I’m seeing any convergence toward an agreement on the best method(s) to reconstruct past climate and then compare it to today’s climate.

Based on that it would be extremely helpful if CA, WUWT, and other similar sites could be at least working toward building a reference with regards to temperature reconstructions. I believe Gage R&R studies should be a part of that.

I would also suggest more emphasis on the big picture and what each detailed technical analysis means in the big scheme of things. The comment by “bender” about the “objective middle ground” troubles me. What is it about the middle that makes you more objective? Does that mean you can’t have a strong stance on one side and also be objective? The goal here should be to be accurate and correct with respect to the past and present climate, I think.

On the other hand, “The ice core data are in fact too imprecise to constrain our knowledge of past climate.” is exactly the kind of statement I would make if the evidence supported it.

The classical inverted estimator may actually be the posterior mode, and therefore truly the “most probable” value, though I haven’t tried very hard to prove this. It certainly isn’t the posterior median except when the proxy exactly equals its calibration period average value and the distribution becomes symmetrical; outside that case the median won’t be the mode, either.

I think that plotting the quartiles (the 50% CI, or old-fashioned “probable error” band) does help draw attention away from the point estimate. Again, half-tones may help make this point.

To add, to enhance the visualization effect, imagine the x axes of the CO2 concentration plots doubled. The Etheridge 1000 year graphs are half page, “square” proportions. The multiproxies and similar are full page wide.

In many places, not cherry picked, they are merging the CO2 concentration data. Just because it sounds stupid doesn’t mean it’s not true. They believe recent millennial temp change is congruent to CO2 change. One and the same, statistically.

Just a question. Are there any physical experiments that would suggest that oxygen isotope ratios in snowfall or ice are dependent on temperature? I.e., is this a pure correlation study, or is there an experimental reason for pursuing this avenue?

From reading Thompson CC03 and PNAS06, it appears that δ18O in ice deposition (ie reverse sublimation) is positively correlated with temperature at the time of deposition.

However, the correlation is not always observed with annual average temperature, because this might be dominated by variations in the season of max snowfall, or even in the altitude of deposition.

So it looks like there is a sound physical reason to expect a correlation with some kind of temperature, even if sometimes it doesn’t show up with annual average global (and therefore local) temperature (eg Dunde, Sajama, Bona Churchill, etc).

Primitive and non-mathematical, but it works visually: enlarge to 400%, put a rectangular marquee frame around the section to colour, with 0 feathering to stop colour “leaking”, and use the paint bucket, section by section if necessary.

I read a book on El Nino over 10 years ago. It portrayed Dr Thompson in a very favorable light, as it did his studies of tropical glaciers. I wonder if the Ohio St professor is open to having outside sources study his numerous ice cores. In light of the recent CRU problems, and potential questions concerning all proxies, and the fact that he is employed by a state-funded institution, what is the harm in re-evaluating his work?

Thompson has collected a lot of very important data, but unfortunately hasn’t properly archived much of it, with the result that he may as well have not bothered.

Most of this work was NSF supported, so the NSF is the place to go to put pressure on him to release definitive and complete data.

Steve McIntyre has commented frequently here on the inconsistency and inadequacy of the data from his sites. In the right margin of this page, select Proxies-Thompson in the Categories window for links.

One particularly frustrating site is Bona Churchill, an NSF-funded study that was completed in 2003, but for which there still is no publication or data. You can search CA for Bona Churchill using the search engine on the right margin of this page. Start with Steve’s “Gleanings on Bona Churchill.”

It’s common for even NSF-supported researchers to keep data to themselves for a reasonable period so that they can do the first analysis of it and get the first publication. But IMHO this shouldn’t be more than a year or two at most. Thompson is way overdue on a lot of his data.

You write:
“However, it does turn up sharply in the last century, with the last decade plotted being the highest in the past 1000 years, suggesting that it does accurately measure temperature, and that indeed the 1990s were the warmest in the past millennium.”

I suspect that you meant to imply that the Thompson graph is presented in such a way as to “suggest” the accuracy. However, this sentence could easily be read to say that you concur that the ice core graph accurately measures temperatures. If you meant the former, I suggest you rephrase the sentence more clearly.

I meant the former — superficially, the graph looks pretty persuasive: it appears to pick up the 1990s high in the instrumental record, to be strongly correlated with instrumental temperature, and to tell the same no-MWP story as the MBH HS.

But then I go on to show that this is largely superficial — the 1990s were computed with a different set of cores than the rest of the curve, and when correctly calibrated, show a somewhat cooler temperature than the 1940s reconstruction. There’s still a correlation with temperature, but it’s only weakly significant (p = .079 makes it weakly significant at the 10% test size, but not the standard 5% size). As a result, the 95% CI for the reconstruction doesn’t exclude much of interest.

(But at least my 95% CI’s are finite and contiguous, unlike Brown’s inverse CI approach in such a case.)

“Gore was supposed to have used panel (c) of this figure, which is based on 6 Andean and Himalayan ice core records, but instead used panel (d), which is the 1000-1980 portion of the MBH99 HS, overlain with a CRU temperature index as a separate line.”

I see that, if he clearly meant to use the multi-ice-core isotope graph, which, I guess, could be cherry picked among ice samples for a hockey stick without being tainted with CO2 concentration data. But he was confused, and did not mention a thermometer record but a CO2 concentration record, which raises the question: was he presented (d) as a CO2 concentration record, or presented a graph with similar appearance purporting to be a CO2 concentration record? Because there is only one record that looks like these recent millennial cherry-picked proxies and multiproxies and temp reconstructions: the CO2 concentration record.

What are the chances of that if not on purpose?

For the believer, the recent millenial CO2 record, spliced with recent other anthropogenic ghgs, must replicate the temp history.

Of course, it might simply be the case that the reason many of these proxy series look so much like CO2 is that CO2 is causing warming that is being validly measured by the proxies…

Or maybe they’ve been cherry picked, truncated, spliced with instrumental temperatures, etc, to give a false appearance of correlation.

I found it very odd that LCZ has such a HS shape, when the individual series do not particularly. Part of it is due to the abrupt change in the weight on Dasuopu in the last decade, but even LCZ6 has a fair amount of HS to it, even if it turns out, as I have shown, to be uninformative about past temperatures.

I guess I need to read up more on Thompson’s method but can someone explain quickly why/how the oxygen isotope ratio is supposed to measure global temperature? I seem to recall another paper I read quite some time ago that talked about how burning coal or wood released O-18 preferentially which caused skewing in some archaeological dating. Perhaps I’m misremembering but if that’s valid it would go a long way toward explaining the relative spike in Dr. Thompson’s graphs.

In other words, is there really a causal mechanism between temperature and oxygen isotope ratios or are the ratios really linked to a different phenomenon?

There are very sound physical reasons as to why we can use 18-O as a thermometer for high latitude (Antarctic and Arctic) ice core, but for low latitude core there are some complications. There are several steps to the model: (i) isotope fractionation as water evaporates from the ocean; (ii) the transport and cooling of air masses from their source regions to point of precipitation and the accompanying changes in isotope composition as the amount of water vapor decreases due to the decrease in saturated vapor pressure of water; (iii) isotope fractionation associated with local condensation temperatures. We have very good experimental and theoretical data on the behaviour of oxygen and hydrogen isotopes in this system and thus can understand the changes in isotope composition as a function of temperature.

It turns out that the key process is the transport step. Think of it like this: warm air containing water vapor is cooled and water condenses out to form rain. The 18-O preferentially condenses compared to 16-O. This depletes the cloud in 18-O. To maintain the liquid phase it is necessary to continuously cool the air mass as precipitation continues. More and more 18-O drops out of the vapor thus making it even more depleted in the heavy isotope. This is remarkably well modelled by a simple Rayleigh distillation process.
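The Rayleigh distillation picture described above is easy to sketch numerically. Here is a minimal Python illustration (the fractionation factor and initial composition below are generic textbook-style assumptions for illustration only, not values fitted to any ice core):

```python
import numpy as np

# Minimal Rayleigh distillation sketch: as the remaining vapor fraction f
# shrinks through progressive rainout, the vapor's delta-18O falls.
alpha = 1.0099     # liquid-vapor 18-O fractionation factor near 25 C (approximate)
delta_0 = -10.0    # initial vapor delta-18O, per mil (about 10 depleted wrt ocean)

def rayleigh_delta(f, delta0=delta_0, a=alpha):
    """delta-18O (per mil) of the vapor when a fraction f of it remains."""
    return (delta0 + 1000.0) * np.power(f, a - 1.0) - 1000.0

for f in (1.0, 0.5, 0.2, 0.05):
    print(f"f = {f:4.2f}: delta-18O = {rayleigh_delta(f):7.2f} per mil")
```

The monotone decline in delta-18O as f falls is exactly the progressive depletion of the heavy isotope described above.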

I think your comment about burning coal relates to the release of carbon isotopes and not 18-O. Changing the atmospheric 13-C composition may have a small effect on calculated ages through the 13/12C fractionation correction used when calculating 14C ages.

A second point is that the isotope composition of the ice core reflects local temperature, or more specifically the difference in temperature between the source region and the site of precipitation. This is not a global temperature. Thus the Dasuopu data might be related to local conditions, but this isn’t global in extent. Thompson averaged data from the Himalaya and South America to arrive at what he thinks is a global estimate.

Which way does the fractionation during evaporation from the oceans work? I’d guess that 16O evaporates relatively more than 18O at any given temperature, but how does this change with ocean (or marine air) temperature? Does this reinforce the local temperature effect on deposition, or work opposite?

I envision global temperature as just the average of local temperatures, with each locality having an idiosyncratic deviation from the global mean, so that local temperature is a valid (if noisy) proxy for global temperature. So if in the end we’re actually after global temperature and don’t really care much about medieval Tibet or Bolivia per se, we may as well calibrate these series directly to a global average and see what they tell us.

One quirky thing I found about this data (in an earlier program that got lost in a disk crash) is that if you regress the 6 individual core d18O on global temperature, the regression residuals are strongly correlated (rho roughly .4), and with the exception of the closest pair, Sajama and Quelccaya (rho = .8), this correlation does not fall off with great circle distance. In fact, the interregional correlations were slightly higher than the intraregional correlations (leaving out S-Q).

I took this to mean that there is a global component to atmospheric water vapor d18O that varies from decade to decade without being particularly correlated with global temperature.
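The residual-correlation exercise described here can be mimicked with synthetic data. A hypothetical Python sketch (six invented “cores” sharing a non-temperature component; all numbers are made up for illustration and do not come from the actual Thompson data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # synthetic "decades"
temp = rng.standard_normal(n)             # stand-in global temperature index
common = rng.standard_normal(n)           # shared non-temperature d18O signal

# six synthetic cores: temperature response + shared component + local noise
cores = np.column_stack([0.5 * temp + 0.7 * common + rng.standard_normal(n)
                         for _ in range(6)])

# regress each core on temperature and keep the residuals
X = np.column_stack([np.ones(n), temp])
beta, *_ = np.linalg.lstsq(X, cores, rcond=None)
resid = cores - X @ beta

# the residual correlations stay well above zero because of the shared
# non-temperature component, mimicking the rho ~ .4 pattern described above
R = np.corrcoef(resid, rowvar=False)
offdiag = R[np.triu_indices(6, k=1)]
print(f"mean off-diagonal residual correlation: {offdiag.mean():.2f}")
```

A shared atmospheric water vapor component uncorrelated with temperature would show up in exactly this way: in the residuals, not in the regression fit.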

Hu you’re right about evaporation from the ocean. Water molecules containing 16O evaporate preferentially from the ocean such that the water vapor has a composition that is about 10 per mille depleted wrt the ocean. As temperature increases the preferential partitioning of H216O into the vapor phase is not so marked. There is also a component of fractionation due to kinetic effects because evaporation normally occurs into air that is unsaturated wrt water.

It’s not immediately clear to me what will happen if there is a rise in both local and source temperature. I’ll think on this and come back tomorrow with a considered comment. My first comment is to suggest that if both source and local temperatures rise then one might not see any change in isotope composition of precipitation. This is because it is the temperature difference between source and local region that is the dominant factor in the isotope composition of precipitation.

This is true for high latitude regions but doesn’t seem to hold at low latitudes, where we don’t find strong empirical evidence for an isotope-temperature correlation; instead there is a stronger relationship between precipitation amount and isotope composition. This may be true for tropical ice too. I’ll check on Monday when I’m back in the lab.

This may relate to your observation that there are strong correlations between residuals that don’t fall off with distance and are not strongly related to temperature. Perhaps there is a regional (tropical) pattern to precipitation amount (ENSO?).

“Of course, it might simply be the case that the reason many of these proxy series look so much like CO2 is that CO2 is causing warming that is being validly measured by the proxies…”

Yes! Which raises the question, why are they not bandying about the Law Dome 1000 year CO2 concentration graph as correlative proof of their proxies and temperature recons? Would this not be the slam dunk? Indeed, why do they make so little use of comparative historical CO2 representations when that is the core of their argument? The only one I have ever found is here:

Which was for a non-scientific audience. The CO2 looks like Etheridge 1996. Is the nondisclosure of correlation due to a concern about calling attention to an anthropogenic statistical causation, as I suggest is a possibility in posts above?

But first, a possible natural “causation.” In the MacFarling Meure (2006) 2000 year study, the 0-1800 AD CO2 concentration is plotted next to Mann and Jones (2003) and Moberg et al. (2005) (Fig. 2). The “look” evidence is better for Moberg, and it appears CO2 change follows temp change. So there, causation, just not in the order the believers want it to be.

But that paper is recent. I find it “interesting” that Etheridge 1996 shows a thousand year hockey stick graph with a soft vibration: a low crest in the first half of the second millennium, then a dip starting around 1550, then the modern anthropogenic zoom. Within a few years thereafter, 1000 year multiproxies and such with a similar crest, trough and zoom began to appear. I believe there may be emulation going on, or massaging of data with the Etheridge CO2 concentration data. Why not show the graphic representations of the Etheridge data? I suspect that, next to the proxy temp reconstructions, it looks too good to be true. It might raise anthropogenic statistical causation questions.

And other times there may be cherry picking, adjustments, or overlooking of the UHI effect. It’s all exciting!

Etheridge et al. (1996) Jour. of Geophysical Research 101, 4115-4128 (data included). There is also a 1998 version of much the same on the web.

also

Etheridge (1988) Annals of Glaciology 10, 28-33

Fig. 4(a) is a 500 year plot, which I bring up because it shows the grafting of instrumental data onto something else. Precedent for climate science? I do not know, but to be clear, there is nothing untoward here; it is helpful even. The grafted instrumental data is very recent SH atmospheric CO2, which shows the trend upward in the ice core continuing.

Mosh — That would be an alternative way to present the results. I didn’t save the full reconCDF array, but if you run my Matlab script overnight it should be in your memory in the morning, and easy to do this.

I’m just picking up Matlab after using GAUSS for many years, so I’m not very adept at writing files yet. In the future I’ll add reconCDF to my SI page at http://www.econ.ohio-state.edu/jhm/AGW/Thompson6/, to make it easier for R users to play with it.

Not to go too far politically OT, but it would be good to remind that “Dr. Thompson’s Thermometer” was used by Mrs. Robert Creamer (aka Rep. Jan Schakowsky) as a cudgel during the Barton hearings. This misdirection needs to be widely exposed, so kudos.

1. It has a good recap of the four reasons why land station thermometer records are to be adjusted: homogenization, etc.

2. And better, it gives another understanding of why the temp recons do not pay much attention to UHI. Here the UHI is not denied, but treated dismissively and bluntly. Dronia (1967) is dismissed for reasons including poor coverage in high latitude regions, and that is bad because two studies showed the strongest 20th century warming over Greenland and northern Siberia. (Likely this is the source of the modern red pastings over Siberia in model representations–and super warming is still going on, I infer. I should open a bathing suit shop in Irkutsk.)

The paper shows representative station records with obvious quick jumps that mean something is wrong and needs to be adjusted. (I say thrown out.) Although dismissive of UHI, they opine that there was rapid growth in American cities, so if they saw a jump in an American urban record, they threw it out.

Translation: urban sites around the world may show a rise, but because they don’t look like jumps, keep them in.

I have to say, the reasoning is shaky IMO.

The temp reconstructions show a peak in the late 1930s and a decline to about 1970. But the new temp recons (Jones 1998, Mann 1999, and so on) show a very short cooling, about 1940 to 1950–the little blips in the hockey blade. Not the cooling, but its length, may be a big clue to CO2 records being merged in…though such is not brightly evident from Etheridge 1996…but elsewhere. More tomorrow.

One more thing. MBH 1998 (the six centuries recon, not the 1999 1000 year one) has one small graph showing an Etheridge-like CO2 plot against his temps, with the upswing in CO2 graphically merged with early 20th century warming to about 1950. Then CO2 continues up and temps diverge downward for a few decades. I found it amusing.

Had been reading CA for years, Daly before that. Often one of the few in my circles to hold climate science to the same standards of all science, which your writings have assisted so well. Never commented that I can remember, but always enjoyed the dispassionate, scientific, and truly polite site you have. The patience of Job, and now rewarded just a bit quicker. The world was benefited by your quirky interest in a topic far from your field (as engineer I have interests in far-flung topics as well, but have not and will not have the clear impact you have had). Just a note at this time to echo the many who have thanked you for your time and efforts. Dang, sound like a groupie now.

More on CO2 concentration records per my multiple posts above. First, I mention Trudinger et al. (2002), Kalman Filter Analysis of ice core data 2: Double deconvolution of CO2 and 13C measurements. At Figure 7, of possible interest, is a Mann 1999 temp reconstruction without the attached “instrumental” data. So if you need a clean graph, there it is.

———
In Jones (1986) (see above), Fig. 5 represents air temp anomalies. Temp peaks at about 1900, then 1940 (probably ~’38), with a nadir in the late 1960s, then up again. Here is the WMO graph showing Jones 1998, Mann 1999, and Briffa 2000

There is a 20th century anomaly of sorts: each of the three shows an aberration in its 20th century zoom upward, in the same place. Mann is flatline; Jones and Briffa have a dip, Briffa hitting its nadir earlier. Could this be from temperature data like Jones used in 1986? Around this point there was a similar dip, or flatness, in the CO2 record, though not a dip to 1970, but to about 1950.

The CO2 anomaly is not brightly visible in Etheridge’s graphs, but Trudinger’s highlight it. The “1940’s Flattening”, a pause in the upward sweep of increasing CO2, is found to be a dip by Trudinger (Figure 10). The phenomenon begins around 1940, reaches its nadir at 1950, then returns to the 1940 level around 1960. In 2006 MacFarling Meure found it to be a CO2 stabilization “at 310-312ppm from ~1940-1955.”

First thought, there are several times I have encountered the year “1950” as a beginning date for the supposition why warming “since 1950” must be anthropogenic. Is 1950 chosen not on the emotional basis of round numbers, but upon scientific knowledge that it was a significant year in CO2 science? (another “hiding the decline” re: the 1940s.)

Second, are the 20th century anomalies in Jones, Mann and Briffa 1998-2000 reflective of, e.g., Jones 1986 thermometer records and their late 60’s nadir, or of the 1950 CO2 nadir? If the latter, is the CO2 info somehow merged into the results?

It is an interesting CO2 anomaly. It is refreshing to see climate science not burdened with proven global warming. This science is seemingly overlooked, yet should it not be the most important matter for the anthrowarmists? (I googled it. Dibs on anthrowarmist /s! No other anthrowarmism either.)

Are they emulating the 1000 year CO2 record from Etheridge 1996 with their subsequent 1000 year hockeystick graphs, or are they mixing, massaging, what have you, CO2 concentration data into their cherry pies (which does not preclude other cherry pickings)?

Or does CO2 have the effect they claim? They certainly are quiet about it, however.

There is no better correlative to the Team’s 1000 year hockeysticks than the uncontroversial hockeystick shape of Etheridge’s CO2 records. Causation?

So although CO2 does not always eliminate or even reduce the effect of temperature on treering growth, it often does. Accordingly, the uncertainty bounds of a proper CCE temperature reconstruction based in whole or in part on treerings may be substantially increased by including this factor.

But important though it may be, as both the #2 GHG and as a potential tree nutrient, CO2 is irrelevant to the subject of this thread, namely Thompson’s ice core data.

Text reads:
“It may be seen from Fig. 16 that the true “Dr. Thompson’s Thermometer,” as Gore called it, is in fact completely uninformative about the existence or absence of a Medieval Warm Period (MWP).”
.
Can you really use Andean + Tibetan samples (n=2 spatial locations) to make inferences about the spatial extent of warming during a MWP? Seems like the sampling error in the spatial domain must be huge. If the MWP was strongest in the North Atlantic (= Mann et al’s 2009 MCA=NAO hypothesis) then it is not surprising that Andean + Tibetan evidence is weaker. Seems like spatial domain sampling error is hugely neglected in all these proxy/multiproxy studies. They habitually genuflect and say “more samples would be better” yet hand-wavingly conclude results are “robust”.

I’m not defending Gore’s use of Thompson’s 2-region series as if it told us about global temperature, but just asking what it really does tell us.

It does seem to me that had Thompson (or someone else) calibrated the Andean and Himalayan cores separately to local or global temperature, then they could be useful additions to Craig Loehle’s 18 series, which have no direct coverage in either region. See the map at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/

One of his series did include one of Thompson’s ice cores in a reconstruction for all of China, but it was only a small portion of the total.

To actually determine regional temperature, it would be ideal to calibrate the two groups of cores to local temperature data. However, I doubt that there were many series in Peru or Tibet going back into the 19th c, so the sample might be rather small, esp. at Thompson’s decadal frequency.

But if you’re just trying to make generalities about global temperatures (as in the case of the HS/MWP), it seems to me that you may as well just calibrate your local series directly to global temperature.

“Why does it matter to me, you ask? Well, as a historian, because as
close as I have to an answer about what caused the now-legendary
‘feudal transformation’, or at least the bundle of slow or not-so-slow
changes that we have at times piled under that name, starts with it.
Between about 700 and 1200, it seems fairly safe to say, the climate
in Western Europe got warmer by, say, one or two degrees. Maybe more,
but that’s all I need to be able to say that rainfall would have
decreased, crop yields would have increased, there would have been
fewer famines (and that bit we can check from other records and it has
been done and checks out, as far as what records we have can
demonstrate something from silence), and more surplus.”

though he ends with:

“It really gets to me that the argument
against action on climate change makes so much of this. The argument
being that, if the medieval warm period is ‘true’ and there really
were Vikings farming now ice-bound lands on Greenland (irrespective of
what the rest of the world may have been getting weather-wise…) then
the military-industrial complex ™ hasn’t necessarily caused the
current climate rise and so our lifestyle needn’t change hurrah!”

Fact is the guy pretty much agrees with CA/Watts etc., but misrepresents your views so he can oppose them.

The column by historian Jonathan Jarrett that you cite completely misses the relevance of the MBH Hockey Stick for AGW.

In fact, MBH raised the entirely reasonable question of whether the well-known northern European MWP was a global (or at least hemispheric) phenomenon, or if it was just local in scope. If temperatures elsewhere were just average during the MWP, or perhaps even colder than average, then the Current Warm Period (prior to this winter, anyway🙂 ) would appear to be the warmest in the last 1000 or 2000 years.

Unfortunately for MBH, however, Steve and Ross have shown that they used a lethal combination of bad data (stripbark trees) and bad statistics (short-centered PCA). That put the burden of proof back on the AGW camp to show that the CWP is historically unprecedented.

Contra Jarrett, Climategate does factor in, in no small part, because one of the series not used by MBH but which is cited by IPCC and many articles as confirming the MBH HS is the Briffa MXD series through 1960. However, a data file in the CRUtape Letters (the title of Mosh’s forthcoming book?) reveals that this series in fact declines in the long-concealed post-1960 portion, thereby indicating that it isn’t a temperature indicator after all. See “New! Data from the Decline” (CA 11/26/09). Of course, it doesn’t help any that some of the accompanying e-mails speak of “hiding” this “decline”.

As Jarrett’s column relates to the topic of this post, Gore added the very relevant assertion in AIT that Thompson’s ice core data provided independent confirmation of the absence of a global MWP, as apparently someone (not necessarily Thompson himself) had told him. Thompson himself (CC03) had in fact claimed only that the similarity in shape between his LCZ and the presumably authoritative HS indicated that his ice core index was also a valid temperature proxy, but Gore’s turnabout to make Thompson validate MBH was a reasonable step.

In fact, neither Thompson nor Gore had made any attempt at actually calibrating LCZ to instrumental temperatures. When I do this, I find that the primary 6-core index does respond with weak significance (p = .079) to temperature, but that a 95% CI is entirely uninformative about the MWP vs CWP issue.
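The classical-calibration inversion being described can be sketched as follows. This is a generic CCE illustration in Python with synthetic numbers chosen to mimic a weak proxy response; it is not the paper's Hunter-Lamboy derivation (the actual scripts are in Matlab on the SI page):

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.linspace(-0.4, 0.4, 15)               # 15 synthetic calibration "decades"
z = 0.8 * T + rng.normal(0.0, 0.3, T.size)   # proxy with a weak temperature response

# OLS of proxy on temperature over the calibration period: zhat = a + b*T
X = np.column_stack([np.ones(T.size), T])
(a, b), res, *_ = np.linalg.lstsq(X, z, rcond=None)
s2 = res[0] / (T.size - 2)                   # residual variance
se_b = np.sqrt(s2 / np.sum((T - T.mean()) ** 2))
t_b = b / se_b                               # a weak slope t-stat implies a wide,
                                             # possibly unbounded, inverted CI

# invert the fitted line to estimate the temperature behind a past proxy value
z0 = 0.5
T0_hat = (z0 - a) / b
print(f"slope t-stat = {t_b:.2f}, inverted estimate T0 = {T0_hat:.2f}")
```

The key design point is the inversion step: the point estimate is simple, but because the slope appears in the denominator, its uncertainty dominates the resulting confidence interval, which is why a weakly significant slope can leave the MWP vs CWP question open.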

As noted at the end of the post, I’ve further updated my online working paper, now showing that my approach to calibration Confidence Intervals is in fact a new derivation of a method proposed by Hunter and Lamboy (HL 1981), and that my new derivation overcomes objections that were raised against the HL method by Hill and others when it first appeared.

I’ve also simplified the diagrams, so that the 1980s are simply represented by the 6-core index, and the 1990s by the 4-core index, with a red line connecting the last point to its predecessor.

The new version, dated 3/12/10, corrects a key typo in the new (11), and corrects the treatment of the multiproxy case. The derivation of the latter still needs some work, but this does not affect the rest of the paper.

A few comments in addition to the fact I enjoyed the previous version– you have a few wry statistical statements I want to quote:

1. “It is not clear why one would want to average Z-scores in this manner, since it makes inefficient use of the data. However, this appears to be a common practice in paleoclimatology (see e.g. Kaufman et al. 2009)”

The “inefficient use” is funny, to me, being such an understatement.

2.”The changing composition of LCZ would not alter its expected response to temperature if δ18Oice were a universally valid and linear indicator of annual average temperature…”

“If…linear…”, in spite of the fact that it’s not.

Anyway, it is well worth a read for anyone who wants to see how a set of series can ‘magically’ be dropped and yield a possibly different conclusion wrt the total dataset.
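The “inefficient use” point about averaging Z-scores is easy to see in a toy example: equal-weighting Z-scores ignores each series’ signal-to-noise ratio, while a precision-weighted composite tracks the common signal better. A Python sketch (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
signal = rng.standard_normal(n)               # the "temperature" being proxied
noisy = signal + rng.normal(0, 2.0, n)        # low signal-to-noise series
clean = signal + rng.normal(0, 0.5, n)        # high signal-to-noise series

def zscore(x):
    return (x - x.mean()) / x.std()

# equal-weight average of Z-scores (the composite style in question)
equal = (zscore(noisy) + zscore(clean)) / 2

# precision-weighted average (weights proportional to 1 / noise variance)
w = np.array([1 / 2.0**2, 1 / 0.5**2])
weighted = (w[0] * zscore(noisy) + w[1] * zscore(clean)) / w.sum()

for name, comp in (("equal-weight", equal), ("precision-weight", weighted)):
    print(name, round(float(np.corrcoef(signal, comp)[0, 1]), 3))
```

The precision-weighted composite correlates more strongly with the underlying signal, which is the sense in which a flat average of Z-scores makes inefficient use of the data.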

Lastly, I note that the temperature calibration uses CRUTEM3vGL global land air temperature index, annual averages, 1850-Oct. 2009, which is very recent but I’ll add your caution:

“The widely used CRU series are partially based on confidential weather data that has not been made publicly available (see Met Office 2009). Since they are not scientifically replicable, they should only be used with caution. However, the alternative GISStemp series, produced by NASA/GISS, is only available back to 1880, and hence has 3 fewer decades than the CRU series.”

The new version is dated Oct. 31, 2010. A greatly shortened version has been submitted to Technometrics, the journal that published the 1981 Hunter & Lamboy article whose approach to calibration CIs I am vindicating.

The conclusions of the paper as reported in the post have not changed. The new revision tries to clarify some points that were not clear to referees from a previous submission to another journal and from a grant proposal.

Mann teaches “The Earth System” using Gore’s AIT: I really thought this had to be a joke, but it’s real. Of course it’s possible to include written or visual propaganda in an academic course in order to correct and debunk it, and for diversity of views, but somehow it’s hard to imagine that is what is happening here. Does Mann even correct the “Dr. Thompson’s Thermometer” for his eager students?? I wonder…. (anyone know any recent Penn State grads who could try to find out from fellow students or alums?)….

In the past when I criticized the misrepresentations in AIT to “serious” people they would always say some version of “well sure it’s reckless propaganda, but what do you expect, it’s made by a politician for political purposes.” Now it seems that the great climatologist Michael Mann regards AIT as worthy of filling TWO slots in his course….

Judging from his public behaviors and CRU email comments I will guess that any student in a Mann course knows better than to ask any “inconvenient” questions. Or if they did bring anything critical up it would be in a sycophantic way to say “oooh Prof. Mann, please tell us how you debunk this idiotic slave of fossil fuel plutocrats”

As it happens, the graph that Gore presented really was the MBH HS, spliced together with an instrumental record as if they were a single series, and has nothing to do with Thompson’s ice core research.

Minor correction — In fact the 12 series that the MBH99 HS is based on pre-1400 include 2 of Thompson’s Quelccaya series, d18O and precipitation, each averaged over the two Quelccaya cores.

Therefore even if Gore had shown Thompson’s ice core index, it would not have been truly independent of the HS as claimed by Gore.

3 Trackbacks

[…] I have attempted to correct this deficiency in a working paper I discuss in my 12/10/09 CA post, Calibrating “Dr. Thompson’s Thermometer”. I conclude, however, It may be seen from Fig. 4 that “Dr. Thompson’s Thermometer” is in […]

[…] Calibrating the Thompson Ice Core Index,” which was discussed in my earlier CA post, “Calibrating ‘Dr. Thompson’s Thermometer’”. This entry was written by Hu McCulloch, […]

[…] The Mann et al. studies seemed to vindicate those who had been claiming that the recent global warming was unusual and “man-made”. As a result, it received a lot of publicity, and featured heavily in the IPCC’s 3rd Assessment Report (2001) and Al Gore’s popular “An Inconvenient Truth” film (although it was mistakenly labelled as “Dr. Thompson’s Thermometer”). […]