But the point is that the same people (HP&S, but in a different order) a year earlier published in GRL (SR, of course :-() a reconstruction going back 20,000 years. Some features of it look distinctly odd to me: they get only about 1.5 °C of cooling at the last glacial maximum, and to get more it looks as if they would have to raise Holocene temperatures too high. And the LGM was… at least 5 °C colder. The timings are also wrong: the “node” between the “LIA” dip and the “MWP” peak sits at 500 years before present, which is where the paper from a year later ends, with maximum cooling.

So why has the earlier paper disappeared? Of course the true explanation must be nefarious interference from the UN/IPCC to suppress the truth; but what is the excuse? I’m not really sure, and would be grateful to anyone who does know. The 1997 GRL paper appears to use heat flow and about 6,000 sites; the 1998 Science paper uses temperatures and 358 sites. So maybe it turned out there was something badly wrong with the earlier method? It certainly contradicts the later one, and produces implausible values for the LGM temperatures.

The paper “only” gets referenced 17 times, according to WoS, and always in a “various people have done boreholes, including Huang (1997)” sort of way. I can’t see anyone who has used or commented on their particular profiles (apart from the recent uptake by the skeptics, of course).

[Update: Eli has already noticed this oddity (and there is more) but he doesn’t explain it either. He does discover Pollack and Huang in 2000 referencing both studies but apparently failing to notice their incompatibility -W]

[Another update: I’ve had some email exchanges with a very friendly Henry Pollack. He supplies almost all the answers: part of it is that the HPS ’97 paper had data to depths of two kilometers, whereas the 500-year database comprises boreholes mostly in the 300-400 meter depth range. The much larger number of boreholes in the HPS ’97 dataset did enable the greater depths to be sufficiently populated to give a reasonable estimate of the mean heat flux over a depth interval. OK, so this is why HPS ’97 goes to 20 kyr and all the subsequent ones only to 500 y. But then this leaves the central problem: why is the larger database of deeper boreholes no longer used? The answer to this seems to be quality. The datasets used in the 500-year studies are better controlled. The HPS ’97 dataset is of heat flow, derived from the International Heat Flow Commission database. But though the heat-flow measurements there were derived from temperature measurements, the original temperatures are gone and only the heat flow is in the database.

An interesting additional point is that since HPS didn’t use the top 100 m of borehole, the reconstruction contains virtually no information about the 20th century… the ‘present’ (the zero on the time axis) really represents something like the end of the 19th century, rather than the end of the 20th century… the present day is indeed warmer than the ‘goldilocks’ curve b (as well as curve a) throughout the Holocene, and at least as warm as the Medieval Warm Period of curve c (quotes from Pollack).

Which brings us back to the original point: no-one now uses HPS ’97. The shorter record is preferred, as (presumably) being more reliable -W

Comments

Stoats obviously don’t read the literature. There is more to the story than you point out. Both figures also appeared in an Advances in Something or Other in 2000, practically cheek by jowl, and there is some back and forth on the Rutherford “correction”; esp. see the comment, which I think may be by Henry Pollack. Another one of those things I meant to get back to.

BTW, all the Huang/Pollack stuff is at Huang’s web site, and I still think there is something interesting going on with the Australian boreholes.

“On a longer timescale embracing all of the Holocene, Huang et al (1997) used the global heat flow database (Pollack et al 1993) to establish a composite profile of heat flow versus depth to 2 km beneath the surface. The inversion of this profile revealed a long mid-Holocene warm interval some 0.2-0.6 K above present day temperatures, and another similar but shorter warm interval 500-1,000 years ago. Temperatures then cooled to a minimum of approximately 0.5 K below present, about 200 years ago. This six-continent reconstruction shows essentially the same climate history, albeit more subdued in amplitude, as that revealed in the GRIP and GISP2 boreholes in Greenland (Dahl-Jensen et al 1998, Clow 1998, Clow & Waddington 1999).”

It is not just “various people did blah blah blah”.

Czech Republic is the main Euro source of data in the paper above. They also cite their 1998 work here, positively, in the article above.

[Thanks Lubos (and Eli) for finding that one. How they can reference two such incompatible reconstructions one after the other without pointing out the obvious disparities is unclear to me (and Eli) -W]

Pollack has published a great book targeted at the layperson, “Uncertain Science, Uncertain World”. In it, he discusses Bayes and Boreholes (pp. 160-165) and refers to the results of the 1998 study with relevant confidence. He also explains that, “In our interpretations of rock temperatures, we make very conservative initial guesses of the climate history, one that asserts that there is no climate change at all. This is called the ‘null hypothesis’. Along with presenting the null hypothesis as a first guess, we also tell the computer that we are willing to deviate from that conservative hypothesis if the temperature observations push in that direction, but we impose limits on how big an adjustment may take place. Next the computer interrogates the subsurface temperatures to see if they are consistent with the hypothesis within the assigned range of uncertainty of the temperature measurements” (p. 164). Perhaps the two can be reconciled because the uncertainty range is larger in the first study, and thus the limits allowed for ‘adjustment’ within that model are smaller?
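The prior-plus-limited-adjustment scheme Pollack describes can be sketched numerically. This is only an illustration of the general idea, not HPS’s actual code or data: the depths, step ages, noise level and prior width below are all made up. A surface step change of age t perturbs the subsurface by erfc(z / (2√(κt))), and the inversion starts from a “no climate change” prior whose variance caps how far the data may pull the answer away from it.

```python
import numpy as np
from math import erfc, sqrt

# Illustrative numbers only (not from the papers).
kappa = 31.5                                     # thermal diffusivity, m^2/yr (~1e-6 m^2/s)
depths = np.arange(20.0, 500.0, 20.0)            # measurement depths, m
step_ages = [50.0, 100.0, 200.0, 300.0, 500.0]   # onset of surface steps, yr before present

# Forward model: a 1 K surface step starting t years ago perturbs the
# temperature at depth z by erfc(z / (2*sqrt(kappa*t))).
G = np.array([[erfc(z / (2.0 * sqrt(kappa * t))) for t in step_ages]
              for z in depths])

# Synthetic "truth": 1 K of warming beginning 100 yr ago, plus measurement noise.
rng = np.random.default_rng(0)
m_true = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
sigma = 0.02                                     # data noise, K
d = G @ m_true + sigma * rng.normal(size=depths.size)

# MAP estimate: zero-change prior (the "null hypothesis"), with prior
# standard deviation tau limiting the size of any adjustment away from it.
tau = 0.5
A = G.T @ G / sigma**2 + np.eye(len(step_ages)) / tau**2
m_map = np.linalg.solve(A, G.T @ d / sigma**2)
print(m_map.round(2))
```

With a tighter prior (smaller tau) the reconstruction stays closer to the no-change hypothesis; that trade-off between prior limits and what the data push for is exactly what the quoted passage describes.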

[OK – the ’98 study is fine; everyone is happy with that. The question is, does he mention the ’97 study at all, and periods longer than 500 years? -W]

(I am not dismissing the science done by Huang and Pollack. Actually I like the book by Pollack so much that I have translated it into Japanese. At least three publishers were interested but finally declined to publish it. I am still looking for a publisher.)

It seems that their works on past temperature, apart from the 1997 one, deal with the last 500 years, and these are mutually consistent.

If I also assume that their analysis procedures are neither wrong nor inconsistent between the papers, including the 1997 one, it is likely that the distribution of the long holes needed for the 20,000-year study (the 1997 paper) was much more biased towards northern midlatitudes, and therefore less representative of the globe, than in their 500-year studies.

As for the last glacial maximum, we should note that the temporal resolution of the inversion results becomes lower and lower as we go back in time. Probably we should interpret the values plotted at “20,000 years ago” as moving averages centered on that time with a span of many thousands of years. So they do not give the temperature of the last glacial maximum, but just that of the last glacial.

[Thanks for the comment. Even interpreted as “last glacial” the values are probably rather warm – especially if they are to be interpreted as NH land.

As to the 1997 study… OK, so we might explain the timing differences by the distribution of holes. But if we can “fix” the distribution just by dropping down to the same holes as 1998 and all the others, how do we explain that a (valid) method going back 20 kyr is now ignored? I think the most likely explanation is that the method is for some reason not valid; but I’m unsure exactly why.

Also the difference between reconstructions using temperatures, and using heat fluxes, remains unexplained -W]

Kooiti, several of the usual denialist sources have blown up the 20 kyr figure to show that H&K’s predicted temperature is well above today’s in 1500-1600. I blogged about this contradiction and displayed all the figures (linked above and here); and H&K’s text follows the figure, and the blow-ups are accurate (I looked real close :-().

“I’m interested to know why you limit your recent reconstructions to AD1500 and onwards. I know you published an earlier reconstruction that extended to the turn of the last millennium (and indeed the whole millennium). I assume that there are methodological problems with extending the reconstruction back that far, but what are they?”

He replied:

“Thank you for your interest in my climate reconstruction work. There are several constraints on the amount of climate information one can retrieve from a borehole temperature profile. The temporal length of a reconstruction is basically limited by the depth of the borehole temperature profile. Our recent reconstructions are based on the global database of borehole temperatures. Given the depth range and the noise level of this database, we focus our efforts on the past five centuries for global and regional reconstructions. For an overview of the geothermal approach to climate reconstruction, I would refer you to our review paper in Ann. Rev. Earth Planetary Sci., Pollack and Huang, 2000, 28: 339-365.

A borehole-based reconstruction does not have to be limited to 500 years. Indeed, there are temperature measurements of higher qualities from deep boreholes that would allow for a reconstruction of a climate history much longer than 500 years. Further refining the borehole-based climate reconstruction technique and expanding the global database of borehole temperatures for a better spatial and temporal coverage are among the goals of my recent research. Unfortunately I have not had much progress from this effort to report yet.”
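Huang’s point that the temporal length of a reconstruction is limited by borehole depth follows from a standard thermal-diffusion scaling: a surface signal of age t has penetrated to a depth of order 2√(κt). The numbers below are my own back-of-envelope values (κ ≈ 1e-6 m²/s is a typical rock diffusivity), not from the papers, but they reproduce the 300-400 m versus 2 km split mentioned in the update above:

```python
from math import sqrt

kappa = 1.0e-6        # thermal diffusivity of rock, m^2/s (typical value)
year = 3.15e7         # seconds per year

def diffusion_depth_m(t_years):
    """Approximate depth reached by a surface temperature signal of age t_years."""
    return 2.0 * sqrt(kappa * t_years * year)

print(round(diffusion_depth_m(500)))    # roughly 250 m: within reach of 300-400 m holes
print(round(diffusion_depth_m(20000)))  # roughly 1600 m: needs ~2 km profiles
```

So a 500-year reconstruction fits comfortably in the 300-400 m boreholes of the quality-controlled database, while anything reaching back 20 kyr needs the deep (2 km) holes that only the ’97 heat-flow dataset provided.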

[Hmmm. I’m not impressed by Huang’s reply to you. If the 1997 paper is still valid, why doesn’t he mention it? If it’s invalid, I can understand his reticence -W]

Yes, I wasn’t left much the wiser after his reply. Perhaps I should’ve followed up. I took it to mean that the 1997 paper is as valid now as it was then – it’s the best guess that can be made with available data. But there’s not much that you can usefully do with such noisy data, so they focus now on the past 500 years.

[No, I think you should take it that the 1997 paper is now invalid, for whatever reason. No one references it; the TAR and AR4 ignore it. If there was good borehole data for 20 kyr I can’t see why people wouldn’t use it. I think the authors are hoping it will be quietly forgotten -W]

It’s becoming clearer and clearer that they can’t as the denialists start parading it in public. Moreover, it is interesting and perhaps dispositive of borehole methods that it got something so wrong at first. What changed? Is there something in the later, deeper data that pushed the earlier stuff higher?

[The mystery, to me, is that the earlier 1997 paper used a *larger* set of boreholes, and apparently a different variable – heat flux – rather than temperatures. Unless they just changed their naming in between. OK, so I didn’t read it that carefully! -W]

I’m sure I checked one of the denialist-touted papers (about Greenland ice-core temperatures) in the last few weeks, and found more recent cites correcting it. They mentioned that the original paper’s data as published had been reviewed, and that at different levels the temperatures reflected not the temperatures at the current drill site over time, but the temperatures of the locations at which the (moving) ice had been at various times, and included a component due to geothermal heat transfer from the underlying rock. Sorting out those factors reduced the bump in temperature at the Medieval.

Dang. I’ll have to find it again, and see if my memory’s working on that.

[But anyway, that’s just Greenland. Though it will be one of the better records -W]

Let me tell you what I think really happened. Pollack et al. in 1998 simply learned that Mann et al. were going to publish the (now infamous) hockey stick graph where all temperatures in the past were constant. So they were afraid that their 1997 paper could be wrong or despised, so they cherry-picked a small subset of their 1997 data that didn’t contradict MBH98 so much, and published it.

Pollack et al 1998 then became more convenient for the growing alarmism, and because it is convenience – not truth – that comes first for the climate activists (see e.g. the order of the movie title), no one had any interest in talking again about the more complete paper, Pollack et al. 1997.

OTOH, you are still left with the problem that either a or b is wrong, and the answer goes directly to the value of boreholes as proxies.

[I would accuse Lubos of jumping the shark on this but he did that some time ago. The Lubos climate-science world would be a very depressing place – all research would have to acknowledge the MWP or it wouldn’t be allowed to be published :-( -W]

Dear Eli Rabett, “b 1998 is wrong” doesn’t mean that the borehole proxies are inevitably bad. “b 1998” can be wrong just because an incorrect statistical selection was made, but “a 1997” and/or some other studies can still be correct. I think it’s obvious that a scientist must be a priori open-minded about all possibilities – each paper can be wrong independently of the others unless a logical correlation such as equivalence is proven.

I don’t care whether someone finds the MWP or not. It would be very foolish if our decisions today depended primarily on the question of whether 1400 was warmer than 1998 or not. Does it really matter? It is clearly a matter of coincidence. The somewhat convincing papers – and evidence from chronicles – indicate that the MWP existed, and the wrongness of papers that wanted to settle the question the opposite way strengthens this point.

But whether it was really warmer than today doesn’t matter. What matters is whether the variations and perceived trends in the past were much smaller than today and whether we have evidence that the current evolution is mostly man-made. Based on the currently available data, I think that the answer is No, but science is about allowing our understanding to evolve. I am certainly not sure about the conclusion.

Well, let us assume that there are two proxy measurements of the length of my left ear using string methods (you take a string, etc.…): one says that it is 5 cm long, the other that it is 50. Clearly one of these measurements is wrong, and maybe both, given that Eli has ears. Of course, the string method may not even be able to measure the length of Eli’s left ear, in which case it is not even wrong.

If you publish both measurements, an observer would observe that the string method is not very useful either.

Of course, if you could prove that the validity of one paper implies the validity of the other and vice versa, both of them would be wrong, because they’re inconsistent.

But if someone publishes a wrong paper, it doesn’t mean that all his papers are wrong. Blaming a whole scientific method etc. is just silly – it shows the kind of ad hominem attacks and labels that you want to attach to everyone and everything.

Would you try to look at each paper independently without preconceived notions who is bad and who is good? It would be appreciated. I am afraid that no alarmist – which means almost no climate scientist – is able to do so these days.

Lubos, if you don’t quit making sensible remarks, I’m going to end up agreeing with you too often. I could get my climate conspiracy decoder ring and membership badge taken away, and have to go back to driving the 1969 Dodge V-8 on weekends ….

Seriously, assuming the two papers had good data originally (and if so where the heck _is_ it) — isn’t having two incongruous sets of data no surprise, time will tell? Blind men, elephant, publications — put grad students on figuring out why, whether instrument variation, moving ice carrying different amounts of geothermal heat from below in different layers depending on what the glacier was passing over at the time, whatever.

Apparently the mystery is what happened to the actual original measurements — and maybe what they were measuring?

I am convinced that most of my denier friends agree with me that the comparison of 1400 and 1998 is not the key to science and global decisions.

In other words, even if 1400 were cooler than 1998, it is not a rational justification of some global policies. 1400 had to be either cooler or warmer than 1998.

If you created policies just because of that, you would create policies – and destroy freedom – with probability 100%: in 50% cases, they would try to cool the planet down; in 50% cases, they would try to heat it up.

It would be equally ridiculous in both cases.

Even if the last 20 years were the warmest 20 years in the millennium, it doesn’t mean much. Some decades had to be the warmest, and this has a probability of 2%. The first and last decades are likely to be extreme as long as there is any cycle with a low period or an aperiodic effect with a long typical timescale.

Dear Hank, yes, I think that science and re-verification would, under normal circumstances, eventually be able to find which paper is better. Climate science in 1985, when it was consuming 15 times less money than today, would have been able to do it. What about the current climate science?

For example, if they look at how complete and extensive the datasets used by the papers are, do you think that a paper with 358 sites from 1998 will beat a paper with 6,000 sites from 1997? Make your guess. :-)

Of course it will, after James Hansen’s reticence factor is taken into account and 358 is multiplied by 30, just like what he does with sea levels.

Dear Eli, yours is simply an unscientific way to approach these questions. When a scientist studies the value of methods, he must read and judge individual papers, instead of your approach of clumping papers into groups and attaching universal labels and signs to those groups.

These two papers lead to different results, so they are clearly in different groups, regardless of the similarity of titles and author lists. There is thus clearly no universal label one can attach. Either both papers are wrong for some reason – the inherent unreliability of their experimental approach is just one possible reason – or one of the papers is more or less correct. A serious scientist just can’t discard this possibility.

I know that it is probably very complicated to imagine for you but if you want to find out whether one of these papers is correct, you will have to read this paper sentence-by-sentence and check evidence behind each of them, instead of depending on demonization, celebration, or general cliches about whole methods or on testimonies of your prophets.

My guess is that this is the first time you hear about this idea to improve your scientific method. It’s a good idea, isn’t it? Incidentally, here is a nice video about the scientific method, climate science edition:

“Even if the last 20 years were the warmest 20 years in the millennium, it doesn’t mean much. Some decades had to be the warmest, and this has a probability of 2%.”

Surely this kind of simple probability calculation only holds if the climate is treated as a random system, not as a chaotic one operating within certain constraints, and with a variety of cycles that are not random.

Eli thinks Lubos needs to talk to McKitrick and Monckton. However, let me review the bidding again. The two papers describe different samples using the same method. They give different answers for the period in question, and indeed for the period of time about which we have the best data both are essentially useless. However, going back in time a bit, when we still have some data, although most are other proxies, not only do they disagree with each other, they significantly disagree with all the other proxy measurements. Now Lubos instructs that one must use cabalistic methods to read the secrets written in invisible ink between the figures and the text. Eli says it’s still spinach.

Of course it does depend on the actual physics and the statistical patterns the temperature follows. The actual probability of the assertion above is much higher than 2% (i.e. way bigger than 4% if I allow the same thing with “or coolest”). If the climate were dominated by waves with a period of 10,000 years and not decadal fluctuations, it would be way more than 90% probable that the last two decades are either the coolest or the warmest in a millennium – simply because the millennium graphs would look like either increasing or decreasing straight lines.
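The two regimes being argued over here can be checked with a quick Monte Carlo simulation. This is an illustration of the statistical point only, using made-up series, nothing from the actual climate record: for 100 independent decades the warmest falls in the last two about 2% of the time, while for a series dominated by a slow monotonic change the endpoint decades win far more often.

```python
import numpy as np

rng = np.random.default_rng(42)
n_decades, n_trials = 100, 20000

# Case 1: decade temperatures are independent noise.
x = rng.normal(size=(n_trials, n_decades))
p_iid = np.mean(np.argmax(x, axis=1) >= n_decades - 2)

# Case 2: a monotonic rise that dwarfs the decadal noise
# (the "increasing straight line" regime described above).
trend = np.arange(n_decades, dtype=float)      # one noise-sigma of rise per decade
y = trend + rng.normal(size=(n_trials, n_decades))
p_trend = np.mean(np.argmax(y, axis=1) >= n_decades - 2)

print(p_iid, p_trend)
```

Under the independence assumption p_iid comes out near the 2% quoted above; with the strong trend the last two decades are warmest in the large majority of trials.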

Most people don’t want to look at things quantitatively, and they assign a huge, religious importance – comparable to Jesus Christ walking on the ocean – to statistical facts that are pretty likely to occur by chance.

Dear Eli, I explained how the second paper was probably created: the authors just wanted to have a work that would be more compatible with the hockey stick in the hypothetical case that the hockey stick became accepted, so they crippled their previous paper in order to get some new “results”. They may have used the same experimental methods, but they didn’t use the same statistical method if they chose the small subset. One must get both the experiments and the statistics right to get right results, and selection bias is an obvious problem.

I have talked, via e-mail, with both Lord Monckton and Ross McKitrick. You know, the MWP is about cancelling the alleged evidence that the recent era is unprecedented. This argument has been used by some people to argue in favor of the AGW theory. And this argument is probably wrong. I agree that it is wrong. If the MWP was warmer, then the argument is almost certainly wrong. But if the MWP was not quite warmer than the present, it still doesn’t mean that the argument for AGW is right, because it is a matter of chance whether we’re now warmer than 1400 or not.

The answer to this question can’t become a religious dogma leading to a religious war. I would never participate in such a war because it would be plain stupid. If there’s no other evidence that the recent climate dynamics is unprecedented, then I find it obvious that we will continue to assume that it is normal, regardless of some arbitrary comparisons of randomly chosen two decades in the millenium.

Nope, you’re still not making any sense. You’re just making some kind of rhetorical point with no measurable relationship to reality. Fortunately, the climatologists involved in all this have actually measured a great many of the variables.