Kinnard and the D’Arrigo-Wilson Chronologies

Two interesting new proxy studies out recently, one with a meticulous archive, one without.

Kinnard et al 2011 (Nature) here is a proxy reconstruction using 69 proxies to reconstruct Arctic sea ice. It comes with a comprehensive archive: all proxies as used are archived, and all code is archived. Three of the coauthors are Canadians, including David Fisher, who has a commendable history of archiving and distributing data even before web-based solutions existed. (Fisher’s CDs of data were the core of the ice core collections in MBH98 and Mann et al 2008.)

Christiansen and Ljungqvist (Clim Past Disc) here is a reconstruction using 91 proxies (all plotted). Although Ljungqvist has recently made two substantial archives collating recent proxies, for some reason the present study lacks such an archive. Like Kinnard, it uses a procedure said by the authors to be innovative (but without code). The combination of no archive and no source code detracts from the ability to analyse the article efficiently. I’ve written to Ljungqvist hoping that they will remedy the situation (and I’m hopeful that they will). Unless they do, I don’t plan to consider this article.

Back to Kinnard. The article uses a complicated multivariate method (Partial Least Squares). The authors provide information on their procedures, but overlook one of the most important aspects: at the end of the day, Partial Least Squares, like other methods, results in a vector of weights for the various proxies. While the authors present maps showing loadings for different PCs, unfortunately they didn’t connect the dots and carry out the linear algebra to extract those weights. It will take a while to analyse.

The D’Arrigo Proxies
In 2005, I tried to get the component chronologies (and measurement data) for D’Arrigo et al 2006 (of which Rob Wilson was a co-author, responsible for much if not most of the analysis, but not in control of archiving decisions). Unfortunately, six years later, the D’Arrigo-Wilson chronologies remained unarchived.

Kinnard used 11 tree ring chronologies, of which 9 were attributed to D’Arrigo et al 2006 (one to Grudd; one to a third party). The attribution to Grudd is incorrect: this series also comes from D’Arrigo et al. To my knowledge, this is the first time that these chronologies from D’Arrigo et al 2006 have been archived. (Definite progress here – CA readers will recall that Nature required Moberg to archive third party data sets even if the originating author hadn’t archived the data. Nice to see this happening without a complaint being required.)

Unfortunately, the authors seem to have jumbled 7 of the series, so that the wrong location is attached to each. (The sites are transposed in the style of the incorrect locations of the MBH98 instrumental precipitation data used as temperature proxies.) I noticed this when I plotted up their Tornetrask version, which looks as follows. It doesn’t look like the Grudd Tornetrask series at all – as any knowledgeable reviewer would have known. It looks like a Yamal version.

Figure 1. Series 68 in the Kinnard file. Site 68 in the information is Tornetrask.

As CA readers are aware, the Polar Urals series of D’Arrigo et al 2006 was actually Yamal (though the core counts illustrated in their figure came from Polar Urals).
Figure 2. “Polar Urals” series from D’Arrigo et al 2006. (Actually Yamal with Polar Urals core counts.)

Here is Kinnard series 68 plotted in a similar style – showing that the two series are identical.
Figure 3. Kinnard Series 68 in D’Arrigo style.

Here is the complete transposition as archived (it’s possible that this is an archiving error rather than a substantive error):

If the error in their archive exists in their data as used, these erroneous locations will obviously affect spatial maps of loadings and weights, to the extent that these proxies are used. This sort of error should have been observable almost immediately to anyone familiar with the proxies.
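A consistency check of the sort that would catch this – correlating each archived series against reference chronologies and flagging mismatched labels – takes only a few lines. A sketch with synthetic stand-in data (the site names and series here are illustrative, not the actual archive):

```python
import numpy as np

def best_match(archived, references):
    """For each archived series, return the reference it correlates with most."""
    out = {}
    for name, series in archived.items():
        corrs = {ref: np.corrcoef(series, refdata)[0, 1]
                 for ref, refdata in references.items()}
        out[name] = max(corrs, key=corrs.get)
    return out

rng = np.random.default_rng(1)
tornetrask = rng.standard_normal(200)   # stand-in reference chronologies
yamal = rng.standard_normal(200)
references = {"Tornetrask": tornetrask, "Yamal": yamal}

# Simulate the archive error: the series labelled Tornetrask is actually Yamal
archived = {"Tornetrask": yamal + 0.05 * rng.standard_normal(200)}
print(best_match(archived, references))  # -> {'Tornetrask': 'Yamal'}
```

A series labelled for one site that correlates near 1.0 with a different site’s chronology is exactly the signature of a transposition.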

Over and above the transposition error, Kinnard et al have incorrectly used the Yamal chronology as the “Polar Urals” chronology (with Polar Urals core counts). In fairness to Kinnard et al, D’Arrigo et al incorrectly labeled the Yamal chronology as Polar Urals and then refused to issue a Corrigendum acknowledging the error.

Kinnard et al do not discuss the discrepancy between the divergence problem and the chronologies selected in D’Arrigo et al. Senior author D’Arrigo told the NAS panel that you have to pick cherries if you want to make cherry pie. Briffa et al 1998 showed declining ring widths in a very large population of high-latitude sites (the divergence problem). The tree ring sites selected by Kinnard are also high-latitude sites, but on balance go up. The inconsistency between the decline in the large population and the rise in the small subset suggests biased selection at some point in the process – an issue not addressed by Kinnard. Because the tree ring results are blended with other proxies, the information from each proxy class is not shown separately. I’ll try to extract this on another occasion.

25 Comments

I took a look at Kinnard et al last weekend, and it took me less than 10 minutes to notice something disturbing: before calibration they infill and use Mann’s lowpass filter (with end point effects) to “smooth” the proxies. You can see the effect by plotting the original proxies (SI data 1) along with the smoothed ones (SI data 2). Here’s series #58 (blue original — red “infilled” and “5yr smoothed”):
The archive does not contain all code. With a lot of trouble and googling (I suppose no reviewer tried to run the code…) I was able to get the main code to run. However, the archive is missing the code for the infilling and smoothing step.
IMO, this study really needs a closer audit.
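Since the infilling/smoothing code is unarchived, a generic illustration may help readers unfamiliar with endpoint effects. This is not Kinnard et al’s actual filter – just a 5-year moving average showing that the choice of padding rule changes the endpoint values of a smooth while leaving the interior untouched:

```python
import numpy as np

def smooth5(x, pad="reflect"):
    """5-year moving average with a choosable endpoint padding rule."""
    xp = np.pad(x, 2, mode=pad)              # 'reflect', 'edge', 'mean', ...
    return np.convolve(xp, np.ones(5) / 5, mode="valid")

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(100))      # red-noise-like stand-in proxy

a = smooth5(x, "reflect")
b = smooth5(x, "edge")
assert np.allclose(a[2:-2], b[2:-2])         # interiors agree exactly
print(abs(a[-1] - b[-1]))                    # endpoints generally differ
```

Whatever endpoint rule Mann’s filter applies, it is precisely these last few values – the ones entering the modern calibration period – that are sensitive to it, which is why the missing code matters.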

Once again, basic mistakes that would not be acceptable in an undergraduate student’s essay or dissertation – no doubt including students the authors work with – are passed over in published research. Is the standard of science in this area really that low?

Steve: it’s easy enough to make this sort of error. (And it might just be in their SI and not in their calculations.) If the error is in fact embedded in the calcs and it makes a difference, it’s unfortunate.

BTW as I go through the data, Kinnard et al have assembled a lot of data that hasn’t been otherwise available.

It remains the case that academics continue to make mistakes in their published work which they would not accept from their own students. Remember, this is supposed to be a ‘gold standard’, a level to which people are supposed to aspire. Is it really too much to ask them to meet a standard which would be expected of an undergraduate? In climate science, the answer too often seems to be yes. We are told this is the ‘best of scientific knowledge’ in the area, and yet you can see why some regard the area as a bit of a joke.

Yeah, I haven’t found any either, but I’ve only spent 15-20 min so far. I think their reasoning is that they’ve listed what proxies they used and that those are already in the public domain. But I think I saw on my first reading that they had to digitise some of the proxies!

On the plus side, your campaigning over bristlecones/Yamal seems to be finally making SOME headway… Anonymous Reviewer #1 seems to recognise they’re problematic, although still seems to have contempt for “so called sceptics”.

In their reply to C. Lemmen, they say that they’re planning on archiving their data on the WDC for Paleo, if they’re published. Maybe they’re planning the same for Christiansen and Ljungqvist.

I don’t know whether that would include the proxies & code or not. It looks like they’ve tried submitting this before, so maybe they’re reluctant to give away their archive & then have their paper rejected… I agree it does make it harder to review the paper though.

Good catch AJ. You really need to find out whether that relates to published work, but the attitude and intent it shows are damning enough.

Maybe look at what publications Briffa was involved with at that time and see if you can find evidence of what he describes. He gives quite an accurate description, so it may be possible to recognise it.

We deal in different industries. In the space business this kind of clouding of the results would be a shock.

Then again, when we look at errors and probabilities we tend to be concerned with things like how much time do we have to self-destruct a launch before it can kill people living nearby. The physics dictates the cone of safety as a rocket ascends. The farther it goes the more time to respond. But we have to get that probability cone right.

In those cases, screwing up the precision is truly criminal.

I have concluded my standards are probably too high for such mundane things as comparing tree rings between the last 50 years and 1500 years ago…

The lack of statistical or scientific rigour in paleoclimate science… or climate science for that matter… shocks many of us, when we start to “peek under the bonnet”. But, if you’re shocked by that particular discussion, maybe it would be better for your sanity if you don’t look any further!!! ;)

I don’t recall offhand which figure he’s referring to. But, it most likely was for Chapter 6 of the IPCC 2007 Working Group 1 report (the “4AR” Steve refers to). If you’re interested in finding out more about it, have a read of Osborn & Briffa, 2006 first (you can find a .pdf in the link I gave you above). He used a similar logic there. You might not agree with it. But, you’ll understand his thinking a bit better. Then, you can have a look through the section of Chapter 6 in the IPCC report for the section on millennial temperature change. The figure you’re inquiring about is probably there.

But, to be honest, there are far more troubling difficulties with that chapter. You’re rightly concerned about the error bars associated with estimates. But, measurement error bars are only relevant if your measurements actually mean something!!! ;)

The most recent reconstruction shown in the B. Christiansen and F. C. Ljungqvist paper does a great service to interested observers by showing the individual components and proxies used in the construction of the reconstruction. What bothers me about these reconstructions, and about taking the composite result seriously, is that the individual proxies vary greatly in the responses they show over time. It appears that we are expected to accept that putting these greatly differing pictures together in some average composite reveals a truth about past climate, and specifically temperature.

With the amount of individual proxy response variation over time, I would have to think that either the CIs for the final reconstruction would have to reach from floor to ceiling, or that a large number of the proxies are not responding, or responding only weakly, to temperature over time and should not be used in a reconstruction. Addressing the second possibility would obviously require selecting the “correct” proxies to use. That the paper above, and that from Loehle and McCulloch, can show different past temperatures relative to current times than those in many other reconstructions (with those other reconstructions also varying from one another) only shows me that the process for selecting proxies can significantly change the picture of past temperatures – and that says little about the validity of using the proxies as thermometers.

Steve,
Another dendro compilation, this one from China; I’m sure you’ve seen it on WUWT or elsewhere. China has its own agenda, so I’m suspicious of this one too. But I noticed the error bars in the first figure shown here: http://wattsupwiththat.com/2011/12/07/in-china-there-are-no-hockey-sticks/#more-52667
They are oddly symmetrical: for instance, the error around the spike at 400 AD is equal above and below the average temperature even though the calculated data point is very far above average. Seems bizarre.