Jacoby’s “Lost” Gaspé Cedars

I’ve been trying for over a year to get a location for the Gaspé cedars. Jacoby, as a Hockey Team member, refuses to provide such mundane information. At one point, Ed Cook, another Hockey Team member, promised to provide the information, but failed to deliver. Now Jacoby has told Climatic Change that the cedar location is lost. As an explanation, Jacoby says that the site was sampled before GPS.

Here is Jacoby’s complete response:

From: Gordon Jacoby
Subject: Re: Gaspé data

To those concerned:

The "Gaspé" tree-ring data are in the International Tree-Ring Data Base and can be accessed there. The actual site name is St Anne River and the associated name is Edward Cook. The record extends up to 1983. There was an attempt to update this record but the original site was not located. The original sampling was prior to GPS locating. Therefore there is no newer data for this particular site. If we implied this is any published paper, we mispoke. In updating chronologies one must revisit the exact site and trees.

Best regards, Gordon Jacoby

If you look back at this post on Gaspé, you will see an updated Gaspé version, including samples taken up to 1991. When I asked for the actual measurement data for the newer series, Rosanne D’Arrigo, Jacoby’s associate, refused to provide it, saying that:

" the data you have [the archived cana036 data] are probably superior with regards to a NH signal."

I’ve had lots of experience with geological maps, and geologists were able to make maps for mineral exploration long before the invention of GPS. Gaspé is not particularly remote; the site is probably pretty close to a road. I wonder how many other dendrochronological sites lack reproducible location information.

We discussed the impact of the Gaspé site on the MBH98 reconstruction in our E&E paper. It’s too bad that this remarkable site is now "lost".

26 Comments

I wonder when they will use the excuse that the timber industry cut down the evidence. This behaviour toward data makes the most dishonest mining speculator seem almost angelic in comparison!

Mind you, just because we in the mining industry knew how to create accurate maps all those years ago does not mean that our colleagues in the more geographically focussed disciplines knew. So I would not be too harsh on them for not understanding the need for spatial localisation of data. But then little did they know then how their data would be used in the future.

And for that matter there are quite a few in the mining industry who have difficulty managing maps and geospatial data, even in these days of accurate GPS.

“the data you have [the archived cana036 data] are probably superior with regards to a NH signal.”

If the dataset cannot be independently verified or replicated, then it should be withdrawn.

What is happening is an ethical crisis in climate science. People are making claim after claim that cannot be checked against the sources or verified as to the mathematical treatments used. If this were a commercial environment, we’d have a massive financial scandal on our hands.

The Kyoto Protocol was argued on the basis of scientific research that was shown to be unsafe and unreliable. The cost will be enormous, taking resources away from much more deserving causes, and producing no demonstrable benefit other than the increase in global bureaucracy.

It is an ethical crisis when scientific studies which affect public policy use data that cannot be checked and methodology that cannot be replicated or justified, and it leaves the field wide open to abuse of the public trust in science.

It is a financial scandal that so much money is diverted into an environmental crisis that may not even exist.

You have not bothered to check any sources for consistency or reliability and run away from any examination of evidence. I give you an “F” for scientific content.

I think you are all being rather harsh. In my experience it is not uncommon to lose sampling points, even when you know you are coming back to them and think you have marked them adequately. I have even known of an instance when a whole 100m x 100m sampling plot could not be found in a forest after a several-year gap, never mind losing individual trees. I suspect when the cores were taken no one ever thought about coming back, so you would not expect locations to be recorded accurately. How often have you done a piece of work and only when looking at your results thought, if only we had measured X as well!

Paul, there are a couple of different issues. First, I’ve seen updates that do not track down and update individual trees. I do not know that this is an actual dendrochronological practice (nor can I say that the opposite is the case). Second, they did re-sample, and the new results are not a hockey stick. They have not disclosed the new results.
If the new results had been a hockey stick, I’ll bet you dollars to doughnuts that they would have released them, whether or not they were the exact same trees. Why not update the sampling record? Call it a “new” site if you will. Third, Jacoby has said that he is “mission oriented” and looking for “temperature related” chronologies and not archiving “bad” chronologies. So there’s a reason to suspect bias in his recording. Fourth, cedars elsewhere do not have a big 20th century growth spurt; they like cool moist climate.

As to the site information, I still want any existing information on the site. Maybe they didn’t mark individual trees, but where is the turn-off from the highway? Is the site facing north or south? What directions did they have when they tried to re-locate the site? Where is the location that they re-sampled? Maybe they don’t have data on individual trees, but they should have directions at this level. Steve

Perhaps you are right that John’s interpretation of an ethical crisis in climate science could be wrong. I would instead suggest, from experience, that sheer scientific incompetence is the explanation. I posted a rough (back-of-envelope) computation on SMERSH at http://www.henrythornton.com a few days ago. Amazing what one finds when one goes back to first principles to understand a physical process.

As for the rest of John’s alleged hyperboles: compared to Bre-X or the Oil-for-Food scandal, calling the financial cost of Kyoto a scandal would appear to be an understatement, so John’s comments, in that light, seem overly polite.

Gentlemen, we are given to understand that in the use of tree-ring data as a temperature proxy, as opposed to a moisture proxy, the exact position of a tree is vital. Therefore, if the data are to have any meaning for temperature reconstruction, all sample sites that cannot be accurately located and revisited (assuming they are not under pavement or panelling someone’s den) should be dropped from consideration. Any competent and ethical scientist should do this without hesitation, as accurate location is fundamental to the interpretation of the data.

Trees do not have such a short lifetime in a mature forest unless severely affected by natural, or unnatural, catastrophe. Hence, in the space of several years, I would not expect any 100 by 100 metre sampling plot to disappear except under one circumstance: a severe inability to navigate.

But if this fact of 100 by 100 plots being “lost” is true, then that is even worse scientifically, because your data sources are transient, and any data derived from such phenomena equally transient. No useful theory could be deduced from that data since by definition, it cannot be replicated.

Science depends on replication, so clearly our climate technicians never understood the principles of science.

In the case of the lost sampling plot, it was an ecological study in the wet tropics. Though we knew the rough location of the plot from dead reckoning, no plot markers had survived, so we could not locate it exactly. Being close was useless, as we would have been sampling a different set of trees. One of the (dis)advantages of working in ecology is that nothing can ever be replicated exactly. I don’t really know why the exact same trees are needed for dendrochronology though; surely temperature records are determined from different overlapping time series anyway. As long as the sample is large enough to account for random variations between individual trees at each site, you don’t need to revisit the same trees.

Let’s see. They won’t release the statistical program used in multiproxy studies. The most important tree locations are ‘lost’. They won’t do a thorough and independent audit of the surface stations. And the satellite data can’t be trusted even though it matches the data from balloon measurements. But we’ve got to trust them because the evidence is so overwhelming that we can already measure anthropic global warming.

I tend to agree with Steve and comment #10, I really can’t see why the same trees need to be measured. I can believe the temperature signal could be more prominent in some trees than others (which may be more constrained by light/water etc) but I can’t see why you can’t just head out and sample a series of different trees, and dredge out the trees with the best temp signal in them.

I know this is just a remark, and not a peer-reviewed(tm) comment, but what does an “NH” signal mean? Is this referring to Northern Hemisphere temperatures? Surely a set of trees can only respond to local temperature variations, and not offer a measure of the whole Northern Hemisphere? I presume though, this is a harmless error. The most important question must be, are they being selective about the records they put forward as being ones which support their argument?

Too many questions and not enough answers! These sorts of studies should all have a clear protocol laid down in advance of the work. Deviation from such a protocol should be permissible but frowned upon and closely scrutinised for bias. Otherwise how can we trust the data?

Spence, NH is Northern Hemisphere. Don’t assume at all that there is a “harmless error” here. Many Hockey Team studies assume that trees can somehow inhale NH “climate fields” without intermediation by local temperature. For example, in our Nature correspondence, we pointed out that the bristlecones had little to no correlation to local or regional temperature. Mann’s response was that they never said that proxies had to be related to local temperature – only to some “climate field”, i.e. a temperature PC series. Obviously this lends itself to data mining of the most egregious kind. Jacoby’s involved in the same stuff. If something has a growth spurt, then they interpret that as inhaling a NH temperature signal, regardless of local temperature activity.

So any error here is not a slight mis-speaking by D’Arrigo, but going right to the heart of their methods. Steve

Tropics? Holey Moley, fair enough losing the original plot then. And yes, if the sample was large enough to minimise variation, it could be considered adequate, but that still raises the problem of replication, and in the case here, of a living, changeable object such as a forest. Such difficulties are not encountered in mining, of course, and I suspect we have another problem of specifying protocols for documenting data on “living” objects.

Re #10 – Just thought of it, but if biological evolution is vectored, i.e., with the passing of time life changes in light of the past and cannot revert, then the idea that not being able to replicate data is not a problem actually becomes a serious one, since statistically one is dealing with a changing population, which statistical theory has not encountered.

Well, I can’t say I’ve read tons about balloon measurements, but they would certainly have a lot fewer areas which would be doubtful than satellites. If you have a good reference concerning temperature measurements by balloon, I’d be interested in seeing it. I’d suspect that their exact map location can be determined and I believe the altitude is determined barometrically. And surely the temperature measurements should be about as accurate as those from surface stations.

And when you say ‘early’, just how early do you mean? I wouldn’t think we’d need to be concerned with ones before the 1970’s since that’s when the first satellite MSU measurements were made.

That brings to mind something from way back when I was an undergraduate looking at forest remnants in Scotland. The theory was that when forests are logged, the straightest trees are taken in preference to the rest, and that this could cause a genetic shift in the population over several hundred years. I am not sure if this was ever actually shown, never mind whether it influenced growth responses to temperature.

Re #13 They seem to be attempting to use ‘local’ temperatures in some way, now: http://www.cru.uea.ac.uk:80/~timo/p/a/osborn_summertemppatt_submit2gpc.pdf
How big or small a part these play in the end result, I’m not sure; the external ‘low frequency’ component seems to have made a big impact on the North Western North American series at least. Was this derived using gridded temperatures?

You say that there is no attempt by many of the Hockey Team to relate growth changes to local temperature, but just to some theoretic NH climate. If that is true (I would be amazed), how do they calibrate the tree rings? They must have some way of calibrating each species’ response to temperature (using the local instrument record?), otherwise all you can say is that year X was warmer than year Y because the rings are wider. Indeed, you may not even be able to say that, because I doubt the response is linear.

Paul, I’m not trying to over-generalize. Sometimes there is calibration against local temperature or at least gridcell temperature – e.g. the Tornetrask or Polar Urals calibrations. However, this rule is not strictly applied if there is a series with a big hockey stick bend. For example, the bristlecones do not calibrate to gridcell temperature. Or the 1982 Gaspe cedar series does not calibrate to gridcell temperature.

Mann’s calibrations are to “temperature principal component” series rather than local temperature. So for example, for the Stahle Southwestern US/Mexico network, he uses 9 PC series in the AD1750 reconstruction, which he regresses against 11 temperature PCs. The regression period is only 79 years. So this regression is going to turn up a lot of high spurious correlations – as soon as you think about it, it is impossible to believe that there is a stable relationship between the PC7 of the Stahle/SWM network and the PC11 of world gridcell temperature, no matter what the correlations say.
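A back-of-envelope simulation makes the point. This is my own sketch, not any code from MBH98: the AR(1) persistence parameter is an illustrative assumption; only the dimensions (9 proxy PCs, 11 temperature PCs, a 79-year period) come from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 79                   # length of the calibration period in years
n_proxy, n_temp = 9, 11  # proxy PCs and temperature PCs

def ar1(n, rho, rng):
    """Persistent AR(1) noise, standing in for a PC series with no
    true relationship to anything else."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

proxies = [ar1(n, 0.8, rng) for _ in range(n_proxy)]
temps = [ar1(n, 0.8, rng) for _ in range(n_temp)]

# For each proxy PC, the best |correlation| it achieves against the
# 11 temperature PCs, purely by chance
best = [max(abs(np.corrcoef(p, t)[0, 1]) for t in temps) for p in proxies]
print([round(b, 2) for b in best])
```

When each of nine unrelated series is allowed to pick its best match among eleven targets over only 79 autocorrelated observations, sizeable "correlations" turn up routinely with no underlying relationship at all.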

By and large their statistical methods are very primitive. Mostly they rely on simple correlations, rather than checking t-statistics that are heteroskedasticity- and autocorrelation-consistent. In this area, econometric techniques are much more advanced than those used by the Hockey Team, so some of their condescension is pretty grating.
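To illustrate the point about autocorrelation-consistent t-statistics, here is a minimal sketch of my own, assuming a textbook Newey-West estimator with Bartlett weights; the series lengths and AR(1) parameters are illustrative, not taken from any Hockey Team study. It compares the naive OLS slope t-statistic with a HAC one for two unrelated but persistent series.

```python
import numpy as np

def newey_west_tstat(y, x, lags=5):
    """Slope t-statistics for OLS of y on x: (naive, Newey-West HAC)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    e = y - X @ b
    # Naive covariance: assumes i.i.d. errors
    V_naive = (e @ e / (n - 2)) * XtX_inv
    # HAC covariance: Bartlett-weighted autocovariances of the score x_t * e_t
    U = X * e[:, None]
    S = U.T @ U / n
    for l in range(1, lags + 1):
        G = U[l:].T @ U[:-l] / n
        S += (1 - l / (lags + 1)) * (G + G.T)
    V_hac = n * XtX_inv @ S @ XtX_inv
    return b[1] / np.sqrt(V_naive[1, 1]), b[1] / np.sqrt(V_hac[1, 1])

def ar1(n, rho, rng):
    """AR(1) noise, standing in for a persistent proxy or temperature series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

rng = np.random.default_rng(1)
y, x = ar1(79, 0.9, rng), ar1(79, 0.9, rng)  # two unrelated persistent series
t_naive, t_hac = newey_west_tstat(y, x)
print(f"naive t = {t_naive:.2f}, HAC t = {t_hac:.2f}")
```

With persistent series the HAC standard error is usually larger, so the naive t-statistic overstates significance; that is exactly the failure mode a correlation-only analysis never catches.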

As I mentioned before, in our Nature correspondence, Mann argued that MBH98 did not assume that proxies have a linear relationship to temperature as follows:

“To make matters worse, they attempt to do so based on an incorrect description of the assumptions behind the MBH98 methodology. They claim that the method requires that predictors have “a linear relationship to temperature”. The method only assumes that the signal in the predictor (not the predictor itself, which contains both signal and noise), varies linearly with some large-scale pattern of temperature, not with local temperature itself (see top right column of page 780 of MBH98, paragraph beginning with “Implicit in our approach”). For example, a coral indicator in the western tropical Pacific which records precipitation influences due to the El Niño/Southern Oscillation is a suitable proxy for ENSO-related sea surface temperature patterns. This issue is discussed both in MBH98 and numerous follow-up articles by the authors. The demonstration by MBH98 of Gaussian calibration residuals indicates that the linearity assumption of the MBH98 reconstruction is not violated.”

So there’s not much statistical analysis. I’ve been trying to get the residuals for the 15th century reconstruction step for over 18 months, without any success. The “demonstration” method is remarkably simplistic.
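To see why Gaussian calibration residuals are a weak check of linearity, consider a sketch of my own construction (not anything from MBH98): a nonlinear signal buried in Gaussian noise yields residuals that look Gaussian by casual moment checks, even though the linear model has missed the structure entirely.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x = rng.standard_normal(n)
# True relationship is quadratic, but the signal is small relative to the noise
y = 0.2 * x**2 + rng.standard_normal(n)

# Fit the (wrong) linear model y = a + b*x
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

# Residuals pass a casual normality check (skewness and excess kurtosis near 0)...
z = (e - e.mean()) / e.std()
skew, kurt = (z**3).mean(), (z**4).mean() - 3
# ...yet they still carry the structure the linear model missed
missed = np.corrcoef(e, x**2)[0, 1]
print(f"skew={skew:.2f}, excess kurtosis={kurt:.2f}, corr(e, x^2)={missed:.2f}")
```

Near-Gaussian residuals here coexist with a clearly nonlinear relationship, so "the residuals are Gaussian" by itself demonstrates very little about the linearity assumption.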

The method only assumes that the signal in the predictor (not the predictor itself, which contains both signal and noise), varies linearly with some large-scale pattern of temperature, not with local temperature itself (see top right column of page 780 of MBH98, paragraph beginning with “Implicit in our approach”).

Does someone really have to have a PhD to know this is nonsense? The method relies not on science but on magic. What on earth gives the idea that trees do not respond linearly to temperature but do respond to some “large-scale pattern of temperature”?

elsewhere we’re being urged to concentrate on the science not personalities, yet here you describe M, B, and H as ‘nonsense’ and ‘magic’. No reasons, no reasoning. Just a typically acerbic, arrogant dismissal of real and intelligent people. Noted….

I note that you spend absolutely no time discussing the science or the plausibility of what Mann wrote (which makes no sense). I note that you do spend lots of time making ludicrous aspersions every time I show even a slight lack of respect for the work of people you clearly idolize. Mann, Bradley and Hughes are not gods, whose words are spoken from on high, but ordinary scientists promoted by a political process that is intolerant of criticism and apparently immune to the normal laws of logic.

I did not describe M, B, and H as “nonsense” and “magic”; I noted that Mann’s explanation of how trees can somehow not be sensitive to temperature but mysteriously can be sensitive to a “large-scale pattern of temperature” makes no scientific or rational sense whatsoever. It is not arrogant or dismissive to make such a statement. To say that Mann’s explanation makes no sense is not simply my view, but also Steve McIntyre’s (who referred to it as “magical”) and that of not a few scientists with degrees in the relevant disciplines.

It is typical of your hit and run tactics to continually snipe with these ludicrous charges and highly personalized attacks. Perhaps you should grow up a little and actually think about what is being presented.

A. Several people here are reiterating my point that you can do another independent survey with new trees. You can follow whatever guides are practical in terms of location to avoid confounding effects (in the botany). And heck, if the original work is valid, it ought to be insensitive to picking a stand of trees a bit away. If it is sensitive, then that shows you need to sample more trees, more widely. The whole thing requires backing away from “catching others in mistakes or bias” toward “determining what the right answer is” as an objective. Heck, doing that might actually prompt the disclosure of the withheld information. No promise, but it could happen; after all, you are bringing new info to the party, which may hasten a desire for more involved comparison.

B. You could also do this study as a model for how you think such studies should be done (in terms of describing locations, methods, etc.). Surely after all your criticism, you should have some opinions on how to do this right. (It might also be nice to see you cite some “model papers” that do this type of job of recording things in the experimental literature.)

C. Regarding correspondence to local temperature, I agree that you need at least foundational studies to show that there is even a proxy effect! And such studies ought to give the relationship: linear or not, nonmonotonic, slope, etc. In general, I’m extremely distrustful of, and scoff at, the idea of “reflecting the global climate field”, since the whole basis of the study is to build up local observations into a global picture. However, I do concede that there might be some “teleconnections” via monsoon rain or what have you. But each individual one needs to be proved in a foundational study (really no different in concept from an ordinary tree-ring calibration).
