6. An expanded network of long treeline chronologies from the Western US
Brown, P.M.; Woodhouse, C.A. & Hughes, M.K.

We report on new, high-elevation, temperature-sensitive tree-ring chronologies from sites in the central and northern Rocky Mountains in Colorado, Utah, Wyoming, Idaho, and Montana, and from the Cascade Range in Washington and Oregon. This new collection of long chronologies, several approaching or exceeding 1000 years in length, fills in a spatial gap between long, temperature-sensitive chronologies from the Great Basin and Sierra Nevada to the southwest and from the Canadian Rockies to the north. The goal of this expanded network is to provide a more complete picture of past temperature variability in western North America over the past several centuries through analysis of ring width, ring density, and cell size chronologies from Larix lyallii, Picea engelmannii, Pinus albicaulis, and P. flexilis.

Response functions for new chronologies developed so far have been investigated using seasonal, monthly, and 5-day average temperatures from stations throughout the region. Preliminary results indicate often strong growth responses to growing season temperatures. A comparison of new and existing chronologies suggests periods of spatial and temporal synchrony as well as asynchrony, both at high and low frequencies, across the region. Late-20th century patterns in several of the chronologies are very asynchronous, with some sites showing unprecedented growth that is possibly indicative of anomalous warming and others showing growth declines that may be related to increases in spring snowpack. Calibration of growth-climate relations using 20th century instrumental data is confounded by changes in relationships during the last few decades.

Could this be the long-sought dendroclimatological evidence showing that ring widths respond in a positive manner to recent warmth?

Connie Woodhouse has written some interesting articles on precipitation reconstructions, but I hadn’t noticed any articles by her on temperature reconstructions. She’s archived many chronologies at WDCP/ITRDB, but virtually all of them seem to pertain to precipitation – see this listing.

I’ve met her briefly at a conference. I emailed her at NOAA inquiring about whether there had been any publication or archiving of temperature-sensitive chronologies from the Americas Treeline project – after all, the conference report was 7 years ago. She’d moved from NOAA to the University of Arizona, but replied very promptly and cordially.

She said that they’d not “gotten around” to publishing the temperature-sensitive sites – something that seemed very surprising given that temperature change has obviously been a big issue. She identified the temperature-sensitive sites as the Engelmann spruce (PCEN) and alpine larch (LALY) sites archived under her name at ITRDB, noting that one of them had been removed from the archive because one of the other parties (either Hughes or Peter Brown) was working on a publication.

From this information, I was able to identify two sites in Colorado, one in Idaho, two in Montana (one of which was withdrawn) and one in Wyoming. I’ve inquired about what happened with the sites in Utah, Oregon and Washington mentioned in the 2000 conference abstract. I collated the 5 available archived site chronologies and calculated an average, shown in the Figure below. Readers will undoubtedly notice that ring widths in the 1990s have declined, with values in 1993 and 1999 being among the lowest in the last 700 years – a result that I think the authors should have brought to the attention of people interested in temperature proxies.

Figure 1. Average of 5 Woodhouse chronologies.
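For readers who want to reproduce this kind of collation, a minimal sketch in Python of the averaging step. The site data here are toy stand-ins, not the actual Woodhouse chronologies, and the approach (average whichever sites have a value in a given year) is just the simple average described above:

```python
# Minimal sketch of the collation step: align site chronologies by year
# and average whatever sites have a value in each year. The site data
# below are invented stand-ins for the archived chronologies.
from collections import defaultdict

def average_chronologies(sites):
    """sites: list of dicts mapping year -> ring-width index."""
    totals, counts = defaultdict(float), defaultdict(int)
    for site in sites:
        for year, value in site.items():
            totals[year] += value
            counts[year] += 1
    return {year: totals[year] / counts[year] for year in sorted(totals)}

# Two toy chronologies with partial overlap
site_a = {1990: 1.10, 1991: 0.90, 1992: 1.00}
site_b = {1991: 0.70, 1992: 1.20, 1993: 0.40}
avg = average_chronologies([site_a, site_b])
# 1991-1992 average the two sites; 1990 and 1993 fall back to one site
```

Note that in the non-overlap years the "average" is just the single available site, which is one reason the early portions of such averages are noisier than the well-replicated portions.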

Where does that leave us in terms of archived temperature-sensitive proxies from the IAI program? These 5 sites obviously do not show a linear positive response of the site chronologies to warmer temperature in the 1990s; the archived Mexican proxies are precipitation proxies; no South American data has been archived from the program; as far as I can tell, no Canadian measurement data has been archived (although Luckman and Wilson archived a chronology for Jasper/Athabaska/Icefields); Jacoby and D’Arrigo have archived a portion of their data (but they were funded through a separate program). So there’s virtually nothing to show in the archives for temperature-sensitive proxies, and the situation for all chronologies is not much better.

One of the year 2000 co-presenters, Peter Brown, recently commented on CA here saying to Rob Wilson:

You and I talked this in Beijing; while I admire your tenacity in trying to talk sense to those folks, sense appears to me to rarely be able to rise above the muck. I have to admit I am a lurker on CA, and while some of those that comment there appear to be quite sensible (in fact often have some good points) most are in the realm of far-out absolute deniers of anything that smacks of global warming…

He went on to say:

For example, this series of recent posts on your recent Climate Dynamics paper; for goodness sake, they’re berating you for not immediately posting data that hasn’t even yet been defended by the student that gathered it!

As to this last comment, if it’s important to the integrity of the data that the student “defend” it, shouldn’t that be done before it’s published? Just a thought. I checked Peter Brown’s own archiving record and, other than being named on 27 sites archived by Connie Woodhouse in the last few years, I am unable to find any other chronologies by Peter Brown at the ITRDB, although, as pointed out below, he has archived much fire-scar data to ITRDB (this being his principal area of work).

Update:
Since Connie Woodhouse identified larch and Engelmann spruce as species that are appropriate for temperature reconstruction, I collated and took a simple average of all larch and all Engelmann spruce chronologies in the ITRDB, as shown below. I am not saying that all larch or all Engelmann spruce chronologies are temperature proxies. At this particular point, I am unable to decode any ex ante dendroclimatological criterion for a site being a temperature proxy. I am simply showing what the average looks like IF these sites are temperature proxies. I’ll also show a map. (Rob Wilson has done larch and Engelmann spruce chronologies, but they aren’t in the ITRDB archive.) Figure 1 below goes to 1998.
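The species-based selection step can be sketched as follows. The species codes (PCEN for Engelmann spruce, LALY for alpine larch) are real ITRDB conventions, but the site records below are invented for illustration:

```python
# Hypothetical sketch of selecting ITRDB sites by species code before
# averaging: PCEN = Engelmann spruce, LALY = alpine larch. The records
# are invented stand-ins for the real ITRDB site listing.
sites = [
    {"id": "co-a", "species": "PCEN", "elev_m": 3100},
    {"id": "wa-b", "species": "LALY", "elev_m": 1900},
    {"id": "nm-c", "species": "PIED", "elev_m": 2100},
]

def select_by_species(records, codes):
    """Return the ids of sites whose species code is in `codes`."""
    return [r["id"] for r in records if r["species"] in codes]

chosen = select_by_species(sites, {"PCEN", "LALY"})
# chosen == ["co-a", "wa-b"]; the pinyon (PIED) site is excluded
```

This is, of course, exactly the kind of ex post selection rule being questioned here: species membership alone is the only criterion applied, with no independent test that a given site is actually temperature-sensitive.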

There’s an interesting back-story with the Washington larch chronologies. Many of these chronologies were listed in the original MBH98 SI as being used. However, it turned out that Mann didn’t use all the chronologies that he said he used and the high-altitude Washington larch chronologies were among the sites that were listed in the Corrigendum as not being used. The reason for them not being used remains unclear. The reason given in the Corrigendum is untrue. Nature did not have the Corrigendum peer-reviewed. I was sent a copy of the Corrigendum prior to publication and notified Nature that the reason offered by MBH for the discrepancy between series said to have been used and series actually used was false, but Nature didn’t care whether it was false or not and proceeded to print the Corrigendum without peer review or, as far as I can tell, even double-checking with Mann about the incorrectness of the proffered explanation.

Figure 1. Average of 14 ITRDB larch site chronologies, primarily from Washington.

Here is a very general location map of the larch sites. The larch site in Saskatchewan is unlikely to be a temperature proxy, but the larch sites in Churchill and northern Quebec were used by Jacoby and D’Arrigo as temperature proxies. The Washington larch sites are at high altitude and certainly plausible sites for temperature proxies.

Next, here is a similar average for 48 Engelmann spruce sites. Again, we don’t see a strong positive response of ring widths in the 1990s.
Figure 3. Average of 48 archived Engelmann spruce chronologies.

The map below shows the location of the 48 ITRDB Engelmann spruce sites, with sites below 2000 m colored blue and the 19 sites above 3000 m colored red. The 4 Woodhouse sites were between 2800 and 3200 m. Eight of these sites were sampled by Schweingruber and were viewed as temperature-sensitive sites from the outset.

Figure 4. Location Map of Engelmann spruce sites.

As a final experiment, here is an average of the 19 Engelmann spruce sites from above 3000 m (see list below). Eight of these sites were sampled by Schweingruber. The 1993 value was the lowest in nearly 600 years.
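The two steps behind this final experiment can be sketched in a few lines; elevations and index values below are invented for illustration, not the actual site data:

```python
# Sketch of the final experiment: (1) keep only sites above 3000 m,
# (2) find the lowest value in the averaged series. Elevations and
# ring-width index values are invented for illustration.
sites = [
    {"id": "s1", "elev_m": 3200},
    {"id": "s2", "elev_m": 2700},
    {"id": "s3", "elev_m": 3050},
]
high_sites = [s["id"] for s in sites if s["elev_m"] > 3000]

# Averaged ring-width index by year (toy values)
series = {1990: 0.95, 1991: 0.88, 1992: 0.91, 1993: 0.55}
lowest_year = min(series, key=series.get)
# high_sites == ["s1", "s3"]; lowest_year == 1993
```

The elevation cutoff is at least an ex ante physical criterion (proximity to treeline), unlike selection by species code alone.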


Response functions for new chronologies developed so far have been investigated using seasonal, monthly, and 5-day average temperatures from stations throughout the region. Preliminary results indicate often strong growth responses to growing season temperatures. A comparison of new and existing chronologies suggests periods of spatial and temporal synchrony as well as asynchrony, both at high and low frequencies, across the region. Late-20th century patterns in several of the chronologies are very asynchronous, with some sites showing unprecedented growth that is possibly indicative of anomalous warming and others showing growth declines that may be related to increases in spring snowpack. Calibration of growth-climate relations using 20th century instrumental data is confounded by changes in relationships during the last few decades.

Translation:

“The tree-ring widths correlate very well with temperature except when they don’t. We thought we had a linear response, except the rings don’t reflect recent warming. We hypothesize about snowpack (precipitation) but we really don’t have a clue.”

John A:
You beat me to it: I noticed the same very elliptical comment. The conference Steve referenced had 35 papers. I quickly skimmed the abstracts and I think there is a surprising absence of unqualified support for tree rings as temperature proxies across a global geography, different topographies and many species. I am going to try to score them on the level of support for a clear temperature proxy. It would probably help if others did the same thing: kind of a mini meta-analysis where we could establish some inter-rater consistency.

…they’re berating you for not immediately posting data that hasn’t even yet been defended by the student that gathered it!

I would expect that the data would be archived at the time of publication, not after the person has defended his dissertation, which could be several years away. If the data was only published in the dissertation, then I would agree that waiting until it has been defended is appropriate.

As an amateur, can I observe that IF the relationship between tree ring thickness and temperature is inverse quadratic, then the declining signal in recent times could be due to rising temperatures stressing the trees!!

Of course, while that point may be seized upon by the dendrochronologists to support their position, it seems to throw a bit of a spanner in the works for interpreting past tree ring records. How do you separate thin rings that are due to too much cool from those that are showing too much warmth?
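The commenter’s point can be made concrete with a toy inverted-parabola growth model (all parameter values invented for illustration): two temperatures symmetric about the optimum produce identical ring widths, so a narrow ring by itself cannot distinguish “too cold” from “too warm”.

```python
# Toy inverted-parabola growth response (parameters invented for
# illustration): ring width peaks at an optimum temperature t_opt and
# falls off symmetrically on either side.
def ring_width(temp_c, t_opt=10.0, peak=2.0, k=0.02):
    return peak - k * (temp_c - t_opt) ** 2

cold, hot = 5.0, 15.0  # symmetric about the 10 C optimum
# ring_width(cold) == ring_width(hot): the narrow ring is ambiguous
```

Under such a response, inverting ring width to temperature is not a function at all: every width below the peak maps to two candidate temperatures, and picking the cold branch by assumption is exactly the linearity assumption being questioned.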

“How do you separate thin rings that are due to too much cool from those that are showing too much warmth?”

That is pretty much the point I have been trying to make to people in my private discussions in real life (as opposed to “internet life”). Tree rings are an indication of how good a growing season was. There are many factors that could create a good growing season, including an elk that happened to die nearby. Changes in tree ring widths are nothing more than indications of deviations in growth conditions: greater or less than average, with the extent showing the deviation from optimal. All one can hope to glean from it is to find the widest ring, consider that year to be the optimal growth year in the life of that tree, and label all other years a delta from optimal. You can’t deduce the reason for that deviation. And I would doubt that all the trees show the same year as the optimal year even if they are relatively close in geographic location.

Quite right, Trevor, as we’ve been stating here for quite some time now.

Of course the dendroclimatologists would want you to accept that they can tell which sites can only be on the rising side of the curve at present, but they really don’t have an answer to how to tell past situations from present ones.

What the hey, I’ll bite since my name has come up in your discussion. First, on archiving, most of my current research is involved in fire history reconstructions, not dendroclimatology. Check out archived data sets on the International Multiproxy Paleofire Database (http://www.ncdc.noaa.gov/paleo/impd/paleofire.html). I archive my data sets as soon as they are published; however, I do not dismiss others who do not. Each has his or her own reasons for when and, in fact, if they do so. As far as ecological or climatological data sets go, as a discipline dendrochronology was long ago ahead of the curve with the ITRDB. Second, as for our chronologies that we collected several years ago to look at temperature response in the northern Rockies, most are from 5-needled pine spp. (whitebark and limber) and, to be quite honest, do have a mixed temperature and precipitation response in them. Others are from Engelmann spruce and show the divergence problem seen in upper latitude sites. We have both ring width and density chronologies, and we are still exploring ways that we can refine the climate signal in these data – from mechanistic, statistical, and simulation modeling approaches. There is no hidden agenda.

Finally: I have little patience for your blog. You only picked out the above quote of mine from the ITRDB thread of the other day, but here is the part right before that:

I have to admit I am a lurker on CA, and while some of those that comment there appear to be quite sensible (in fact often have some good points) most are in the realm of far-out absolute deniers of anything that smacks of global warming. It’s completely analogous to Henri’s description of dealing with the creationists; they’ll seize upon the slightest uncertainty to tear down the entire discipline.

Ad hominem attacks invariably arise for anyone who attempts to offer an alternative view to the preconceived notion of the moment. Typically a thread here quickly devolves from anything remotely connected to science into a personal attack. And that is no way to try to carry on a conversation (if that is the goal).

we are still exploring ways that we can refine the climate signal in these data – from mechanistic, statistical, and simulation modeling approaches. There is no hidden agenda.

well, I am downright suspicious of scientists that are cagey about their data and methods; there are many examples, and the slow striptease of the methods and data in MBH’98 was a particularly unedifying spectacle.

In (astro/particle)physics, there are innumerable projects where data is collected, but put in a public archive rapidly. I think that has great benefits.

In epidemiology, there are terrible problems with publication bias; you get twenty investigators doing a study, and only one gets a positive result at P

Peter, thank you for your comments. I was interested in your statement that:

I archive my data sets as soon as they are published; however, I do not dismiss others who do not. Each has his or her own reasons for when and, in fact, if they do so.

Since a study cannot be replicated, verified, or tested if the data is not available, and since most journals require such archiving as a condition of publishing (although they don’t always follow their own policy), and since the NSF requires archiving for work done under their aegis, I am curious what you would consider a valid reason for not archiving data used in a published paper.

Also, the part of your statement that Steve didn’t quote, viz:

I have to admit I am a lurker on CA, and while some of those that comment there appear to be quite sensible (in fact often have some good points) most are in the realm of far-out absolute deniers of anything that smacks of global warming. It’s completely analogous to Henri’s description of dealing with the creationists; they’ll seize upon the slightest uncertainty to tear down the entire discipline.

is itself an ad hominem attack. Comparing “most” posters here with creationists says nothing about whether they are right or not, it is a transparent attempt at guilt by association … Complaining that Steve didn’t quote your ad hominem attack in a post decrying ad hominem attacks seems like a curious tactic to me.

Next, the questions raised about dendroclimatology are not “the slightest uncertainty”. They are basic, fundamental questions like:

1) Are the mathematical methods used in dendroclimatological reconstructions valid? In particular, the use of novel, untested statistical methods without any attempt to mathematically justify their validity is worrisome. See, for example, this thread on variance adjustment.

2) Are proxy reconstructions without ex ante proxy selection rules valid? And if so, what protection is there against “data snooping” and “cherry picking”?

3) Many proxy temperature reconstructions use proxies which have been used previously for proxy precipitation reconstructions. What methods (if any) have been used to control for the other variables? If there is no attempt made to control for confounding variables, is the study valid?

4) Temperatures which are too hot or too cold both cause narrow tree rings. Under the assumption of linearity in proxy reconstructions, this means that hot times will be interpreted as cold times. Why is this ignored in temperature reconstructions? At a minimum, it should make for asymmetrical error estimates, but I have never seen even that done, much less any serious discussion of the inherent problems.

5) Are proxy reconstructions which do not have a validation period, but only a calibration period, scientifically defensible?

6) How can proxies which do not correlate with local temperatures be used as proxies for global temperatures?

7) Fritts divides sites into “complacent” and “sensitive”, depending on whether they respond to a given signal (temperature, precipitation, etc.). However, even within “sensitive” sites, there are trees which respond and trees which don’t, and no one seems to know why. This raises questions, including:

a) Does a site remain “complacent” or “sensitive” over a period of centuries, or can it be sensitive for a while and then become complacent?

b) If a site is “sensitive” to temperature due to it being close to the treeline, will it stay “sensitive” when the treeline changes? And if so, will the “sensitivity” change?

c) If a site is “sensitive” at a given average temperature, will it remain “sensitive” if the average temperature rises? If it falls?

8) Rob Wilson’s study says “Cook et al. (1995) state that using ‘traditional’ individual series detrending methods, as done in this study, the lowest frequency of climate information that can be realistically recovered is 3/n cycles per year (where n = the mean sample length).” This means that the lowest recoverable frequency from most dendro studies will correspond to a period on the order of a hundred years. Why do so many studies, including Rob Wilson’s, report much shorter term fluctuations? Shouldn’t all the results be filtered with a hundred-year filter before being reported, if the higher frequency information is known to be noise?
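The arithmetic behind the Cook et al. limit is simple; a minimal sketch (the 300-year and 600-year figures are just examples, not taken from any particular study):

```python
# Cook et al.'s segment-length point in numbers: if the lowest
# recoverable frequency is 3/n cycles per year (n = mean sample length
# in years), the longest recoverable period is n/3 years.
def longest_recoverable_period(mean_sample_length_years):
    return mean_sample_length_years / 3.0

# Cores averaging 300 years resolve periods up to ~100 years;
# anything slower is removed along with the biological growth trend.
period = longest_recoverable_period(300)
# period == 100.0
```

In other words, variability slower than about n/3 years is indistinguishable from, and removed with, the fitted growth curve, which is why individually detrended chronologies cannot speak to multi-century trends unless the cores are very long.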

Perhaps there are excellent, valid answers for all of these questions. However, the questions have been raised various times on this site, and no one has stepped forward to provide answers.

Finally, despite all of the claims of ad hominem attacks on people who post here, I find that the overwhelming majority of comments on this site are about valid scientific questions, such as the questions above. Saying that Rob Wilson has not archived his data is not an ad hominem attack, it is a comment about the violation of scientific norms as well as the stated policy of the journal in which he published. As the person who raised the issue, I can assure you that I was not saying that his conclusions are wrong because of his actions, that would be an ad hominem attack.

I was saying, however, that without the data we can’t determine if his conclusions are wrong, and thus they are currently just anecdotal claims, not valid scientific conclusions, and as such have no business being published in a scientific journal.

Again, thank you for your contribution, and I look forward to the possibility that some of the questions I listed above can be put to bed.

As to this last comment, if it’s important to the integrity of the data that the student “defend” it, shouldn’t that be done before it’s published?

Are you misunderstanding the concept of a thesis defense here?

Peter Brown, your remark about the integrity of the postings on this site is, well, an ad-hominem. Also, not all personal attacks are ad-hominems, but I’m sure you know that. It only becomes an ad-hominem when the attack is used to cast doubt on the person’s argument. I’m curious, btw, if you chastise your colleagues every time they lob insults, and ad-hominems, towards Steve and/or Ross, and anyone in here? Probably not. That said, calling you a hypocrite is not an ad-hominem, and not really an insult, either. It’s just an observation.

Typically a thread here quickly devolves from anything remotely connected to science into a personal attack.

I disagree. While this is true of some threads, it is not generally true of the “professional” threads. Rob Wilson, Judith Curry, Isaac Held, even Martin Juckes – all have been treated respectfully by commenters seeking to learn.

#1, John A. You did not miss a single thing. The dendroclimatologists have now picked enough cherries to make a dandy cherry pie. It is time for them to admit that they cannot reliably determine temperature from tree rings. Tree ring studies are important and are useful for many things, especially drought studies, but they are NOT good proxies for temperature, as these guys are slowly, but surely, admitting. And I am not a total amateur on this point, although I am certainly not a dendroclimatologist.

I am growing very tired of the prevarications thrown out to justify withholding the data used in support of a study’s conclusions. The field of Climate Science is increasingly taking on more of a resemblance to a religion class than science. We are asked not to challenge but to accept things on the basis of faith and trust that everything has been done properly. If that is the case then why not openly display the basis used for making your arguments?

Why is this even a subject of question? I saw a thread on “The Weathermaker’s” site by our old friend Dano justifying attacks on Steve and the withholding of data because Steve has been known to be only interested in “trashing” a poor scientist’s work. How can the scientific method of trying to falsify a study’s conclusions be called “trashing”? How can you trash something that is scientifically sound? If criticism fails, then its object becomes stronger.

This is all becoming stranger and stranger, more befitting of a Tom Clancy or Michael Crichton novel. What do they have to hide?

As to this last comment, if it’s important to the integrity of the data that the student “defend” it, shouldn’t that be done before it’s published?

Are you misunderstanding the concept of a thesis defense here?

Granted: there are probably different strokes for different folks, but in many fields a graduate student will publish various papers over several years. Those papers are often combined in some form into the thesis. However, the process of the defense is not at all meaningful to whether the data is archived or correct. It has, after all, already been subject to review by the journals in which it was published.

The purpose of the thesis and the defense is to demonstrate that you’ve done enough original work – and did so with sufficient understanding – to justify the doctorate. I’ve yet to attend a defense where the process was more than a formality.

Consequently, I cannot see how a graduate student not yet having done the defense should be grounds on which to defend a failure to follow through with the archival requirements of journals in which the work was accepted for publication. Yes, they don’t have their PhD yet, but if this were really an issue, their adviser must have abdicated his supervisory role as well as his obligations as coauthor (as he frequently is).

I archive my data sets as soon as they are published; however, I do not dismiss others who do not. Each has his or her own reasons for when and, in fact, if they do so. As far as ecological or climatological data sets go, as a discipline dendrochronology was long ago ahead of the curve with the ITRDB.

Organizations are most productive and most respected when they are successfully self-policed. Had MBH been fully expected to proactively archive data by their dendro peers, a great deal of dendro credibility would have been saved. Perhaps, Peter, you and your peers should reconsider your professional expectations of one another.

Nice to have an expert on these issues post a comment. I have a few questions though. You state in #8

I archive my data sets as soon as they are published; however, I do not dismiss others who do not. Each has his or her own reasons for when and, in fact, if they do so.

I think you’ve missed part of the point. If the data has been acquired under a federally funded grant (NSF, NASA, etc.), it isn’t your data.

Also, I’m unaware of any good reasons why data wouldn’t be archived. Can you suggest some? Further, can you explain why dendroclimatology should be indulged while other scientific fields are held to a higher standard? Why, for example, is it okay for dendroclimatologists to ignore stated NSF archiving policies? JGR archiving policies? I’m really curious what makes dendroclimatology special.

Second, as for our chronologies that we collected several years ago to look at temperature response in the nothern Rockies most are from 5-needled pine spp (whitebark and limber) and to be quite honest do have a mixed temperature and precipitation response in them. Others are from Engelmann spruce and show the divergence problem seen in upper latitude sites. We have both ring width and density chronologies, and we are still exploring ways that we can refine the climate signal in these data – from mechanistic, statistical, and simulation modeling approaches. There is no hidden agenda.

They have had the chronologies for years without being able to come up with anything useful. But, rather than archive them and let someone with possibly a different perspective on analysis have a whack at them, they’ll sit on them.

What are these people afraid of? Looking foolish if someone with a fresh outlook makes good use of them? It’s been years. If they can’t find something useful, the chronologies should still be archived along with the evidently flawed selection method used, so that others don’t waste time and more taxpayer money attempting the same method in the future. At least some good would come of it. In addition, who can know there won’t be a use for them at some future point in time for some sort of analysis not yet considered? Many in the profession seem to act more like pack-rats than scientists.

I archive my data sets as soon as they are published; however, I do not dismiss others who do not.

Happy to hear you follow good scientific practice and what’s routinely established for published works by the journals and the grant awarding organizations. However, why don’t you dismiss those who don’t follow standard practice? Don’t you care or is it that you’re afraid of pointing out these deficiencies to your fellow scientists?

…as for our chronologies that we collected several years ago to look at temperature response in the northern Rockies, most are from 5-needled pine spp. (whitebark and limber) and, to be quite honest, do have a mixed temperature and precipitation response in them.

Really? How do you know there’s such a response? What methods did you use to determine that? How did you separate out the other confounders? Nothing new with my queries; they’ve been asked many times by many people, here and elsewhere, but have received no cogent answers.

(Composed prior to reading the other responses, but submitted even if it’s redundant and not as erudite as the others.)

It only becomes an ad-hominem when the attack is used to cast doubt on the person’s argument.

Thank you for helping me make my case; ad hominem is when one attacks “an opponent’s character, rather than answering his argument” (Webster’s). I made an observation about what I perceive as a general tone of much of the discussion on this blog: you manage to insult me by calling me a hypocrite in one of the very next posts. You have no basis for questioning my dealings with any of my colleagues, yet you resort to calling me names.

Sam:

I am growing very tired of the prevarications thrown out to justify withholding the data used in support of a study’s conclusions.

John Norris:

Organizations are most productive and most respected when they are successfully self-policed. Had MBH been fully expected to proactively archive data by their dendro peers, a great deal of dendro credibility would have been saved.

John Baltutis:

However, why don’t you dismiss those who don’t follow standard practice? Don’t you care or is it that you’re afraid of pointing out these deficiencies to your fellow scientists?

Pete:

I think you’ve missed part of the point. If the data has been acquired under a federally funded grant (NSF, NASA, etc.), it isn’t your data.

Bob Koss:

They have had the chronologies for years without being able to come up with anything useful. But, rather than archive them and let someone with possibly a different perspective on analysis have a whack at them, they’ll sit on them.

Per says:

In (astro/particle)physics, there are innumerable projects where data is collected, but put in a public archive rapidly.

A recurrent theme here: where, oh where, art thou, data? First, it is no one’s job (certainly not mine) to police my peers. We are all individuals with our own reasons for doing what we do. But here on this blog one person starts the conversation with an ad hom calling me “afraid of pointing out deficiencies to your fellow scientists”, when the person who makes the statement has no clue what I have or have not pointed out to peers.

Second, the existence of online databases and supplemental data sets as part of publications is a relatively new development in science and scientific data management. How long have the internets been around, since the early 90s? Many journals that I am familiar with do not, in fact, have any capability at present for managing supplemental data. The ITRDB has been online only since about 1995. Before that, since about the mid-1970s, it was available on floppies but, of course, updating as new data were submitted was a bit problematic.

And Per, please list some of these public archives from astro/particle physics that you appear to know so well. I just Googled “astroparticle physics public data archives” and the third link (the first was a PDF and the second a PS file) was an article from the CERN Courier (2007, Vol 47, No 3) entitled: “Let the data free! Three researchers working in the new field of astroparticle physics argue the case for making the data from astroparticle experiments public”. Sounds to me like they are having some issues with data access as well.

Should there be better data access? Most certainly, and personally I have been and am helping in that regard in my own little corner of the world. But this strawman set up initially by Mr. McIntyre and repeated ad infinitum on this blog is certainly getting old.

Finally, they ARE my data. Funding agencies pay me for my expertise, my imagination, and my insights to be able to make some advance in our understanding of how nature works, not for raw data sets. Our society has made a commitment to supporting science in the form of funding researchers to do what they do best as determined from their education, experience, and academic achievements. Quite often scientific investigations lead to dead ends, science is not done by recipe, and often data are used to address more than one hypothesis. It is the understanding and inferences supplied by the scientist that funding agencies are interested in, not her or his raw data. As for NSF data archival requirements, from what I understand there are no hard and fast rules as to what data are to be archived: for example, in dendroclimatology, raw data in the form of ring widths are not requested or required, only the final reconstruction.

Well, if you’d merely said in your first post: “Steve, I need to set the record straight. I do my best to ask my fellow researchers to archive their data as soon as possible. It’s just done quietly; behind the scenes.” Then I’m sure Steve would have been happy to congratulate you and change his post. But instead you said:

I archive my data sets as soon as they are published; however, I do not dismiss others who do not.

which says quite the opposite. That you’ve had quite a number of responses pointing this out and taking you to task for it shouldn’t surprise you.

BTW, I just went back and read Steve’s initial post in this thread, your response and the quote you both include. While you may have answered the question of where your data is stored, I still don’t see your hit on CA as being valid. Yes, there is a bit of mindless anti-AGW here, but there’s no way it comes from the majority of posters here. Personally I’m big enough to take a claim either way from you, but in fact I’m not a “denier” in the sense you appear to be defining it. And neither are Willis, Jean S, Hans, Bender and any number of other regulars. I’d love to see you name names and provide ten people who post regularly here who you believe are “far out absolute deniers”.

The overall purpose of these policy statements is to facilitate full and open access to quality data for global change research. They were prepared in consonance with the goal of the U.S. Global Change Research Program and represent the U.S. Government’s position on the access to global change research data.
The U.S. Global Change Research Program requires an early and continuing commitment to the establishment, maintenance, validation, description, accessibility, and distribution of high-quality, long-term data sets.
1. Full and open sharing of the full suite of global data sets for all global change researchers is a fundamental objective. As data are made available, global change researchers should have full and open access to them without restrictions on research use.
2. Preservation of all data needed for long-term global change research is required. For each and every global change data parameter, there should be at least one explicitly designated archive. Procedures and criteria for setting priorities for data acquisition, retention, and purging should be developed by participating agencies, both nationally and internationally. A clearinghouse process should be established to prevent the purging and loss of important data sets.
3. Data archives must include easily accessible information about the data holdings, including quality assessments, supporting ancillary information, and guidance and aids for locating and obtaining the data.
4. National and international standards should be used to the greatest extent possible for media and for processing and communication of global data sets.
5. Data should be provided at the lowest possible cost to global change researchers in the interest of full and open access to data. This cost should, as a first principle, be no more than the marginal cost of filling a specific user request. Agencies should act to streamline administrative arrangements for exchanging data among researchers.
6. For those programs in which selected principal investigators have initial periods of exclusive data use, data should be made openly available as soon as they become widely useful. In each case the funding agency should explicitly define the duration of any exclusive use period.

Senior policy is very clear that data collected with federal funds should not be the personal property of the collector and that funding agencies should ensure that their contracts and grants implement this policy. You say “it is no one’s job (certainly not mine) to police my peers. We are all individuals with our own reasons for doing what we do.” In fact, it is the job of funding agencies to ensure compliance with data archiving policies. I have been very critical of NSF’s deplorable administration of compliance with data archiving policies, both in the terms of their grants and in enforcing existing language. I’m actually more critical of the co-opted bureaucrats than the non-compliant scientists. If the terms of your grants have left you with the view that the data is your personal property, then your particular granting agencies, if they are federal agencies, have obviously failed to ensure compliance with federal policy.

As to the continuing resonance of this “old” issue, the only reason why data archiving is still an issue here is that the situation remains unresolved. In my opinion, and I asserted this as an IPCC reviewer, IPCC should require authors seeking to be cited to consent to data archiving, regardless of the requirements of their publishing journal. If climate science was merely being chatted about in faculty seminars and there were no policy implications, then lack of data availability would be of little general interest. But that’s not the case.

Re #28
This illustrates one of the problems with the open blog approach to auditing. If everyone is free to contribute without policing, misinformation will propagate at a rate that is impossible to constrain. McIntyre’s opening posts are not often incorrect. (When small errors are pointed out, he fixes them.) The problem lies in the unregulated commentary afterward. CA asks dendroclimatologists to “answer back” on specifics. But the problem is not incorrect specific statements. The problem is overall mistaken impressions.

For example:
1. There seems to be an attitude at CA that tree-ring data are public property because the studies were publicly funded. I am not a lawyer, but, as Peter Brown indicates, I do not think this is true. ITRDB exists only because the community of individuals wants it to exist, not because it is legally mandated to do what it does. ITRDB has no authority.

2. Often it is stated categorically at CA (in the comments, not the posts) that tree rings are not temperature proxies. This is neither true nor false. It’s not a true-or-false question. It’s a matter of degree.

3. The picture of the Qilianshan juniper in the Chinese desert sand is impressive (groan). But is it representative? I suspect that most of the treeline conifers alleged to be temperature proxies are not situated in desert sands. Although there’s nothing about the picture that needs to be corrected, what needs to be corrected is the impression that the average CA reader leaves with: dendroclimatologists are either ignorant or dishonest.

The silence of the dendroclimatologists is not surprising. Why would a specialist put himself on public trial when the end result is going to be a group of non-specialists taking him to the edge of his expertise and showing him to be ignorant in an area where he is not specialized (cloud physics, climate modeling, carbon cycling, global energy policy, etc)? For the expert in question, there is nothing to gain from this exercise, and much to lose. I am not certain, but would be willing to bet the average dendroclimatologist knows little about the inner workings of the GCMs, how CO2 sensitivity is estimated, for example. This does not reflect poorly on dendroclimatologists. It merely reflects the overall fragmented nature of human scientific knowledge.

The internet is bringing these disparate groups together like never before. Collisions are inevitable.

Peter Brown should not be pilloried, but thanked for his ongoing contributions here.

When a site is selected, I would assume that the site contains trees of various ages. Some trees may be 20 years old, some 50 years old, some much older. Do trees of various ages all react the same to the same input? For example, would a tree that is 20 years old react the same to various temperature, precipitation, insect infestation, etc., as a tree that is 50 years old? Or 100 years old? If not, are those differences accounted for when obtaining core samples? Or do they only select trees of approximately the same age when sampling? I would think that even trees of the same species at a site, and even at the same relative age, could show different responses, based on physical location relative to each other. For example, one tree might be on more of a slope where greater run-off occurs, thus limiting moisture availability, whereas another tree might be in a slightly depressed area where moisture pools on the ground, leading to too much moisture. In another example, two trees in relatively close proximity to each other might be fighting over the same moisture pool in the ground, whereas two other trees farther apart would not have that issue.

Are those important issues, and is data recorded with regard to the layout of trees selected at a site (Steve has mentioned this as meta-data), which could be important when trying to discern various signals from a selected site? If so, how might these factors affect various temperature/precipitation reconstructions? Are the various smoothing techniques used to manage these disparities, and are they the appropriate measures to take?
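The age question above is what dendro “standardization” routines are meant to address: each core’s raw widths are divided by a fitted age curve so that young and old trees contribute comparable, dimensionless indices. A minimal sketch, assuming a simple negative-exponential age trend; real packages (e.g. ARSTAN) offer several curve options, and this is illustrative only, not any particular lab’s procedure:

```python
import math
from statistics import mean

def detrend_neg_exp(ring_widths):
    """Fit an exponential age curve w(t) = exp(a + b*t) to one core's raw
    ring widths by log-linear least squares, then divide it out, leaving
    a dimensionless ring-width index. Assumes all widths are positive."""
    t = list(range(len(ring_widths)))
    y = [math.log(w) for w in ring_widths]
    tb, yb = mean(t), mean(y)
    b = sum((ti - tb) * (yi - yb) for ti, yi in zip(t, y)) / \
        sum((ti - tb) ** 2 for ti in t)
    a = yb - b * tb
    return [w / math.exp(a + b * ti) for w, ti in zip(ring_widths, t)]
```

After detrending, a 20-year-old and a 200-year-old tree both contribute indices varying around 1.0, which is what lets a chronology mix trees of different ages. The flip side, often discussed on this blog, is that the choice of curve also removes some low-frequency signal along with the age trend.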

bender, if these people don’t feel that they have to archive the data, despite the collection of that data being publicly funded, fine. Then they are free to not archive the data. However, scientific papers cannot be published based on unarchived data, as replication is a part of the scientific method. That papers are published in journals that call themselves scientific, without enough information to replicate those papers, shows a carelessness on the part of the journals.

On the other hand, in the USA at least, isn’t it a fact that the work of public servants is always in the public domain, except for when it is classified? And doesn’t the “Freedom of Information” act require that such information should be accessible? I suppose it’s possible for data collection to be publicly funded without the collectors themselves being public servants, but morally and scientifically, I don’t see the distinction really. US taxpayers are, in effect, paying for this data; why should they not be entitled to examine it?

1. There seems to be an attitude at CA that tree-ring data are public property because the studies were publicly funded. I am not a lawyer, but, as Peter Brown indicates, I do not think this is true. ITRDB exists only because the community of individuals wants it to exist, not because it is legally mandated to do what it does. ITRDB has no authority.

I agree that ITRDB has no authority. However NSF has lots of authority. U.S. federal policy states that they want data to be archived; NSF should be ensuring compliance with that policy. They don’t, but they should, and it would be very simple and practical for them to do so.

2. Often it is stated categorically at CA (in the comments, not the posts) that tree rings are not temperature proxies. This is neither true nor false. It’s not a true-or-false question. It’s a matter of degree.

The issue for me is a combination of statistics and data. Are the statistical methods used in the studies cited by IPCC adequate to provide an opinion on whether there was a MWP or not? I think not. Does that mean that it is a priori impossible to winkle out information from the impressive collection of tree ring data? No. One of the things that I would have liked to have done is to show bright young stats students the interesting collections of tree ring data, combining a variety of complicated statistical issues, and try to get them to approach these things with fresh spectacles. I don’t think that the methods of inverse regression on PCAs are a very promising line of multivariate treatment.
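For readers unfamiliar with the jargon: in “inverse” calibration, the proxy is regressed on temperature over the instrumental period, and the fitted line is then inverted to turn proxy values into temperature estimates. A minimal single-proxy sketch (the multiproxy methods layer PCA and multivariate regression on top of this; the data and numbers here are hypothetical):

```python
from statistics import mean

def inverse_calibration(proxy, temp):
    """Fit proxy = a + b*temp by least squares over a calibration window,
    then return a function that inverts the line: T_hat = (p - a) / b.
    Degenerate when b is near zero, i.e. the proxy barely responds."""
    pb, tb = mean(proxy), mean(temp)
    b = sum((t - tb) * (p - pb) for t, p in zip(temp, proxy)) / \
        sum((t - tb) ** 2 for t in temp)
    a = pb - b * tb
    return lambda p: (p - a) / b
```

The statistical issues raised in this thread (variance loss with noisy proxies, after-the-fact selection of series, nonlinearity) all enter through assumptions that even this one-line inversion quietly makes.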

3. The picture of the Qilianshan juniper in the Chinese desert sand is impressive (groan). But is it representative? I suspect that most of the treeline conifers alleged to be temperature proxies are not situated in desert sands. Although there’s nothing about the picture that needs to be corrected, what needs to be corrected is the impression that the average CA reader leaves with: dendroclimatologists are either ignorant or dishonest.

I agree that most of the treeline conifers are not located in desert sands. Even if it’s a one-off, there’s something wrong in the chain of command when Qilianshan junipers are used at all in multiproxy studies. My point has been that there has been a lack of due diligence. You get this same lack of due diligence even with the NAS Panel. They say that bristlecones (which occur at the same latitude, altitude and probably precipitation as Qilianshan junipers) should be avoided in temperature reconstructions, but then illustrate studies that use bristlecones/foxtails without checking – even though they had notice of the problem. Jerry North was quoted as saying that they “just winged it”. Is this “ignorant” or “dishonest”? Those words don’t apply, but there’s obviously something wrong.

Re #33
My comments on data ownership were in regard to University professors, not public servants.
Re #34
No substantive argument from me. Yes, NSF has authority. How do you suggest they enforce it? Who will pay for the informatics infrastructure required? And the quality checking? Note: ITRDB is not part of any NSF infrastructure. Tear down ITRDB and what will NSF build in its place? Granting and compliance are two completely different functions. It would take quite a culture change before compliance were taken as seriously as granting. Finally, NSF covers so much more than climate science. Climate science is in the public policy spotlight right now, so there is a sudden need for transparency and data access that does not exist in other scientific domains. It would cost millions upon millions to implement and maintain a central data archive that would cover all areas of research that could eventually reach the public policy domain. You know how fast that budget item would be cut in order to fund other, “higher” priorities?

But despite longstanding treaty commitments to help poor countries deal with warming, these industrial powers are spending just tens of millions of dollars on ways to limit climate and coastal hazards in the world’s most vulnerable regions, most of them close to the equator and overwhelmingly poor.
Next Friday, a new report from the Intergovernmental Panel on Climate Change, a United Nations body that since 1990 has been assessing global warming, will underline this growing climate divide, according to scientists involved in writing it.

Jeez, I’m not saying “tear down” ITRDB/WDCP – I think that it’s terrific. The U.S. GCRP applies to climate science and not to all science, so policies specific to climate science are not only justified but mandated. There are some administrative steps that wouldn’t cost a dime: include data archiving within the language of the grant as the NSF is supposed to do. Make the grantees file a form stating that they’ve complied with the condition. Whether anyone vets actual compliance is a different issue. I’m sure that most authors who submit a form stating that they’ve archived data will actually have done so.

Bender:
FWIW I totally agree with your sentiment of treating everyone with respect and for making allowances for reasonable levels of sensitivity around the issues raised on this thread. Forthrightness needs to be combined with a desire and willingness to maintain the dialogue. Under such circumstances ad hominem attacks and attacks on motivation are probably not conducive to maintaining the dialogue. The difficulty here is that a number of issues raised, for example, the completeness, archiving and sharing of data and methods and apparent significant flaws in fundamental methodological assumptions, are so basic to the fields under discussion that the professional risks attached to real dialogue are very high: Core methods may be shown to be mortally flawed.
It is disappointing when someone who seems willing to pursue a dialogue under such circumstances feels a need to withdraw. Hence my lament that Rob Wilson seems to have cut off his contributions to this site. He seemed to be willing to take these risks, but was treated somewhat disdainfully by some commentators.
On the other hand, Steve and others here have been working these issues for a long time and have been repeatedly treated dismissively even after their criticisms have been largely legitimated. Charges that somehow they remain “naïve” about the field and make “ignorant” mistakes as in the WSP discussion could be made so much more constructive by an open acknowledgment that seminal articles contain the same mistakes and a fundamental issue exists. Such openness diminishes the chances of a merited criticism being taken as a “cheap shot”.

In addition, there is a real dilemma in being critical yet too considerate. The NAS report I feel is a perfect example of where too much consideration to established names masked or undermined genuine criticisms and as a result failed to move the debate forward: Each party could claim vindication. On the other hand, blogging seems to bring out the worst in most of us and we write things that we would seldom say face to face.

Finally, while this is serious business for many, it seems that all should be willing to acknowledge and accept a ribbing or a few “barbs” when mistakes are made or boundaries are over-stepped. JMS’s condescension towards Ms. Byrne is a case in point. His subsequent responses were much more appropriate.

Re #38 My point, Steve, is that NSF is incapable of anything as imaginative or energy-demanding as ITRDB. They have the authority, yes – but no ability to ensure compliance. The dendro community is to be credited for what they’ve achieved, under no obligation to do so, I might add. You know this, but the average reader may not. The general tone of discussion here is typically anti-dendro. Yes, there are things the dendro community can do better, in terms of archiving more data and better metadata. But where does it stop? Fact: the dendro people are doing a FAR BETTER job of making their work accessible than are the climate modelers. When are THEY going to get around to archiving ALL their work so that it can undergo audit?

Finally, they ARE my data. Funding agencies pay me for my expertise, my imagination, and my insights to be able to make some advance in our understanding of how nature works, not for raw data sets.

If the labor costs for collecting the data were paid for by the grant issued by the agency funding the research, without prior contractual language saying that the data was somehow deemed proprietary, then the data is not yours but belongs to the funding agency. Whether they wish to release it is their business, depends on their policies, etc. If your salary, and the salary of your staff, is paid by a publicly funded institution then in that case the data is not yours either. If you work for a private corporation that funded the data collection then the data is theirs not yours…unless of course you had some sort of understanding with them that said otherwise. Finally, if you and your staff collected the data on “green time” then I guess you could call the data “yours”.

That all being said, replication is fundamental to the scientific method, something I would think you would hold sacrosanct.

Oh, in your latest response you did not answer one post that asked you pointed questions about the science. It is my opinion that you are confusing challenges with attacks.

Re Data Ownership:
We seem to have different experiences in requirements associated with government money. Whenever I have had the opportunity to use government money (NSF/NRC, US Navy, DoD) to collect data, it has always been with the understanding that the data would be available to the government and that anyone would be free to reuse the data – in other words, there were no presumed ownership rights accruing to the contractor. On the other hand, if you used pre-existing methods not developed with government money, you could assert and maintain an ownership right to that data and method – and not disclose it. Universities I believe have similar policies regarding ownership of intellectual property. My partners own my intellectual property, just as I own theirs. However, they do not own my achievements in other non-business related fields!!

It is the understanding and inferences supplied by the scientist that funding agencies are interested in, not her or his raw data.

I would say the agencies should be interested in both. I like to delve into old mycological papers, often 100-120 years back. What the mycologists thought and what inferences they made is interesting from the historical point of view; however, scientifically most of it is obsolete. But their “raw” observations, drawings and later photos, tables and measurements are not – in fact, these are often the only things that remain valuable after all these years.

On the other hand, blogging seems to bring out the worst in most of us and we write things that we would seldom say face to face.

That is true. It is just the nature of anonymous communication. Having said that, most of us that are experienced in communicating through blogs or message boards develop a thick skin for these offhand comments. Using sensitivity to such comments as a reason not to participate with the majority of posters who generally treat others with respect is a bit of a cop out, imo. An easy way to retreat from the serious discussions concerning methods, data and conclusions.

#41 bender, fair point. I agree that the ITRDB is a remarkable accomplishment. Indeed, without such an exemplary archive, I could not have got any leverage whatever to analyze the Hockey Team studies. When I’ve been in Washington, I’ve usually made a point of praising WDCP.

While the dendro community has created an excellent archive, my issues have been with the Hockey Team – who have disdained the traditions of the dendro community. Briffa hasn’t archived measurement data for key sites used repetitively in study after study: Yamal, Tornetrask update, Taimyr; Esper hasn’t archived anything to my knowledge; Jacoby and D’Arrigo have archived selectively; Luckman hasn’t archived anything except a chronology and that only after CA.

My entry point has been the canonical multiproxy studies and it has been fiendishly hard to extract working data sets to see what they’ve done.

It’s too bad that members of the community don’t see fit to intervene with the Team. But you’re right – it’s not fair to tar the entire community with the same brush, and I’ll make an effort to ensure that people understand the considerable accomplishment of the ITRDB – something that people who have followed this dialogue know that I have, but which needs to be clarified for people picking up the conversation in the middle.

Enough high dudgeon over tone and ad-hom. Rise above it and answer Willis’ post about the science.
CA’s several threads focussed on the dendro community have so far only elicited quibbles. Where are the scientists?

Say you do a study, and you describe your method: you go to a location (lat/long provided) and core trees with certain properties (certain species, with certain facings and altitude). Then you perform some kind of statistical analysis on it (which is specified) and you get a given result.

Now, say I want to replicate this. I could go to the same location, use the same sample selection, core trees, and run the result through the same statistical analysis. Say I come up with a completely opposite result from you. How do I know why, if I don’t have access to the data you sampled? I don’t know if it’s because you made a mistake, or I made a mistake, or I misunderstood your description of some of the methodology, or simply because I sampled different trees, or because the rings have changed since you sampled them, etc.

It makes much more sense to me, to first get a hold of the data you used and make sure I can replicate your results and understand the methods and their properties. Then, when your study has been properly replicated and its properties understood (and verified to be reasonable), at that point it makes sense to go out and collect more data and run the result through the same methods to see what comes out.

Anyway I’m sure you realize this and I can’t understand why you wouldn’t be in favour of as much data disclosure as possible, since it makes the search for the truth a lot easier and more efficient. Isn’t that what we’re all after, really?

I don’t understand what the argument about data is. US scientists can do whatever they can get away with regarding the data, but under US Federal Law, no US Agency can use it unless it conforms to:

OFFICE OF MANAGEMENT AND BUDGET
Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies; AGENCY: Office of Management and Budget, Executive Office of the President.
ACTION: Final guidelines.
SUMMARY: These final guidelines implement section 515 of the Treasury and General Government Appropriations Act for Fiscal Year 2001 (Public Law 106-554; H.R. 5658). Section 515 directs the Office of Management and Budget (OMB) to issue government-wide guidelines that “provide policy and procedural guidance to Federal agencies for ensuring and maximizing the quality, objectivity, utility, and integrity of information (including statistical information) disseminated by Federal agencies.” By October 1, 2002, agencies must issue their own implementing guidelines that include “administrative mechanisms allowing affected persons to seek and obtain correction of information maintained and disseminated by the agency” that does not comply with the OMB guidelines. These final guidelines also reflect the changes OMB made to the guidelines issued September 28, 2001, as a result of receiving additional comment on the “capable of being substantially reproduced” standard (paragraphs V.3.B, V.9, and V.10), which OMB previously issued on September 28, 2001, on an interim final basis.
DATES: Effective Date: January 3, 2002.

Government Agencies that publish anything based on data that the dendro’s haven’t archived etc, are breaking the Law, simple as that.

#47
In my opinion, Bender’s #49 is correct. A little courtesy will go a long way. Steve and others have raised genuine issues, and the professionalism of many in the dendroclimatology community will come through if we are reasonable in our tone. Let’s pose questions that are scientific rather than condemnatory. Gore is fair game because he is self-promoting, plus I personally have no interest in his responses. The other guys know stuff I want to understand, and they are trying to do science.

I’ve updated this post to show averages of ITRDB larch and Engelmann spruce sites – species said by Woodhouse to be temperature proxies. I’ve also shown the average of 19 ITRDB Engelmann spruce chronologies taken from over 3000 m. In this calculation, 1993 yielded the lowest value in nearly 600 years – something that seems hard to explain if there is a monotonic positive relationship between ring widths and temperature.
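The averaging behind such a comparison is straightforward: a year-by-year mean over whichever sites actually have data in that year (early years are typically covered by fewer sites). A minimal sketch with hypothetical site names and index values:

```python
def average_chronologies(site_series):
    """Given {site: {year: ring-width index}}, return {year: mean index
    across the sites that have a value for that year}."""
    years = sorted({y for series in site_series.values() for y in series})
    out = {}
    for y in years:
        vals = [series[y] for series in site_series.values() if y in series]
        out[y] = sum(vals) / len(vals)
    return out
```

Note that as the number of contributing sites shrinks back in time, the variance of the average grows, which is one reason early portions of composite chronologies need careful interpretation.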

10. Reproducibility” means that the information is capable of being substantially reproduced, subject to an acceptable degree of imprecision. For information judged to have more (less) important impacts, the degree of imprecision that is tolerated is reduced (increased). If agencies apply the reproducibility test to specific types of original or supporting data, the associated guidelines shall provide relevant definitions of reproducibility (e.g., standards for replication of laboratory data). With respect to analytic results, capable of being substantially reproduced” means that independent analysis of the original or supporting data using identical methods would generate similar analytic results, subject to an acceptable degree of imprecision or error.

The dendroclimatological community already concedes that (1) tree-rings respond nonlinearly to climatic parameters, and (2) climate is a multivariate entity. There is no debate here. The question is one of degree: to what extent are climate reconstructions compromised by the use of linear univariate response models? My understanding is that this is currently an open question. In other words, there is no satisfying answer available. That is one of the reasons you’re not going to get much traffic on “dendroclimatologists answer back”. What is there to say that has not been said before?
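To see the “matter of degree” concretely, suppose growth follows an inverted-U response to temperature (both too cold and too hot limit growth). A linear model calibrated on the rising limb will keep extrapolating upward where the true response turns down, a toy version of the divergence worry discussed in this thread. A sketch with a made-up response curve (the peak at 20 degrees and all numbers are invented for illustration):

```python
from statistics import mean

def growth(temp):
    """Hypothetical inverted-U growth response, peaking at 20 degrees."""
    return 100.0 - (temp - 20.0) ** 2

# Calibrate a linear model on the rising limb only (10 to 18 degrees),
# mimicking calibration against an instrumental period that never
# sampled temperatures past the growth optimum.
calib_t = [10.0 + i for i in range(9)]
calib_g = [growth(t) for t in calib_t]
tb, gb = mean(calib_t), mean(calib_g)
slope = sum((t - tb) * (g - gb) for t, g in zip(calib_t, calib_g)) / \
        sum((t - tb) ** 2 for t in calib_t)
intercept = gb - slope * tb

def linear_pred(t):
    """Linear extrapolation of the calibrated rising-limb relationship."""
    return intercept + slope * t
```

Past the optimum, the linear model predicts ever-larger rings while the true response declines, so a warm excursion can be misread, or a declining ring-width series can mask warming. Whether real treeline conifers sit near such an optimum is exactly the open empirical question.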

Gawd! What a waste of time it is getting through all the personality issues before, if ever, we get down to the business of discussing the methodologies of using tree rings in temperature proxies. I wish we could all (all sides in this debate) focus more on these issues and forgo the temptation to speak our minds on motivation.

There are lists of questions that have been put forward (summarized in a recent post by Willis E) for a so inclined dendroclimatologist to answer. We know full well that one line answers will not suffice to sate our curiosity nor be satisfying for the thoughtful responder, but simply a POV from individual dendroclimatologists would go a long way in informing the people who post and read at this blog and more importantly perhaps minimize the temptation of the critics of dendroclimatologists to generalize their criticisms.

As for personality issues I think we all know who we and others are and thus we can dispense with the process that seems to frequently get in the way of learning or at least significantly postponing it.

Dr. Peter Brown, I am happy to have you appear at this blog and I would not question your scientific and/or personal integrity. Could you please acknowledge the list of questions that Willis E presented and either respond to them directly or tell us why you will not? I have questions of my own, many of which are included in the Willis E list, but a current pressing one is the lack of a satisfying response that I have heard and/or read by dendroclimatologists to the divergence problem, so for my preferences you could start there.

The availability of the data on which published reports are based would appear to me a side, but critically important, issue here and one that only the group of scientists involved will solve, however, until they do, please do not criticize those who continue to campaign for more availability and also please excuse us for thinking less of that group that does not vigorously pursue such an effort. My thoughts are that a group that is reluctant in this effort is perhaps not as competitive as I would prefer in debating the scientific issues in seeking the truth.

#56. bender, at this point, I have little expectation of responses. But it won’t be because I’ve not offered them a facility to make a comment in a thread reserved for them.

The issue with the reconstructions is not just the use of linear univariate models – although that’s an issue. The issues are just as much about selection protocols – which are either unstated or embarrassing. It’s also about model mis-specification – use of things like bristlecones or Qilianshan junipers.

The excuse at the listserv for not discussing the “little” problems is that engaging on these issues will potentially divert focus away from the “big” issue of 2xCO2. I got the impression that their concern about CA was not so much that I was wrong about the particulars of anything that I presented (although they thought otherwise about the readers). I’ve repeatedly told Rob Wilson that I would correct any “misinformation” that I was responsible for, and he’s not come up with any requests.

There’s little upside for them in trying to contradict me on the statistical issues that I’ve addressed here. First of all, it’s not easy to do or else people would be doing it. Yeah, there’s the odd little nit, but, as you’ve observed, I quickly remedy such things. Second, they’re not that strong on statistical matters; they’re using traditional dendro statistical recipes which the individual applied scientists have little understanding of and even the originators (e.g. Ed Cook) had little theoretical understanding of. So I presume that they are a little wary of engaging on such topics. I don’t think that it’s a matter of being nice or not nice.

Steve:
Again, kudos for rapidly responding to all meaningful suggestions, to the limits that the available data allow. Any signals in these arrays of data are well buried and will need a strong a priori basis for any effort to extract a PC that is somehow correlated with the temperature record.

Valid questions are regularly asked within these blog pages. Failure to answer (not you in particular, but your field in general) raises questions about the field. The “little” things may seem so to folks such as yourself, but those of us who deal with such little things every day know different. The problem of causality is one. Misuse of statistical analysis methods to extract “signals” is another. These seemingly little things are the basis for the “big picture,” which relies on the accuracy of such methods and assumptions to form valid conclusions. I.e., failure in the little things invalidates the big conclusions.

I should point out that if any of these “little things” types of problems cropped up in a radar or comm system, one of several things would happen. Should you actually review your work in front of peers the way we normally do, it would not make it past the design stage (literally, design reviews that take all day, with an audience much more experienced and from a variety of backgrounds). If, somehow, you got past this stage and managed to get a product out the door, phones wouldn’t work, the missile would hit the ship, and, in general, you’d never be used as anything other than an assistant for the rest of your career. We’re a small enough community that we remember who screwed up by not paying attention to the “little things.”

When a study is being used as part of the basis for an economic/political program that seeks nothing less than the complete redesign of the world’s economy, then yes, I want the data behind the study to be publicly available, so that everyone, including skeptics, can review every aspect of that study.

If you aren’t willing to publish your data and methods, then the study should be withdrawn.

“Yes, there are things the dendro community can do better, in terms of archiving more data and better metadata. But where does it stop? Fact: the dendro people are doing a FAR BETTER job of making their work accessible than are the climate modelers. When are THEY going to get around to archiving ALL their work so that it can undergo audit?”

I wish to congratulate those who have managed to cut down to eating only two babies a week. The rest of you should look up to these as role models.

Peter Brown is indignent. Many climatologists including Michael Mann are likewise indignent. In my experience those who are audited for the first time are routinely indignent. My job used to be defending my organization against the auditors (Peat Marwick, etc.). I never tried standing on my own dignity as a defense. Something like, “No, I don’t keep records, but you have my word as a gentleman”.

Peter Brown and Michael Mann have occupied a world where the scrutiny of others especially the scrutiny of outsiders was a violation of good manners. Such people get indignent and get touchy. They are likely to classify any criticism as an ad hominem attack.

Alas, it’s the filthy lucre. If at lunch you told me my share was $15.00, I would take it on faith even if I suspected that my share was less. If there were hundreds of thousands of dollars at stake, I might very well check your math. In the Global Warming public policy debate there are at least billions at stake. As the late Everett Dirksen famously said, “A billion here, a billion there, and pretty soon it starts to amount to real money”.

At the risk of being accused of being ad hominem, Peter Brown seems a bit naive. He seems to long for the olden days when everyone could be taken at their word – the days before the Internet. I’m truly sorry, but Steve McIntyre or someone like him is the inevitable result of stakes getting so high. The Carbon Trading venture is deemed to be worth $2.4 billion. The single most potent presidential campaign issue may be Global Warming. That means that tree ring data can affect the destiny of nations and the fortunes of millions. When the stakes get that high, people become “business like”.

Steve routinely gives business examples. Dendrochronologists and climatologists don’t seem to get the point. They seem to resent the concept of good business practices being applied to the practice of science. Again, I’m sorry, but that’s the way it will be from now on. There is just too much money and power involved now. Science auditing’s time has come.

“…using ‘traditional’ individual series detrending methods, as done in this study, the lowest frequency of climate information that can be realistically recovered is 3/n cycles per year (where n = the mean sample length).” This means that the minimum frequency from most dendro studies will correspond to a period on the order of a hundred years. Why do so many studies, including Rob Wilson’s, report much shorter term fluctuations? Shouldn’t all the results be filtered with a hundred year filter before being reported, if the higher frequency information is known to be noise?
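For concreteness, the 3/n rule converts into a longest recoverable period with simple arithmetic. A minimal sketch (the 300-year mean sample length is only an assumed example value, not taken from any particular study):

```python
# The quoted rule: with 'traditional' individual-series detrending, the lowest
# recoverable frequency is about 3/n cycles per year, where n is the mean
# sample (core segment) length in the chronology.
def longest_recoverable_period(mean_sample_length_years):
    """Return the longest period (years) resolvable under the 3/n rule."""
    lowest_freq = 3.0 / mean_sample_length_years  # cycles per year
    return 1.0 / lowest_freq                      # = n / 3 years

# Example: a chronology whose cores average 300 years in length
print(longest_recoverable_period(300))  # 100.0 -> century scale is the floor
```

So a 300-year mean segment length caps recoverable climate cycles at roughly a century, which is where the "hundred years" figure comes from.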

They’re looking at the high frequency variability because it’s at frequencies greater than the minimum frequency!

Pat (#67) implies, very shrewdly, that we should follow the money, especially when it is in the trillions. I would remind everyone of the old City (of London) adage that when a man tells you his word is his bond – ALWAYS take his bond!

In response to several requests, I started to try to address some of the science issues that Willis Eschenbach asked back in #13. However, just checking up again on the thread after doing some chores (just mowed the grass for the first time this season, the earliest date I’ve had to do that in 14 yrs in northern Colorado), I find again the thread has degenerated into name-calling, as is the usual wont on this blog. Pat (#65), why in the world would you propose that I am indigent? Webster: “lacking food, clothing, or other necessities of life because of poverty; needy; poor; impoverished.” That is not me. But perhaps you are referring to Webster’s archaic meaning: “deficient in what is requisite”? If so, what is it that I am deficient in that is so requisite in your opinion? I will admit to being naive about a good many things, although not the one you label me with. And perhaps I am being completely unfair in judging the entire exercise of trying to communicate by this blog by the actions of a few, but what, pray tell, is in it for anyone to try to reach out when they get hit with inaccurate, unfair, and downright slanderous name-calling when they try? Anyone can do that, even me: Pat, you’re ugly and your feet smell.

Please be aware that these are only one person’s opinions on these questions, and so far I have only got through no. 2; I doubt I’ll get any farther.

1) Are the mathematical methods used in dendroclimatological reconstructions valid? In particular, the use of novel, untested statistical methods without any attempt to mathematically justify their validity is worrisome. See, for example, this thread on variance adjustment.

I cannot answer this question; I am not a statistician. I can say that I think your question is mostly a strawman; I cannot think of a single paper that I have read that used what I imagine you might refer to as a “novel, untested” method without providing justification for its use. Often this justification comes in the form of reference to existing literature, but it is always there. If you have a specific example, please provide one.

One point on this question, however: mathematical and statistical models are not the beginning of dendroclimatological research. Dendroclimatology starts with mechanistic models of how trees grow and how climate affects soil and atmospheric conditions that in turn affect growth. It appears that a number of folks here have read Fritts, which is an excellent place to start. There is a combination of “top-down” and “bottom-up” approaches to our understanding of how climate affects growth; top-down approaches are primarily correlational (general growth patterns such as total ring width compared with long-term instrumental data), while bottom-up approaches are reductionist, trying to better understand mechanistic details. Most of the conversations on this blog are concerned only with the former, but it appears to me that many of the answers lie in a better understanding of the latter. A third approach is simulation modeling; there was a recent book and excellent synthesis of a number of these studies by Vaganov, Shashkin and Hughes.

2) Are proxy reconstructions without ex ante proxy selection rules valid? And if so, what protection is there against “data snooping” and “cherry picking”?

Trees respond to all sorts of climatological and ecological factors throughout their lives (which can, of course, be quite long). Occasionally one climatological factor is very much dominant. For example, Douglas-fir trees growing on a rocky steep slope at a dry site in northern Arizona may exhibit upwards of 0.7-0.8 adjusted R^2 in a simple linear regression model with a seasonal or annual total of monthly precipitation data from a nearby climate station. So even in the driest of dry sites, precipitation totals do not explain all the variance in annual growth. Part of it is temperature (a hotter summer will dry the soil out faster than a relatively cooler one given the same amount of annual ppt). Part of it is that monthly ppt may not be a good reflection of the actual moisture the tree sees (does all of it come in one or two thunderstorms, in which case most runs off, or does it come in a steady soaking drizzle?). Part of it is the fact that the station the instrumental data come from is over the hill (or well away from where the trees are growing), and ppt is often very spatially heterogeneous (especially in the SW, where thunderstorms may provide a good portion of the annual total). Part of it is the result of ecological disturbances that affected trees individualistically or occasionally as a group. Part of it is genotype differences within the stand. And part of it is simply not being able to capture in any sort of digital format the variance present within living organisms.
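The kind of simple linear calibration described above can be sketched with synthetic numbers. Everything here is invented for illustration (the precipitation values, the coefficient, the noise level standing in for "everything else"); it only shows the mechanics of the adjusted R^2 figure quoted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 80 years of annual precipitation (mm) and a
# ring-width index that tracks it, plus noise representing all the other
# factors listed above (temperature, storm type, station distance,
# disturbance, genotype, ...).
years = 80
precip = rng.normal(450.0, 90.0, years)
ring_width = 0.002 * precip + rng.normal(0.0, 0.10, years)

# Ordinary least squares of ring width on precipitation
slope, intercept = np.polyfit(precip, ring_width, 1)
fitted = slope * precip + intercept

# R^2 and adjusted R^2 for k = 1 predictor
ss_res = np.sum((ring_width - fitted) ** 2)
ss_tot = np.sum((ring_width - ring_width.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (years - 1) / (years - 2)
print(round(adj_r2, 2))  # high, but never 1.0: other factors remain as "noise"
```

Even with a strong precipitation signal built in, the unexplained variance never goes to zero, which is the comment's point.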

Concerning site selection, generally there is some sort of ex ante expectation of what a particular site and species will show in its growth response to climate. One selects dry sites looking for drought-stressed trees; one selects upper elevation or upper latitude sites looking for temperature-stressed trees. However, in many cases it is not at all a simple response, and if you are at all familiar with the dendro literature you will have come across response function graphs. These are exploratory devices to assess not only the strength and type of the principal climate response of a chronology or set of chronologies, but also how well the available instrumental data sets may be used to resolve it. Coming back to the SW Douglas-fir, we find that the instrumental data may explain, e.g., 70% of the variance in a ring-width series, but if one regresses that same station data against other stations in the region (and in N. AZ they can be quite far apart) one may find a comparable number. Also, especially with temperature, it is not simple at all to say what the trees are responding to vs. what data are available from a climate station. Generally it is average temperature over the growing season, but this again only explains so much of the variance in tree growth. The tree may be responding to combinations of growing season length, dates of first and last frosts, etc., but all we have to work with may be monthly (occasionally daily) data sets of min, max, and avg temps, often with missing values, often from stations located in the cold sink in the valley below that see quite different temperature regimes than the trees on the ridge above. And even if one finds a dominant pattern affecting growth (e.g., annual ppt, coming back to the SW example), there are often other significant relationships as well. For example, in very dry SW sites one often finds a negative response to summer temperatures: the hotter the summer, the faster the soil dries out and the slower the growth. Thus there is also information on temperature contained within the chronology, but generally the most parsimonious model is selected (i.e., ppt).
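The exploratory idea behind a response function can be sketched crudely: correlate the chronology against each month's climate variable and see which months stand out. This is only a toy (synthetic data, plain correlations instead of the regression-with-significance-testing used in real response function analysis), but it shows the screening step described above:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 100

# Hypothetical monthly precipitation (rows = years, cols = Jan..Dec) and a
# chronology driven mostly by winter-spring moisture (months 0-4), loosely
# in the spirit of the SW sites described above. All values are synthetic.
monthly_ppt = rng.normal(30.0, 10.0, (years, 12))
chron = 0.02 * monthly_ppt[:, 0:5].sum(axis=1) + rng.normal(0.0, 0.3, years)

# A crude response "function": correlation of the chronology with each
# month's precipitation.
r = [np.corrcoef(chron, monthly_ppt[:, m])[0, 1] for m in range(12)]
for m, rm in enumerate(r):
    print(m, round(rm, 2))  # the winter-spring months should stand out
```

With real data the picture is messier for exactly the reasons the comment lists: the station is in the wrong microclimate, the months are a coarse proxy for growing-season length and frost dates, and the response can change over time.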

I cannot answer this question; I am not a statistician. I can say that I think your question is mostly a strawman; I cannot think of a single paper that I have read that used what I imagine you might refer to as a “novel, untested” method without providing justification for its use. Often this justification comes in the form of reference to existing literature, but it is always there. If you have a specific example, please provide one.

If there was an application of a “novel, untested” method, then how would you know? You’re not a statistician.

Rather than being snarky, John A, why not simply point out to Peter Brown that Willis was referring in a back-handed way to methods like “the Mannomatic” / RegEM? There’s a good chance Peter’s not aware of those problems, or even those methods, in which case you can nicely direct him to the appropriate threads, rather than getting all nasty trying to unclothe the emperor.

PCA was used in time series analysis before MBH; perhaps you refer to its use in dendroclimate data, John A.? If those are your criteria for “novel, untested” then how would science proceed if it did not apply new methods for analyzing data?

PCA was used in time series analysis before MBH; perhaps you refer to its use in dendroclimate data, John A.? If those are your criteria for “novel, untested” then how would science proceed if it did not apply new methods for analyzing data?

PCA has been verified in many applications, but dendro is not on that list. Science proceeds by taking novel and untested methods and verifying all those “little things” through testing, i.e. falsification.

Since you mention it, there are several assumptions in MBH98, some are requirements for PCA, btw, that are glossed over without any supporting evidence. The biggest, requirement #1, is uncorrelated sources. The hypothesis itself assumes CO2 and temperature are correlated, both of which are tree-ring width “sources,” which immediately negates any chance of separating them via PCA.
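The point about correlated sources can be illustrated directly. In this sketch (all signals and mixing weights are invented) two strongly correlated "sources" stand in for temperature and a CO2-like trend, mixed into many noisy proxies; PCA recovers the shared pattern, not one source as distinct from the other:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_proxies = 200, 30

t = np.linspace(0, 1, n_years)
temp = np.sin(4 * np.pi * t) + 2.0 * t          # "temperature": wiggle + trend
co2 = 2.0 * t + 0.1 * rng.normal(size=n_years)  # "CO2": almost the same trend
# The two sources share a trend, so PCA's separation assumption fails.

# Each proxy mixes both sources plus noise
proxies = np.empty((n_years, n_proxies))
for j in range(n_proxies):
    a, b = rng.uniform(0.5, 1.5, 2)
    proxies[:, j] = a * temp + b * co2 + rng.normal(0.0, 0.5, n_years)

# PCA via SVD on the (full-period) centered matrix
centered = proxies - proxies.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * s[0]

# PC1 correlates strongly with BOTH sources: it recovers the shared
# pattern, not "temperature" as something separate from "CO2".
r_temp = abs(np.corrcoef(pc1, temp)[0, 1])
r_co2 = abs(np.corrcoef(pc1, co2)[0, 1])
print(round(r_temp, 2), round(r_co2, 2))
```

No rotation of the components can undo this: once the sources are correlated, their contributions to the proxies are not identifiable from the proxies alone.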

Oh, thank you! That clears up the comment considerably. I take back my name-calling of Pat, he is not ugly and his feet don’t smell. Pat, I apologize. Although I really would not characterize myself as indignant; more curious to see where this all goes.

Trees respond to all sorts of climatological and ecological factors throughout their lives (which can, of course, be quite long). Occasionally one climatological factor is very much dominant.

This is problem #2 with using PCA. By your own admission, the so-called “mixing matrix” that results in tree-ring width observations is _not_ stationary, i.e. it changes over time. This is why, in my opinion, Mann’s HS has a high validation statistic in recent times, but diverges the farther it goes back (sorry Boris, but the contention that these are valid to 1100 AD is nonsense). The statistics change, precisely because the mixing has changed. PCA allows you to find some correlation during the 20th century simply because we have sufficient records to extract certain forcers.

Mann has also redefined the concept of a vector mean in his latest RegEM method, the purported silver bullet for “believers.” Not only is his method untested in dendro, it’s actually conceptually incorrect _anywhere_ to take the mean of the mean in order to “center” a set of vectors unless you can prove ergodicity. Given that the proxies all represent different things (they aren’t all identical trees in similar locales, for example), that concept is fundamentally flawed. This is one of those things that looks little on the surface, but actually rests at the heart of all these methods. The assumptions in the original literature outlining when the methods work and when they don’t are essential lest you’re happy with meaningless results.

There was a similar problem with the original application of PCA, and Steve has highlighted that as well. Centering your proxies on a single period is flawed, though that actually ties into the stationarity problem I mentioned above.
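The centering point can be made concrete with a toy example (this is a simplified sketch of the short-centering issue, not a reproduction of any published analysis; the series lengths and excursion size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_series = 600, 50
data = rng.normal(0.0, 1.0, (n_years, n_series))
# Give a handful of series a late-period excursion (a "20th century" uptick)
data[-80:, :5] += 1.5

full_centered = data - data.mean(axis=0)          # conventional centering
short_centered = data - data[-80:].mean(axis=0)   # center on last 80 "years" only

# Full centering zeroes every column mean; short centering does not.
print(np.abs(full_centered.mean(axis=0)).max())   # ~0 (machine precision)
print(np.abs(short_centered.mean(axis=0)).max())  # clearly nonzero

# The leftover offsets sit mostly in the series with the late excursion,
# inflating their apparent variance and hence their weight in a
# subsequent PCA.
offsets = np.abs(short_centered.mean(axis=0))
print(offsets[:5].mean(), offsets[5:].mean())
```

Series whose late-period values differ from their long-term average carry large leftover offsets over the full period, which is exactly the mechanism that lets a principal component latch onto them.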

(just mowed the grass for the first time this season, earliest date I’ve had to do that in 14 yrs in northern Colorado),

I’m in northern IL, and my grass will be mown at its earliest date in some 40 years, and my tulips are blooming in late March, nearly a month ahead of schedule – yet the winter and early spring, on average, have been normal. These plants, treeline trees not excepted, are evidently complicated proxies. I have been looking at local climate effects on grain crops over the past 40 to 50 years and find the reactions, as manifested by yield, no less complex.

David Smith, not to worry about Peter Brown. It appears he can give as well as he takes. I knew of pat’s occasional lapses in speling proficiency, but being ugly and having smelly feet that’s new information — and totally unexpected. By the way I think you, Peter Brown, are confirming what many of us already suspected: much is yet to be revealed as to how tree rings respond to temperature let alone the other “unknown” factors.

#72
Hi Bender, good to see you back – you certainly bring the standards up; kicking and screaming sometimes, as it were. 😉

You’re right, but alas there seems to be little choice but to put up with such things – after all, the only alternative is to take the RC approach, and frankly the censorship there is *way* too extreme. As an example, I posted there that exaggeration from *either* side was unhelpful, and got a response saying they knew of exaggeration from the “denialist” side and could I give an example from the “pro” side. *Twice* my reply to this – which was that, given the NAS and Wegman reports, perhaps MBH was exaggerated – was refused. It makes it look as though I didn’t have a reply when in fact I did. And no, I wasn’t rude or haughty, and I don’t believe that such a comment warranted being dropped.

At the end of the day, we needs must have open debate – if it ain’t open, it’s not much of a debate after all. The best we can aim for, IMO, is that regulars such as yourself will, as you have been, publicly decry ad homs etc as not in the spirit of the site, and hopefully quickly enough that non-regulars will notice before they write the whole site off.

So the real message is, I guess – don’t be disheartened and keep trying to keep it civil and scientific. Given the alternative, what else can one do?

By the way I think you, Peter Brown, are confirming what many of us already suspected: much is yet to be revealed as to how tree rings respond to temperature let alone the other “unknown” factors.

Ultimately, this is why I have a hard time listening to the Manns and Gores of the world bang the drums, when the consequence of these claims result in spending trillions of our tax dollars, and we don’t even know if the answers are temperatures or not. The only way anyone has ever been able to say that anthropogenic causes are the primary forcer now, and that such change is unprecedented, is through these contrived proxy studies using not only untested, but in many cases outright incorrect, analyses.

I’d simply like to see one of the climatologists stick to the questions, so if silence of the hounds helps, I’m all for it.

The 30-day temperature anomaly map is here. Looks like eastern Colorado was indeed warm in March, as was the central US and Europe. Alaska was cool. The rest was blah.
(Note the odd projection on these maps – the polar regions are shown as big as the tropics, though they are not.)

I cannot answer this question; I am not a statistician. I can say that I think your question is mostly a strawman; I cannot think of a single paper that I have read that used what I imagine you might refer to as a “novel, untested” method without providing justification for its use. Often this justification comes in the form of reference to existing literature, but it is always there. If you have a specific example, please provide one.

Peter, just curious, but have you ever followed a reference back to the original source and found that the source has a proviso on the use of a methodology that is violated in practice by the paper doing the citing? For example, a scientist uses a rather obscure statistical measure for validation like RE, knowing full well that the standard measure r^2 shows no validation, and on tracing the use of RE back to its statistical roots we find that its use has many stipulations that are not mentioned in the citing paper. Do you give the paper a pass because it cited a reference? I’ve always been one to trust but verify.
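The RE vs r^2 point can be shown with a toy example. RE rewards matching the verification-period mean relative to the calibration mean, so a reconstruction can earn a clearly positive RE while tracking essentially none of the year-to-year variation. All numbers here are invented; RE is computed in the standard dendro form, 1 minus the ratio of squared errors to squared departures from the calibration mean:

```python
import numpy as np

rng = np.random.default_rng(4)

# Verification-period observations: shifted well above the calibration mean,
# with year-to-year wiggles the reconstruction fails to track.
calibration_mean = 0.0
obs = 1.0 + rng.normal(0.0, 0.2, 50)

# A "reconstruction" that captures only the mean shift, not the wiggles
recon = 1.0 + rng.normal(0.0, 0.2, 50)  # independent noise around 1.0

sse = np.sum((obs - recon) ** 2)
ss_vs_calib_mean = np.sum((obs - calibration_mean) ** 2)
re = 1.0 - sse / ss_vs_calib_mean

r = np.corrcoef(obs, recon)[0, 1]
print(round(re, 2), round(r ** 2, 2))  # RE clearly positive, r^2 near zero
```

This is why the stipulations attached to a verification statistic matter: quoting a passing RE while a near-zero r^2 goes unmentioned tells you about the mean level, not about skill at the interannual scale.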

If a person had to mow his lawn the earliest he can remember (at least for the past decade or so) that is clearly monumental proof of devastating anthropogenic greenhouse gas effect! Call out the guard, release the hounds!
My, how unscientific our “science experts” become when they grasp for the high ground.

In central CO, we had one of the coldest summers on record, and the earliest snows on record. If it is like last year, another warm-to-hot spring, cold and wet summer, and early snow again. Not sure why any of this matters because, well, it just doesn’t. 14 years, or even 100 years, is statistically insignificant.

Peter, thank you for your response to questions 1 and 2. I have tried to be as neutral in this as I can, please excuse my lapses. Let me deal with question 1 first.

I had asked:

1) Are the mathematical methods used in dendroclimatological reconstructions valid? In particular, the use of novel, untested statistical methods without any attempt to mathematically justify their validity is worrisome. See, for example, this thread on variance adjustment.

You replied:

I cannot answer this question; I am not a statistician. I can say that I think your question is mostly a strawman; I cannot think of a single paper that I have read that used what I imagine you might refer to as a “novel, untested” method without providing justification for its use. Often this justification comes in the form of reference to existing literature, but it is always there. If you have a specific example, please provide one.

What am I missing here? I provided an example in the original question.

If you need further information, take a look at the Wegman report, which highlighted the statistical difficulties in the MBH98 procedures. Dr. Edward Wegman is a prominent statistician, and is chair of the National Academy of Sciences’ Committee on Applied and Theoretical Statistics. The Wegman Report says, in part:

MBH98 and MBH99 focus on simple signal plus superimposed noise models for paleoclimate temperature reconstruction. Because of complex feedback mechanisms involved in climate dynamics, it is unlikely that the temperature records and the data derived from the proxies can be adequately modeled with a simple temperature signal with superimposed noise. We believe that there has not been a serious investigation to model the underlying process structures nor to model the present instrumented temperature record with sophisticated process models.

Perhaps I was not clear enough in saying “novel, untested” methods. It is not enough for a particular statistical procedure to be tested and known to work. It must be tested and verified on the particular application where its use is proposed. If it has not been tested in the manner and on the data in which it is to be used, I call it “novel and untested”.

The Wegman quote above points out that Mann’s method had not been tested on the type of data where it is being used. Had it been tested, as it was by Burger and Cubasch, it would have been seen to be faulty before, rather than after, the publication of the Hockeystick. As Wegman reports:

Another 2005 paper, Are Multiproxy Climate Reconstructions Robust? by Gerd Burger and Ulrich Cubasch, questions whether these methods are statistically significant enough to be able to make robust conclusions. Burger and Ulrich describe sixty-four climate reconstructions, based on regression of temperature fields on multi-proxy networks, which are each distinguished by at least one of six standard criteria of this method. By combining these criteria Burger and Ulrich define numerous variants on millennial histories. No one criterion can account for the number of variations and no particular variant is more valid than another. Even the variant with the best reduction of error statistic is the furthest variation from the climate history of Mann et al. 1998. Burger and Cubasch conclude that the regression model is not valid when applied in an extrapolative manner, as in climate reconstruction.

In other words, a regression model is by no means novel or untested. But its use to create climate reconstructions in an extrapolative manner was certainly novel and untested, and guess what? … When Burger and Cubasch tested it, it failed.

Now, I understand that you are not a statistician, which is fine, no one is a master of every discipline, and it seems that few dendroclimatologists are statisticians. This is also fine. But when you folks set out to write papers that depend heavily on statistics (as the majority of proxy reconstructions do), it seems only prudent to walk over to the statistics department and enlist some help. The Wegman Report says:

Conclusion 3. As statisticians, we were struck by the isolation of communities such as the paleoclimate community that rely heavily on statistical methods, yet do not seem to be interacting with the mainstream statistical community. The public policy implications of this debate are financially staggering and yet apparently no independent statistical expertise was sought or used.

Recommendation 3. With clinical trials for drugs and devices to be approved for human use by the FDA, review and consultation with statisticians is expected. Indeed, it is standard practice to include statisticians in the application-for-approval process. We judge this to be a good policy when public health and also when substantial amounts of monies are involved, for example, when there are major policy decisions to be made based on statistical assessments. In such cases, evaluation by statisticians should be standard practice. This evaluation phase should be a mandatory part of all grant applications and funded accordingly.

Thank you again for not getting caught up in the peripheral issues that always swirl around all blogs, and answering the questions directly, it is much appreciated.

w.

PS — In passing, I’d like to highlight the first two recommendations of the Wegman Report:

Recommendation 1. Especially when massive amounts of public monies and human lives are at stake, academic work should have a more intense level of scrutiny and review. It is especially the case that authors of policy-related documents like the IPCC report, Climate Change 2001: The Scientific Basis, should not be the same people as those that constructed the academic papers.

Recommendation 2. We believe that federally funded research agencies should develop a more comprehensive and concise policy on disclosure. All of us writing this report have been federally funded. Our experience with funding agencies has been that they do not in general articulate clear guidelines to the investigators as to what must be disclosed. Federally funded work including code should be made available to other researchers upon reasonable request, especially if the intellectual property has no commercial value. Some consideration should be granted to data collectors to have exclusive use of their data for one or two years, prior to publication. But data collected under federal support should be made publicly available. (As federal agencies such as NASA do routinely.)

Note that he recommends that data collectors have exclusive use of their data for a couple of years, but only prior to publication …

And Per, please list some of these public archives from astro/particle physics that you appear to know so well. I just Googled “astroparticle physics public data archives” and the third link (the first was a PDF and the second a PS file) was an article from the CERN Courier (2007, Vol 47, No 3) entitled: Let the data free! Three researchers working in the new field of astroparticle physics argue the case for making the data from astroparticle experiments public. Sounds to me like they are having some issues with data access as well. Should there be better data access?

Most (if not all) of these archives provide science analysis software in addition to the data (So Steve M. can audit the software as well as the DATA). Various “levels” of data product are usually provided online. You can always request raw data offline.

PB:

Second, the existence of online databases and supplemental data sets as part of publications is a relatively new development in science and scientific data management. How long have the internets been around, since the early 90s?

Have a look at the NSSDC Master Catalog. For example should you wish Voyager 2 Solar Wind Plasma Data, (1977-08-22 to 1990-12-31) you could find the details here

PB:

Many journals that I am familiar with do not, in fact, have any capability at present for managing supplemental data.

1. Data sets cited in AGU publications must meet the same type of standards for public access and long-term availability as are applied to citations to the scientific literature. Thus data cited in AGU publications must be permanently archived in a data center or centers that meet the following conditions:

a) are open to scientists throughout the world.
b) are committed to archiving data sets indefinitely.
c) provide services at reasonable costs.

The World and National data centers meet these criteria. Other data centers, though chartered for specific lengths of time, may also be acceptable as an archive for this material if there is a commitment to migrating data to a permanent archive when the center ceases operation. Citing data sets available through these alternative centers is subject to approval by AGU.

2. Data sets that are available only from the author, through miscellaneous public network services, or academic, government or commercial institutions not chartered specifically for archiving data, may not be cited in AGU publications. This type of data set availability is judged to be equivalent to material in the gray literature. If such data sets are essential to the paper, authors should treat their mention just as they would a personal communication. These mentions will appear in the body of the paper but not in the reference list.

….. nota bene ellipses

Data Papers

2. Data sets discussed in data papers published in AGU books and journals must be publicly available and accessible to the scientific community indefinitely. Authors of such papers are required to deposit their data sets in a data center that meets the criteria discussed above. In the event that an appropriate data center cannot be found by the author, AGU will take an active role in recommending the acceptance of the data by a suitable data center. AGU will provide temporary storage services, for a fee, and will facilitate the migration of the data sets to an approved center as soon as practical. (Also see section below on AGU’s role in archiving data.)

3. Data sets that are the basis of data papers are subject to review. A sample of these data sufficient for the review process must be supplied with the submission of the paper. The reviewer is expected to comment on the data as if they were an integral part of the paper and on their usability.

….. nota bene ellipses

5. At the time of submission, authors must supply complete information about the archiving of the data sets. To avoid possible delays in the publication of the data paper, authors should consult with the data center(s) before submitting the paper to AGU. If the data sets have been archived before the paper is submitted, information on accessing them must be supplied to the reviewers.

PB:

Finally, they ARE my data. Funding agencies pay me for my expertise, my imagination, and my insights to be able to make some advance in our understanding of how nature works, not for raw data sets.

Actually I think that you should check with your funding agency. I assure you this is NOT how NASA operates. The PIs of Hubble, WMAP, Spitzer, SOHO, etc., etc., do not refer to the data as “their” data. However, you can test your hypothesis indirectly as well. In your next proposal you could explicitly state you will NOT be providing the data in a public archive since it is “my data.” If the proposal is funded, you will know for sure that it is “your data.”

PB:

Our society has made a commitment to supporting science in the form of funding researchers to do what they do best as determined from their education, experience, and academic achievements.

Sure, but why should society pay several times for the same data? Collect it once and archive it.

PB:

Quite often scientific investigations lead to dead ends, science is not done by recipe, and often
data are used to address more than one hypothesis.

Yes and this is another good reason to archive data. For example, your hypothesis leads to a dead end. However, someone else has a look at the data and decides to test a different hypothesis. If the data isn’t archived, the second study can’t be carried out without duplication of effort (in collecting the data).

PB:

It is the understanding and inferences supplied by the scientist that funding agencies are interested in,
not her or his raw data.

NASA archives raw data. Various levels of processed data are available online.

PB:

As for NSF data archival requirements, from what I understand there are no hard and fast rules as to what data are to be archived: for example, in dendroclimatology, raw data in the form of ring widths are not requested or required, only the final reconstruction.

Seems like archiving raw data is a good idea and good scientific policy. I’m not sure why a scientist would argue against such a policy.

Looks like there are many other responses to your comments (that I haven’t read yet). If I have duplicated someone else’s point(s) I apologize.

Peter, that’s largely up to you. My kind advice: do not underestimate the regulars here. Some have been thinking about this problem for several years. Others may not know much about tree biology, but know a lot about engineering and maths.

Re: the idea of a “Peter Brown omnibus” thread – good idea, David Smith. You’re so sensible! (For a Tiger, that is.)

#70, 74. Peter Brown, MBH98 said that “we take a new statistical approach to reconstructing global patterns of annual temperature back to the beginning of the fifteenth century”. Raymond Bradley credited Mann as follows:

Bradley credits Mann with originating new mathematical approaches that were crucial to identifying strong trends.

Unfortunately John A has not correctly characterized what was actually “novel” about MBH98 methodology – something that I discussed in our presentation to the NAS Panel. I think that there are four points of novelty:

1) a non-standard principal components method – actually, strictly speaking it’s not a principal components method at all within Preisendorfer’s definition – a method that proved to be so strongly biased toward mining for HS-shaped series that it was characterized in very strong terms by Wegman as simply being wrong.

2) an inverse regression procedure, both in itself and in combination with the biased principal components method. I observed the overfitting problems in a post here

3) exclusive reliance on the RE statistic as a method of validation. This particular “method” was disguised by a misrepresentation about the verification r2 statistic.

4) the introduction of bristlecones into multiproxy studies – something that had been avoided by earlier practitioners because of problems or at least potential problems with this data. Again the dependence on bristlecones was disguised by claims that the method was robust to presence/absence of dendroclimatic data in total.

As you correctly observe, principal components analysis is not in itself a novel methodology.

Reposting my questions from #32. This thread got quite noisy with a lot of jawing back and forth and maybe someone who knows missed it in the mix. Or maybe it’s just stupid questions on my part and no one wanted to waste time.

When a site is selected, I would assume that the site contains trees of various ages. Some trees may be 20 years old, some 50 years old, some much older. Do trees of various ages all react the same to the same input? For example, would a tree that is 20 years old react the same to various temperature, precipitation, insect infestation, etc., as a tree that is 50 years old? Or 100 years old? If not, are those differences accounted for when obtaining core samples? Or do they only select trees of approximately the same age when sampling? I would think that even trees of the same species at a site, and even at the same relative age, could show different responses based on physical location relative to each other. For example, one tree might be on more of a slope where greater run-off occurs, thus limiting moisture availability, whereas another tree might be in a slightly depressed area where moisture pools on the ground, leading to too much moisture. In another example, what about where two trees are in relatively close proximity to each other and fighting over the same moisture pool in the ground, whereas two other trees might be farther apart and not have that issue?

Are those important issues, and is data recorded with regard to the layout of trees selected at a site (Steve has mentioned this as meta-data), which could be important when trying to discern various signals from a selected site? If so, how might these factors affect various temperature/precipitation reconstructions? Are the various smoothing techniques used to manage these disparities, and are they the appropriate measures to take?

#93 was ignored, probably because, as mentioned in another thread, one can’t spend one’s time answering every little question that pops up. If dendroclimatologists are interested in reconstructing climate on the order of 1000+ years, why would they sample trees 20, 50, 100, or 200 years old? Sure, trees of different ages might respond differently to different factors. But I don’t know of any dendroclimatologists who would be ignorant of that. IMO your other point is well-taken. Sample locations and site conditions are not well enough archived, given the multi-trillion dollar importance of the question. Not surprising, given that the metadata requirements were designed back when GPS technology did not exist and there was not so much of a public policy spotlight on the issue.

1. I have never worked with larch, unless I have used such data from the ITRDB archive. I have worked mainly with spruce.

[Steve: The point was that none of the chronologies or measurement sets were archived. I agree that it is primarily Engelmann spruce sites that have not been archived and I’ve accordingly changed “larch” to “Engelmann spruce” in the post.]

2. w.r.t. the previous thread, there are TWO tree-lines in the southern Canadian Cordillera:

so – completely different environments, completely different species and completely different responses to climate.

3. w.r.t.

Rob Wilson’s study says “Cook et al. (1995) state that using ‘traditional’ individual series detrending methods, as done in this study, the lowest frequency of climate information that can be realistically recovered is 3/n cycles per year (where n = the mean sample length).” This means that the minimum frequency from most dendro studies will be on the order of a hundred years. Why do so many studies, including Rob Wilson’s, report much shorter term fluctuations? Shouldn’t all the results be filtered with a hundred year filter before being reported, if the higher frequency information is known to be noise?

Willis, perhaps the text is unclear, but this statement is talking about the limitations of TR data (when processed using traditional methods) in the LOW frequency domain (i.e. > 100+ years). The high frequency is fine.
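
The 3/n rule of thumb quoted above is simple arithmetic, and it is worth seeing the numbers. A minimal sketch (function names are mine, purely illustrative):

```python
# Rule of thumb from Cook et al. (1995): with "traditional" individual-series
# detrending, the lowest recoverable frequency is about 3/n cycles per year,
# where n is the mean sample (segment) length in years.

def lowest_recoverable_frequency(mean_segment_length_years: float) -> float:
    """Lowest recoverable frequency, in cycles per year."""
    return 3.0 / mean_segment_length_years

def longest_recoverable_period(mean_segment_length_years: float) -> float:
    """Longest recoverable period, in years (reciprocal of the frequency)."""
    return mean_segment_length_years / 3.0

# With a 300-year mean segment length, nothing slower than ~100-year
# variability can realistically be recovered by these methods.
print(longest_recoverable_period(300))
```

So, as Rob notes, the limitation bites at the low-frequency end: a network with a 300-year mean segment length loses variability on timescales longer than about a century, while the high-frequency (year-to-year) signal is unaffected.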

I cannot answer this question; I am not a statistician. I can say that I think your question is mostly a strawman; I cannot think of a single paper that I have read that used what I imagine you might refer to as a “novel, untested” method without providing justification for its use. Often this justification comes in the form of reference to existing literature, but it is always there. If you have a specific example, please provide one.

These days it’s hard enough to be an expert in one area, much less all. So it’s not that big a deal if you are not an expert in statistics. However, if you are not an expert in statistics, how can you be so certain that others are not making mistakes with how they use statistics? Especially given the fact that many of them refuse to release their data and/or methods, so that people who ARE experts in statistics can review their work. It’s not enough that other dendros have reviewed the work. If it involves the use of statistics, then someone who is an expert in statistics should review the work as well.

I apologize for not continuing before now to some of the science issues that Willis Eschenbach asked about back in #13. Here is the rest of a short answer to his question no. 2:

2) Are proxy reconstructions without ex ante proxy selection rules valid? And if so, what protection is there against “data snooping” and “cherry picking”?

Returning to the SW Douglas-fir example: recall that annual ppt over the 20th century instrumental period explains perhaps 70% of the variance in the ring-width chronology for the same period in a simple linear regression. However, as I mentioned in the previous post, response function analysis shows that there is also a much weaker but still significant inverse response to summer temperatures, explaining perhaps only as much as 10% of the remaining variance. (As for the rest, who knows? All those other factors that I listed in the previous post. Please keep in mind these are only statistical models.) Thus that chronology could be used – albeit with much less confidence – to also examine temperature variations.

This is the basis for using networks of tree-ring chronologies in reconstructing broad-scale patterns of temperature from otherwise drought-sensitive trees. There is almost always some temperature response in the individual sites, and typically some sort of chronology selection process is used to maximize both the individual and the collective response. A regional (or inter-hemispheric) network would necessarily be compared against regional (or inter-hemispheric) instrumental data to develop statistical models describing the relationship in some sort of objective manner. And I would suggest starting off with Fritts’ 1991 book if you’ve not read it; this has much more detail about the justification for using such an approach on a 65-site network across western NA in the first study of its kind (he developed both temp and ppt reconstructions back to 1600).

Now, whether one calls the process of chronology selection for such broad-scale studies “cherry picking” or “data snooping” would, I suppose, depend on one’s preconceptions and biases against the entire process in the first place. All scientific studies begin with premises that guide model and data selection. So in answer to the second part of your question above, the premise of the question is incorrect; there is no need for “protection”, since those terms are irrelevant to the development of explicit site selection criteria.

3) Many proxy temperature reconstruction use proxies which have been used previously for proxy precipitation reconstructions. What methods (if any) have been used to control for the other variables? If there is no attempt made to control for confounding variables, is the study valid?

I am not the person to answer this question. I would suggest that the main approach, as I see it, has been blunt force, the law of large numbers; with enough chronologies that have a weak temperature response (and again, in ppt-sensitive series this is typically an inverse response), the broader-scale patterns will emerge and be strengthened. It is analogous, of course, to trying to see global warming in a single time series; it can’t be done. (And to the commenter in post #85 about my observation of early grass-cutting in Colorado: did I say anything about it being a sign of global warming? Simply an observation…please don’t place your biases into my comments, you sanctimonious SOB.)

4) Temperatures which are too hot or too cold both cause narrow tree rings. Under the assumption of linearity in proxy reconstructions, this means that hot times will be interpreted as cold times. Why is this ignored in temperature reconstructions? At a minimum, it should make for asymmetrical error estimates, but I have never seen even that done, much less any serious discussion of the inherent problems.

This is not ignored in temperature reconstructions; it is an active area of research (e.g., see the recent review by D’Arrigo, Wilson, et al.). The so-called “divergence problem” is a big area of concern, not only for what it means for current tree growth and forest patterns (and indirectly for a host of other issues in forestry, such as widespread and apparently unprecedented bark beetle and other insect outbreaks) but also for what it means for climate reconstructions. In the absence of other evidence, we must assume uniformitarianism in climate/growth response. For example, in our northern Rockies work (Mr. McIntyre’s comments on which started my responses to this blog), one approach we are looking at is to use multiple proxies (ring width and maximum latewood density) and multiple species to see how different the later 20th century may be in relation to the rest of the records. Do we need to do more? Absolutely, and two approaches I feel we need more of are mechanistic and experimental studies and simulation modeling (there is already a lot of work being done on the latter).

5) Are proxy reconstructions which do not have a validation period, but only a calibration period, scientifically defensible?

Why not? Dendroclimatology is the only discipline that I am aware of that routinely does any sort of calibration/verification process. Validation of a reconstruction may also be done through comparison with other reconstructions, historic records, etc., to provide some idea of how well it compares to historical understanding.
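
For readers following the earlier exchange about validation statistics: the RE (reduction of error) score mentioned upthread compares a reconstruction, over a withheld verification period, against the “null” prediction of simply using the calibration-period mean. A minimal sketch of one common formulation, with invented numbers:

```python
from statistics import mean

def reduction_of_error(observed, reconstructed, calibration_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-mean baseline),
    computed over the verification period. RE > 0 means the reconstruction
    beats simply predicting the calibration-period mean every year."""
    sse_rec = sum((o - r) ** 2 for o, r in zip(observed, reconstructed))
    sse_null = sum((o - calibration_mean) ** 2 for o in observed)
    return 1.0 - sse_rec / sse_null

# Toy split: calibrate on one part of the instrumental record, verify on
# the withheld part. All values invented.
calibration_obs = [9.8, 10.1, 9.5, 10.4, 10.0]
verification_obs = [9.9, 10.6, 9.4, 10.2]
verification_rec = [10.0, 10.4, 9.6, 10.1]  # hypothetical reconstruction

re_score = reduction_of_error(verification_obs, verification_rec,
                              mean(calibration_obs))
print(f"RE = {re_score:.2f}")
```

The criticism raised earlier in the thread was not of RE itself but of relying on it exclusively, without also reporting statistics such as the verification r².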

6) How can proxies which do not correlate with local temperatures be used as proxies for global temperatures?

Several of our higher elevation chronologies from here in CO correlate quite poorly with nearby climate stations (for example, check out Lexan Ck with Fraser, CO, temps) because of what I mentioned already: the climate station is down in the cold sink (Fraser’s motto is “Icebox of the Nation”) while the trees are high up on the ridge above. The trees correlate much better with regional temp averages (e.g., the CRU gridded data) because temp tends to be more spatially autocorrelated than ppt, and the regional averages reduce the effects of individual station anomalies. Global temps would be the next step up.

7) Fritts divides sites into “complacent” and “sensitive”, depending on whether they respond to a given signal (temperature, precipitation, etc.). However, even within “sensitive” sites, there are trees which respond and trees which don’t, and no one seems to know why. This raises questions, including:

First, your premise is generally incorrect in that “no one seems to know why”. In chronologies that I am familiar with, one measure we use to assess in-common patterns between trees is the inter-series correlation. Often there may in fact be one to a few trees that have a lower inter-series r than the rest, but usually this can be explained by microsite variations. For example, in ppt-sensitive sites, the poorer correlating series may have come from trees that were growing in a spot with better soil development, or off the top of the ridge where wind did not dry the soil out as rapidly, etc. So I would suggest that in many cases lower correlations can be explained by ecological factors.

a) Does a site remain “complacent” or “sensitive” over a period of centuries, or can it be sensitive for a while and then become complacent?

This can be tested for by running correlations, variances, or other measures of time series characteristics. Occasionally the early growth of a tree does not appear to respond to climate in the same way as older trees nearby; this is likely due to growing conditions early in the younger tree’s life (e.g., competition with neighbors until it reaches the canopy). In this case, a common remedy is to remove the early growth from the series. However, I have run evolutive analyses on a number of chronologies and have never found any significant deviations over the length of the series.

b) If a site is “sensitive” to temperature due to it being close to the treeline, will it stay “sensitive” when the treeline changes? And if so, will the “sensitivity” change?

Good question; in our treeline sites in CO we appear to have had a treeline shift about 1250 CE; growth was very restricted before this, with lots of reaction wood (both of which imply krummholz growth forms), followed by a growth release and more regular, concentric growth after ~1250. Also, a lot of trees in the stand appear to have started up about that time. Treeline today is about 50 m above this stand. However, this, I suggest, is a different scenario from one involving an existing stand already established as trees; again, one method to test for possible changes in response would be to do some evolutive analyses over the length of the series.

c) If a site is “sensitive” at a given average temperature, will it remain “sensitive” if the average temperature rises? If it falls?

Likely in a general fashion, assuming there is a linear response to temperature. From a purely practical standpoint, an assumption in dendroclimatology is that of growth limiting factors; to reconstruct climate one assumes that a single factor is limiting to tree growth. However, obviously, this is not always the case. There is likely a low range of temperature in which a tree may be extremely responsive to it as a limiting factor, an optimal range in which the tree couldn’t care less, and a high range in which some other factor (e.g., soil moisture) may become limiting. I’m sure you realize this already, but any type of statistical model that may be developed from such relationships does have confidence intervals associated with it.

In #88, Pete, a quick response to just the first part of your response to my comment: Thank you for the links. I’ve not had a chance to explore these at all, but just quick browses of a few suggest to me that these are mainly databases maintained for individual projects, not public data repositories (and I may be completely incorrect in that observation, but in my quick perusal I did not see any sort of submission page, e.g., similar to that of the ITRDB). Just one quick quote from the TRACE site:

An Open Letter to the Solar Physics Community on TRACE DATA Access
One feature of the TRACE mission is the open data policy. The purpose of this letter is to clarify this policy and to encourage a dialog on how to insure its fair and reasonable implementation.

Our fundamental data policy is simple: All TRACE data will be equally available to everyone on the World Wide Web. An open data policy is an experiment for Solar Physics space experiments. There will be misunderstandings that cause varying degrees of unhappiness. There is no perfect policy and it is naive to expect that deviations from established practice will not result in some problems.

Again this is just a very quick perusal on my part (5 minutes) and in that time I came across this statement that this is an “experiment” in open data. So perhaps many disciplines are in the same boat with regards to making sure data are accessible?

And BTW, the quote of mine you highlight above was by no means “cherry-picking” (appears to be one of the favorite phrases on this blog, cherry-picking this, cherry-picking that), it was completely and utterly random (as random as Google can be), it was, as I already stated, the first real link I came to in a Google search. How in the world is that “cherry-picking”?

Please no more comments about my feet. I wear a size 15 shoe. In high school they used to call me “Yeti”. I’m a little sensitive.

I believe I have prevailed in this exchange. Peter Brown now seems to respond to questions, which is the first thing that an auditor asks. This is a hopeful sign.

Tree ring data is part of an argument that affects every person on the globe. If AGW is real, large, and serious, every poster on this blog will be affected. Some will lose their jobs; others may lose their lives. They have a right to examine the validity and cogency of the evidence. Wise professionals like Peter Brown will be patient with the doubts of laymen. They will also be humble toward those whose expertise in other areas may be as relevant as their own.

It’s not enough that other dendros have reviewed the work. If it involves the use of statistics, then someone who is an expert in statistics should review the work as well.

This is one of the engineering design review issues. Our reviews (at least in my experience) typically involve those with experience in all of the relevant areas. I’m a signal processing guy, but I have done a ton of digital board design (mostly receivers), which requires digital experts, analog experts, signal processing experts, general board design experts, etc. I.e., most problems are rather broad in their spectrum and therefore require a broad audience.

I sent the following inquiry to Connie Woodhouse about the temperature-sensitive sites mentioned in the 2000 Abstract, but not archived at WDCP.

Dear Connie, thanks for the prompt response. I identified 6 sites listed below as meeting your criteria (one of which is withdrawn). In the 2000 Report listed below, you mention sites in Utah, Oregon and Washington as being part of the temperature-sensitive network, but there are no such sites among the sites meeting the criteria that you listed in your email. Are there other sites in the network or did you change your classification as work progressed?

BTW it’s commendable that you’ve archived so many sites on a relatively timely basis. Would that others did the same. I suspect that part of the problem is sometimes that they don’t do it when the matter is fresh and it’s always hard to go back and pick up old files.

Regards, Steve McIntyre

I received the following prompt reply:

Steve,

We did other collections that turned out not to be useful for this study. In particular, the whitebark and limber pine we collected turned out not to have a very good temperature signal. These collections are in various states of processing, having been back-burnered.

You’re right, the longer one lets data pile up, the bigger the job it is to get it and the associated metadata organized to contribute to the ITRDB. Working for NOAA, it seemed like a good thing to set an example.

RE: #4 – In fact, that’s true for nearly all species here out West, anywhere away from the higher precipitation coastal zones and orographic belts on certain western Sierra and Cascade slopes. Conclusion – most Western species tend to be more responsive to late spring and early summer moisture availability than to temperature.

I would like to use two quotes that Willis E noted in post #87 from the Wegman report and in turn use them to make some points of my own about the engagement at this blog with experts and practitioners in the associated fields of climatology. The quotes are as follows:

Because of complex feedback mechanisms involved in climate dynamics, it is unlikely that the temperature records and the data derived from the proxies can be adequately modeled with a simple temperature signal with superimposed noise. We believe that there has not been a serious investigation to model the underlying process structures nor to model the present instrumented temperature record with sophisticated process models.

..With clinical trials for drugs and devices to be approved for human use by the FDA, review and consultation with statisticians is expected. Indeed, it is standard practice to include statisticians in the application-for-approval process. We judge this to be a good policy when public health and also when substantial amounts of monies are involved, for example, when there are major policy decisions to be made based on statistical assessments. In such cases, evaluation by statisticians should be standard practice. This evaluation phase should be a mandatory part of all grant applications and funded accordingly.

I have thought long and hard about why those of us at this blog who are obviously most concerned about climate policy decision making, and with it its potentially huge costs and repercussions, have apparent difficulty in engaging experts in the fields whose work is being used to make that policy. The problem with recent attempts at this blog to engage dendroclimatologists (dendros) may exemplify that communications gap. The dendros do their experimental work, in my view, with the idea that it is limited in reconstructing past temperatures, but that it can be improved with more study, has inherent potential, and has the advantage of a built-in dating system. Pre-manipulation of the data and methodologies is second nature to many scientists in fields like the ones the dendros work in. I suspect that some are aware of the statistical difficulties this presents and a number are not, or at least not sufficiently so.

The individual dendros can answer for their specialized niches in the science, but it would appear that we encumber them with the baggage of other dendros and particularly non-dendros like Mann who have used their results and manipulated them with their choice of methodologies which are not necessarily those of the dendros. Further down the line we have advocates and media people who further manipulate the results to their own ends and the baggage here, obviously, cannot be attributed to the dendros.

I would agree that the best shot at adding to our information base, at least in the case of the dendros, is to allow the questions to be filtered through Steve M and other individuals of his choosing. I think many of us participants at this blog see the dendro issues through the eyes of potential recipients of the resulting climate policies and this often is at odds with how the dendros view their own work and results of that work. I would have low expectations of gaining much new information and view the process as a best effort attempt. As a one on one blog effort and not a review of the existing literature, I suspect we would be more interested in how individual dendros view their field and the results coming out of it.

My main questions would be as noted above from the Wegman report: (1) what kinds of efforts are being made to construct the more complex models evidently needed to more fully explain the effects of climate on tree rings, and (2) how much appreciation do the dendros have for the statistical effects of data mining and model overfitting, and for the potential critical value of out-of-sample results?

In #88, Pete, a quick response to just the first part of your response to my comment: Thank you for the links. I’ve not had a chance to explore these at all, but just quick browses of a few suggest to me that these are mainly databases maintained for individual projects, not public data repositories (and I may be completely incorrect in that observation, but in my quick perusal I did not see any sort of submission page, e.g., similar to that of the ITRDB).

No, they are not all individual projects, nor do I understand what you mean by “database.” Anyone can download the data. This is what I have in mind when I use the term “public archive.” Some are individual projects, but this is a NASA requirement; PIs must describe how they will publicly distribute the data as part of the proposal. “Fund me so I can collect my data” doesn’t fly with NASA (excuse the pun). Since you have the impression that TRACE is special, I won’t use that as an example. Let’s have a look at Michelson Doppler Imager data from the SOHO spacecraft. Here is the entry-point to the archive. Anyone can request and stage data for download. Isn’t this public? Let’s look at HEASARC, which is hardly an “individual project.”

This document lists general guidelines to archive X-ray, Gamma-ray and EUV data at the HEASARC. These guidelines are to ensure and maintain the capability of the multi-mission approach of the HEASARC archive as described in the HEASARC charter. Project specific needs have to be agreed on individual cases.
The HEASARC’s general policy is that archival data to be effective must include, in addition to the data, documentation, software and calibration data. The lack of any of these components prevents the full exploitation of the archival data. Every NASA Astrophysics project usually produces a Project Data Management Plan (PDMP) that describes how their data will be analyzed and archived. NASA HQ will ask HEASARC to review this plan, and concur that it is acceptable. The projects are encouraged to work with the HEASARC in writing their PDMP.

In other words, data archiving is integral to funding an experiment. Steve M. has mentioned that he places significant blame on NSF for not enforcing their own archiving policies. Perhaps this is part of the issue. Seems to me that when you submit a proposal to NSF you could request funding for the archiving procedures and/or storage requirements if the archive isn’t free. Anyhow, I don’t see how NSF’s lack of enforcement absolves a scientist of the archiving responsibility that they have implicitly agreed to, or lets them ignore AGU’s archiving policy, AGU’s offer to provide temporary archiving, and AGU’s offer of advocacy to help find an appropriate archive (or expand the scope of an existing one) when they publish in AGU journals.
PB:

Just one quick quote from the TRACE site:

An Open Letter to the Solar Physics Community on TRACE DATA Access One feature of the TRACE mission is the open data policy. The purpose of this letter is to clarify this policy and to encourage a dialog on how to insure its fair and reasonable implementation.
Our fundamental data policy is simple: All TRACE data will be equally available to everyone on the World Wide Web. An open data policy is an experiment for Solar Physics space experiments. There will be misunderstandings that cause varying degrees of unhappiness. There is no perfect policy and it is naive to expect that deviations from established practice will not result in some problems

Again this is just a very quick perusal on my part (5 minutes) and in that time I came across this statement that this is an “experiment” in open data. So perhaps many disciplines are in the same boat with regards to making sure data are accessible? I think you’ve read too much into “An open data policy is an experiment for Solar Physics space experiments.” The “experiment” in this case was that there was essentially no delay between observation and staging on the WWW. Here is a more informative description:

TRACE will have a completely open data policy: data are available to anyone as soon as they are entered into our database no more than days after the observations. On the other hand, TRACE is not an observatory mission, and the scientific priorities and the observing program will be highly structured in advance. One of the purposes of this workshop is to allow input from the community into the development of the observing program.

Previous missions had a significant delay between measurement and staging. See for example the YOHKOH data policy 1993 versus 2001.
PB:

And BTW, the quote of mine you highlight above was by no means “cherry-picking” (appears to be one of the favorite phrases on this blog, cherry-picking this, cherry-picking that), it was completely and utterly random (as random as Google can be), it was, as I already stated, the first real link I came to in a Google search. How in the world is that “cherry-picking”?

You drew conclusions based on one Google search that superficially agreed with your preconceived notions.
Wikipedia:

In the literal case of harvesting cherries, or any other fruit, the picker would be expected to only select the ripest and healthiest fruits. An observer who only sees the selected fruit may thus wrongly conclude that most, or even all, of the fruit is in such good condition [or bad in this case]. Thus, cherry picking is used metaphorically to indicate the act of pointing at individual cases that seem to confirm a particular position, while ignoring a significant portion of related cases that may contradict that position.

I’ve tried to provide links and references to some other cherries for you. You could use this as an opportunity to learn how archiving is done in other fields: how it is funded, maintained, etc. If you have the interest, maybe you should apply to NSF (or wherever) for support for an archive. This might seem unglamorous, but few, other than archive maintainers, achieve an overall grasp of the total archive, so there are some scientific advantages to being an archive maintainer… especially if you can hire a post-doc with the money to do most of the grunt work. 😉

Again this is just a very quick perusal on my part (5 minutes) and in that time I came across this statement that this is an “experiment” in open data. So perhaps many disciplines are in the same boat with regards to making sure data are accessible?

Was not blockquoted above and belongs to Peter Brown with my response immediately following it.

Government Agencies that publish anything based on data that the dendro’s haven’t archived etc, are breaking the Law, simple as that.

This is my understanding, also. They should be sued; the “environmental” NGOs do it all the time and they virtually always win. Maybe we can get Greenpeace to go after the purveyors of tree ring information (a try at humor). Seriously, some of the conservative think-tanks could use the courts to force these agencies to follow the law.

“It is the understanding and inferences supplied by the scientist that funding agencies are interested in, not her or his raw data.”

I don’t think this is strictly true. There are a significant number of cases where there has been scientific fraud, and examination of the raw data proves the fraud; just go look at the Office of Research Integrity for examples, which are mostly in the biomedical arena.

Like it or not, we all have an interest in ensuring the integrity of science. Even without this, there are many examples where inspection of the raw data can reveal issues short of fraud.

In the UK, the research councils mandate archiving of notebooks, etc., for 7 years. That sounds like raw data to me.
cheers
per

“However, as I mentioned in the previous post, response function analysis shows that there is also a much weaker but still significant inverse response to summer temperatures, explaining perhaps only as much as 10% of the remaining variance”

I think this isn’t accurate. What you have, from the statistics, is an association between summer temperatures and growth. If you wish to posit that the temperature causes the tree growth, that is an assumption which does not necessarily follow from the fact of the association. One can imagine pairs of data sets with no causal link between them, or where the association only holds within a particular stretch of the time series.

I don’t want to lecture on statistics, but the more things you try correlations with (annual temps, summer temps, winter temps, mid-day temps, night-time temps,…), the more likely you are to get a spurious correlation.
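A toy simulation (mine, not from the thread; the variable names are hypothetical) makes the multiple-comparisons point concrete: correlate one pure-noise “ring width” series against many pure-noise “climate” predictors, and the best correlation found keeps growing with the number of predictors tried, despite there being no real signal anywhere.

```python
# Illustration of spurious correlation from trying many predictors.
# Everything here is random noise: any "good" correlation is an artifact.
import numpy as np

rng = np.random.default_rng(0)
n_years = 100
ring_width = rng.standard_normal(n_years)         # noise, no climate signal in it

predictors = rng.standard_normal((200, n_years))  # 200 noise "climate" series
abs_r = np.array([abs(np.corrcoef(p, ring_width)[0, 1]) for p in predictors])

# Best |r| found after trying the first k predictors: non-decreasing in k,
# so the more temperature variants you try, the better your "best" result looks.
best_so_far = np.maximum.accumulate(abs_r)
for k in (1, 10, 200):
    print(f"best |r| after trying {k:3d} predictors: {best_so_far[k - 1]:.2f}")
```

The running maximum can only go up, which is the whole problem: a search over annual, summer, winter, mid-day, and night-time temperatures is guaranteed to surface its own best fluke.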

Now, whether one calls the process of chronology selection for such broad-scale studies “cherry picking” or “data snooping” I suppose would be in respect to one’s preconceptions and biases against the entire process in the first place. All scientific studies begin with premises that guide model and data selection. So in answer to the second part of your question above, the premise of the question is incorrect; there is no need for “protection” since those terms are irrelevant to the process of development of explicit site selection criteria.

I find it difficult to understand your answer, except on the assumption that you misunderstand what is at issue. Willis’ question was whether you can identify temperature-responsive sites by explicit prior selection criteria, without looking at the data itself; or whether the “explicit site-selection criteria are just an irrelevant smokescreen,” and you look at the data and choose (“cherry-pick”) the data that you like.

This is essentially a statistical problem. If we take many pairs of random variables, correlate them, and throw away all those with poor correlation, we are left with those with good correlation; but this tells us nothing. What is the difference between the theoretical process I describe above and what is at issue in determining “temperature-sensitivity” in tree stands? I am failing to find in your writing any theoretical basis for a difference; and it would appear that the trees are not selected in advance according to explicit criteria.
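The screening process described above can be sketched directly (my toy numbers, not real chronologies; all names are hypothetical): generate many pure-noise series, keep only those that correlate well with a noise “temperature” target over the first half of the record, then check the survivors over the second half.

```python
# Sketch of screening/selection bias: series that pass a correlation
# screen in a calibration period show no skill in a verification period,
# because the calibration correlation was selected, not earned.
import numpy as np

rng = np.random.default_rng(1)
n_years, half = 100, 50
temperature = rng.standard_normal(n_years)        # noise "temperature" target
series = rng.standard_normal((500, n_years))      # 500 noise "chronologies"

cal_r = np.array([np.corrcoef(s[:half], temperature[:half])[0, 1] for s in series])
passed = series[cal_r > 0.25]                     # screened "temperature-sensitive" series
ver_r = np.array([np.corrcoef(s[half:], temperature[half:])[0, 1] for s in passed])

print(f"{len(passed)} of 500 series pass screening")
print(f"mean calibration r of passers:  {cal_r[cal_r > 0.25].mean():.2f}")
print(f"mean verification r of passers: {ver_r.mean():.2f}")  # typically near 0
```

By construction the passers look strongly “temperature-sensitive” in calibration, yet out of sample the relationship evaporates, which is exactly why selection by prior explicit criteria differs from selection by inspecting the correlations.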

Steve, I’m having a little bit of a hard time with that too. The Cedar Breaks site in particular is very close to where I live and work. I did quite a bit of work with Picea engelmannii in the stands just to the east of Cedar Breaks about 10 years ago (major D. rufipennis outbreak). Some of the stands I inventoried were within half a mile of the rim, but I can’t imagine any of those trees giving a good temperature signal. Most of the spruce within Cedar Breaks itself (IIRC) grow either along the rim, which slopes away to the east, or on the steep slopes of the Breaks, which fall away to the west with some south and north exposures; the spruce tend to grow on the north-facing slopes of the Breaks where soil conditions are cooler and more mesic. All of the potential growing sites for spruce would be subject to periodic droughts. Winter precipitation in that area, which is critical for earlywood development, can vary widely from year to year.

RE: #114 – I think what we are seeing is an issue where Eastern dendro people don’t have much intimate familiarity with our generally moisture-starved Western climes, and vice versa. It has resulted in a mishmash of misunderstanding. When you throw together a bunch of sites, some in Eastern and coastal Northwestern mega-humid zones and others in the many “upland forest islands” of the intermountain West, none of which are by any stretch of the imagination “humid,” there is trouble a-brewin’. No way you’ll get a meaningful signal of any kind from any such compilation. No way, no how. Even if you assume “teleconnections.”

One Trackback

[…] Willis asked him a number of excellent questions, to which Brown gave polite though mostly unresponsive answers (see passim through the post.) When Willis asked him about the use of “novel untested” statistical methods, Brown replied, and, when the obvious example of MBH was brought up, we heard no more from him on the matter. I cannot answer this question, I am not a statistician. I can say that I think your question is mostly a strawman; I cannot think of a single paper that I have read that used what I imagine you might refer to as a “novel, untested” method without providing justification for its use. Often this justification comes in the form of reference to existing literature, but it is always there. […]