Some New Tree Rings in Alberta

Eight new tree ring measurement data sets from northern Alberta, collected by Meko, were archived this week at WDCP. The sites are around 58N, 111W, well to the northeast of the Jasper site (52N, 117W), which is used in nearly all the multiproxy studies. I did a “standard” type chronology, fitting negative exponential curves to each core where they fit and a horizontal straight line where they didn’t. Here are my calculations for the 8 sites (the authors did not archive chronologies, only measurements).
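For readers unfamiliar with the “standard” chronology method described above, here is a minimal numpy sketch. It is not Steve’s actual code: the function names are illustrative, the negative exponential is fit by log-linear least squares rather than nonlinear fitting, and cores are assumed to share a start year for simplicity.

```python
import numpy as np

def detrend_core(widths):
    """Fit a negative exponential w = a*exp(-b*t) to one core's ring
    widths via log-linear least squares; if the fitted curve is not
    declining (b <= 0), fall back to a horizontal straight line at the
    core mean.  Returns the ring-width index: measured / fitted."""
    w = np.asarray(widths, dtype=float)
    t = np.arange(len(w))
    # log-linear fit: log w = log a - b*t
    slope, intercept = np.polyfit(t, np.log(w), 1)
    if slope < 0:  # declining curve: accept the negative exponential
        fitted = np.exp(intercept + slope * t)
    else:          # otherwise a horizontal straight line
        fitted = np.full_like(w, w.mean())
    return w / fitted

def build_chronology(cores):
    """Average the indices of all cores year by year, using only the
    cores that cover each year (cores may differ in length).
    Simplifying assumption: all cores start in the same year."""
    n_years = max(len(c) for c in cores)
    sums = np.zeros(n_years)
    counts = np.zeros(n_years)
    for c in cores:
        idx = detrend_core(c)
        sums[:len(idx)] += idx
        counts[:len(idx)] += 1
    return sums / counts
```

The index (rather than the raw width) is what gets averaged into the site chronology, so that the juvenile-growth decline of each tree is removed before cross-core averaging.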

Figure 1. 8 New Sites archived in June 2006 at WDCP

Here is the average of the 8 series.

To the best of my knowledge, these ring widths were not collected as an industry-funded disinformation tactic or with any specific intent to discredit the Hockey Team.

21 Comments

Discredit the Hockey Team?! Surely not. All one needs to do is to recognize that everything after about 1970 consists of non-response due to anthropogenic pollution effects — umm, wasn’t there a big NH sulfate aerosol effect starting around then? — and truncate the series there. That done, a very fine steeply positive AGW trend is evident after 1950, which everyone just knows is the appropriate ‘signal.’ Really, Steve, I think you ought to rush right into print with it. Your reward will be a warm welcome over at RealClimate.

By the way, how does one weight the average when the time-intervals are not identical?

Steve: These are fascinating time series. If one examined only 1850-2000, it would appear that the character of the variability — magnitude and persistence — changed dramatically around 1925. However, looking back further in time, the 1925-2000 period seems quite similar to the 1725-1800 period. Interesting.

Steve,
what do you mean by this statement: “To the best of my knowledge, these ring widths were not collected as an industry-funded disinformation tactic or with any specific intent to discredit the Hockey Team”?

A. I would think averaging is pretty simple (even with different numbers of series at different times). Just add the values of the series you have at any particular time and divide by the number of series present.

TCO, you can get artefacts if you do that. The number of series suddenly jumps from one year to the next, affecting the weightings, and if a series starts or ends on a particularly high or low value it can produce a nasty spike.

I don’t see (m)any artefacts in Steve’s average so I’m not sure if he did something to avoid this or whether it just happens to not be obvious.

Personally if I were averaging a number of time series which start and end at different points I would blur the edges of each set by a number of years by, say, introducing it at 10% per year. So if you have two series, one (a) starts at 1700 and one (b) at 1800, at 1799 I would have x = a, at 1800 I would have x = (a + (b*10%))/110%, at 1801 I would have x = (a + (b*20%))/120%, etc. until at 1809 you are up to a normal average. That should help limit the outliers created by the introduction.

Not being a statistician I don’t know whether this is a bad thing to do, but I think it makes sense.
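The ramp-in idea described above can be sketched as a weighted average, where a series enters at weight 1/ramp in its first year and reaches full weight after `ramp` years. This is a hypothetical implementation of the commenter’s suggestion, not code from the post; the function name and data layout (a dict mapping start year to values) are my own.

```python
import numpy as np

def ramp_average(series, ramp=10):
    """Weighted year-by-year average where each series is phased in
    over `ramp` years: weight 1/ramp in its first year, 2/ramp in its
    second, ..., full weight from year `ramp` onward.  `series` maps
    each series' start year to a 1-D array of its values."""
    first = min(series)
    last = max(s + len(v) for s, v in series.items())
    years = np.arange(first, last)
    num = np.zeros(len(years))
    den = np.zeros(len(years))
    for start, vals in series.items():
        for i, v in enumerate(vals):
            w = min(1.0, (i + 1) / ramp)   # ramp-in weight
            j = start + i - first
            num[j] += w * v
            den[j] += w
    return years, num / den
```

With two series a (from 1700) and b (from 1800), the 1800 value comes out as (a + 0.1·b)/1.1, matching the worked example above, and by 1809 the result is the ordinary two-series mean.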

If I were going to average non-equal time-segments (seat-of-the-pants answer to my own question here), I’d first normalize each of them to the longest section common to them all. Then I’d truncate all the series to be commensurate with the shortest one, and average them. I’d align the remaining n-1 series, again truncate them to the shortest section, and average them. I’d move through the data sets that way until all the sections were averaged, each group averaged according to the number of series bits it contained. There’d be one lonely little piece left at the end, unless the two last series were of identical length.

Then I’d weight each average by the inverse of the fractional number of series bits it contained, relative to the total, so that the noise levels reflected the amount of data actually present. Finally, I’d splice them all back together into one series. I haven’t done this, and recognize there may be a problem with the values of the averages at the splice-points. Maybe it would be necessary (but perhaps not proper) to adjust each section average so that the respective splice-points had the same absolute value. But if the method — or its like with adjusted splice-points — worked, the noise at each point should pretty faithfully reflect the quality of the data. Any model fitted through the data ought to then produce an r-value that was a pretty fair measure of the net uncertainty.
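A simple way to see the segment structure this scheme relies on is to break the time axis into runs of years where the set of covering series stays constant, averaging within each run and recording the series count. This is only an illustrative sketch of that first step (names and data layout are mine, and it assumes every year is covered by at least one series); it does not implement the normalization or splice-point adjustment discussed above.

```python
def segment_averages(series):
    """Split the time axis into segments with a constant number of
    covering series; within each segment record the per-year means and
    the series count n, so noise levels can be judged per segment.
    `series` maps each series' start year to a list of values."""
    first = min(series)
    last = max(s + len(v) for s, v in series.items())
    segments = []
    seg = None
    for year in range(first, last):
        vals = [v[year - s] for s, v in series.items()
                if 0 <= year - s < len(v)]
        n = len(vals)              # assumes n >= 1 for every year
        mean = sum(vals) / n
        if seg is not None and seg["n"] == n:
            seg["years"].append(year)
            seg["means"].append(mean)
        else:
            seg = {"n": n, "years": [year], "means": [mean]}
            segments.append(seg)
    return segments
```

Each segment’s count n is exactly the “fractional number of series bits” the weighting step above would use, and the segment boundaries are the splice-points where value adjustments might be needed.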

Steve, I know this is rather basic dendro, but I’m interested in how the data look as we go back to the very basics and strip off adjustments: is 0.43 good in comparison to other field work? What is the normal expectation? When people look at error estimates, do they ever forget about this part of the error (core-to-core variation) and think only about year-to-year?

I haven’t looked at this closely, so I may well be missing something, but isn’t 8 tree ring sequences a small number? In other words, wouldn’t one expect the noise to dominate in which case this doesn’t really say much of anything?

#17. These are 8 sites. If you’re unhappy about conclusions drawn from 8 sites (and this is reasonable enough), then you will reject the “other” Hockey Team studies right away. For example, Jones et al. has only 3 sites in the 11th century; D’Arrigo et al. 2006 only 6 sites in the MWP; Briffa 2000 only 4 sites in the MWP. Now the sites above don’t go back to the MWP. But they don’t have the big Yamal spurt either.

In fact, if you don’t like conclusions affected by one site, you can’t accept any study which gets different MWP-modern levels depending on whether it uses Yamal or the Polar Urals update, or whether bristlecones are in or out.

Looking at the 8 time series again, they are surprisingly similar to each other and to the overall average. On the other hand, they don’t look anything like the charts of global mean temperature we all know if not love. But I guess you would expect them to correspond more closely to the weather at 58N, 111W, which is possibly fairly different from the global average. What’s the nearest weather station with a reasonable history? How similar is the measured temperature history there to the time series you compute?

These are white spruce from the Athabasca River delta. Stockton (1973) claimed that they were limited primarily by water levels of the Athabasca, which are indeed highly variable. You can’t put peaches in cherry pie.

#20. MBH uses any sort of tree ring record – precipitation or otherwise. This was one of the major “innovations” of MBH. Their theory was that these could give information on “climate fields”. Their statistical method was simply data mining.

Where the data mining method is vulnerable is if any of the series have spurious correlations, such as the bristlecones. The fallback position on bristlecones in Wahl and Ammann 2006 is that they may have a correlation to precipitation or some “climate field”, although they don’t try to prove this.