Kaufman's "Classical" Log Regression

I observed yesterday that I had been unable to replicate the archived version of Kaufman’s Hallet Lake series – something that I thought was due to a change in the archived version (since the NCDC archive noted that a new version had been archived in Nov 2008). This turns out not to be what happened.

Kaufman archived BSi (%) at NCDC and I innocently assumed that this was what was used in Kaufman et al 2009. It appears that, instead of using the archived BSi%, Kaufman used (a version of) the temperature reconstruction from BSi flux developed using a “classical logarithmic” regression by one of Kaufman’s students.

I thought that CA readers would be intrigued by the “classical logarithmic” regression. The thesis of Kaufman’s student says:

Quantitative summer temperature reconstruction: BSi flux increases exponentially with temperature over the calibration period (figure 19). A classical logarithmic regression was used to develop a transfer function.

The “classical” logarithmic function is then shown in full as follows (url pdf page 34):

I don’t feel quite so bad for not being able to figure out this “classic” functional form from the information in Kaufman et al 2009 and NCDC.

I tested the above formula against the archived temperatures in McKay A-6 and replicated these temperatures almost exactly. The reconstruction uses BSi flux, not BSi %, BSi flux being (conventionally) defined in the McKay thesis as follows:

BSi concentrations (%) were converted to flux (mg cm-2 yr-1) by multiplying %BSi by total flux (the product of bulk density and sedimentation rate).
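For concreteness, the conversion can be sketched in a few lines of Python. The bulk density and sedimentation rate values below are illustrative placeholders only, since those columns are not part of what is discussed here:

```python
# Convert BSi concentration (%) to BSi flux (mg cm^-2 yr^-1) per the
# McKay thesis definition: flux = %BSi x total flux, where total flux
# is the product of bulk density (g cm^-3) and sedimentation rate
# (cm yr^-1).  Input values below are made-up illustrations.

def bsi_flux(bsi_pct, bulk_density_g_cm3, sed_rate_cm_yr):
    """BSi flux in mg cm^-2 yr^-1."""
    total_flux_mg = bulk_density_g_cm3 * sed_rate_cm_yr * 1000.0  # g -> mg
    return (bsi_pct / 100.0) * total_flux_mg

# e.g. 10% BSi, 0.5 g/cm^3, 0.02 cm/yr -> 1.0 mg cm^-2 yr^-1
print(bsi_flux(10.0, 0.5, 0.02))
```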

The use of 1.000626 as a base in the logarithm seems a little exotic and it’s unclear why this unusual nomenclature was used. The form itself can be somewhat simplified (with one fewer free parameter) to the following (still not the most elegant functional form in the world) – the estimation of the parameters remains unclear.

Temperature = 12.79*( 0.765718 +log(BSi_flux))^(1/3)-1.082

In other words, temperature is said to vary with the cube root of the (shifted) logarithm of BSi flux – the product of flux_all and BSi%.
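Assuming the log in the simplified form is a natural log (which puts the domain edge at flux = exp(−0.765718) ≈ 0.465, where T = −1.082), the formula is a one-liner to evaluate:

```python
import math

def mckay_temp(bsi_flux):
    """Simplified form of the transfer function (natural log assumed).
    Undefined below flux = exp(-0.765718) ~ 0.465, where the cube-root
    term hits zero and T = -1.082."""
    return 12.79 * (0.765718 + math.log(bsi_flux)) ** (1.0 / 3.0) - 1.082

print(round(math.exp(-0.765718), 3))   # domain edge: 0.465
print(round(mckay_temp(1.0), 2))       # 10.62
print(round(mckay_temp(2.0), 2))       # 13.42
```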

They go on to say:

To minimize the effects of point-to-point variability, and to emphasize longer-term changes in temperature, a 50-year Gaussian-weighted low-pass filter was applied to the high-resolution record of BSi flux before using the transfer function to reconstruct summer temperatures for the past 2 kyr. The BSi flux-inferred summer temperatures for the past 2 kyr range from 9-14°C; the average of the unfiltered data for the past 2 kyr is 10.6°C (10.7°C for the filtered data), nearly 2°C cooler than the modern average temperature (12.4°C) (figure 21); here defined as 1976-2005, the period of continuous measurement during the current AL regime. Comparisons of the observed and predicted values, along with the residuals show that the transfer function tends to slightly underestimate the highest and lowest temperatures (figure 19).
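A Gaussian-weighted low-pass filter is straightforward to sketch in pure Python. The sigma choice and the edge handling (renormalizing truncated kernels) here are assumptions; the thesis quote does not spell out the kernel parameters:

```python
import math

def gaussian_filter(series, window=50, sigma=None):
    """Gaussian-weighted moving average over a window of samples.
    sigma defaults to window/6 so the kernel tails off near the edges;
    near the series ends, the truncated kernel is renormalized."""
    if sigma is None:
        sigma = window / 6.0
    half = window // 2
    offsets = list(range(-half, half + 1))
    weights = [math.exp(-0.5 * (k / sigma) ** 2) for k in offsets]
    out = []
    for i in range(len(series)):
        num = den = 0.0
        for k, w in zip(offsets, weights):
            j = i + k
            if 0 <= j < len(series):
                num += w * series[j]
                den += w
        out.append(num / den)
    return out

# A constant series passes through unchanged:
print(gaussian_filter([2.0] * 100)[50])
```

Note the order of operations in the quote: the flux is filtered *before* the nonlinear transfer function is applied, which is not the same as transforming first and filtering afterwards.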

Earlier today, I thought that I’d managed to replicate the Kaufman 2009 version from the temperature recon in the McKay thesis, but as of now, I’m still stumped as to the provenance of the Kaufman version. Here’s what I get, kaufmanizing the McKay temperature reconstruction:

The NCDC version is the same as the thesis version, but says that a new version was provided in Nov 2008. Perhaps Kaufman used an old version of this series. Another Team mystery.

BSi flux and BSi-flux reconstructed temperature are shown against depth in the McKay Thesis Appendix A-6, with temperatures expurgated below 192 cm. (These would be high temperatures in the early Holocene according to the McKay formula.) The BSi% measurements can be matched to the BSi% measurements at NCDC for all but three years (three high years expurgated from the NCDC record.) The NCDC record is indexed by year rather than by depth. The two records can be spliced to yield an age-depth relationship that (annoyingly) is not otherwise archived.

I don’t see a definition for this in the paper. I get lots of Google hits distinguishing it from “density” and “concentrations/%”, but no obvious definition either. Considering the terminology (“flux”) and maybe the units (mg per cm-squared per year), is this a measurement of the change in BSi density/concentration/% from year to year? Or does “flux” mean we are talking about flow into a pre-existing medium (i.e. previous years’ sediment)? Or what?

Well, Mr. McIntyre, I continue to be stunned by (and grateful for) all your probings, or audits, here, and this post is another one. Although I consider myself chief among your detractors for your forays into “doubles badminton” (or whatever) and for those occasional irritating arcanities, I value your Feynmanesque scientific and personal integrity. I improve myself as a result. I could keep on extolling your efforts here, but I won’t.

I consider myself chief among your detractors for your forays into “doubles badminton”

squash – puuh-leeze. I’m feeling a little grumpy today as I cut myself for the first time in about 40 years playing squash last night. I stumbled and banged into a wall and got cut over my eye from the squash glasses. I had to go to the Emerg to get repaired since I cut myself at about 9.30 pm. Cuts like this are absolutely at the bottom of the triage and I finally got home about 1.40 am. I chatted for a while with a 6’2″ lady from southern Sudan who was sitting beside me – the same tribe as Manute Bol.

I don’t normally stumble like this but I’ve got a sore groin injury that’s been bugging me for about 6 months and making me feel quite sour.

Steve, it’s called “Classical” logarithmic regression in climatology because it dates all the way back to 2007, and because they have not yet moved on to another method. The signal that they are going to move on to a new method is that the old method gets described in print somewhere.

Good lord! I used to come across equation fits like this on occasion when I was studying heat transfer. I thought to myself then that this was some poor sop’s thesis. It was one reason I decided not to go into the field (seriously). The equations used to report “wind-chill” factor look similarly screwy, and were probably obtained in basically the same fashion.

The fit is done on the basis of 26 data points. How many degrees of freedom are there really in this formula? I’m not talking about just the constants, but how many ways such an equation could possibly be structured. I’m quite sure the equation comes from some demented curve-fitting software; I wonder how many possibilities it considers for the fit.

Sorry, I haven’t checked the reference in the post in detail but does it make any comments on the dimensions of the constants and how they result in a degC output? Is this simply as Curt suggests a “curve fit”? Must say it looks a classic to me.

1.000624, not 1?
-0.99, not -1?
Glad for the precision on these parameters. Inspires tremendous confidence.
.
Seriously, if 1 to the power of anything is 1, why on earth would anyone use a base of something very near 1? What a bizarre formulation. I suppose you antilog each side to recover x^temperature on the LHS, with x as parameter to be estimated. But what on earth would justify that formula and not, say, exp(temp) and leave the other parameters free to explain that which is attributed to the base.
.
That’s what happens when you do math while freebasing.

There’s no closed form for that equation, or whatever functions generated those other values? Or is it that there is a closed form after all, and the values are just “good enough for government work” substitutions?

It’s a monotonic highly curved function that starts off very steep at T = -1.08 when flux = .46, and flattens out for high values of flux. (Below flux = .46, T is not a real number.)

Looking at Fig. 19 in McKay’s thesis at the link Steve gives in the post, this curve actually doesn’t provide enough curvature to fit the data in the observed calibration range.

It’s not clear which way he fit the function — with flux responding to T or T responding to flux. However, a simple quadratic would do a better job of fitting flux to T. Extrapolating that below its minimum at around T = 11°C might be a problem, however.

McKay’s figure 19A shows a lot of serial correlation in the residuals, measured either vertically or horizontally. It’s not clear if he took this into account. In any event, he gives no standard errors for his “regression” coefficients. Somehow he comes up with SE’s for his reconstruction, however.

Re: Layman Lurker (#45),
Using CO2 as a covariate is not necessarily “controlling” for the actual effect of CO2 fertilization. Show me that there is an effect, then we can talk about how to control for it. I wouldn’t call this “homework”. Looks more like hand-waving to me.

Re: Layman Lurker (#47),
Thanks for teaching me google. “can be affected” is not the same thing as “is affected”.
.
Arguing “burden of proof” is fruitless. So you morally oblige THEM to offer counter-proof. And they morally oblige YOU to offer proof. You know the only way past this deadlock is for one of the camps to break down and do the study. In the meantime, whoever’s published the most gets policymaker’s ear.

I’m not arguing with you. I am not trying to leverage the possible effect of CO2 fertilization to anything beyond this discussion. IMHO the fact that BSi flux “can be affected” by CO2, means that this relationship needs to be understood, and if necessary accounted for, before you can define a legitimate relationship between flux and temp.

What Mackay says is that BSI flux varies exponentially with temperature, which I take to mean that the relationship is of the form BSIflux = a.exp(bT) where a and b are to be determined by a regression analysis. Treating BSI as the dependent variable, and T as the independent, which seems to be implied by the axes in Mackay’s figure 6, I get a=0.0733, b=0.2447, with a curve that by eye looks very much like the fitted curve in Mackay’s Figure 19, but with an Rsquared of 0.654, not 0.69. This equates to T = 4.09 ln(BSIflux) + 10.68.

If I treat BSI as the independent variable, I get T = 2.6735 ln(BSIflux) + 11.001, also with an Rsquared of 0.654, which is not that similar.
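The two fit directions give different lines because ordinary least squares minimizes residuals only in the dependent variable. A sketch with placeholder calibration data (McKay’s Table 8 values are not reproduced here):

```python
import math

def ols(x, y):
    """Ordinary least squares slope and intercept for y = m*x + c."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return m, my - m * mx

# Placeholder calibration data (NOT McKay's Table 8 -- illustrative only):
T    = [10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]
flux = [0.80, 0.95, 1.10, 1.35, 1.60, 1.95, 2.40]

# Direction 1: ln(flux) = ln(a) + b*T, i.e. flux = a*exp(b*T).
# Inverting this fit afterwards gives T = (1/b)*ln(flux) - ln(a)/b.
b, ln_a = ols(T, [math.log(f) for f in flux])
print("flux on T:  a =", round(math.exp(ln_a), 4), " b =", round(b, 4))

# Direction 2: regress T on ln(flux) directly -- a shallower line
# (slope attenuated by r^2 relative to the inverted Direction 1 fit).
m, c = ols([math.log(f) for f in flux], T)
print("T on flux:  slope =", round(m, 4), " intercept =", round(c, 4))
```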

In any case, the formula in the paper, with the cube root of a logarithm to a strange base, is hard to fathom, as is the eccentric use of the term “transfer function” for a curve fit.

On further investigation, a linear regression of temperature against the cube root of log(BSIflux), which Mackay’s equation implies, gives an even worse fit, due to the fact that if flux < 1 the cube root can be even more negative, so that the departure from a straight line is even more pronounced.

Bizarrely expressed, but I think this is a log-log regression, of the form
log(T-T0)= a*log(BSi_flux)+b
with regression coefficients a=1/3, b=2.805. The only non-regression number to be explained is T0=-1.082C. That may be the freezing point of the lake water.

A log-log regression doesn’t work for me, and is in any case not consistent with Mackay’s comment on an exponential dependence of BSI flux on temperature.

My previous comment about cube roots is incorrect, in that if you divide the BSI flux by 0.465 before taking the log, you will get a positive result. If you divide by something other than 0.465, you will get a quite different result. Quite how these numbers are derived from the analysis defeats me, but then I’m not a climate scientist.

It’s at times like this that I’m reminded of Feynman’s comment: “And this is science?”

Re: Andrew (#32), The log-log form is just obtained by taking SM’s simplification a little further. It’s exactly equivalent (if I haven’t made an error). It’s not clear whether the coefficients were actually obtained by standard regression, but that would be the logical way.

The exponential statement seems to be an impression from Fig 19a, and it is indeed approximately true. (T-T0) varies from about 11 to 14, so its log still behaves fairly linearly with temperature over the range. BSi_flux, on the other hand, varies from about 0.8 to 2.4, so its log is fairly non-linear. So log(BSi_flux) appears as linear in T, which gives the apparent exponential behaviour.

It seems to me wrong to rely on Steve’s simplification in deriving a log-log form.

Mackay, to his credit, provided the raw(ish) data in his table 8. If you can derive a log-log relationship from that table, good luck to you. If you can derive Mackay’s formula, even better luck to you.

Well, you can fit a not totally unreasonable looking exponential curve over the calibration period. But is there any physical basis for an exponential dependence? And, exponents behaving as they do, what BSi flux would you expect if you calculate the value of an exponential fit at 20 degrees? Around 10 mg cm-2 yr-1 by my reckoning, or around four times the largest value in McKay’s table. Has that ever been observed? Who cares?? We’re climate scientists!! You’re not!!!
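The extrapolation arithmetic is easy to check with the exponential-fit coefficients quoted upthread (a = 0.0733, b = 0.2447):

```python
import math

# Exponential fit quoted in the thread: flux = a * exp(b * T)
a, b = 0.0733, 0.2447
print(round(a * math.exp(b * 20.0), 2))  # flux at T = 20 C: roughly 9.8
```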

Perhaps I am missing something here but surely this is the wrong way around:

To minimize the effects of point-to-point variability, and to emphasize longer-term changes in temperature, a 50-year Gaussian-weighted low-pass filter was applied to the high-resolution record of BSi flux before using the transfer function to reconstruct summer temperatures for the past 2 kyr.

Wouldn’t one apply the mapping of BSI flux to temperature and then filter the resulting temperatures to get a filtered temperature?

Filtering the flux and then applying the mapping seems odd.

To take it to its conclusion: averaging the flux and mapping the result is quite different from mapping and then averaging.

I realise the flux is probably an average over a short time period, presumably a year, but that I presume one has to live with.

It appears to me that the equation was fitted inversely. If you solve for BSI, you get (if the tex works right):

a = .465
b = 1.000624
c = .99
d = .914

Presumably the reference to “classical logarithmic regression” implies that the fit is done after taking the log of the equation.

I also have a quibble over Figure 19C from the McKay thesis. This is supposed to be a diagnostic plot of temperature residuals vs. temperatures meant as an evaluation of the regression process. The McKay plot uses Observed Temperatures rather than the more appropriate Predicted Temperatures for the horizontal variable in the plot. This is not a good choice since an observed value includes the residual as part of itself and this induces a “false” slope into the plot.

Re: RomanM (#36),
A four parameter model seems like overkill to me. “b” in particular. At this point I don’t care if the model is “physical” or not, and neither should anyone else. An empirical fit is fine – as long as it can be validated. The parameters probably vary among sample locations, and that’s fine too. As long as there is a real relationship with temperature that can be proven to be robust.

It actually looks like this is a three parameter model, since the b value can be subsumed into the exponent and then into c and d (note that Steve had reduced the four estimated values down to just three in his post):

where

c* = c*(ln(b))^(1/3) and d* = d*(ln(b))^(1/3)

The way that McKay wrote the equation is a trifle bizarre and unintuitive. I spent an hour trying to chase down the equation (referenced as Birks, HJB (1995) in the thesis), but some of the publications were unavailable through our university library electronic resources and varvologists seem even less descriptive than the team in setting out the details of their calibrations.

The fact that four “constants” are given suggests that for fitting the nonlinear regression model, a two step (recursive?) procedure could have been applied: an ad hoc linear transformation of the temperature along with a logarithmic linear regression on the cube of the transformed temperature.

To me, using a cubic suggests “volume” and logarithms imply “multiplicative” effects, but I really can’t ascribe any obvious real physical meaning to this model.

Re: RomanM (#39), As I said in (#31), I think it is just a 2 parameter log-log regression, with the third number being the freezing point of the lake water. The form quoted in the thesis is just a (strange) rearrangement of the result.

You’re pulling my leg, right? Do you really believe that they got a “regression coefficient” of exactly 1/3, not .333 … and that -1.082 is a “natural constant” equal to the freezing point of lake water? I have a good imagination, but not THAT good that I could believe all of these things for a moment.

My guess is that it is a “transfer function” used by someone else and it was selected as is for the thesis. But, hey, I’m waiting to be convinced.😉

Re: RomanM (#73), No kidding. All I’ve done is rearranged the formula, SteveM style, to a form where the coefficients could be obtained by regression on the logs. McKay says the formula comes from log regression. I believe this is what they did.

T=-1.082 is the temperature corresponding to zero BSi-flux, even in the original expression. Flux goes to zero in ice. The original formula makes no sense below -1.082. If they didn’t get that number from the measurable freezing point, they should have. The physical range is for T+1.082>0, so taking the log is reasonable.

I’m sure someone rounded the coefficient to 1/3, perhaps thinking that had some physical justification.

Re: Nick Stokes (#77), Oops, T=-1.082 doesn’t correspond to zero BSi-flux. But it does correspond to the point where the fractional power becomes zero, which is the limit of where it makes physical sense.

Re: romanm (#81),
No, it was the wine I had with dinner. I should have walked it backwards as you did, to check. It should be
log(T-T0) = a*log(.7657 + log(BSi_flux))+b.
So yes, there is a third parameter which couldn’t come from regression. Apologies. And it’s not exactly log-log. But I still think T0 makes sense as the freezing point.
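If the rearrangement is right, it should agree with Steve’s simplified form exactly, with a = 1/3 and (presumably) b = ln(12.79). A quick check, taking all logs as natural (an assumption):

```python
import math

def simplified(f):
    """Steve's simplified form from the post."""
    return 12.79 * (0.765718 + math.log(f)) ** (1.0 / 3.0) - 1.082

def rearranged(f, T0=-1.082, a=1.0 / 3.0, b=math.log(12.79)):
    """log(T - T0) = a*log(0.765718 + log(f)) + b, solved for T.
    Since exp(a*log(x) + b) = exp(b) * x**a, this is algebraically
    identical to the simplified form when exp(b) = 12.79."""
    return T0 + math.exp(a * math.log(0.765718 + math.log(f)) + b)

for f in (0.5, 1.0, 2.0):
    assert abs(simplified(f) - rearranged(f)) < 1e-9
print("forms agree")
```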

I really suspect that the model they are using (which I don’t at all view as intuitive) is the result of a previously cobbled-together ad hoc approach to modelling the data. During my previous consulting experiences, I have seen many cases where models with strange forms and estimated coefficients were passed wholesale into a new environment and used as is with the same coefficients as if they were gospel truth. Interpreting the coefficients was an impossibility.

During my previous consulting experiences, I have seen many cases where models with strange forms and estimated coefficients were passed wholesale into a new environment and used as is with the same coefficients as if they were gospel truth.

Before I read this comment, I was going to post on the very different tones and concerns one sees when reading the papers that originally describe a proxy’s possible relationship to climate factors, versus the end user who cobbles together varied and numerous proxies for a reconstruction. I see a tendency for many reservations and qualifications by the original reporters of proxies that are then ignored by the reconstruction authors or given only passing attention. What puzzles and bothers me is that the original proxy modelers frequently appear not that interested in publicizing their original concerns once a reconstruction paper is published.

At this point I don’t care if the model is “physical” or not, and neither should anyone else. An empirical fit is fine – as long as it can be validated. The parameters probably vary among sample locations, and that’s fine too. As long as there is a real relationship with temperature that can be proven to be robust.

They should definitely care. That “as long as it can be validated” is a major if. You need out-of-sample data to truly verify in the absence of a reasonable a priori physical model. Most model testers are also leery of the calibration and verification periods used by some modelers, since both periods are in-sample data. Who is going to publish a model that works in the calibration period but not in the verification period? The modelers obviously peek at the verification period – if they have not already used the entire period covering both calibration and verification and then divided it after the fact. Most do not even claim that the verification data was scrupulously withheld and not observed during model construction.

One, I suppose, could claim that the work is preparatory to “finding” a physical model, but then one would judge such work not ready for publication or if published call it what it is: a fitted model with in-sample data that has not been tested out-of-sample.

Sure it is. So focus on that and stop the whining about whether an equation is “physical” or not.

Bender, you need to reply with more details.

The ultimate reasoning behind a model is that it has a “physical” connection and that that connection can be finally demonstrated for a conjecture to take on the status of theory.

One can model without a prior physical rationale and “validate” that model with out-of-sample data – however long that might take into the future. Without a physical model, however, one can more easily statistically validate a spurious model out-of-sample, using the standard probability limits for significance, 5% of the time on average.

Actually I am not whining about a model needing physical underpinnings, that is too obvious, but instead I whine about your replies.

When it comes to hypothesis testing, I would assume that one has a physical model in mind to test.

Re: Kenneth Fritsch (#72),
I didn’t say YOU were whining. But now you ARE.
.
If I say any more than “sure it is” then I’m just “piling on”. There’s nothing more to say. I’m not convinced these things are a valid proxy – and I’ve already said as much – until someone does exactly what you said with out of sample testing etc. What more would you like me to say? Bears suck?

The ultimate reasoning behind a model is that it has a “physical” connection and that that connection can be finally demonstrated for a conjecture to take on the status of theory.

Don’t lecture ME! I just spent a dozen posts explaining a different, “physical” theory of declining radial growth over time, pointing out the conditions under which ring age might not be a valid proxy for tree ring width. This has implications for the most important ingredient in all these recons: the HS shaped conifers. And all I hear is piling on claptrap about “CO2 fertilization effects”. Why don’t you go pester Hu for “more details”.
.
Don’t lecture ME about the need for detail.
.
Bears suck.

Bender, please accept my apology. When you said the following, dumb old me took this as meaning you were downplaying the need for a physical model. Pardon my impudence for asking for a more definite and detailed statement on this matter.

At this point I don’t care if the model is “physical” or not, and neither should anyone else.

I have clearly learned my lesson and now to the following I must say I understand and with all the implications that statement entails.

In the labs where I’ve worked, use of that odd regression equation without justification, and the absence of a confidence region for its parameter values, would have implied inadequate supervision of the student. And, since I assume the thesis was accepted, inadequate examining of it too.

This kind of monotonic non-linear transform may fit the data well, but it has the drawback that when you try to extrapolate it to new points outside the calibration range, the extra parameters make the extrapolation even more uncertain than it would be with just a linear equation. It’s hard enough to calculate CI’s with a linear model, but even harder when you go non-linear. McKay did some sort of jackknife procedure to try to take this into account, but I’m not sure it was adequate.

A further problem is that when you take the transformed data, which has been transformed by fitting 4 parameters to make it have a linear relationship to temperature, and then pass it along to another study down the road (Kaufman’s) that just takes the transformed data as is, the second study will find a marvelously linear relationship to temperature, without having to add any extra parameters to make it fit, and therefore may be led to think the fit is better than it really is.

This “parameter mining” introduces the same type of size distortions as outright “cherry picking” of the series themselves, but is more subtle. Fitting 2 extra parameters and then not taking them into account is kind of like adding dummy variables targeting the 2 worst points of your original data so that they will now have 0 errors, but not including these dummy parameters when counting degrees of freedom.

I’m mostly with bender on this. While CO2 fertilization was posited as an explanation for Graybill bristlecone chronologies, in Pete Holzmann’s and my opinion, the pulse in Graybill chronologies is more likely due to wild 6+ sigma growth pulses after strip bark formation (and a bias to selecting such samples – explicitly stated by Graybill).

The connect to varvochronology is that it also throws up wild distributions with hugely fat tails. There’s a similar style to Graybill chronologies: throw a few 6-sigma series into a 20-series composite and, lo and behold, you get a stick.

Having said that, I’ve seen an article on Swiss varves connecting varve thickness to the number of tourist visits. Not an effect that seems applicable to Alaska.

Re: Steve McIntyre (#51),
Reading up on tree growth these days. You know how the practise of negative exponential detrending (background signal of narrower growth of outer rings) is defended in dendroclimatology? Age of cambium. But what if the causative variable is actually vertical distance from the tree crown (not radial distance (=age) from pith) and what if the mechanism is modulation of growth hormone, not just availability of photosynthate? If you had a physical model of tree growth where cambial meristem activity is determined both by photosynthates (affected by CO2 + T + H2O) AND hormones, you might well be able to generate wildly vigorous xylem growth in response to strip-bark wounding.
.
Sometimes physical models ARE better.

Re: bender (#53),
age of cambium = meaning that the older the cambium is when the ring is formed, the less active it is; so cambial age shifts over a radial chronology (but so does vertical distance to apical meristem at time t)

Re: Steve McIntyre (#55),
I will look. Xylogenesis will be my first search term. Wouldn’t be surprised if the tree physiologists have accomplished quite a bit in the 11 years since MBH98 was published.

They’ve even studied this effect in pine, of all species:
Iliev and Savidge. 1999. Proteolytic activity in relation to seasonal cambial growth and xylogenesis in Pinus banksiana. Phytochemistry 50: 953-960

Abstract. The secondary xylem, namely, the wood, in trees is induced and controlled by streams of inductive hormonal signals which shape wood quality and quantity. Auxin is the primary hormonal signal which controls wood formation; it is mainly produced in young leaves, moves downward through the cambium and induces the wood. Cytokinin, from root tips moves upward, increases the sensitivity of the cambium to the auxin signal and stimulates cambial cell divisions. Gibberellin promotes shoot elongation and induces long fibres and tracheids. Centrifugal movement of ethylene from differentiating-xylem cells outwards to the bark induces the radial vascular rays. In conifer trees, jasmonate-induced defence response, which is mediated by ethylene, induces the traumatic resin ducts. Large traumatic resin cavities that damage the wood in response to wounding and stress can be prevented by lowering the sensitivity to ethylene. Along the tree axis, gradients of decreasing auxin concentrations from leaves to roots induce gradients of cell width, wall composition and density in the wood. The juvenile wood in trees is induced by young leaves, while the adult wood can be produced further away from these leaves. Therefore, to hasten the transition from juvenile wood to the high quality adult wood at the base of the trunk, fast stem elongation of young trees should be promoted. This can be achieved by growing young trees in high densities and by minimizing competition with annuals and grass species. Likewise, as root tips provoke juvenile effects in the shoot, fast root growth should be endorsed to minimize their effects. Rapid stem elongation can also be achieved by manipulating endogenous hormonal concentrations in transgenic trees. Elevating endogenous bioactive gibberellin concentrations in transgenic trees promotes stem elongation, increases long fibre and tracheid production, and modifies lignin biosynthesis.

Auxin from above. (Cytokinin from roots is needed to bolster physical support because the trunk is so damn far from the source of auxin: the apex). So it’s not entirely cambial age that drives the narrowing of rings, it’s the increasing distance from the apex. And this is exactly the kind of physiological mechanism that can go berserk when you wound a tree. Age can’t go berserk. Cambial activity can.

Auxin from adult leaves moves into the phloem, where the IAA moves rapidly in a non-polar fashion, up and down through intact sieve tubes (Morris et al. 1973; Goldsmith et al. 1974). This non-polar IAA transport in the sieve tubes is not involved in cambium activity and therefore does not induce wood formation in intact trees.

Here in the US southeast, you probably couldn’t easily measure the stress induced ethylene (fruit growers use ethylene to induce ripening after harvesting early so that you get less spoilage) because the photolysis occurs too rapidly (Apr to Oct), but you sure can smell pinene and the ground level ozone🙂 and see the photochemical haze. I think that you could (at least for pines) measure the volatile terpenes released concomitantly as a result of the stress response. Hmm, need to do a google search re southern pine species such as the loblolly pine pyrolysis oil project I worked on in the late 1970s.

Abstract: When branches of white pine are stressed by tying into loops, marked increases in the ethylene content of the internal atmosphere are noted. When an ethylene-generating paste is applied to localized regions, growth in diameter is increased there. It is suggested that ethylene may serve as a natural stimulator of radial growth associated with physical stress such as results from wind action.

The fact that shoots from older trees showed no sign of reduced growth when compared with those from younger trees suggests that diminishment of annual shoot increment is not a definitive sign of aging in Great Basin bristlecone pine.

Age of a tree’s ring appears to be a proxy for hormone concentrations at the time the ring was formed. The proxy breaks down when, from physical disruption, the cambial activity goes wild in response to spikes in ethylene and/or methyl jasmonate.
.
So much for “CO2 fertilization effects”. What you need to control for is C2H4.

The great ages of some bristlecone pines seem to depend on their ability to adopt and maintain a special growth habit. Restriction of the cambium to one sector of the stem circumference, described by Schulman and Ferguson (1956) as “cambial dieback” and “cambial-edge retreat,” is invariably found in trees more than about 1,500 years old (Table 1). Radial growth along a narrow longitudinal strip ultimately produces a slab-like stem in which the annual increments do not form complete rings in transverse section, but are only arcuate segments. Because this growth geometry does not require an ever-increasing volume of annual wood production to maintain a constant increment thickness, as is the case with a symmetrical, fully bark-covered stem, the aging bristlecone pine can maintain a constant ratio of green to non-green tissue almost indefinitely (Wright and Mooney 1965).

In particular, the tendency for upslope cambial persistence where roots don’t die back:

Root exposure could also contribute to asymmetrical stem development. Because of the damming effect of the stem, the roots are uncovered most rapidly and deeply on the downslope side of a tree. The roots in this sector have often died (LaMarche 1964). This might lead to concentration of crown and stem growth on the opposite, or upslope, side of the tree.

I question the authority of Wright and Mooney (1965). While a slab of xylem might require less photosynthate than a cylinder of xylem, by the same token a slab of cambium would have higher hormone concentrations than a cylinder of cambium (from a given fixed source of IAA and CK from the apical & root meristems). Given the universal role of hormones in rejuvenation and senescence, my money is on hormones.
.
The cambium dies back during a drought and IAA spikes. Voilà: Graybill HS.

Conclusion: This wound response literature is so old that MBH should be absolutely ashamed what they did with Graybill’s data and what they refuse to do with Ababneh’s. Makes you wonder what’s REALLY going on in Yamal. Some kind of wounding in the 1950s?

Attainment of an age greater than about 1,500 years apparently depends on the adoption of a strip-growth habit, which permits the aging bristlecone pine to maintain a nearly constant ratio of green to non-green tissue. Slow growth rates, wind damage, and soil erosion may be conducive to cambial area reduction. Other features of old-age stands, such as the wide spacing of the trees, the compactness of their crowns, the sparsity of litter, and the low density of the accompanying ground-cover vegetation, would also provide a measure of protection from fire and from competition, permitting the older trees to survive.

Bender, this is fascinating! I have wondered why so many of today’s dendro experts seem blind to this. Glad to see the previous generation could see it.

Attainment of an age greater than about 1,500 years apparently depends on the adoption of a strip-growth habit, which permits the aging bristlecone pine to maintain a nearly constant ratio of green to non-green tissue.

Is this saying what it appears to say? In essence, the normal response of a tree to major bark-stripping events would be radically increased growth in the remaining green tissue, to recover a balance with non-green.

I wonder how long the Ethylene remains in a sample? Ethylene data from Almagre tree #31 would be quite interesting here. I think we also sampled both green and non-green tissue from at least one tree.

Modern dendros are more interdisciplinary than ever. They have to range from geography to physiology to biochemistry to statistics to climatology to ecology. I think it is a real challenge to cover all of those bases. There is not much time to read; so much to do, planets to save.

the normal response of a tree to major bark-stripping events would be radically increased growth in the remaining green tissue, to recover a balance with non-green

That’s how it reads to me. Like any tree that accelerates its radial growth to close a fire scar or a beetle puncture.

I wonder how long the Ethylene remains in a sample?

There might be some sort of lasting chemical signature. Not necessarily ethylene. I think this is a good question. Don’t know if anyone has looked.
.
Similarly, I don’t know for a fact if anyone has ever actually measured hormone concentrations up the cambial region of a pine stem. I presume so based on what is stated in the literature; but I haven’t seen a graphic yet. Probably not in bcp.

A further question along the path that bender is exploring: if old BCP adopt such a strip-growth pattern, wouldn’t that imply that some parts of the tree are not growing every year? Especially if, as it sounds, the area of growth can change over time, this would seem to be highly problematic: the tree’s age may in fact be under-estimated even as its value as a proxy is over-estimated.

Re: Soronel Haetir (#89),
Read the threads and read the older literature. Missing rings occur some small percentage of the time. But the problem is overcome by crossdating among many long samples. The ensemble (master chronology) will thus have no missing rings.
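The crossdating step described above can be sketched in a few lines (toy data and function names of my own invention; real dendro practice uses dedicated tools such as COFECHA, not this): slide the undated series along the dated master chronology and take the offset where the ring-width pattern correlates best.

```python
import statistics

def correlation(a, b):
    """Pearson correlation of two equal-length ring-width sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def best_lag(master, sample):
    """Offset into `master` where `sample`'s ring-width pattern fits best."""
    n = len(sample)
    scores = {lag: correlation(master[lag:lag + n], sample)
              for lag in range(len(master) - n + 1)}
    return max(scores, key=scores.get)

# Toy master chronology and an undated core whose pattern matches years 2-5.
master = [1.2, 0.8, 1.5, 0.6, 1.1, 0.9, 1.4, 0.7, 1.0, 1.3]
sample = [1.4, 0.5, 1.0, 0.8]
print(best_lag(master, sample))  # → 2
```

With many long, mutually crossdated cores, a sample with a locally missing ring shows up as a correlation that improves when the series is shifted by one year past that point, which is how the ensemble recovers the rings any single core lacks.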
.
The real problem here is the sharp uptick in Graybill’s chronology, which caused NAS to say that strip bark bcps should NEVER be used in climate reconstructions. Errors in dating and subtle dilations in ring width patterns are an academic non-issue. The issue for Kaufman et al (2009) is why Yamal #22 should qualify when NAS ruled that Graybill bcps do not. What evidence is there that Yamal is not goofy?

Footnote. I place the burden of proof on the proponent because if goofy growth has a universal mechanism – spiking hormone concentrations whenever vascular cambium is disturbed – then ALL samples may be subject to such demonic intrusions, and NO sample can be accepted uncritically. So: what happened with Yamal?

Briffa never published his Yamal (RCS) chronology in a stand-alone paper. It was inserted passim in Briffa (2000) and thereafter inhaled like crack by the reconstructionists.

Yamal as a site was published by Hantemirov and Shiyatov (Holocene 2002), who published a very different chronology than Briffa’s (one without a HS); their version is never used. Shiyatov is the Shiyatov (1995) who said that the MWP had the largest tree growth at Polar Urals – see the CA post on this paper. H and S report very substantial migrations of the Yamal treeline. As far as I can tell, Briffa ignores such migrations (like other dendros, who fail to record such relevant metadata).

Briffa refused to archive the Yamal measurement data for years. Last summer, Phil Trans B said that they would require Briffa to provide Yamal measurement data, but he asked for more time. I checked earlier this year and it still wasn’t online. I re-checked just now and part of it has materialized after 10 years http://www.cru.uea.ac.uk/cru/people/melvin/PhilTrans2008/ (not that anyone had the courtesy to notify me that the data was now online.)

Re: Steve McIntyre (#92),
I wonder why Briffa and Bradley don’t just work together publishing …[snip] Has there ever been a strictly B&B paper?

Steve: Jones is Briffa’s patron. Bradley and Jones worked together on the first CRU temperature history (the 1985 project for the DOE Oak Ridge nuclear lab.) Bradley and Jones, 1993 was the first multiproxy study (not MBH98, though its supposed “firstness” has been used as an excuse.)

Reading the largely illogical and uninformed comments about those seductive bark strips of bristlecone pine is kind of annoying, since I have published unrebutted mechanistic explanations of them since the 1980s, yet have not been able to penetrate any minds in the so-called “dendro” community (to a forester “dendro” has meant dendrology since before the first dendrochronologist was birthed). Most recently in my book “The Bristlecone Book – A Natural History of the World’s Oldest Trees” (Mountain Press, 2007). LaMarche, not a biologist, was clearly putting the cart before the horse in attributing old age to death of vital tissues. It’s the other way around, guys.

Thank you for your comment, Ron.
.
We are all open to being educated on this very interesting topic, so fire away! Any comments you have to add would be appreciated. You are also more than welcome to list publications of yours relevant to the issue. (If they’re all relevant, list them all!)

Thank you for commenting. We cited one of your articles in McIntyre and McKitrick 2005 (E&E) on the theory that, if bristlecone strip bark widths were supposed to be a magic thermometer, the authors should survey the relevant botanical literature – which they hadn’t done. Regards, Steve Mc

Indeed. I may be wrong, but much of the reality of biological organisms is presented as its inverse. Whenever I see a vast conclusion that seems to have been made from half-vast data, invert the relationship and the premises and see if the logic works.

We think of a tree as a unitary organism whose roots bring up water and distribute it to all parts of the tree. This is oversimplified. There are numerous examples known of water being transported only straight up into a spatially well defined trunk sector from which emanate several limbs. Think of this tree as being made up of several partial trees, each represented by a major lateral root, a trunk sector, and that sector’s limbs, with limited or no tangential diffusion of water into the neighboring sectors. Now kill a major root by exposing it through soil erosion to desiccation, rolling-rock impacts, cooking of its cambium through its thin bark, or rot subsequent to injury. No water flows up its sector, so the sector then dies. After a while the bark covering the dead cambium of that sector flakes off, exposing the wood surface. We now have a strip of bare wood with dead limbs coming out of it. Now do that again to another big root. That gives us two dead strips (cup half empty interpretation) or a living bark strip between them (cup half full interpretation). Since soil erosion takes time, we notice that “bark-stripping” (ugh!) is found on trees that have been around awhile (= old); and very often on downhill sides of trees. As long as there is an unkilled sector, and the rest of the tree is sufficiently rot-resistant to support it, the slowly dying tree will still stand.
This can be tested empirically using a shovel or saw. It requires no wisdom on the part of the tree, calculating how much foliage it can afford to make given its metabolic needs, and no special behavior of hormones.

It has been quite a few years since I studied biology, but I thought that it was commonly acknowledged that many of the world’s most successful living organisms were compartmentalized, segmented, or both.