The Gift That Keeps On Giving

As I mentioned before, Hans Erren’s digitization of the Ababneh Sheep Mountain data has prompted me to pick up the MBH files to re-examine the vexed matter of Mann’s CO2 “adjustment”. Each step is fraught with problems. I’m going to try to pull something together listing the steps in the adjustment and the issues involved – but it makes for a very long post. Today I’m going to post on one of the later steps in the process, which will also nicely illustrate a point that bender emphasized: that superimposing HS shapes involves only a couple of degrees of freedom. The superpositions may or may not be “right”, but they have negligible statistical significance – a point that has completely escaped Wahl and Ammann (for example).

Here’s Figure 1b from MBH99, together with its original caption. It purports to show the difference (grey) between a 75-year smooth of an average of Jacoby chronologies in northern North America and a 75-year smooth of the Mannian PC1 (rescaled somehow); I’ll revisit this calculation on another occasion. The dashed line purports to show the “secular trend” from the difference of the smooths; again, I’ll revisit this calculation on another occasion.

Here’s what I want you to pay attention to here: the graphic purports to show “relative variations in atmospheric CO2 …for comparison”. The y-axis is labeled only “Relative Amplitude” – an imprecise term that we see often in the Mannian canon. CO2 values are estimated in ppm – so this graph has required a re-scaling and re-fitting of CO2 measurements to fit onto the graphic. But what were the steps? A point that I’d like readers to bear in mind whenever they see one of these re-centering and re-scaling operations: re-centering and re-scaling involve the estimation of 2 coefficients. Univariate linear regression also involves the estimation of 2 coefficients, so the types of concerns and caveats involved in estimating linear regression coefficients carry forward into the estimation of rescaling and recentering values, even if the proponents don’t discuss them.
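To make the two-coefficient point concrete, here is a toy R sketch on synthetic data (nothing here is from the MBH series): re-centering and re-scaling a series, y = a + b·x, estimates exactly the same pair of coefficients as a univariate linear regression.

```r
# Toy illustration (synthetic data): the intercept of a univariate
# regression plays the role of the re-centering value, and the slope
# plays the role of the re-scaling value -- two fitted coefficients,
# i.e. two degrees of freedom, either way.
set.seed(1)
x <- seq(1700, 1980)
target <- 0.001 * (x - 1700) + rnorm(length(x), sd = 0.05)
fm <- lm(target ~ x)   # intercept = re-centering, slope = re-scaling
coef(fm)               # two estimated coefficients
```

So any caveat that applies to estimated regression coefficients (sampling error, sensitivity to the fitting period) applies equally to a chosen centering and scaling.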

From inspection of MBH98 data at Mann’s FTP site at the University of Virginia (no longer online, but which I have), I have digital versions of the series used in these plots. The CO2 “version” here starts with CO2 from 1610-1995 measured in ppm (I haven’t verified the provenance of this data yet). Mann sets a “reference” value of 278.5 ppm, which appears to be drawn from minimum values in his data in the 18th century. He then divides the observed values by the reference and takes the log. Thus x = log(CO2/278.5).
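As a one-line sketch of that transform (the CO2 numbers below are illustrative placeholders, not the actual MBH input):

```r
# Mann's stated transform: divide observed CO2 (ppm) by the 278.5 ppm
# "reference" value and take the natural log.
reference <- 278.5
co2 <- c(280, 290, 310, 340, 360)   # hypothetical ppm values for illustration
x <- log(co2 / reference)
```

Values near the 278.5 ppm reference map to roughly zero, which matters for the zero-point discussion below.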

If these values are plotted, one gets the following graphic.

Figure 1. Without Rescaling

For the above graphic, there was no effort to rescale the CO2 series to visually match the “residuals”. The most obvious way to re-scale the CO2 to the residual scale is to regress the “target” against the CO2 and then “predict” the target. I do this all the time in experiments and it’s an efficient method. In this case, I didn’t want to change the zero point and so I did the fit without an intercept, just to get a re-scaling coefficient. The code for this type of operation is as follows:

fm = lm(target ~ logco2 - 1, data = Z)
predict0 = predict(fm, newdata = Z)

These “predicted” values then rescale the log CO2 series to plot in a visually more convenient manner. Good practitioners will typically put a second scale on the other (right) vertical axis so people can tell what’s going on. In this case, a fit based on the period 1700-1980 (1700-1995 would be similar) yields the following graphic. This doesn’t look much like the Mannian graphic.
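A fuller sketch of the no-intercept rescaling, including the second right-hand scale, can be run on synthetic stand-ins for `target` and `logco2` (the data frame `Z` below is invented for illustration, not the actual MBH data):

```r
# Synthetic stand-ins: a "target" residual series and a log-CO2 series.
set.seed(1)
Z <- data.frame(year = 1700:1980)
Z$logco2 <- log((278.5 + 0.25 * pmax(Z$year - 1850, 0)) / 278.5)
Z$target <- 2 * Z$logco2 + rnorm(nrow(Z), sd = 0.01)

# Fit without an intercept so the zero point is preserved; the single
# coefficient is the re-scaling factor.
fm <- lm(target ~ logco2 - 1, data = Z)
predict0 <- predict(fm, newdata = Z)

# Plot the target with the rescaled CO2 overlaid, labelling the right
# axis back in ppm so readers can tell what's going on.
plot(Z$year, Z$target, type = "l", xlab = "Year", ylab = "Residual scale")
lines(Z$year, predict0, lty = 2)
ticks <- pretty(predict0)
axis(4, at = ticks, labels = round(278.5 * exp(ticks / coef(fm)[1])))
```

The `axis(4, ...)` call inverts the log-and-rescale transform so the right-hand tick labels read in ppm.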

Figure 2. Rescaling based on 1700-1980

The only way that you can re-scale the CO2 series to match the “residuals” as shown in MBH99 Figure 1b is to coerce the fit by limiting the fit to the period 1700-1900. So applying the same re-scaling method, I calculated a re-scaling factor by limiting the regression to the period before 1900 as follows:
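The code for that step isn’t shown above; a hedged reconstruction along the same lines (again using the invented stand-in data frame `Z` rather than the actual MBH series) would be:

```r
# Invented stand-in data, as before.
set.seed(1)
Z <- data.frame(year = 1700:1980)
Z$logco2 <- log((278.5 + 0.25 * pmax(Z$year - 1850, 0)) / 278.5)
Z$target <- 2 * Z$logco2 + rnorm(nrow(Z), sd = 0.01)

# Restrict the regression to the period before 1900, then apply the
# resulting coefficient to the whole record: the coerced re-scaling
# described in the text.
sel <- Z$year <= 1900
fm1900 <- lm(target ~ logco2 - 1, data = Z[sel, ])
predict1900 <- predict(fm1900, newdata = Z)
```

The coefficient is estimated on 1700-1900 only, but the rescaled series is drawn over the full 1700-1980 window.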

This yielded the following graphic, which is a very close match for MBH99 Figure 1b. (This is not to say that the match is “correct” or that the target “residuals” have any meaning; I’m just looking at this step.)

Figure 3. Re-scaling based on 1700-1900

Does this fit have any meaning or significance relative to the fit on 1700-1980? I don’t think so. From this exercise, Mann concluded the following in MBH99:

The residual is indeed coherent with rising atmospheric CO2 (Figure 1b), until it levels off in the 20th century, which we speculate may represent a saturation effect whereby a new limiting factor is established at high CO2 levels. For our purposes, however, it suffices that we consider the residual to be non-climatic in nature, and consider the ITRDB PC #1 series “corrected” by removing from it this residual, forcing it to align with the NT series at low frequencies throughout their mutual interval of overlap.

How many issues can one count? If one looks at the “residuals” (whatever they are), there is no leveling off in the 20th century. The “leveling off” occurs only when Mann does a second-stage smoothing. Is the “secular trend” in the residuals something that has any meaning? I’d be surprised. And is the residual “coherent” with rising CO2? If you coerce the fit so that the trends match, then it looks coherent. If you don’t coerce the fit, is it coherent? It doesn’t look that way to me. It looks like the residuals increase before the increase in CO2 if one doesn’t coerce the fit to the 19th century.

Does this sort of coerced fit have any statistical meaning? I can’t think of any.

This isn’t science; it’s tailoring. Pull and stretch the fabric until the suit fits the Mann. I cannot fathom how this torturing of the data would ever be allowed in even an undergraduate paper in the natural sciences.

“Faking” in respect to this is probably too strong, and motive doesn’t really matter. And there’s little point even forming an opinion on whether it is or isn’t, since you won’t find out. All that’s relevant to us is what was done and whether it has any meaning.

I don’t think that was the intention. The way I read this graph is that he expanded (rescaled) the log CO2 and provided this graph so he could talk about forcing it to align and justify removing something. He speaks of the “overlap”. It appears all he had to do was keep the zero as zero and expand the scale without labelling it. It looks more sloppy than anything else. I am not commenting on his forcing it to align and removing it.

Obfuscation and hand waving so once again nobody knows what’s really going on and nothing can be replicated. (Much like leading people on a wild goose chase looking at data that’s not where they said it’s from…)

Somebody please correct me if this isn’t the intent, but isn’t the idea supposed to be that you adjust for CO2 fertilization? If so is there any plant physiological reason to do this with a log?

Once that’s done (assuming that it was done according to a reasonable procedure), then you have something that has to be calibrated with the surface measurements, no? And then having done that, then you get the hockey stick? Right?

Or am I completely out in left field, and he’s trying to use log(CO2) as a proxy for temperature, to make his point that there’s a hockey stick fit in unphysical temperature proxy?

Maybe this isn’t the right place to ask. Glad to be told where the right place is.
Is there a reconstruction of temperatures that does not have all these problems? That is, has anybody done it correctly?

Have I got this right? Mann is saying that the rise in CO2 between 1800 and 1900 (280 to ~300 ppm) had a strong and dominating effect on plant growth, but that effect ceased to apply after 1900, even though CO2 has since risen to 380 ppm. Is there any scientific basis for his claimed “saturation” effect, and why does it kick in at 300ppm?

“The residual is indeed coherent with rising atmospheric CO2 (Figure 1b), until it levels off in the 20th century, which we speculate may represent a saturation effect whereby a new limiting factor is established at high CO2 levels. For our purposes, however, it suffices that we consider the residual to be non-climatic in nature, and consider the ITRDB PC #1 series corrected by removing from it this residual, forcing it to align with the NT series at low frequencies throughout their mutual interval of overlap.”

Sounds to me like he’s comparing/correlating the temperature (tree-ring) residual to the log(CO2)…isn’t that how temperature is supposed to vary with CO2? Then is making the adjustment by removing the residual since it is now considered non-climatic.

If I’m reading that right, then he removed 0 to 0.3 degrees from the temperature anomaly from about 1750 to 1900. Wouldn’t this effectively improve the hockey stick’s performance in showing warming starting with industrialization and not before?

After looking back at http://www.climateaudit.org/?p=2335 it becomes obvious that the reconstruction would not have matched the instrumental record without the adjustment. Why was “smoothed NT [Jacoby]” selected?

Rewritten with just as much accuracy:
“The residual is indeed coherent with the cumulative rainfall from 1700 in the plains of Spain (Figure 1b), until it levels off in the 20th century, which we speculate may represent a saturation effect whereby a new limiting factor is established at high cumulative Spanish rainfall levels. For our purposes, however, it suffices that we consider the residual to be non-climatic in nature, and consider the ITRDB PC #1 series corrected by removing from it this residual, forcing it to align with the NT series at low frequencies throughout their mutual interval of overlap.”… gives saturation a new meaning doesn’t it.

#1, et al: The temperature rise due to increases in CO2 (without feedback) is expected to have a response that is close to a log function. Also, as Hans Erren (#17) showed, chemical reactions may follow a log function depending on the nature of the reaction.

All they have shown is that two series trend upwards for some time and then one of them doesn’t. Instead of taking the position that the lack of correlation over the past century means anything they wave their arms and ignore it. You could do the same with practically any series (the rain in Spain, for example). Indeed, normally the statistical meaning taken from this would be that the lack of correlation over the 20th century means that there is no relationship here. Instead they appeal to a deus ex nihilo to say “…we speculate…”.

So, yes, this does have statistical meaning. In the language of “withholding data” and “validation periods” they have demonstrated that their hypothesised link between CO2 and the residuals is not robust. What do you think the RE/CE/R2 statistic would be for these series?
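For what it’s worth, those verification statistics can be sketched on invented series: calibrate pre-1900, verify post-1900, and compute RE and r² using their conventional definitions from the reconstruction literature (the data below are synthetic, not the MBH residuals).

```r
set.seed(2)
year <- 1700:1980
co2like   <- pmax(year - 1850, 0) / 100                  # rises only after ~1850
residlike <- 0.5 * pmin(year - 1700, 200) / 200 +        # levels off after ~1900
             rnorm(length(year), sd = 0.05)

cal <- year <= 1900                 # "calibration" period
ver <- !cal                         # "verification" period
b <- coef(lm(residlike[cal] ~ co2like[cal] - 1))
pred <- as.numeric(b) * co2like

# RE: 1 - SSE over verification / SS about the calibration mean.
RE <- 1 - sum((residlike[ver] - pred[ver])^2) /
          sum((residlike[ver] - mean(residlike[cal]))^2)
r2 <- cor(residlike[ver], pred[ver])^2
```

A series that levels off while its supposed driver keeps rising will generally fail this kind of verification test, which is the commenter’s point.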

(These sorts of explanations remind me of the sound-bites you get from forex traders on the evening news. Every day they have a different reason for why the exchange rate went up or down as it did on that day. The reasons are not consistent or logical, but they do sound “plausible”.)

In a minute there is time
For decisions and revisions which a minute will reverse.

Hans, re #17: what the graphs show is 3 things:
1) The response of photosynthesis is not logarithmic- so why did Mann use a log transformation?
2) No evidence of CO2 saturation even at 2000 ppm- so why did he suggest this?
3) That increased CO2 increases both the rate of photosynthesis and its temperature optimum. This is of course another “dirty secret” that the High priests of AGW would like to keep under wraps- plants love high CO2.

1) Coerce the CO2 measurements to match the period before 1900 with essentially an arbitrary rescaling.
2) Conclude that the CO2 measurements match the residuals before 1900 but not after 1900.
3) Make conclusions based on the divergence, ignoring the fact that other arbitrary (or less arbitrary) scalings would not support that conclusion.

Probably get snip snip, but here goes. I am a retired Army MSG specializing in intelligence (a conflict in terms). We were not allowed to coerce any data from any individual. Therefore, with the majority of the AGW crowd being on the liberal side, it follows that if you must coerce the data to make it fit, then it must not be allowed. Dang, I did like the old army way: don’t strain, get a bigger hammer.

#17, Hans: Not being a reader of Dutch, but of German, what I make of this graph from the referenced webpage is that the 3rd graphic, the one you posted here, shows the growth (rate of photosynthesis) of a paprika plant in response to various CO2 levels as the temperature rises. I find it interesting that not until you get higher than 2000 µmol m⁻² s⁻¹ does an increase occur as temperature rises from 15 to 30 C. It looks from the graphic as if, were one to interpolate in reverse, lower CO2 levels favor cooler temperatures for higher photosynthesis rates. Wouldn’t it also be reasonable to infer that high CO2 levels allow for higher rates of photosynthesis at higher temperatures for this specific plant? Given that water is integral to plant growth, wouldn’t this support the finding that higher CO2 levels cause plants to use less water, i.e. make them more drought resistant?

The 2nd graphic shows the effect of increasing light intensity with various levels of CO2.

Maybe I’m ignorant, but since when do logarithms bend down like that graph?

The way I read the graph: on the y-axis is the yield; the coloured lines are ambient CO2 isolines.
The yield peaks for higher ambient CO2 at a higher optimum temperature.
The optimum yield is logarithmic in ambient CO2 concentration.

I gotta be missing something here, anyone please jump in where I’m wrong.

We take a bunch of Jacoby tree chronologies, and we average them (bad start, but oh well).

We take a bunch of Mannian PC chronologies, and take their PC1. One hopes correctly.

We subject them both to 75 year smoothing.

We subtract one from the other to get the residual.

We smooth the residual with a 50 year filter.

We filter out everything with a timescale less than 150 years to get the “secular trend”.
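The chain of smooths just listed can be sketched with simple centered moving averages on random-walk stand-ins for the two series. This is only a sketch under stated assumptions: the moving-average filter here is a stand-in and makes no claim to reproduce whatever low-pass filter Mann actually used.

```r
# Centered moving-average smooth of width w (odd w keeps it centered).
smooth_ma <- function(x, w) as.numeric(stats::filter(x, rep(1 / w, w), sides = 2))

set.seed(3)
n <- 600
jacoby_avg <- cumsum(rnorm(n))        # stand-in for the Jacoby average
pc1        <- cumsum(rnorm(n))        # stand-in for the Mannian PC1

s1 <- smooth_ma(jacoby_avg, 75)       # 75-year smooth
s2 <- smooth_ma(pc1, 75)              # 75-year smooth
residual <- smooth_ma(s1 - s2, 51)    # difference, smoothed again (~50 years)
secular  <- smooth_ma(residual, 151)  # keep only the longest timescales (~150 years)
```

Each pass eats away at the ends of the record (the edge values become NA) and removes more and more of the variance, which is why so little of the original information survives to the “secular trend”.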

Now if the null hypothesis were that Jacoby and the PC1 were both accurately measuring the same thing, I’d say we could confidently reject the null hypothesis … but beyond that, what does this residual mean? My only conclusion from this is that one of these is true:

a) Jacoby and Mann PC1 are accurately measuring different things.

b) Jacoby and Mann are inaccurately measuring the same thing.

c) Jacoby and Mann are inaccurately measuring different things.

My money’s on “c”, but your mileage may vary.

But the real question … what on earth does the difference between Jacoby and PC1 have to do with CO2? Is the claim that one of these is affected by rising CO2 and the other is not?

Willis #31 —
See my comment #63 on the next thread (#2355). As a CO2 adjustment, the MBH99 procedure is entirely bogus. All they did was hand-craft the secular trend by replacing that of PC1 with that of the Northern Treeline series, while preserving the desired higher frequency HS blade at the end of PC1. I think the purpose was to generate a modicum of LIA-like action.

Hu, thank you for a most interesting post. That’s the most convoluted piece of logic (Mann’s, not yours) that I’ve seen in a while. It seems like the long way around to prove anything about anything, using triple-smoothed residuals …

UC, if I understand you, that is quite astounding. Mann takes the 150 year smooth of the 75 year smooth of two 50 year smooths, one of which is the average and the other the PC1 of two different hand-selected groups of trees. He asserts (without confirmation) that this triply smoothed curve represents a pulse of growth, attributable to CO2, which starts around 1700 and ends around 1900. He then uses that hypothesized growth pulse to cool his reconstructed temperature … is that a correct summation of the plot to this mountebank’s novel?

This PC1 adjustment warms the AD1000 reconstruction, but it doesn’t affect the 1400-1980 reconstruction (stepwise reconstruction, you know; new proxies enter the game at 1400). The total effect is this long-term cooling trend.

Splitting the linear trend fits to two parts (1000-1399 and 1400-1850) helps to visualize what happens: