Detrended in Amherst

The VS04 results have been interpreted to cast serious doubt on the MBH reconstruction. … However, these results are in large part dependent on a detrending step not used by MBH, which is physically inappropriate and statistically not required. The take-away message for the climate community should be strong encouragement for more vigorous cross-comparisons of the various reconstruction implementations, based on real-world proxy series, model emulations, and simulated modifications to real-world data. Such a step would help eliminate unnecessary confusion that can distract from the crucial contributions of climate change research to important scientific and policy questions.

Quite aside from the issue of whether trending or detrending makes any difference to the VZGT results (I’m convinced that the issue is immaterial to their principal point), it grates on me no end that Mannians can seemingly suggest to others, with a straight face, that cross-comparisons are a good idea as a means of avoiding confusion, after it has required years of quasi-litigation to gradually unpeel details of Mannian methods. I want to go back over some of our correspondence with Nature. I started doing this because I recalled some curious issues of trended-vs-detrended arising in our Materials Complaint. If the implementation of trending-detrending matters to anything, then Nature should step up and take its share of the blame for failing to respond to very specific requests for methodological clarification. There is also a rich irony here, because Mann’s justification for not providing proper methods was that Zorita et al 2003 had managed to replicate his results – a claim still extant in the Corrigendum SI.

Von Storch et al. claimed to have tested the climate reconstruction method of Mann et al. (1998) in model simulations, and found it performed very poorly. Now, Eugene Wahl, David Ritson and Caspar Ammann show that the main reason for the alleged poor performance is that Von Storch et al. implemented the method incorrectly. What Von Storch et al. did, without mentioning it in their paper, was to remove the trend before calibrating the method against observational data – a step that severely degrades the performance of Climate Field Reconstruction (CFR) methods such as the Mann et al. method (unfortunately this erroneous procedure has already been propagated in a paper by Bürger and Cubasch (GRL, 2005) where the authors refer to a personal communication with Von Storch to justify the use of the procedure). Another more recent analysis has shown that CFR methods perform well when used correctly. (See our addendum for a less technical description of what this is all about). How big a difference does this all make? The calibration error in the temperature minimum around 1820, where one of the largest errors occurs, is shown as 0.6°C in the standard case of 75% variance in the Von Storch et al analysis. This error reduces to 0.3°C even in the seriously drift-affected ECHO-G run when the erroneous detrending step is left out.

I’m more inclined to think that detrending is an option and that one would want results to be robust to such methodological decisions. But let’s look at some of the history, just for amusement. Prior to the publication of MM03, I politely asked Mann if there was a more detailed description of methodology – correspondence here. (I also asked him to confirm that the data was the data actually used in MBH98.) Mann cited Zorita et al 2003 as a reason why he didn’t feel obliged to describe his methods in more detail:

Owing to numerous demands on my time, I will not be able to respond to further inquiries. … Other researchers have successfully implemented our methodology based on the information provided in our articles [see e.g. Zorita, E., F. Gonzalez-Rouco, and S. Legutke, Testing the Mann et al. (1998) approach to paleoclimate reconstructions in the context of a 1000-yr control simulation with the ECHO-G Coupled Climate Model, J. Climate, 16, 1378-1390, 2003.]. I trust, therefore, that you will find (as in this case) that all necessary details are provided in the papers we have published or the supplementary information links provided by those papers.

MM also appear to inconsistently use standard deviations of un-detrended data, while MBH98 had normalized their EOFs by detrended gridpoint standard deviations.

Yes, you read that correctly. Here Mann is accusing us of using non-detrended data, whereas the "correct" Mannian approach was to use detrended data. The procedure at issue is different, but one can see why anyone encountering the materials of the Mann controversy might conclude that detrending was part of Mannian methodology. Shortly afterwards, Mann et al submitted an article to Climatic Change (rejected by Stephen Schneider, if you can imagine) in which they described "Important Technical Errors" alleged to have been committed in MM03, including the following:

MM03 also appear to have estimated gridpoint standard deviations from the un-detrended surface temperature data, while MBH98 had normalized their EOFs by detrended gridpoint standard deviations.

Again, you read that right. After publication of MM03, we tried again, quite politely, to obtain source code so that avoidable methodological questions could be eliminated, and were once again rebuffed – correspondence here.

We then filed a complaint with Nature – anyone remember Mann’s 159 series? We asked for results of the individual steps and to this day there is no information on the results of Mann’s individual steps in calibration-verification. Here is our complaint to Nature, in which we said:

Prior to the publication of our article, we requested other particulars on the computational methodology from Professor Mann and were refused. Accordingly, we attempted to assess the impact of the data problems by following the methodology publicly disclosed in MBH98. Professor Mann then criticized us for failing to replicate previously undisclosed details of his methodology. We once again requested particulars on his methodology, including copies of the computer programs used to read in the proxy and temperature series and to produce the Northern Hemisphere temperature index – but we have been categorically refused.

The policies of Nature rightly place a burden on authors to disclose data and methods to any interested readers. We have been systematically and deliberately stymied by Professor Mann on the most elementary requests: a proper listing of his data series and the exact computational procedures used. In the process of trying to obtain this information we have concluded that the disclosure at the Nature SI site is not merely inadequate, but in some cases it contradicts what is now revealed at the University of Virginia FTP site.

Among the listed items in our Materials Complaint to Nature was the following:

10. The disclosure of methodology for calculating tree ring principal components is inaccurate. Again MBH98 methodology is not "conventional". In this case, the FTP site contains computer programs which show that the data was transformed in ways not disclosed in MBH98. These undisclosed transformations have a material impact on the final results.

Nature promised to have our Materials Complaint independently reviewed, but failed to comply with that promise and dealt with it internally. In February, Mann replied to the Materials Complaint including the following response to # 10 above (obviously I disagree with the answer, but I’m trying to move along):

Each of these statements is incorrect. A conventional PCA was indeed used. The authors apparently failed to take note of the stepwise procedure used by us, and described in our paper. This procedure allows PC series to be calculated independently for each sub-interval (e.g. 1820-1980, then 1780-1980, …, 1400-1980) to allow for the use of an increasing number of data in the different sub-networks increasingly later in time. The misunderstanding of this procedure led to them eliminating roughly 80% of the proxy indicators used by us prior to AD 1600, the primary reason for the spurious result that they have reported. Precise details regarding how the data were standardized are provided in the revised supplementary information. We have shown elsewhere that the MBH98 reconstruction is in fact entirely robust with respect to whether or not the proxy series were standardized by the detrended or raw calibration period variance.

Once again, all of the original proxy data used, and all of the PC series used, were available on the public ftp site from July 2002, though the complainants did not download and use the correct data. The new, revised ftp site provides the data and listings of data in a thoroughly documented manner such that similar mistakes should not be possible in the future.

Here Mann confounds the de-centering issue with the unusual stepwise PC calculations, which were being disentangled at the same time. Anyway, shortly after receiving Mann’s response, Nature advised us that they would require a Corrigendum and in March sent us a copy of the Corrigendum (which differed in one important detail from the one finally published in July 2004). We provided detailed comments on the proposed Corrigendum and even discussed detrended-nondetrended as follows:

2. MBH98 stated that for the temperature data “the mean was removed, and the series was normalized by its standard deviation”. Recently, Mann et al. stated that they used “de-trended gridpoint standard deviations” to normalize temperature data. Again, in view of the inaccurate prior description, a line item in the Corrigendum would appear to be warranted.

The issue also came up in Mann’s Reply to our submission to Nature, which would have been read by the referees, as follows:

We also show that our re-standardization of all indicators in the MBH98 network by their detrended standard deviation during the calibration period, prior to calibration and reconstruction did not significantly influence the MBH98 reconstruction (line 3, Figure 1c). This latter step was motivated by the fact that 20th century trends in instrumental and proxy data typically far exceed the expectations for a ‘red noise’ null hypothesis (6). Normalizing by the detrended standard deviation therefore more properly weights the data series with respect to their estimated noise variance….

MM04 criticize the PC representation of the North American ITRDB data which was based, for the period AD 1400-1450, on an EOF analysis of the 70 constituent series which were standardized, as discussed earlier, by their detrended calibration period standard deviation.

Caption to Figure 1: Alternative versions of MBH98 reconstruction (shown for AD 1400-1500 period) in which (3) indicators have not been restandardized by detrended calibration period variance and (4) time series of the reconstructed instrumental eigenvectors have not been standardized to have same variance as the corresponding instrumental eigenvectors during the calibration period.

Anyone reading this might conclude that detrending was part of Mannian methods. In the Corrigendum SI, detrending was again mentioned as follows:

All predictors (proxy and long instrumental and historical/instrumental records) and predictand (20th century instrumental record) were standardized, prior to the analysis, through removal of the calibration period (1902-1980) mean and normalization by the calibration period standard deviation. Standard deviations were calculated from the linearly detrended gridpoint series, to avoid leverage by non-stationary 20th century trends. The results are not sensitive to this step (Mann et al, in review)
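The standardization described here is mechanical enough to sketch. The following is my reading of the quoted text, with illustrative variable names and a hypothetical 79-point calibration slice standing in for 1902-1980; it is not MBH’s actual code:

```python
import numpy as np

def standardize(series, cal=slice(-79, None)):
    """Standardize per the Corrigendum SI description: remove the
    calibration-period mean, then divide by the calibration-period
    standard deviation computed from the *linearly detrended*
    calibration segment ("to avoid leverage by non-stationary
    20th century trends")."""
    x = np.asarray(series, dtype=float)
    c = x[cal]
    t = np.arange(len(c))
    slope, intercept = np.polyfit(t, c, 1)   # fit linear trend to calibration segment
    resid = c - (slope * t + intercept)      # detrended calibration segment
    sd = resid.std(ddof=1)                   # detrended standard deviation
    return (x - c.mean()) / sd
```

Note that for a strongly trending series the detrended standard deviation is smaller than the raw one, so the standardized series comes out larger; whether that weighting is appropriate is exactly what was in dispute.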

Here we have another example of Hockey Team cheque-kiting. The citation "Mann, in review" presumably was to the MBH submission to Climatic Change, which was probably rejected by the time that Corrigendum SI appeared (and this has not been changed in the Corrigendum.)

We’ve described the schmozzle with Nature in which they cut and cut, eventually (without notice) reducing the word limit to 500 words, and said that our article was too "technical". Referee #2 of the second review here said:

I would encourage them to pursue their testing of MBH98, and by the way of other reconstructions. As I wrote in my first evaluation, this should be a normal and sound scientific process that should not be hampered. For instance, questions that seem to be quite critical, such as the sensitivity of the MBH98 reconstructions in more remote periods to changes or omissions in the proxy network or the dependency of the final results on the rescaling of the reconstructed PCs, have become clearer to me now. From the reply in MBH04 I am now afraid that they were not sufficiently described in the original MBH98 work. In particular, the PCs renormalization could have been included as clarification in the recent Corrigendum in Nature by MBH.

I found the last comment intriguing, as it indicated to me that the Corrigendum had not been peer reviewed by our referees – and, if not by them, then it was NOT independently peer reviewed. I tried to get Nature to answer this; they evaded the question, but Marcel Crok eventually got them to admit that the MBH Corrigendum was not peer reviewed. BTW, Referee #3 is someone familiar. I also know who Referee #2 is. I’m guessing that Referee #1 (who was added only after referees #2 and #3 recommended our first submission) is Osborn or Briffa or someone in that crowd.

After seeing the inadequate information in the Corrigendum SI, we tried once again with Nature. This time, buttressed by the referees’ seeming support, we re-iterated our request to Nature for Mannian details, still unavailable in the Corrigendum SI:

the referees expressly encouraged us to continue our analysis of MBH98 and of multiproxy calculations generally and one of them expressly stated that our efforts should not be “hampered”.

Under the circumstances, we believe that the full data set and accompanying programs for MBH98 should now be included in the Nature Supplementary Information, along with an accounting of any discrepancies between what has been listed at Nature.com to date and what was actually used in MBH98…. [including] "the results of the 11 “experiments” referred to in MBH98" ….

Nature refused even to require Mann to disclose the results of the individual steps in the calibration and verification periods. This remains shocking to me.

And with regard to the additional experimental results that you request, our view is that this too goes beyond an obligation on the part of the authors, given that the full listing of the source data and documentation of the procedures used to generate the final findings are provided in the corrected Supplementary Information. (This is the most that we would normally require of any author.)

All series were linearly detrended prior to analysis, and spectra computed using a standard Tukey window with the window width (maximum lag used in the estimate) set to one-fifth of the series length,
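On my reading, the quoted description is a Blackman-Tukey spectral estimate: linearly detrend the series, compute autocovariances out to a maximum lag of one-fifth the series length, taper them with a Tukey (Hanning) lag window, and Fourier-transform. A sketch of that reading (not the authors’ code):

```python
import numpy as np

def blackman_tukey_spectrum(x):
    """Blackman-Tukey spectral estimate with a Tukey (Hanning) lag
    window and maximum lag set to one-fifth of the series length,
    per the quoted description."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, x, 1)
    x = x - (slope * t + intercept)          # linear detrend
    m = n // 5                               # maximum lag: one-fifth of series length
    # biased autocovariance estimates for lags 0..m
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(m + 1)])
    acov *= 0.5 * (1 + np.cos(np.pi * np.arange(m + 1) / m))  # Tukey lag window
    # spectrum at frequencies j/(2m), j = 0..m, in cycles per time step
    freqs = np.arange(m + 1) / (2.0 * m)
    k = np.arange(1, m + 1)
    spec = np.array([acov[0] + 2.0 * np.sum(acov[1:] * np.cos(2 * np.pi * f * k))
                     for f in freqs])
    return freqs, spec
```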

The issue of whether to choose detrended or non-detrended correlations can be traced back nearly a century in economic literature. To the extent that VZGT were guessing at Mannian procedure, it was not an unreasonable guess, especially based on then-contemporary excoriations by Mann against us for using non-detrended standard deviations in an emulation, excoriations which would (ahem) have been familiar to a Nature referee, who might have been misled by them if he went on to consider such matters.

But no one should be required to guess. I don’t think that the Wahl et al point matters a damn. Even if there is an error in methodology and even if it mattered to the conclusion, Mann has had lots of opportunities to provide accurate information on his methodology and failed to do so. To this day, no one even knows the results of the individual steps. But the most preposterous aspect of this dispute is the realclimate suggestion that Mann is owed an apology.

The people who should be apologizing right now are Nature – they had an opportunity and a justification for requiring Mann to archive both source code and intermediate results necessary to verify results and they failed to do so.

26 Comments

“This highway leads to the shadowy tip of reality: you’re on a through route to the land of the different, the bizarre, the unexplainable… Go as far as you like on this road. Its limits are only those of mind itself. Ladies and Gentlemen, you’re entering the wondrous dimension of imagination. Next stop… The Twilight Zone.”

It is a common view that the term "AlGorithm" derives from Arabic, but I presume that the term is actually derived from the name of the erstwhile inventor of the Internet (also developer of the computer language AlGol). So when Mann said that he would not be intimidated into revealing his AlGorithm, think Austin Powers and his MoJo.

#5. That’s the submission to Climatic Change by MBH in 2004. I was asked to review it. I’m sure that I’ve reported on all this. In my capacity as a reviewer, I asked for the supporting data and code. Schneider said that no one had ever asked for such things in 28 years of editing and that it would require a change in editorial policy. I asked that the policy be re-considered. They agreed to establish a data policy requiring authors to provide supporting data, but did not agree to code and would not even ask for code. So I asked for supporting data. Mann refused once again; so I pointed out to Schneider that Mann had failed to comply with their newly minted data policy, and that was the end of the article. By this time, Mann had kited a check on the submission (and, it seems, another check). Jones and Mann 2004 cited the rejected submission as supposedly repudiating MM03; later papers could then cite Jones and Mann 2004. What do the facts matter when you can cite something?

If Mann were running a group of banks in the US and pulled these sorts of games, the Federal banking examiners would shut him down, seize his assets and throw him into a Federal prison. Jake Butcher used to shuffle money from bank to bank just before the Federal bank examiners would arrive at one of his banks. From the perspective of that one bank, the books would balance. One day they hit all his banks simultaneously. I believe that he is still in prison.

If Mann is so convinced that he is correct, then it would be a reasonable expectation that he be held to the same standards of proof as Jake Butcher. Jacob Franklin Butcher

Wahl seems so forthright about not “de-trending” data. As your co-author pointed out in another thread, it doesn’t really matter whether you “de-trend” or not. You just need to be able to satisfactorily account for the variance in the data over time.

re #13: In the calibration period, MBH98 used a highly reliable test to confirm that residuals are indeed i.i.d. (Gaussian and uncorrelated). The test used is a standard Mannian procedure known to the rest of the world as “inspection”😉 This is described in MBH98 as follows:

The spectra of the calibration residuals for these quantities were, furthermore, found to be approximately “white”, showing little evidence for preferred or deficiently resolved timescales in the calibration process.

#21. But let’s suppose that someone wanted to check the "inspection" through an actual statistic. We requested these very residuals in December 2003 or alternatively the results of the individual steps. Mann refused, the NSF refused, Nature refused.
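For illustration, one formal alternative to "inspection" is a portmanteau statistic such as Ljung-Box, which rejects whiteness when the low-lag autocorrelations are jointly too large. A minimal sketch on simulated data (the actual MBH residuals remain unavailable, so nothing here bears on them directly):

```python
import numpy as np

def ljung_box_q(resid, m=10):
    """Ljung-Box portmanteau statistic:
    Q = n(n+2) * sum_{k=1..m} r_k^2 / (n-k),
    compared against a chi-square(m) critical value
    (18.31 for m=10 at the 95% level)."""
    x = np.asarray(resid, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    q = 0.0
    for k in range(1, m + 1):
        r_k = np.dot(x[:n - k], x[k:]) / denom   # lag-k sample autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(1)
white = rng.standard_normal(79)            # 79 "years", like 1902-1980
walk = np.cumsum(rng.standard_normal(79))  # strongly autocorrelated series
print(ljung_box_q(white), ljung_box_q(walk))  # compare each against 18.31
```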

The properties of the 1820 step differ from the properties of the 1400 step. Mann presumably noticed this in 1999 – hence the different confidence intervals in MBH99.

Did he even carry out a Mannian "inspection" on the residuals from steps other than AD1820? Or is this one more case, like the verification r2, where the results for the 1820 step were reported but not the results for the problematic early step?

I wish that others would write to Nature and ask for the results of the individual steps so that these "inspections" can be checked.

A couple of days ago, realclimate was huffing and puffing about how an amendment published by VZGT in an "obscure" journal was an insufficient correction – leaving aside the issue of whether detrended-nondetrended is germane to the point. GRL is not "obscure", but neither is it Nature. If Mann found that the confidence intervals published in Nature were wrong, isn’t that the place that he should have pointed this out? And even if he had failed to previously do so, how could he justify not doing so in the 2004 Corrigendum?

Now the reduction of error (RE), usually called the coefficient of determination and called “conventional ‘resolved variance'” in MBH98, is usually denoted with the capital letter, i.e., R^2. So RE=R^2🙂 Also, even if you take R^2 to be the squared sample correlation coefficient, the result holds if “calculated correctly”! This is because in the simple linear regression (one predictor) R^2=r^2.
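The in-sample part of this claim is easy to check numerically: for ordinary least squares with a single predictor, the "resolved variance" over the calibration period equals the squared correlation coefficient. A toy check on synthetic data (illustrative only; the disputed verification-period statistics are a separate matter):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(79)                # 79-point "calibration period"
y = 0.5 * x + 0.3 * rng.standard_normal(79)

slope, intercept = np.polyfit(x, y, 1)     # OLS calibration fit
yhat = slope * x + intercept

ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)

RE = 1.0 - ss_res / ss_tot                 # resolved variance vs. calibration mean
r2 = np.corrcoef(x, y)[0, 1] ** 2          # squared correlation coefficient

assert abs(RE - r2) < 1e-9                 # identical in-sample for simple OLS
```

Out of sample the two statistics diverge, which is why the choice of verification statistic mattered so much in this dispute.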

Mann to NAS Panel Mar 2006: I did not calculate the verification r2 statistic. That would be a foolish and incorrect thing to do.

Again correct. His computer may or may not have calculated those; very likely he did not do it himself🙂 It would be foolish since his computer can do it much better, and incorrect because it just might show that there is no statistical skill in his reconstructions.