What Nick Stokes Wouldn’t Show You

In MM05, we quantified the hockeystick-ness of simulated PC1s as the difference between the 1902-1980 mean (the “short centering” period of Mannian principal components) and the overall mean (1400-1980), divided by the standard deviation – a measure that we termed its “Hockey Stick Index (HSI)”. In MM05 Figure 2, we showed histograms of the HSI distributions of Mannian and centered PC1s from 10,000 simulated networks.
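
For concreteness, the HSI described above can be sketched in a few lines. This is an illustration only, not the MM05 script itself; in particular, whether the denominator is the standard deviation of the full 1400-1980 series should be checked against the archived code.

```python
import numpy as np

def hockey_stick_index(series, years):
    # HSI as described in the text: (1902-1980 mean minus 1400-1980 mean),
    # divided by a standard deviation (here assumed to be that of the full
    # 1400-1980 series; check the archived MM05 script for the exact
    # normalization).
    x = np.asarray(series, dtype=float)
    overall = (years >= 1400) & (years <= 1980)
    blade = (years >= 1902) & (years <= 1980)
    return (x[blade].mean() - x[overall].mean()) / x[overall].std(ddof=1)

# toy example: a flat shaft with a level 20th-century blade
years = np.arange(1400, 1981)
toy = np.where(years >= 1902, 1.0, 0.0)
hsi = hockey_stick_index(toy, years)
print(round(hsi, 2))
```

A series with no blade at all has an HSI near zero; the MM05 Figure 2 histograms compare the distribution of this statistic for centered versus short-centered PC1s.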

Nick Stokes contested this measurement as merely an “M&M creation”. While we would be more than happy to be credited for the concept of dividing a difference of means by a standard deviation, such techniques have been used in statistics since the earliest days, as, for example, in the calculation of the t-statistic for the difference in means between the blade (1902-1980) and the shaft (1400-1901), which has a similar formal structure but calculates the standard error in the denominator as a weighted average of the standard deviations in the blade and shaft. In a follow-up post, I’ll re-state the results of MM05 Figure 2 in terms of t-statistics: the results are interesting.
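
The t-statistic analogue mentioned above can be sketched as follows. The pooled-variance weighting is the textbook two-sample form; the follow-up post’s exact calculation may differ in detail.

```python
import numpy as np

def blade_shaft_t(series, years):
    # Two-sample t-statistic between blade (1902-1980) and shaft (1400-1901),
    # with a pooled standard error as described in the text. A sketch only.
    x = np.asarray(series, dtype=float)
    blade = x[(years >= 1902) & (years <= 1980)]
    shaft = x[(years >= 1400) & (years <= 1901)]
    n1, n2 = blade.size, shaft.size
    sp2 = ((n1 - 1) * blade.var(ddof=1) + (n2 - 1) * shaft.var(ddof=1)) / (n1 + n2 - 2)
    return (blade.mean() - shaft.mean()) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

years = np.arange(1400, 1981)
rng = np.random.default_rng(0)
white = rng.normal(size=years.size)              # no blade: t should be modest
hs = white + np.where(years >= 1902, 3.0, 0.0)   # add a blade: t becomes large
print(blade_shaft_t(white, years), blade_shaft_t(hs, years))
```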

Some ClimateBallers, including commenters at Stokes’ blog, are now making the fabricated claim that MM05 results were not based on the 10,000 simulations reported in Figure 2, but on a cherry-picked subset of the top percentile. Stokes knows that this is untrue, as he has replicated MM05 simulations from the script that we placed online and knows that Figure 2 is based on all the simulations; however, Stokes has not contradicted such claims by the more outlandish ClimateBallers.

In addition, although the MM05 Figure 2 histograms directly quantified HSI distributions for centered and Mannian PC1s, Stokes falsely claimed that MM05 analysis was merely “qualitative, mostly”. In fact, it is Stokes’ own analysis that is “qualitative, mostly”, as his “analytic” technique consists of nothing more than visual characterization of 12-pane panelplots of HS-shaped PCs (sometimes consistently oriented, sometimes not) as having a “very strong” or “much less” HS appearance. (Figure 4.4 of the Wegman Report is a 12-pane panelplot of high-HSI PC1s, but none of the figures in our MM05 articles were panelplots of the type criticized by Stokes, though Stokes implies otherwise. Our analysis was based on the quantitative analysis of 10,000 simulations summarized in the histograms of Figure 2. )

To make matters worse, while Stokes has conceded that PC series have no inherent orientation, Stokes has attempted to visually characterize panelplots with different protocols for orientation. Stokes’ panelplot of 12 top-percentile centered PC1s are all upward pointing and characterized by Stokes as having “very strong” HS appearance, while his panelplot of 12 randomly selected Mannian PC1s are oriented both up-pointing and down-pointing and characterized by Stokes as having a “much less” HS appearance.

Over the past two years, Stokes has been challenged by Brandon Shollenberger in multiple venues to show a panelplot of randomly selected Mannian PC1s in up-pointing orientation (as done by the NAS panel and even MBH99) to demonstrate that his attribution is due to random selection (as Stokes claims), rather than inconsistent orientation. Stokes has stubbornly refused to do so. For example, in a discussion in early 2013 at Judy Curry’s, Stokes refused as follows:

No, you’ve criticized me for presenting randomly generated PC1 shapes as they are, rather than reorienting them to match Wegman’s illegitimate selection. But the question is, why should I reorient them in that artificial way. Wegman was pulling out all stops to give the impression that the HS shape that he contrived in the PC1 shapes could be identified with the HS in the MBH recon.

Stokes added:

I see no reason why I should butcher the actual PC1 calcs to perpetuate this subterfuge.

When Brandon pointed out that Mann himself re-oriented (“flipped”) the MBH99 PC1, Stokes simply shut his eyes and denied that Mann had “flipped” the PC1 (though the proof is unambiguous.)

In today’s post, I’ll show the panelplot that Nick Stokes has refused to show. I had intended to also carry out a comparison to Wegman Figure 4.4 and the panelplots in Stokes’ original blogpost, but our grandchildren are coming over and I’ll have to do that another day.

What Nick Stokes Refused to Show

Figure 1 below shows a panelplot of randomly selected Mannian PC1s in consistent orientation, a figure in the form that Brandon challenged Stokes to provide, but which Stokes refused.

To give readers a better sense of the relationship between HSI and the visual HS-ness of the Mannian PC1s, as some readers have asked about, I’ve done the random selection within three strata: left column – selected from the 10-30th percentile HSI (1.35-1.53); middle column – from the 40-60th percentile (1.58-1.62); and right column – from the 80-95th percentile (1.75-1.85). If readers wish for some other column allocation, I’ll be happy to comply. In the top left corner of each panel, I’ve shown the index of the PC1 in the dataset, together with its HSI. In the middle column, I’ve also plotted the MBH98 AD1400 NOAMER PC1 (red): it has an HSI of 1.62, almost exactly equal to the median HSI of the simulated Mannian PC1s. In the right column, I’ve also plotted the PC1 shown in MM05 Figure 1. Brandon has also done a panelplot of randomly selected Mannian PC1s in consistent orientation (see here), but I think that the figure below is easier to read (CLICK TO ENLARGE):

Figure 1 (click to enlarge). Mannian PC1s from MM05 simulations using arfima model of North American tree ring network. Data is from http://www.climateaudit.info/data/MM05.EE/sim.1.tab which was saved in 2004. In the top left corner of each panel is the column number and “hockey stick index” (comparing the “1902-1980” mean to the “1400-1980” series mean). The corresponding MBH98 PC1 (red) has an HSI of 1.622, a value that is indistinguishable from the median absolute value of the HSI (1.617). The series shown in MM05 Figure 1 is shown in blue. All series are oriented up since they are oriented in Mannian regression against increasing 20th century temperatures.

Stokes’ assertion that the supposedly attenuated (“much less”) HS appearance of his inconsistently oriented figure was attributable to random selection rather than to inconsistent orientation is refuted (as one would expect from the information in the histograms). The HS appearance of the panelplot of randomly selected Mannian PCs is just as “very strong” as that of Stokes’ consistently oriented panelplot, which he characterized as “very strong”.

Nor can one reasonably contest that PCs in the 40-60th percentile range and even in the 10-30th percentile range have distinctive HS shapes – a point that I’ll return to when fisking other of Stokes’ comments.

From time to time, Anders of the ATTP blog has attempted to understand the dispute, but uncritically accepts ClimateBaller doctrine, as for example, his following comment at Brandon’s:

This is completely untrue. MM05 did not imply that Mannian PCs “typically” produced hockeysticks: it stated it. And despite ClimateBaller assertions to the contrary, Mannian PCs applied to networks with the persistence properties of the NOAMER network do so “typically” rather than “rarely”. (ClimateBallers also contest the persistence used in these networks, a topic that I’ll discuss on another occasion.)

In future posts, I will further discuss the connection of the Hockey Stick Index to t-statistics and the relationship of the above figure to Figure 4.4 of the Wegman Report, Nick Stokes’ panelplots and other related issues. (The grandchildren are arriving now.)

UPDATE: Stokes asked that the same diagram be drawn without stratification. Here are the first 15, oriented pointing up as in the NAS panel diagram. According to the standard applied in Stokes’ diagram of top-percentile centered PC1s, the HS appearance of these series is clearly “very strong”. [CLICK ON IMAGE TO ENLARGE]

Lance Wallace, while I approve of including relevant links in posts, you can find links to both MM05 papers on the left side of your screen. In the Articles list, look for MM05 (GRL) and MM05(EE). The GRL paper is the one which focused on the PCA calculations.

The EE paper is longer and covers issues with the reconstruction as a whole. It’s the one I’d recommend people read if they want to understand Michael Mann’s hockey stick.

“Some ClimateBallers, including commenters at Stokes’ blog, are now making the fabricated claim that MM05 results were not based on the 10,000 simulations reported in Figure 2, but on a cherry-picked subset of the top percentile.”

Are you denying that Wegman’s Fig 4.1 and 4.4 are showing results that had been selected by an undisclosed step wherein only the top 100 of 10000 were sampled?

Are you denying that the curve in Fig 1 of the GRL paper, described as “Sample PC1 from Monte Carlo simulation using the procedure described in text applying MBH98 data transformation to persistent trendless red noise”, was also #71 in your set of 100 selected from 10000 on the basis of HS index?

Are you denying that the set of 100 PC1s placed on the GRL SI, described thus: “Computer scripts used to generate simulations, figures and statistics, together with a sample of 100 simulated ‘hockey sticks’ and other supplementary information, are provided in the auxiliary material”, were also the result of this 100 from 10000 selection procedure?

I think there are things you are not telling us. I need hardly mention the awesome disapproval of the commercial world for this sort of thing. Sarbanes-Oxley and all that.

Re “Stokes knows that this is untrue, as he has replicated MM05 simulations from the script that we placed online and knows that Figure 2 is based on all the simulations;”

I said at the start of my original post: “I should first point out that Fig 4.2 is not affected by the selection, and oneuniverse correctly points out that his simulations, which do not make the HS index selection, return essentially the same results. He also argues that these are the most informative, which may well be true, although the thing plotted, HS index, is not intuitive. It was the HS-like profiles in Figs 4.1 and 4.4 that attracted attention.”

SM: “In today’s post, I’ll show the panelplot that Nick Stokes has refused to show. “
No, it’s not that. It’s again stratified by HS index, which is your artificial creation. Why not just show, as I did, a random sample, unselected, as output by your program? You could undertake the artifice of inverting by HS index, as Brandon has been demanding. I don’t think it’s the right thing to do, but it won’t make much difference.

MikeN, “do you think there is a substantial difference between the panelplot of 12 that Wegman showed, and Brandon’s version of your plot”

Yes, and in HSI terms, I’ve quantified it here. With selection and decentering, for that random run, the average HSI for the top 100 was 1.96. The average abs(HSI) for 100 unselected was 1.61. To put that in perspective, if you use centered means, but the same selection process, you get an average 1.60. IOW, decentering and the 1% selection contribute about equally to the HSI enhancement in PC1.

Steve: I hope that this is all an elaborate practical joke on your part, because your reasoning is so Mannian. In a comment at Judy Curry’s you observed: “If you make statements like “Indeed, the effect of the transformation is so strong that a hockey-stick shaped PC1 is nearly always generated from (trendless) red noise” then it makes a difference if you mean in a representative sample, or in one where you have collected the most HS-like 100 out of 10000.”

Your panelplot of centered PC1s with HSI of ~1.6 is top-percentile, whereas this is median for Mannian PC1s, which have a highly distorted distribution. As you said at Judy’s, it makes a difference whether the sample is cherrypicked as your centered PC1 panelplot was, or whether the sample was representative as in MM05 Figure 2.

Nor do I understand why you or anyone else are defending Mannian principal components as a method or claiming that its pathologies were “exaggerated”. If anything, they’ve been understated. It is hard to imagine that you would accept such flaws in any other field. Nor, as I’ve said over and over, can MBH be patched just by using centered PCs. Once you follow the biased PC methodology and see it mine for Graybill stripbark chronologies, you can’t just put them back into the mix and pretend that you didn’t notice them.

Also, as I’ve said over and over, if the proxies were consistent, then the method doesn’t matter much. The problem is that the proxies are inconsistent and until you establish a set of valid temperature proxies that can yield results that can be replicated in fresh sampling on new sites, the field can’t really progress.

I was surprised to see you make that argument at your blog; I thought you were pulling a Tamino. Now you are arguing it here as well.

Saying that it is the same effect, because it is 1.6 in both cases, is hard to fathom. The effect of selection, which is the primary issue as you said, in the decentered case goes from 1.61 to 1.96, even less of a difference if the 1.61 is the average of 101-10000. The other is .65 to 1.60. And I’m not sure exactly what those numbers mean, but I get the impression that there is also a CO2-like effect where the smaller differences matter more.

I definitely prefer the panelplot done in this post for being informative. The one I did was much cruder, but it was also for a different purpose. My panelplot used the same ones Nick Stokes had posted, just with upside-down hockey sticks flipped right side up.

I wanted to show the figure he presented as being so different would look almost the same as the figure he criticized had he shown all the series with the same orientation. It’s not as informative as the figure in this post, but I think it highlights the problem with what Stokes did a bit more directly.

The really strange thing in all this is that if you think about it for even a minute, it becomes pretty obvious that persistence in the data almost guarantees that there will be a subset of noise-only synthetic proxy series which will mimic most ANY target trend, and spaghetti-searching methods like those used by Mann and others will for certain find those series and identify them as ‘real thermometers of the past’. The only question in my mind is if a realistic noise model (AR(1) or ARMA) based on real proxy data generates hockey stick like reconstructions that are comparable in scale to a Mannian hockey stick. If you can generate a comparable hockey stick reconstruction with nothing but selection of realistic noise series using Mannian methods, then the entire exercise is very doubtful. I have tried to point this out to Nick, but to no avail. Nick does not appear to want to hear.
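
The mining effect described in this comment is easy to reproduce in miniature. The sketch below is my own toy, not the MM05 arfima setup: it uses simple AR(1) noise and a hypothetical linear 20th-century target, screens 1000 trendless series for correlation with that target, and averages the survivors. The screened composite acquires a blade even though no individual series contains a signal.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_series, phi = 581, 1000, 0.9   # phi: assumed AR(1) persistence

# trendless AR(1) "proxies" (started at zero; fine for a sketch)
noise = rng.normal(size=(n_series, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]

# screen against a 20th-century "target": the last 79 years rising
target = np.concatenate([np.zeros(n_years - 79), np.linspace(0, 1, 79)])
corr = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
picked = proxies[np.argsort(corr)[-100:]]   # keep the top decile by correlation
composite = picked.mean(axis=0)

# the screened composite has a blade despite the series being pure noise
blade = composite[-79:].mean()
shaft = composite[:-79].mean()
print(blade > shaft)
```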

Mann took a crack at this concept in his 07 article. The problem was that he didn’t use realistically high autocorrelation values: actually matching the persistence in the data was not achieved, despite his paper’s claim to the contrary. How it got through review is a question to me.

“The only question in my mind is if a realistic noise model (AR(1) or ARMA) based on real proxy data generates hockey stick like reconstructions that are comparable in scale to a Mannian hockey stick.”

I’ve done this over and over at my blog. There is no question as to whether noise creates comparable reconstructions, it absolutely does. Also, the actual proxies can create any other pattern you ask them to simply by choosing the right ones.

Jean S: Interesting, it seems that the word “blog” was indeed the trigger which put this one into moderation; the one with “blog” replaced by “b$$g” went through.

Steve: when I was getting overwhelmed by spam a couple of months ago, I added some seemingly non-controversial words which were heavily used in spam. I then decided to close comments on old threads. Because CA had thousands of open threads, this gave spammers many targets. The closing of old threads has stemmed the amount of spam that I have to examine manually – I’ll have to revisit some of my new trigger words which aren’t necessary any more.

But that’s not what is happening here. We’re seeing PC1, not a reconstruction. My latest post at Moyhu shows that these are very different things. Short centering aligns PC1 with HS behaviour, but all other PCs are then aligned in a near-orthogonal direction, so it has little effect on a recon, provided you use a reasonable number of PCs.
Steve: if you believe that proxies are temperature plus low-order red noise, then it’s hard to provide a mathematical justification for lower order PCs. Nor, as we discussed at the time, does Preisendorfer’s Rule N (see our contemporary discussion of Preisendorfer) say that Rule N establishes a lower order PC as a temperature proxy. Nor, contrary to disinformation from Mann and others, now apparently including you, did we argue that the MBH hockeystick arose from red noise. We observed that Mann’s assertion that the HS-shaped PC1 was the “dominant pattern of variance” was false and attributed this false belief to him misleading himself through his incorrect methodology. We then analysed what the erroneous method did when applied to the NOAMER network in controversy: it pulled the Graybill stripbark chronologies into the PC1.

Given the importance attributed by Mann to this pattern, we then analysed the specialist literature to see whether it agreed that stripbark chronologies were uniquely accurate measures of world temperature, and found that specialists had said that the growth pulse was not due to temperature. If Graybill stripbark chronologies are not valid proxies, then Preisendorfer’s Rule N can’t make them so. At the end of the day, Mann’s salvage techniques, which you are again espousing, are nothing more than backdoor efforts to include the stripbark chronologies. And despite your repeated false statements otherwise, we discussed the permutations and combinations relating PC retention schemes to reconstructions in MM05-EE.

By the way, there is convincing evidence that Mann did not use Preisendorfer’s Rule N in MBH98, as it does not replicate other network retentions. If Mann v Steyn goes forward, I anticipate that Mann will be asked to produce evidence supporting his claim to have used this method for tree ring networks, and evidence of his actual methodology will be non-existent by then.

Since I see that this got through, I’ll note that I have had a substantive reply in moderation since 5.25 pm. Or at least I had a comment – it seems to have just disappeared.
Steve: I’ll look for it tomorrow. Closing up now.

Except that PC1 from Mann’s proxies was the bristlecones no? The rest of the proxies did not have nearly the degree of autocorrelation of the BCP’s. Not likely going to get an offsetting effect in 2nd and 3rd order PC’s unless the remaining proxy pool had the same characteristic persistence.

Steve: it’s curious that the bristlecone shape is preserved as much as it is in the PC4 using covariance PCs. This indicates that the bristlecone HS is orthogonal to first three PCs. This is very hard to explain if the tree ring chronologies are temperature proxies – not that Mann or Stokes worry about details such as this.

It should be pointed out that the first three PCs would not be “noise”, but rather would need to be physical features of the proxies that are more pronounced in their effects than the supposed “temperature” PC.

The principal components have the property that PC1 is the best single linear least-squares predictor of the data, PC1 and PC2 are the best pair of linear predictors, etc. Thus we must have at least three factors operative in the proxy data which are stronger than the “temperature”, none of which, as far as I know, has been identified as to its physical interpretation by Mann and his cohorts.
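
This best-predictor property (the Eckart-Young theorem) can be checked numerically. The sketch below is my own illustration, not from the thread: it verifies on random centered data that projecting onto any single fixed direction never reconstructs the data with less error than PC1.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 8))
X -= X.mean(axis=0)                     # conventional centering

U, S, Vt = np.linalg.svd(X, full_matrices=False)

def rank_k_error(k):
    # Frobenius error of the rank-k SVD truncation, which Eckart-Young
    # guarantees is the best rank-k approximation.
    Xk = U[:, :k] * S[:k] @ Vt[:k]
    return np.linalg.norm(X - Xk)

err1 = rank_k_error(1)
v = rng.normal(size=8)
v /= np.linalg.norm(v)                  # a random competing direction
err_rand = np.linalg.norm(X - np.outer(X @ v, v))
print(err_rand >= err1)                 # PC1 is never beaten
```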

Roman,
In MBH98, he shows details of the first five PCs of instrumental temperature. They correspond to spatial variation, and he shows the mapped EOFs in Fig 2. Proxy temperatures will also have spatial variation.

Your response does not really provide an explanation of the statement that the “temperature is in the fourth PC”.

Yes, there are 5 PCs shown for the instrumental temperatures, but the temperature is “in” all of them. They are merely expressions of the differences in the temperature patterns of the various regions. However, the proxy data are not “temperatures”. They are various measures of the “growth” (for lack of a better term) of the proxy due to various physical factors over a period of time. Temperature may be just one of the factors that can affect the proxy.

If “proxy temperatures will also have spatial variation” then this will manifest itself in different PCs to reflect the regional differences, just as it does in the temperature PCs. As pointed out to you earlier by Jean, the Mannian decentering procedure also produces non-orthogonal proxy PCs with a correlation structure contaminated by differences in the mean levels of the individual proxies, so that any regional patterns could likely be lost.

To state that “the temperature is in PC4” is a conclusion based on a non-scientific expectation of a desired result – a modus operandi which we have seen too often in many climate papers.

RomanM: A typographical correction was made to this comment for clarity purposes.

Thanks RomanM. We hashed all this out at JeffID’s. Nick has been busy at CE not understanding the selection process. The a priori argument and Jeff’s comment above just don’t register, no matter how it is presented.

RomanM, “If ‘proxy temperatures will also have spatial variation’ then this will manifest itself in different PCs to reflect the regional differences, just as it does in the temperature PCs.”
Indeed so.

“As pointed out to you earlier by Jean, the Mannian decentering procedure also produces non-orthogonal proxy PCs with a correlation structure contaminated by differences in the mean levels of the individual proxies so that any regional patterns could likely be lost.”

The decentering does not produce non-orthogonal PCs. They are still eigenvectors of a positive definite symmetric matrix. I plotted them here for the NAS example. There are actually analytic solutions here, which I’ll post soon.

I should have been a bit more careful in my statement, since you do not seem to have a handle on the statistical angle of data analysis. The PCs of a set of data whose variables have not all been centered at zero may be orthogonal in a vector sense, but are definitely NOT in a statistical sense. The correlation matrix of the PCs is not the identity matrix. Separating noise which is assumed to be independent of what the climateers call the “signal” will become considerably more difficult, and calculating accurate error bars for estimates based on the PCs will also be more complicated.
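
The distinction RomanM is drawing can be demonstrated directly. In the sketch below (a toy with pure noise, not the NOAMER network), the score series from a fully centered SVD are uncorrelated, while those from a short-centered SVD remain orthogonal as raw vectors yet acquire nonzero sample correlations once each is centered at its own mean.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))            # 200 "years" x 10 "proxies", pure noise

def pc_scores(data, center_rows):
    # PC score series from an SVD after centering on the given rows only.
    A = data - data[center_rows].mean(axis=0)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U * S

full = pc_scores(X, slice(None))          # conventional centering: all rows
short = pc_scores(X, slice(-40, None))    # "short" centering: last 40 rows only

def max_offdiag_corr(scores):
    C = np.corrcoef(scores.T)
    return np.abs(C - np.diag(np.diag(C))).max()

print(max_offdiag_corr(full))    # ~0: centered PC scores are uncorrelated
print(max_offdiag_corr(short))   # nonzero: orthogonal only as raw vectors
```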

Nick, three weeks ago I showed that it was impossible for Mann to actually have used Rule N (or any “objective” rule, for that matter) in the MBH9x PC calculations. I don’t see any protest from you there. So please go there and explain the PC retention rule in the SOAMER AD1450, SOAMER AD1750, STAHLE SW AD1750, and NOAMER AD1500 steps before continuing this stupid excuse that one should keep 4 or 5 PCs in the NOAMER AD1400 case.

BTW, in Mannian short-centered “PCA” no PC selection rule has any meaning as the eigenvectors have lost their connection to the correlation structure of the data.

Whenever he could use more than 2, he did. But even using 2 PCs is very different to 1.

Yes, he thought two was the right number in the NOAMER AD1400 step. Not three, four, or five. The number of PCs to keep is always a function of the total number of series. In this case 70, which is unchanged whether you use a true PCA or totally flawed Mannian calculations.

On spatial variation, he shows the distribution in Fig 1a. It’s wide. Dense observation does not affect the expression of spatial modes in PCs.

Try a bit harder; you should already know you can’t bluff me with this BS. Fig 1a shows the distribution of all the indicators (which includes already-calculated PCs), not the distribution of proxies going into the PC calculations. In Fig 1a the whole NOAMER AD1400 network (of 70 series) is indicated by two black “+” signs (very hard to distinguish from the blue ones).

In fact, Steve has linked to his earlier discussion of the Stahle network. This is in a quite restricted region of New and Old Mexico. Yet PC2 clearly takes up the N-S spatial variation. Over a hemisphere, there will be several spatial modes.

Steve: but why do you think that it has any meaning as a proxy beyond armwaving?

Because the ability to produce patterns of global temperature change were as important as the ability to produce a global temperature anomaly curve. By comparison with local records, the regional maps supported the conclusions of the paper.

I believe the figure he has up right now is a random sample, not a stratified sample (though it is labeled as such). That’d be funny if so, as Nick Stokes complained the figure didn’t show a random sample.

The Mannian PC1 simulations had to have been generated from a model of the red/white noise of the actual data. The graphs suggest that the data has the long-term persistence of an arfima (p,d,q) model (of the autoregressive fractionally integrated moving average variety) where d <= 0.4. Is there a link to the code and data for doing the simulations?

In my analysis of individual proxy series, I see a wide variety of red/white noise levels in combination with secular trends at various levels, from none to large. In the eigenvectors of these individual series, as generated in a singular spectrum analysis, I do not see the noise structure I see in the Mannian PC1 simulations. That difference is probably to be expected, as the Mannian PCs are 2D (time and space) and combine proxy data, while mine were time only.
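
For readers wanting to experiment, trendless fractionally integrated noise of the kind described (ARFIMA(0,d,0) with d <= 0.4) can be simulated with the standard MA(inf) filter. This is a sketch with a single fixed d of my choosing; the MM05 script instead fits a full arfima model to each tree-ring series.

```python
import numpy as np

def fracdiff_noise(n, d, rng):
    # ARFIMA(0,d,0) noise: x_t = sum_k psi_k * e_{t-k}, with MA weights
    # psi_0 = 1, psi_k = prod_{j=1..k} (j - 1 + d) / j.
    k = np.arange(1, n)
    psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])
    e = rng.normal(size=2 * n)
    return np.convolve(e, psi)[n:2 * n]   # keep points past the burn-in

rng = np.random.default_rng(7)
x = fracdiff_noise(581, d=0.4, rng=rng)

# lag-1 autocorrelation of d=0.4 noise is substantial (theory: d/(1-d) = 2/3)
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(r1)
```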

Glad someone else asked that! I’m grateful though for this thread and Kevin O’Neill’s “Fraud” Allegations four days ago, because the scientist I sat next to on Tuesday in Bristol was full of both memes when I engaged him in discussion after Michael Mann’s presentation.

Steve wrote on that thread:

So far, I’ve taken little interest in such efforts because, as far as I’m concerned, the defectiveness of Mannian principal components is established beyond any reasonable cavil. My attitude towards such efforts is probably not unlike Andrew Lacis’ attitude towards skydragons and their supposed slayers.

But the rhetoric of such efforts has increased in both volume and intensity.

It’s a term stemming from “Climateball,” a word I believe was invented by the user willard. Here is a post about it by Anders which I think captures the idea. An excerpt:

It would be wonderful if we could have thoughtful discussion amongst people who broadly disagree, but who are willing to listen to what the other person has to say, give it some thought, and maybe actually agree with some – if not all – of it. Instead, it’s more about scoring points. Find a way to undermine the other person’s argument. Find a way to undermine their credibility. Find a way to dodge their arguments against your position. Don’t necessarily apply the same standards to yourself as you apply to everyone else (of course, you then make out that you hold a higher moral ground). Again, to be clear, I certainly don’t think this is how it should be conducted; it just appears as though this is – sadly – how it is often conducted.

While I believe the term was originally conceived as a way of referring to people on the skeptical side of the debate, it fits people like Nick Stokes and Michael Mann quite well.

Given how I’ve seen Climateball described, I’m inclined to think that it originated as (Climate sci. + Calvinball*). It seems to strike the right tone of a completely informal game for scoring points in which the rules are made up as you go, often in a pretty smug manner, and with a mixed level of external logic.

* – Calvinball from the “Calvin and Hobbes” cartoon strips by Bill Watterson – 1985 to mid-1990’s.
Steve: an apt description of discussions with Nick Stokes.

The link for calvinball that I used back then is now dead, but there was a regular commenter at climateaudit who adopted the moniker. So the comparison is not new, although the neologism “climateball” is.

The lack of objective rules for handling paleoclimate samples has not improved since I made that comment in 2007, with rules seeming to vary for replication count, combining with neighbouring sites, and the problems of Yamal falling very much into the “calvinball” camp.

Steve McIntyre: if you believe that proxies are temperature plus low-order red noise, then it’s hard to provide a mathematical justification for lower order PCs. Nor, as we discussed at the time, does Preisendorfer’s Rule N (see our contemporary discussion of Preisendorfer) say that Rule N establishes a lower order PC as a temperature proxy. Nor, contrary to disinformation from Mann and others, now apparently including you, did we argue that the MBH hockeystick arose from red noise. We observed that Mann’s assertion that the HS-shaped PC1 was the “dominant pattern of variance” was false and attributed this false belief to him misleading himself through his incorrect methodology. ……. (etc. etc.)

As I understand it, there is no end-to-end research product archive of any of Mann’s temperature reconstructions now in existence which could allow one to start at the very beginning of his analysis and then to follow the sequence of his reconstruction step by step through all of its various processes.

Nor, as I understand it, is it possible to directly and precisely follow the flow of data inside one of his temperature reconstructions from its initial input through its various transformations on through to the reconstruction’s final data outputs.

Further, there exists no annotated process road map for any of Mann’s reconstructions which:

— Provides a high level summary overview of the approach he uses to perform his reconstructions.
— Illustrates the end-to-end process flow of the sequential process steps.
— Describes the function and purpose of each process step, including references to key assumptions and to key literature citations which affect that process step.
— Illustrates the end-to-end process flow of data from initial input through transformation through final output, including references to key assumptions and to key literature citations which affect how the data is being employed (or processed) within a specific data flow.
— Describes the historical pedigree of the input data, and the key assumptions being made about the input data.
— Describes the methodological purpose and use of transformed data at each step where it is being generated or used.
— Describes the physical meaning or other methodological attributes of the data, either the raw input data or the transformed data, as appropriate for the process step.
— Provides a summary overview and description of the output products of the reconstruction; e.g., data output files, graphics, interpretations of outputs, etc. etc.

Let us guess that no such process roadmap as described above exists for any of Mann’s temperature reconstructions, either a road map he may have generated himself, or one which was generated by either his colleagues or by his critics.

If such a road map actually did exist, and if the assumption is being made at the very beginning of a Mann temperature reconstruction that proxies are temperature plus low-order red noise — one example of a supposed temperature proxy being a tree ring chronology — then there should exist peer-reviewed scientific literature for each proxy type which justifies this most fundamental of assumptions for that proxy type.

If no such literature can be found which deals directly with the topic of “proxies are temperature plus low-order red noise” for a given proxy type, then the archive of Mann’s research ought to include a fairly detailed discussion as to why it is appropriate to be making this very fundamental assumption.

Why? Because if this foundational assumption is wrong, or else if it cannot be adequately supported using proper citations to prior research, then Mann’s temperature reconstruction has failed methodologically at Step 1, and all that follows is by definition methodological chaff, not temperature reconstruction wheat.

Not to say there aren’t any number of other significant mistakes to be found inside a Mann temperature reconstruction. These might include the inappropriate application of statistical methodologies; the inappropriate transformation of data; and the misinterpretation of raw or transformed data.

Assuming a road map as described above doesn’t exist, suppose such a road map were to be generated by critics of Mann’s temperature reconstruction work. It is my personal opinion that if such a road map existed, it would form a body of clear and convincing evidence that Mann’s temperature reconstructions are contrived research analysis products whose data inputs and process methodologies have been consciously cherry picked and integrated in such a way as to produce only one possible outcome — the Hockey Stick shape that has become such an essential tool in promoting AGW theory.

I guess I’m late and new to this discussion, but do you mean that Mann has still not released his data and “road map”? How can we call this science?
Steve: don’t over-interpret this. You’re undoubtedly thinking in too simplistic terms. Mann has released data and code for much of his work. There are some important lacunae. The failure to show the validity of principal components as applied to tree ring networks is an entirely different sort of sin than his earlier refusals to provide requested information. Also there are some far worse offenders than Mann on data archiving e.g. Ellen Mosley-Thompson and Lonnie Thompson.

Eli had tried to put a couple of comments over at the Auditor’s lair, but alas they have not appeared nor are they likely to. The point was that by defining the Hockey Stick Index as McIntyre did and then selecting the 100 with the highest scores for the random draw, those pointing downwards at the end (which would have negative HSI) would not even be considered. Thus this whole nonsense about flipping the birds is just that. For a bit more detail, see RR.

Jean S: I checked both the spam & trash folders, but didn’t see any comments from you. Update: I checked further to older ones, and did find them. Have a nice day.