Ian Jolliffe Comments at Tamino

Ian Jolliffe, a noted principal components authority, has posted a comment at Tamino’s which repudiates Tamino’s (and Mann’s) citation of Jolliffe as a supposed authority for Mannian PCA. He wrote to me separately, notifying me of the posting, authorizing me to cross-post his comment, and stating that we had correctly understood and described his comments in our response here:

I looked at the reference you made to my presentation at http://www.uoguelph.ca/~rmckitri/research/MM-W05-background.pdf *after* I drafted my contribution and I can see that you actually read the presentation. You have accurately reflected my views there, but I guess it’s better to have it ‘from the horse’s mouth’.

Here is Jolliffe’s comment in full:

Apologies if this is not the correct place to make these comments. I am a complete newcomer to this largely anonymous mode of communication. I’d be grateful if my comments could be displayed wherever it is appropriate for them to appear.

In reacting to Wegman’s criticism of ‘decentred’ PCA, the author says that Wegman is ‘just plain wrong’ and goes on to say ‘You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.’ It is flattering to be recognised as a world expert, and I’d like to think that the final sentence is true, though only ‘toy’ examples were given. However there is a strong implication that I have endorsed ‘decentred PCA’. This is ‘just plain wrong’.

The link to the presentation fails, as I changed my affiliation 18 months ago, and the website where the talk lived was closed down. The talk, although no longer very recent – it was given at 9IMSC in 2004 – is still accessible as talk 6 at http://www.secamlocal.ex.ac.uk/people/staff/itj201/RecentTalks.html
It certainly does not endorse decentred PCA. Indeed I had not understood what MBH had done until a few months ago. Furthermore, the talk is distinctly cool about anything other than the usual column-centred version of PCA. It gives situations where uncentred or doubly-centred versions might conceivably be of use, but especially for uncentred analyses, these are fairly restricted special cases. It is said that for all these different centrings ‘it’s less clear what we are optimising and how to interpret the results’.

I can’t claim to have read more than a tiny fraction of the vast amount written on the controversy surrounding decentred PCA (life is too short), but from what I’ve seen, this quote is entirely appropriate for that technique. There are an awful lot of red herrings, and a fair amount of bluster, out there in the discussion I’ve seen, but my main concern is that I don’t know how to interpret the results when such a strange centring is used? Does anyone? What are you optimising? A peculiar mixture of means and variances? An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

I am by no means a climate change denier. My strong impressive is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics. Misrepresenting the views of an independent scientist does little for their case either. It gives ammunition to those who wish to discredit climate change research more generally. It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp, but I remain sceptical.

Ian Jolliffe

As an editorial comment, the validity of Mannian PCA is only one layer of the various issues.

For example, Wahl and Ammann approach the salvaging of Mann overboard from a slightly different perspective than Tamino. Their approach was to argue that Mannian PCA was vindicated by the fact that it yielded a high RE statistic and thus, regardless of how the reconstruction was obtained, it was therefore “validated”. I don’t see how this particular approach circumvents Wegman’s “Method Wrong + Answer ‘Right’ = Incorrect Science”, but that’s a different argument and issue. Also, if you read the fine print of Wahl and Ammann, the RE of reconstructions with centered PCA is much lower than the RE using incorrect Mannian PCA, but, again, that is an issue for another day.

It would be nice if Jolliffe’s intervention were sufficient to end the conceit that Mann used an “alternate” centering convention and to finally take this issue off the table.

232 Comments

The Mannomatic is junk. Try using it to pick stocks from within a portfolio that are likely to rise and you will end up penniless. Processes that appear to be strong proxies for some underlying process often turn out to be much weaker than advertised in out-of-sample trials. Divergence is the telltale early-warning sign.

The appeal to the authority of Ian Jolliffe was an essential climax of a whole series of Tamino posts about PCAs and how Mann had simply been following widely accepted techniques.

Just how full will the apology from Tamino be? Will he now admit that Mann’s method had absolutely no precedent and no justification? Would it not be gracious of him to say: OK guys, I got it wrong – Mann did just make up a dodgy method so that he could come up with the results he wanted?

My strong impressive is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

How many times have I suggested Schmidt should dump Mann and divorce the Hockey Team? AGW is about climate models primarily – as Spencer Weart’s post at RC today makes clear. (Yes, these models are being tuned to the paleo recon record, but that is a separate problem.)

Current temperatures may or may not be “unprecedented”; however, no one disagrees that we are in an interglacial and that the question is not the recent past, but the near future. AGW is all about physical models of the sort that lucia is analysing.

IMHO, seeing Spencer Weart’s post at RC was mind boggling. For years I have put up with RC’s insufferable arrogance because I wanted to learn the physical basis for AGW — to receive the engineering-quality exposition, the deep physics — from those experts who claimed to possess it (despite their reluctance to share it with mere mortals).

And now this: Weart pulls back the curtain and reveals a bunch of bumpkins pulling levers on untested GCMs?

The Mannomatic is junk. Try using it to pick stocks from within a portfolio that are likely to rise and you will end up penniless.

We now have it from Jolliffe himself that your first sentence is valid.

However, I think your second sentence must be qualified to read that if you used it to pick stocks, you would end up not necessarily penniless, but just no better off, on average, than if you had just used a dart board. In fact, a dart board has historically done pretty well on average, at least for US markets.

However, a real problem with Mannomatic stock selection would be that it would leave you with a highly undiversified portfolio, and of high-variance stocks at that. Although the expectation of your return would be about normal, the variance would be unnecessarily much too high. A dart board, with lots of darts, would yield a much less uncertain return.
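The variance point above can be illustrated with a toy simulation (all numbers hypothetical: every stock is assigned the same expected return, so only volatility differs between the two strategies):

```python
import numpy as np

rng = np.random.default_rng(4)
n_stocks, n_sims = 500, 20000
mu = 0.05  # identical expected return for every stock

# Hypothetical universe: same mean return, widely varying volatility
sigma = rng.uniform(0.1, 0.6, n_stocks)
returns = mu + sigma * rng.standard_normal((n_sims, n_stocks))

# Concentrated bet: the 5 most volatile stocks
noisiest = np.argsort(sigma)[-5:]
concentrated = returns[:, noisiest].mean(axis=1)

# Dart board: 50 stocks chosen at random
darts = rng.choice(n_stocks, size=50, replace=False)
diversified = returns[:, darts].mean(axis=1)

print(round(concentrated.mean(), 3), round(diversified.mean(), 3))  # both near 0.05
print(round(concentrated.std(), 3), round(diversified.std(), 3))    # concentrated far noisier
```

Both portfolios have the same expected return by construction; only the spread of outcomes differs, which is the dart-board point.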

Re: Hu McCulloch (#6) I thought about it before saying it, but decided to leave “as is”, even though I get your point. If competing investors are making smart decisions, and you are ignorant about how stupid your decisions really are, how long do you think you can last in that game? Your overconfidence will get you in serious trouble as you foolishly take on greater risk in anticipation of greater reward.

Looking through the comments on that post on Tamino’s I realise that I challenged him on how he randomised his data. Never did get a reply, and if I was wrong he could have easily made me look very stupid.

I know one shouldn’t read too much into his last paragraph, but he sounds like a lot of scientists — his “impression” is that there is a lot of science out there to support AGW. I imagine that he has a life and more work than he has time, so he has to rely on general impressions for anything in science that is outside his specific area of focus. He’s heard a lot of claims of consensus and eventually it filters through. I would bet he would be shocked if he ever took the time to examine seriously the climate models, Hansen’s “data” adjustments, the shocking siting failures in the temperature record, etc.

What this also illustrates is that Tamino may not have the competence required to decide who he ought to support and who he ought not support. He is vulnerable to errors of faith.

Long ago I realized I had to back the outsider maverick Steve M in this debate, because I saw that his arguments were correct (although inexpertly presented). I did this, not because I wanted to, but out of a sense of obligation to the truth. I think outsiders should be treated with greater respect, not lesser. That is where I part company with team members. Steve M spotted a problem in public policy accountability and set out to fix it.

Jolliffe is correct – the hockey stick debate is a distraction from the real problem: the GCMs. The GCMs are in need of audit. Check how many times I have said that in the past three years.

What this also illustrates is that Tamino may not have the competence required to decide who he ought to support and who he ought not support. He is vulnerable to errors of faith.

I have had discussions on this issue on non-climate forums with people with a science background. I am disturbed at the number of people who sound otherwise technically competent yet are not willing to accept that the scientific process failed in the case of MBH and its derivatives. I get the impression that they believe Tamino’s and Gavin’s arguments simply because they find the alternative too uncomfortable. IOW – I think a lot of scientists are vulnerable to “errors of faith” when it comes to this issue.

A question I would like answered by people like Gavin is: if we can’t trust your opinion when it comes to the choice of a method for analyzing paleo-proxy data, then how can we trust your opinion when it comes to the choice of methods for validating GCMs against real data?

Re: Raven (#14),
Raven, maybe Schmidt and Tamino et al. took a leap of faith based on the “precautionary principle”? i.e. Better to believe in your IPCC colleagues than to be seen questioning them too hard?

In which case the answer to your questions would be: “well, we really had no expertise in statistical paleoclimatology, but we have LOTS of expertise in statistical theoretical climatology”. To which I would raise an eyebrow … and start dialing Dr. Wegman.

It is interesting to read the comments posted after Tamino’s PCA4 article.

JeanS made a very good critique and this was Tamino’s response:

Response: I must have really hit the bulls-eye with this post, because now I can *smell* the desperation.

No matter how often, brashly, or rudely you repeat the assertion that “you loose [sic] meaning,” it won’t make it true. Your own words are unambiguous: you assert that non-centering itself is not valid (there’s no other interpretation of “by not centering, you loose [sic] all your theoretical properties…” etc.). But Jolliffe flatly contradicts you (that’s what his presentation is about), so you brush that off with the pathetic statement that he doesn’t specifically mention using a partial average.

Jolliffe’s entire presentation is about the fact that centering is not an essential aspect of PCA. But because he doesn’t specifically mention choosing a partial average, you feel free to ignore his point. That of course enables you to repeat the crap about non-centered PCA being invalid. Now *that* is rubbish, and particularly stinky rubbish to boot.

I suspect that you *know* your amateurish claim is bulls**t, but you’re *counting* on readers not having the expertise to catch your lie.

Now that we have Jolliffe’s own words, we know that JeanS’s post was neither BS nor a lie.

It is very amusing to read the rest of Tamino’s responses to pertinent and accurate criticisms. As well as apologising to Jolliffe, will he also apologise to JeanS, BoulderSolar etc.?

#21. Not in his most recent, but he uses it in Mann et al 2007. Also Wahl and Ammann 2007 use it as their “vindicated” case.

IF you do ex post picking of proxies based on correlation, excluding proxies with low correlations, it’s a different method of biased picking. IMHO, what is needed is for the Team (or whoever) to pick a class of proxies, e.g. white spruce at treeline or ocean sediment Mg/Ca, whatever, and take them all. If you only know that a tree ring chronology is a temperature “proxy” after the fact, then you’re into biased picking. I think that this is going on in Mann 2008, but it will take a while to really understand what’s going on with it.

What we do know is that Mannian EIV emphasizes HS-shaped series (which is what Mannian PCA did), so there’s a good reason to examine this method for subconscious bias.

What they also need to do is develop an actual theory that definitively links the proxy to temperature, rather than simply assuming the correlations that result are strictly a temperature indicator. Even if their methods are flawless, there needs to be an underlying physical relationship in order to make any claims about what the analysis represents.

Betcha’ micro-environmental factors, CO2, and water have more to do with tree growth than small increases in “average” ambient temp.

Show me the research and physical or biological mechanism from other than the Team that proves there can be temperature teleconnection. I.e. that the entire NH annual or growing season average land temperature can somehow affect tree rings even when there may be evidence that those same rings don’t correlate to local temperature.

Gotta wonder why that hasn’t been done. Betcha’ micro-environmental factors, CO2, and water have more to do with tree growth than small increases in “average” ambient temp.

I believe there is a physical theory for δ18O proxies, and perhaps some others, but not explicitly for tree-rings. At the very least, we know the tree-ring relationship is a) dependent upon a variety of factors and b) non-linear. It hasn’t been done simply because it is nearly impossible to do and, at the very least, would take a long time.

I have just finished looking at the correlation between proxies in “itrdbmatrix.txt” and the instrumental record in “HAD_NH_reform.txt” in the 1850-1931 period (where the instrumental temperature shows more or less no trend). I used Kendall’s tau as a correlation measure. I got 328 (198+, 130-) significant correlations using 90% CI and assuming iid. However, since both the temperature and proxy data show strong LTP, the number of significant correlations drops sharply to only 141 (105+, 36-) after accounting for LTP, where the other 187 seem to be only due to coincidence of LTP in both series. Now given the large type I error of 10% used, one could expect as much as 0.1×1209 = 121 cases to arise only due to chance. The fact that we have only 141 significant ones sheds great doubt on whether the group as a whole can be considered as proxies at least in the period 1850-1931. I am very confident that repeating the same analysis with the ordinary product-moment correlation coefficient r would reveal the same once LTP is taken into account (which means that the limit on r of 0.1?? would be larger due to variance inflation).
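The multiplicity arithmetic in the comment above can be sketched in a few lines (a toy simulation of the iid case only; the LTP adjustment described above would raise the effective critical value and is not modelled here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_proxies, n_years, alpha = 1209, 82, 0.10   # 1850-1931 is 82 years

# Multiplicity arithmetic: expected false positives if NO proxy is real
expected_by_chance = alpha * n_proxies       # the "0.1 x 1209 = 121" figure

# Simulate the iid case: pure-noise "proxies" vs a pure-noise "temperature"
temp = rng.standard_normal(n_years)
proxies = rng.standard_normal((n_proxies, n_years))

tc = temp - temp.mean()
pc = proxies - proxies.mean(axis=1, keepdims=True)
r = pc @ tc / (np.linalg.norm(pc, axis=1) * np.linalg.norm(tc))

# Two-sided 10% critical |r| for n = 82 (t ~ 1.664 with 80 d.f.)
r_crit = 1.664 / np.sqrt(80 + 1.664**2)
n_signif = int((np.abs(r) > r_crit).sum())
print(round(expected_by_chance, 1), n_signif)   # simulated count lands near 121
```

With no real signal at all, roughly 121 of the 1209 noise series clear the 90% threshold, which is why a raw count of "significant" correlations says little by itself.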

My lay-lurker’s gist of the article… and everyone interested should read it. Centered PC analysis of “weather data” from UK stations has the First PC measuring the total temperature, and the Second PC measuring the seasonal temperatures. As you de-center the PC analysis, the First PC begins to look a lot like the Second PC.

“It seems unwise to use un-centered analyses unless the origin is meaningful. Even then, it will be uninformative if all the measurements are far from the origin.” From “To center or not to center… or perhaps do it twice.” Ian Jolliffe
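A minimal sketch of the centring effect Jolliffe describes, using synthetic data rather than the UK station data from the talk: when the columns are not centred, the leading PC mostly records where the data sit relative to the origin, not how they co-vary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_series = 200, 10

# Synthetic data (NOT the data from the talk): each column fluctuates
# around its own nonzero mean, far from the origin
col_means = rng.uniform(5, 15, n_series)
X = col_means + rng.normal(0, 1, (n_obs, n_series))

def first_pc(M):
    # Loadings of the leading principal component via SVD
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[0]

pc1_centered = first_pc(X - X.mean(axis=0))   # standard column-centred PCA
pc1_uncentered = first_pc(X)                  # origin left where it was

# Uncentred: PC1 essentially points at the vector of column means
direction = col_means / np.linalg.norm(col_means)
print(round(abs(pc1_uncentered @ direction), 3))   # close to 1
print(round(abs(pc1_centered @ direction), 3))     # much smaller
```

This is the "uninformative if all the measurements are far from the origin" case from the quote: the uncentred PC1 is dominated by the means, and the co-variation that PCA is meant to summarize gets pushed into lower PCs.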

I have asked this question before, but let me try it again. Why reduce the dimensions at all? Why not simply create a reconstruction of the closest temperature series (be it grid or station) for each potential proxy, and then average for all reconstructions that apply to the particular temperature series? Oh, you still have all the time series issues of reconstruction from serially correlated series, but any weighting — and, of course, one can legitimately argue one proxy should be more heavily weighted than another — is done explicitly, not implicitly in the dimension reduction.

Obviously there is a little programming involved, but the task appears quite feasible, at least if one knows how to get all the data series correctly.

I know if I saw a hockey stick in the majority of the reconstructions based on individual proxy series, I would be inclined to accept the hockey stick as the best estimate of today vis-a-vis the past. And if I only saw the hockey stick in a few percent of the individual reconstructions, I would say the hockey stick view is empirically wrong (as opposed to now where I think that it has been artificially created but still might be correct).

Why reduce the dimensions at all? Why not simply create a reconstruction of the closest temperature series (be it grid or station) for each potential proxy, and then average for all reconstructions that apply to the particular temperature series? Oh, you still have all the time series issues of reconstruction from serially correlated series, but any weighting — and, of course, one can legitimately argue one proxy should be more heavily weighted than another — is done explicitly, not implicitly in the dimension reduction

Because then Mann fails to produce a Hockey Stick, and with that the necessary headlines about “anomalous warming in the 20th Century” and the attendant publicity that he desperately craves.

What is a proxy that does not reflect local climate conditions but mysteriously and magically somehow reflects a statistical index called “Global Mean Temperature”? I’d call it a fluke – and its combination with other flukes, a delusion.

Re: John A (#49), Going a step too far. Again. Data compression makes sense when you are trying to apportion the global signal into its “principal components” – hence the name of the analysis. The hope is that the first PC will load evenly across the globe and that it will match the well-mixed global-scale forcing signals, like CO2, solar, and volcanic ash. In which case the secondary PCs should be more regional in scope, some proxies of some types responding more strongly than others, for example. Another set of PCs would be regional anomalies that for whatever reason did not fit the global pattern. For example, areas where climatic feedback processes are exceptionally strong.

The problem in MBH98 was (1) their false elevation of a low PC to a higher level of significance (PCs are numbered according to how much variance they account for) – which suggested it merited interpretation and (2) their failure to recognize and admit this was not in fact representative of a strong global signal, but rather a weak regional signal, loading very heavily onto one proxy – California bcps.

To dismiss the PCA approach altogether is wrong-headed. It’s the idiosyncratically Mannian flavor that is so distasteful. One must be careful not to conflate the two.

Going a step too far. Again. Data compression makes sense when you are trying to apportion the global signal into its “principal components” – hence the name of the analysis. The hope is that the first PC will load evenly across the globe and that it will match the well-mixed global-scale forcing signals, like CO2, solar, and volcanic ash. In which case the secondary PCs should be more regional in scope, some proxies of some types responding more strongly than others, for example. Another set of PCs would be regional anomalies that for whatever reason did not fit the global pattern. For example, areas where climatic feedback processes are exceptionally strong.

With respect, you are making the same assumptions that Mann is making – that the processing of data through a PCA process of whatever kind gives a physically meaningful result.

snip – an issue I’ve asked people not to get involved in here

With Mann upping the number of candidate proxies by a factor of 10, the chances of more than one or two proxies having a significant correlation with the ‘global mean temperature’ entirely by chance are certainly improved. If those series have significant autocorrelation then the chances are definitely enhanced.

Using statistics like this without any regard to the underlying physical processes reminds me very strongly of shamanism. Neither Preisendorfer’s “Rule N” nor scree analysis nor running the whole gamut of statistical tests can replace this sort of analysis.
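The autocorrelation point is easy to demonstrate: under a nominal 10% test, pairs of independent red-noise (AR(1)) series “correlate significantly” far more often than iid pairs. A toy sketch (hypothetical parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, phi, rng):
    """A simple AR(1) red-noise series."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n, phi, trials = 82, 0.9, 2000
r_crit = 1.664 / np.sqrt(80 + 1.664**2)   # nominal two-sided 10% for n = 82

hits_iid = hits_red = 0
for _ in range(trials):
    if abs(np.corrcoef(rng.standard_normal(n), rng.standard_normal(n))[0, 1]) > r_crit:
        hits_iid += 1
    if abs(np.corrcoef(ar1(n, phi, rng), ar1(n, phi, rng))[0, 1]) > r_crit:
        hits_red += 1

# iid pairs reject near the nominal 10%; independent red-noise pairs far more often
print(hits_iid / trials, hits_red / trials)
```

Every red-noise “hit” here is spurious by construction, which is the shamanism worry: without accounting for persistence, screening by correlation manufactures proxies out of noise.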

I would also interject that climate models are trained upon assumptions that the Hockey Stick (and its derivatives) imply, hence the extraordinary behaviour of people like Gavin Schmidt to defend Mann and the Stick.

The Hockey Stick is most importantly the key piece of evidence in a brand new theory of climate which would have us all believe that the Earth’s climate is (or was) in an unstable equilibrium and that the addition of greenhouse gases has perturbed the climate so much that it has reached one or more “tipping points”.

Show me the research and physical or biological mechanism from other than the Team that proves there can be temperature teleconnection.

Why are you asking me that? My intent was that nobody has done much in terms of clinical research that would begin to explain temp/tree ring correlations. A tree is more likely to respond to other factors…mostly within yards of the trunk…than to global temperature variances.

Betcha’ micro-environmental factors, CO2, and water have more to do with tree growth than small increases in “average” ambient temp.

Look around your own yard and you’ll find examples that validate the above summary.

All physical phenomena and processes are functions of the local, instantaneous conditions at the spatial and temporal domains in which they occur.

The driving potential for growth of green plants is photosynthesis. That process is used to convert CO2 and water into the foods plants use for growth. Plant growth is a proxy first for sunlight and then for CO2 and water.

Full Disclosure: I am not a Certified Climatologist. I am not a Certified Botanist.

Re: Mark T. (#50), I explained the teleconnection principle before and clearly none of you understood. It is not magical or mysterious. It is a matter of the time-scale over which the proxy responds to the input. Proxies that integrate the response over time may provide responses that are better matched to the low-frequency climate variability that interests us. Local proxies that respond hyper-sensitively to annual fluctuations or local proxies where responses are heteroskedastic will not perform as well as non-local proxies with a homogeneous response and a more favourable time constant. Got it?

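bender’s time-constant argument can be sketched with a toy signal (all parameters hypothetical): an integrating proxy, modelled as simple exponential smoothing, tracks the low-frequency “climate” component better than a proxy that responds instantly to noisy local conditions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Hypothetical setup: a slow "climate" signal buried in fast local weather noise
t = np.arange(n)
climate = np.sin(2 * np.pi * t / 200)       # low-frequency variability of interest
local = climate + rng.normal(0, 1.0, n)     # what any one site actually experiences

# Hyper-sensitive proxy: responds instantly to local conditions
fast_proxy = local

# Integrating proxy: exponential smoothing, i.e. a long response time-constant
alpha = 0.05
slow_proxy = np.empty(n)
slow_proxy[0] = local[0]
for i in range(1, n):
    slow_proxy[i] = (1 - alpha) * slow_proxy[i - 1] + alpha * local[i]

r_fast = np.corrcoef(fast_proxy, climate)[0, 1]
r_slow = np.corrcoef(slow_proxy, climate)[0, 1]
print(round(r_fast, 2), round(r_slow, 2))   # the integrating proxy tracks climate better
```

The integration averages away the high-frequency local noise at the cost of some lag, so its correlation with the slow signal is higher. Whether any real proxy behaves this way is of course the empirical question the thread is debating.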
We are kind of OT here, Bender, but I think what keeps people here like myself unclear about climate teleconnections is this: we probably all understand that the climate in one region can be affected by the climate in another (ENSO), and if we had a proxy from one locale that could be shown to be connected to another historically, then, absent a temperature record in the dependent locale, we could consider using the record from the independent one.

I believe your point about a noisy high frequency proxy signal reacting to local climate is well understood here, and even the implication that some of this variation could be missed in a calibrated regression. The low frequency imprint of a regional or global climate change on the proxy response is a little bit more difficult to comprehend, and thus the resulting feeling that it is in the magical, or at least very hypothetical, domain. Instead of the hypothetical presentation, would it not make better sense, for understanding this phenomenon, to simply demonstrate it using the instrumental record available over the past 150 years? I have looked at historical temperature series from stations here in Illinois and noted that they vary significantly within a relatively small separation in distance. I have had a difficult time attempting to visualize how these various locales could be reacting to regional or global climate change. Perhaps Dr. Ben could help us on this one.

It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

It’s not crazy, it’s modern marketing.

What’s in it for the scientists, I don’t know. New grant money? Merely pride? Maybe they’ll be partly vindicated. But the driver is that the hockey stick, like rumors of drowning polar bears, has been adopted as integral imagery for the global warming branding of cap and trade, green taxes, and other tag along interests hoping to make a buck (or a loonie) off the public relations company-constructed fad. The hockey stick is a powerful, sciency image indicating to the observer a recent and rapid change, either bad or good. In “climate change” it is a negative association of startling import.

I am by no means a climate change denier. My strong impressive is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

With all due respect to the judgments of many scientists and their potential for not only specialist knowledge but general knowledge as well, I think one of the problems, or at least an interesting observation, we see in the consensus view of AGW is that a scientist can admit to weaknesses and flaws in the prevailing theories/methodologies of his own specialty, yet continue to accept theories/methodologies in areas where he does not possess the same degree of expertise. It then easily becomes a preference for treating the consensus as the “scientific” view, while the deniers/skeptics hold the non-, or at least less, scientific view, and perhaps even for considering their views more politically oriented than the consensus view. I judge that is why one must analyze the validity/certainty of the individual components of AGW one at a time.

So Tamino misrepresents the science. Tell me something I didn’t know. Now, if there was a Tamino post that was accurate and not misleading, that would be news indeed. I gave up looking at his Closed Mind some time ago, after he deleted my comment where I pointed out one of his many errors.

I just plotted the accepted data on top of the total average. You can clearly see the amplification by Mann’s process on local temperature. I then subtracted the two, and I am absolutely surprised by the result. It was much clearer than I expected. There is a sharp vertical increase in the difference between Mann and the average, with a total rise equal to about 1 standard deviation.

Re: Jeff Id (#43), your graphs are very interesting but off-topic in this thread, so I have instead replied at #44 of the directly relevant “Proxy Screening by Correlation” thread. Please continue this discussion over there.

I am by no means a climate change denier. My strong impressive [sic] is that the evidence rests on much much more than the hockey stick.

I wonder if this attitude is the result of the IPCC “review” process – the underlying assumption being that the content of the IPCC emanations has been properly reviewed. However, based on the account of the process by Steve McIntyre and others, it seems to me that the review process wasn’t/isn’t a review process at all. It’s a filter! But because it has many of the trappings of a review and is formally called a review, it slides under the radar.

I checked back to see if you guys took notice of my post. There is a clear amplification of the signal in the calibration period when sorted for the best fit as Mann did. I realize you are discussing a slightly different subject but it is clear evidence that you cannot associate the Mann proxies to satellite data. The scales are different.

OK, folks. Enough piling on. No need to try to re-hash issues about whether there’s a physical theory for the “proxies”. Not that the matter isn’t important; it’s just that it’s not relevant to this thread and one paragraph opinions don’t illuminate.

IF the data contained a consistent “signal”, then the “signal” can be recovered through a variety of methods and I doubt that it would really matter much which method you used. The problem with Mannian grab-bag networks – and this is why it’s important to plot them – is that there is no consistent “signal” and that’s why one method yields one answer and another method another. And claims that one method is “right”, as presented to date, are merely puffs.

But again, let’s avoid piling on. It’s nice that Jolliffe has weighed in on the matter and this pretty much speaks for itself. Unless anyone has anything new to add to this, I doubt that much new can be added.

Tamino has made an inline comment in Open Thread # 5, but so far has not published Jolliffe’s comment at the closed thread in question or posted a retraction or apology at the thread in question.

A reader of that thread would be very unlikely to locate the inline comment on an Open Thread, which says:

Response: I apologize for having misrepresented your opinion, but I hope you realize that it was an honest statement of my interpretation of your presentation, in no way was it a deliberate attempt to misrepresent you.

In your presentation you state: “It seems unwise to use uncentred analysis unless the origin is meaningful.” I took this to mean that you endorse uncentered analysis when the origin is meaningful. If you disagree, I accept your disagreement, but it seems to me that I can hardly be blamed for thinking so. It also seems to me (and I’m by no means the only one) that the origin in the analysis of MBH98 is meaningful.

I certainly agree with this statement from your comment: “… the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence …”

In the last sentence, Tamino notably failed to specifically agree with the rest of Jolliffe’s sentence:

… or that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

As so often, the issue is not whether Tamino’s interpretation was “honest”, but whether it was reasonable. For example, an alternative interpretation of the document in question was the one that Ross and I presented here. Instead of considering the possibility that this plausible interpretation (now endorsed by Jolliffe) was correct, Tamino yelled louder and louder at people who disagreed with him, e.g. his rude comments to Jean S:

Response: I must have really hit the bulls-eye with this post, because now I can *smell* the desperation.

No matter how often, brashly, or rudely you repeat the assertion that “you loose [sic] meaning,” it won’t make it true. Your own words are unambiguous: you assert that non-centering itself is not valid (there’s no other interpretation of “by not centering, you loose [sic] all your theoretical properties…” etc.). But Jolliffe flatly contradicts you (that’s what his presentation is about), so you brush that off with the pathetic statement that he doesn’t specifically mention using a partial average.

Jolliffe’s entire presentation is about the fact that centering is not an essential aspect of PCA. But because he doesn’t specifically mention choosing a partial average, you feel free to ignore his point. That of course enables you to repeat the crap about non-centered PCA being invalid. Now *that* is rubbish, and particularly stinky rubbish to boot.

I suspect that you *know* your amateurish claim is bull**, but you’re *counting* on readers not having the expertise to catch your lie.

Tamino

Tamino did not retract or apologize to the various people who had previously disagreed with his incorrect interpretation and has so far failed to post a correction notice on the thread in question.

I took this to mean that you endorse uncentered analysis when the origin is meaningful. If you disagree, I accept your disagreement, but it seems to me that I can hardly be blamed for thinking so. It also seems to me (and I’m by no means the only one) that the origin in the analysis of MBH98 is meaningful.

I am aware of the arrogance and condescension that mark much of RealClimate and its supporting blogs, but this has taken it to a new extreme. An anonymous blogger lectures a recognized world authority in his field of expertise. He even asserts, without evidence, that the analysis of MBH98 is meaningful when the authority has indicated that he is unaware of how it can be justified. This arrogance knows no bounds.

How/why did Tamino not address what I saw as the main point of Jolliffe’s statement, that “It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp”?
To me he is saying that decentring makes the PCA useless. If the foremost authority hasn’t worked out a way to use it effectively, why does Tamino think he knows better?

If the foremost authority hasn’t worked out a way to use it effectively why does Tamino think he knows better?

What we’ve seen over many years is the irrational will to believe in the Hockey Stick despite its many obvious flaws. It requires a rare intellectual honesty to reject a strongly held belief in the face of unambiguous evidence to the contrary.

Tamino looked at Jolliffe’s presentation and saw in it justification for Mann’s use of decentred means. As Jolliffe points out, and others did on the original thread on Open Mind, this interpretation of the talk is completely wrong. The caveats were very clear and the warnings against misuse of this technique spelt out in detail, yet Tamino did not see them.

This is an interesting example of “confirmation bias”. Tamino wanted support for Mann from Jolliffe, and – without bad faith or any intention to distort Jolliffe’s views – thought that he had found it. Confirmation bias is at the heart of most of the mistakes perpetuated by the climate alarmists.

This is probably a dumb newbie observation but it seems to me that the hockey stick correlates very well to CO2 concentration. Has anybody calculated the r2 for tree ring width versus CO2 concentrations?

Actually, CO2 fertilization was exactly what Graybill and Idso were investigating with their bristlecone pine research, if I remember correctly. They were also intentionally selecting for strip-bark trees to look at the tree’s response to that. In other words, they weren’t looking for a temperature proxy and never said they had one, but that didn’t stop Mann from using it prominently as though it were.

Sorry, Steve, I know this isn’t the right thread for this, but I was answering the question. Back to lurking.

I am by no means a climate change denier. My strong impressive [sic] is that the evidence rests on much much more than the hockey stick.

This raises the question of what this other evidence consists of and why he believes in it. Perhaps Steve could ask him…
Steve: John A, this is exactly the sort of piling on that deters people like Jolliffe from commenting. His specialist views on principal components are all that is relevant here. P.S. Such a question would be put to Jolliffe only in his capacity as a non-specialist; why use up his limited time on generalized questions that would go nowhere? Better to ask specialists questions pertaining to their speciality.

# 63 # 66
yes indeed,
And as a long-time ‘lurker’ I would say that I (and nearly all here) are not ‘climate change deniers’, far from it: the climate changes, that’s what it does. It gets warmer, then cooler.
AGW is what we have issues with.

Patrick, #61, that’s what I was thinking. In fact it reminds me of a famous quote:

“I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

What this episode also illustrates is how science can drift when not subject to experimental testing. In the absence of reality checking, one can concoct all sorts of fancy analyses, building castles in the air and claiming that they are best or robust or optimal in some way. Steve M and numerous publications have shown that in the business of making hockey sticks, all sorts of results can be obtained with different “reasonable” choices of proxies and methods. If the result is indeterminate, the science is still in the exploratory stage, and not the stage where you issue press releases.

It might be relevant to mention that Ian Jolliffe, as well as being the leading authority on PCA, has frequently applied his skills to weather and climate. He has published many papers on the subject and taken part in a UK Met Office working party on climate change: http://www.secam.ex.ac.uk/index.php?nav=staffInfo&sid=527

Furthermore, [my] talk is distinctly cool about anything other than the usual column-centred version of PCA. It gives situations where uncentred or doubly-centred versions might conceivably be of use, but especially for uncentred analyses, these are fairly restricted special cases. ….

An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

Clearly if one knew the population mean of a series a priori, one should use that true mean, and not the empirical mean from the sample at hand. I think this would be one case where the “uncentered” analysis Jolliffe mentions might be appropriate.

In the present case, however, it is a purely empirical issue where the population mean of each series is, (if it even exists), so that the “sample mean” is the only appropriate way to go. However, there is a semantic issue here of what is meant by the “sample mean”. MBH98 and 99 might argue that for the calibration itself, the only relevant sample is the calibration sample, so that centering on the full reconstruction mean would in fact be “decentered” relative to this sample at hand, while centering on the calibration period is the truly “centered” approach.

However, I think the appropriate answer to this is that we want to compute the PCs using the best available estimate of the mean and sd, i.e. using the longest available sample. A small subsample like the calibration period could give a very bad estimate of both, particularly when there is a lot of serial correlation present.

But what then should we do if some of the proxies go back much farther than even the reconstruction period? Should we compute each mean and standard deviation from the entire series, but then each covariance from the available overlap period of the two series in question? This might not even give a positive definite matrix. Or should we just use the longest overlap period? Or just the reconstruction period of interest?

I don’t see that the non-stationarity mentioned by Jolliffe is necessarily a problem, however, if PCA is just taken as an ad hoc descriptive way of reducing the number of series. If instrumental temperature is non-stationary (as it at least approximately is), then any valid proxy will be equally non-stationary. But as long as the serial correlation is correctly modeled in simulations to determine which PC’s to keep (as in Preisendorfer’s Rule N), I don’t see there’s a problem using a finite sample variance to normalize the covariance matrix.
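Preisendorfer’s Rule N, mentioned just above, can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the MBH code: the AR(1) coefficient `phi`, the simulation count, and the toy data are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_noise(n_time, n_series, phi, rng):
    """Generate AR(1) ('red') noise with lag-1 autocorrelation phi."""
    x = np.zeros((n_time, n_series))
    x[0] = rng.standard_normal(n_series)
    for t in range(1, n_time):
        x[t] = phi * x[t - 1] + rng.standard_normal(n_series)
    return x

def rule_n(data, phi=0.5, n_sim=200, q=0.95, rng=rng):
    """Rule N sketch: keep PCs whose eigenvalue share exceeds the q-quantile
    of the corresponding eigenvalue share from AR(1) noise of the same shape."""
    n_time, n_series = data.shape
    centred = data - data.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(centred, rowvar=False))[::-1]
    share = eig / eig.sum()
    sims = np.empty((n_sim, n_series))
    for i in range(n_sim):
        noise = ar1_noise(n_time, n_series, phi, rng)
        e = np.linalg.eigvalsh(np.cov(noise - noise.mean(axis=0), rowvar=False))[::-1]
        sims[i] = e / e.sum()
    return np.nonzero(share > np.quantile(sims, q, axis=0))[0]

# Toy network: 70 noisy series sharing one common signal
t = np.linspace(0.0, 1.0, 200)
data = np.sin(2 * np.pi * t)[:, None] + rng.standard_normal((200, 70))
print("significant PCs:", rule_n(data))
```

With one genuine common signal, PC1 clears the red-noise threshold; with no signal present, typically nothing does.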

If using the particular analysis algorithm you have selected on a data set can produce significantly different outputs if you change the mean of the data, and if the mean of the data is for all intents and purposes arbitrary, then you are using the wrong method. I’m not sure if this is actually the case, but I assume we wouldn’t be discussing whether it’s valid to use decentered PCA vs. regular PCA unless it had some kind of effect on the output.

However, there is a semantic issue here of what is meant by the “sample mean”. MBH98 and 99 might argue that for the calibration itself, the only relevant sample is the calibration sample, so that centering on the full reconstruction mean would in fact be “decentered” relative to this sample at hand, while centering on the calibration period is the truly “centered” approach.

Yes, one can argue that, but then there is also a “semantic issue” as to what is meant by the “sample covariances”. In other words, if one argues that the only relevant sample is the calibration sample, why would you then estimate the mean on the calibration period and the covariances on the full period?

Since Tamino was really quoting Mann from an early Realclimate post does any of this criticism go back to Mann himself?

Mann wrote:

Contrary to MM’s assertions, the use of non-centered PCA is well-established in the statistical literature, and in some cases is shown to give superior results to standard, centered PCA. See for example page 3 (middle paragraph) of this review. For specific applications of non-centered PCA to climate data, consider this presentation provided by statistical climatologist Ian Jolliffe who specializes in applications of PCA in the atmospheric sciences, having written a widely used text book on PCA. In his presentation, Jollife explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand. In the case of the North American ITRDB data used by MBH98, the reference means were chosen to be the 20th century calibration period climatological means. Use of non-centered PCA thus emphasized, as was desired, changes in past centuries relative to the 20th century calibration period.

It seems all the criticism is going to those who “doggedly defended a piece of dubious statistics” as Jolliffe said but not to Mann himself, the origin of the Jolliffe reference.

Dr. Jolliffe stated his views on Mannian decentered PCA more strongly in a follow-up comment:

Thanks for the apology, Tamino.
Some further clarification: a lot of the confusion seems to have arisen because of the terminology. Uncentred PCA and decentred PCA are completely different animals. My presentation dealt only with uncentred PCA (and doubly centred PCA). I’ve just looked at it again and it seems completely unambiguous that this is the case. Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else. Using anything other than column means or row means to centre the data wasn’t even on my radar. It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it…

“…Better to ask specialists questions pertaining to their speciality….”
Steve

It seems to me that we are looking at the last days (months, years?) of the hockey-stick, at least in its current mathematical form. It has been about 10 years since it was initially published. During that time I would have thought that other world-renowned experts (apart from M&M) would have looked at it. And yet Jolliffe’s comments suggest to me that he has only recently become aware of the details of the controversy.

I wonder how this fits with the IPCC assertion that all of their output is carefully checked by all the best specialists in the business. Perhaps someone had better make sure that other specialists who may have something to contribute are aware of some of the other controversial aspects of AGW?

It seems to me that we are looking at the last days (months, years?) of the hockey-stick, at least in its current mathematical form. It has been about 10 years since it was initially published. During that time I would have thought that other world-renowned experts (apart from M&M) would have looked at it. And yet Jolliffe’s comments suggest to me that he has only recently become aware of the details of the controversy.

After Mann’s ‘PCA’, the data is fed to a multivariate calibration algorithm. Are experts in that field, Brown & Sundberg, for example, aware of how Mann did the uncertainty calculation? MBH99:

In contrast to MBH98 where uncertainties were self-consistently estimated based on the observation of Gaussian residuals, we here take account of the spectrum of unresolved variance..

“After Mann’s ‘PCA’, the data is fed to multivariate calibration algorithm…You can try to find discussion about Brown & Sundberg 87 from RC..”

Alas, no joy at RC. To tell the truth, it wasn’t only the HS I was thinking about – I see that as dead but refusing to lie down. I was thinking about the other aspects of the AGW evidence that Dr. Jolliffe feels give it a suitable foundation.

This episode has shown that incorrect science can easily pass straight through the ‘climate science’ peer review system, so there are likely to be more major blunders in quite unrelated areas waiting to be found. A pity that each one seems to take 5 years of someone’s life to put to bed….

#86. In this respect, you might examine the comments of Referee #1 in the First Review and Referee #2 in the Second Review of our Nature review, a reviewer who identified himself as an authority on principal components.

Also IPCC has refused to release some review correspondence, despite such an obligation under their procedures adopted by constituent governments.

Tamino does have a point, of sorts, in the bit before what you posted, beyond what you said.

Here’s the entire paragraph:

In your presentation you state: “It seems unwise to use uncentred analysis unless the origin is meaningful.” I took this to mean that you endorse uncentered analysis when the origin is meaningful. If you disagree, I accept your disagreement, but it seems to me that I can hardly be blamed for thinking so. It also seems to me (and I’m by no means the only one) that the origin in the analysis of MBH98 is meaningful.

I would also accept, to an extent, that Dr. Jolliffe is saying uncentered analysis can be used if the origin is meaningful.

But beyond that paragraph and the arrogance and condescension question, here’s my take in general.

However, the real issue here is that I don’t see anywhere that he said the origin in MBH98 was meaningful, and therefore that uncentered PCA was appropriate in that case. So it boils down to Tamino making an interpretation and then attaching it to Dr. Jolliffe as if he had specifically stated it. Coming to the conclusion is one thing; attributing it to Dr. Jolliffe is another.

Or in other words, there’s the very large difference between ‘seeing a use for uncentered analysis when the origin is meaningful’ and stating that ‘this thing does have a meaningful origin’.

But this is easy to clear up. In his clarification Dr. Jolliffe explicitly states:

my main concern is that I don’t know how to interpret the results when such a strange centring is used? Does anyone? What are you optimising? A peculiar mixture of means and variances? An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

And even more to the point:

It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp, but I remain sceptical.

Or in other words, perhaps Tamino might ask if his interpretations are correct and his attributions applicable before he puts up an expert as agreeing with him…

Hey Sam, “Not Sure” a few posts up hit the nail on the head by posting Jolliffe’s follow-up comment. Tamino thought Jolliffe’s discussion of “uncentered” PCA in certain circumstances applied to The Team’s use of “de-centered”. Jolliffe’s response appears to be “what the hell is de-centered PCA?”.

The problematic properties of Mannian PCA are the ones that we identified in our 2005 papers:
1) mining for HS-shaped series;
2) over-statement of the significance of the PC1.

These were both highly relevant to Mann’s characterization of the NOAMER PC1 as the “dominant component of variance” in his response to MM2003.

There are other perverse properties: for example, as we observed in MM 2005 (EE), if you increased the early 15th century levels of non-bristlecone series, Mannian PCA will flip them over and interpret that as a “cooling”. So it is a VERY bad method.

Because it mines for HS-shaped series, it picks out the bristlecones, and the key issue in all of this, as we’ve discussed here, but which Tamino and Mann ignore, is whether the Graybill bristlecones are valid temperature proxies. While CO2 has been raised as a problem, my own view is that the issues are with mechanical aspects of strip bark formation and some problems with Graybill’s networks. Here it’s notable that Mann et al 2008 ignored the Ababneh data, preferring the obsolete and non-replicable Graybill version. I wonder why.

While CO2 has been raised as a problem, my own view is that the issues are with mechanical aspects of strip bark formation and some problems with Graybill’s networks. Here it’s notable that Mann et al 2008 ignored the Ababneh data, preferring the obsolete and non-replicable Graybill version. I wonder why.

Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

This is, IMO, the reason the verification fails as the test goes back in time. The further away from the calibration period the test is performed, the less likely the weighting is correct. PCA carries with it an explicit assumption that the relationship of the vectors describing the space does not change with time (read: is stationary).

Thank you, Mark T, for explaining the meaning of the word “stationary” in this context. I wonder if you can be a little more explicit for those of us who are novices in this field.

In Tamino PCA5 there is a claim in the comments from Lazar that the coefficients are near stationary boundaries and that a function S[t] which is stationary can be produced. That discussion is beyond me (despite my having many years ago got a degree in Pure Maths).

Incidentally in that article Tamino dealt with Non-Centred PCA. If someone could explain the differences between non-centred, uncentred, double-centred and decentred PCA I would be most grateful.

If someone could explain the differences between non-centred, uncentred, double-centred and decentred PCA I would be most grateful.

Let me try. First, you should understand that standard covariance-based PCA (“centered PCA”) is essentially performed in the following manner: you subtract the sample mean (over each series) and then perform PCA on the matrix of 2nd-order sample moments (the “correlation matrix” in engineering jargon; not to be confused with the same term in statistical use). That is to say, you use the sample covariance matrix to perform the PCA. In correlation PCA, you additionally divide the entries of the covariance matrix by the corresponding sample standard deviations (i.e., you perform your analysis on the matrix of sample correlation coefficients, i.e., on the sample correlation matrix in the normal statistical sense).

Now “non-centered” and “un-centered” PCA seem to refer to an analysis where nothing is subtracted from the original series, and PCA is performed directly on the matrix of 2nd-order sample moments. “Double-centered” refers to an analysis where first the sample means over time and then (from the time-zeroed data) the sample means over space are subtracted. That is, if your data is given in matrix form, the sample means calculated over rows and over columns are all zero. For these two methods there exists some (limited) theory, and they were the topic of Jolliffe’s original presentation, where he gave some examples where such analyses might be appropriate instead of the standard covariance- or correlation-based analyses.
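For concreteness, the three centrings just described (column-centred, uncentred, doubly-centred) can be written out on a toy matrix. This is only a sketch of the arithmetic; the 4×3 example data is arbitrary.

```python
import numpy as np

X = np.arange(12.0).reshape(4, 3)   # 4 'times' x 3 'series', arbitrary toy data

# Standard column-centred PCA: subtract each column (series) mean
col_centred = X - X.mean(axis=0)

# Uncentred PCA: subtract nothing; the measurement origin is taken as meaningful
uncentred = X.copy()

# Doubly-centred PCA: subtract row and column means, add back the grand mean,
# after which every row mean and every column mean is zero
doubly_centred = X - X.mean(axis=0) - X.mean(axis=1, keepdims=True) + X.mean()

print(np.allclose(doubly_centred.mean(axis=0), 0.0),
      np.allclose(doubly_centred.mean(axis=1), 0.0))   # → True True
```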

Now “decentred PCA” seems to be a term coined by Tamino for the “Mannian PCA”. There, instead of subtracting the sample mean of the full series (as in the standard analyses), one subtracts the sample mean over an arbitrarily chosen partial time period (the calibration period 1902-1980 in Mann’s case). The effect is that those series whose sample means over the calibration period differ from the sample mean over the full period get highly weighted for the first “PC”. In MBH, this means essentially that instead of summarising the tree ring network (as with the usual PCA), the bristlecones get cherry picked from the full tree ring network (in “PC1”) to the next stage. Additionally, Mann also divides by the detrended sample standard deviation over the calibration period before calculating his 2nd-order moment matrix, but that is not so crucial to this “debate”.

The effect is that those series whose sample means over the calibration period differ from the sample mean over the full period get highly weighted for the first “PC”.

I want to point out that the reason this happens is that the operation induces an offset which appears as a sort of DC bias in the resulting “variance” of a given series (I put “variance” in quotes because it is not a variance when the series is not zero-mean). It does not matter, btw, whether the offset is positive or negative, since it is a second-order calculation, i.e., it gets squared.
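The point about the squared offset is just the identity E[x²] = var(x) + mean(x)²: the offset enters squared, so its sign is irrelevant. A quick numerical check, with arbitrary offsets of ±2:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(100_000)   # zero-mean, unit-variance noise

# The uncentred second moment is variance + offset^2, so a positive and a
# negative offset of equal size inflate the 'variance' identically
for offset in (2.0, -2.0):
    print(offset, np.mean((x + offset) ** 2))   # both close to 1 + 4 = 5
```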

Steve, I know this is completely off-topic for this thread, but it seems that it has now been officially admitted that what you have long asked for — the engineering-quality document that provides the detailed calculations showing how 2xCO2 = 3.5C of temperature increase (or whatever the claim is) — does not exist and is not going to exist. See this at RealClimate: http://www.realclimate.org/index.php/archives/2008/09/simple-question-simple-answer-no

Re: Michael Smith (#98), Pielke Pere calls the admission a ‘confession’. I’ve asked several adversaries when Spencer Weart would write ‘The Discovery of Global Cooling’ and it seems maybe he’s working on the outline.

I’ve observed previously (see the Preisendorfer threads, e.g. http://www.climateaudit.org/?p=287) that Mann’s citation (Preisendorfer) explicitly says that non-centered singular value decomposition of a matrix [let’s stop even using the contradictory term “decentered principal components analysis”] is not “principal components analysis”, since it is not an analysis of variance.

CSI for math geeks… and I do mean the “C” in CSI. By the way, thanks Jean S for a helpful description of what is being done. Whatever happened to demonstrating that your new creative method actually works on some well-behaved test data?

Since MM had carefully tried to replicate the MBH calculations, they are probably correct.

Yes, they are right and the method boils down to division by the detrended std as you state later.

However, dividing by the standard deviation of the residuals about a trend will greatly accentuate a HS series, while dividing by the standard deviation of the series itself will instead often moderately attenuate a HS series. This therefore could be an important point.

Possibly somewhere, but at least for the crucial NOAMER 1400-network there is no practical difference.

Tamino’s statements are actually correct, and this is often misunderstood. AFAIK, PCA is supposedly used for the tree ring networks in the first place in order to reduce the dependency on a particular spatial region. MBH98:

Certain densely sampled regional dendroclimatic data sets have been represented in the network by a smaller number of leading principal components (typically 3–11 depending on the spatial extent and size of the data set). This form of representation ensures a reasonably homogeneous spatial sampling in the multiproxy network (112 indicators back to 1820).

The purpose of the few leading PCs is to summarise the “main features” of a much larger number of series. Now, the “hockey stick shape series” (in essence the bristlecones) are few in the NOAMER network, and therefore using centered PCA the “pattern” is pushed down to a lower PC. When actually applying real objective PC selection rules, this lower-order PC does not get picked. Hence the bristlecones (the few bad apples) are avoided! Tamino’s first statement is still correct, since there are a few additional “hockey sticks” (the Gaspé series; Mann’s “backup”) which get a much larger weight in the Mannomatic if there is no hockey stick in the PCs. Also, it should be noted that the Team claims (incorrectly IMO) that even the lower-order centered PC (containing the bristlecone information) should be kept, as commented by “Not sure” above. As for Tamino’s second claim, of course it is true: you don’t need to cherry pick the bristlecones by flawed PCA for the Mannomatic if you are using them directly!

Tamino’s term “decentered” should probably be avoided here, since the standard definition is “to disconnect from practical or theoretical assumptions of origin, priority, or essence.” (Online Merriam-Webster)

The series are always being centered relative to some period, so “uncentered” is not really appropriate either. “Short-centered” or “Calibration-centered” might be better for the MBH approach, in contrast to “column-centered” for using the full series. Even then there is still some ambiguity — see #80 above — as to whether the mean of the entire series should be used, or just that portion of the series that is used for the reconstruction in question.

Jean concludes,

Additionally, Mann also divides by the detrended sample standard deviation over the calibration period before calculating his 2nd-order moment matrix, but that is not so crucial to this “debate”.

However, MM GRL 05 (p. 1) instead state that “Each tree ring series was transformed by subtracting the 1902-1980 mean, then dividing by the standard deviation of the residuals from fitting a linear trend in the 1902-1980 period.” Since MM had carefully tried to replicate the MBH calculations, they are probably correct. However, dividing by the standard deviation of the residuals about a trend will greatly accentuate a HS series, while dividing by the standard deviation of the series itself will instead often moderately attenuate a HS series. This therefore could be an important point.

(On p. 2, MM GRL 05 state that “We applied the MBH98 data transformation to each series in the network: the 1902-1980 mean was subtracted, then the series was divided by the 1902-1980 standard deviation, then by the 1902-1980 detrended standard deviation.” In this calculation, the initial division by the 1902-1980 sd would be undone by the subsequent division by the detrended standard deviations, so that all that matters is the second normalization.)
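The difference between the two normalizations is easy to see numerically. A sketch with invented numbers (a 79-point “calibration” segment, arbitrary trend and noise levels), not the actual proxy data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 79                                  # e.g. a 1902-1980 calibration window
t = np.arange(n)

hs = 0.05 * t + 0.3 * rng.standard_normal(n)    # strong trend plus noise
flat = 0.3 * rng.standard_normal(n)             # noise only

def detrended_std(x):
    """Std of the residuals from an OLS linear fit (the MM GRL 05 description)."""
    coef = np.polyfit(np.arange(len(x)), x, 1)
    return (x - np.polyval(coef, np.arange(len(x)))).std()

# Dividing by the detrended std inflates a trending ('hockey stick') series
# relative to dividing by the plain std; a trendless series is barely affected
for name, x in [("hockey stick", hs), ("trendless", flat)]:
    print(name, x.std() / detrended_std(x))
```

For the trending series the ratio is well above 1 (the residual std is much smaller than the series std), which is exactly why the choice of normalization can accentuate HS-shaped series.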

PS: Since “subtract” is “soustraire” in French, with an s before the -tra, I’m guessing that Jean is francophonic.

But the hockey stick remains when using centered PCA, and when using no PCA at all. The claim that it’s nothing but “utterly bogus artifacts” is what’s really bogus.

How is he supporting this assertion? Is he appealing to Mann’s current paper that Steve is in the process of demolishing?

I notice that in earlier replies he repeats the irrelevant mantra “The hockey stick doesn’t matter, there is a mountain of other evidence supporting AGW”, so why doesn’t he just cut his losses and move on, instead of waving his bloody stumps, à la the Black Knight, and pretending that MBH98 has been vindicated?

If I’m not mistaken, Craig Loehle’s work is what happens when there is no PCA at all, and there was no HS. A mere flesh wound, eh? Recall, too, the CENSORED directory, curiously sans HS using their version of PCA.

AFAIK, PCA is supposedly used for the tree ring networks in the first place in order to reduce the dependency on a particular spatial region.

I’m not sure I buy this. If you have one region that is consistent and the rest of your data is noisy (orthogonally noisy), then won’t the first principal component be the regional data? If the hockey stick shape is orthogonal to most of the other shapes and you have a couple of them, then they would have to show up. The difference is in what percentage of the variance they account for. Your first component might only account for a small percentage of the variance, in which case your features aren’t telling you much.

If the hockey stick shape shows up as your 4th component and accounts for only a few percent of the variance then it isn’t very meaningful is it, (even though it “showed up”)?

Thanks for the explanation. Do you suppose Tamino actually believes this to be statistically significant?

Is he just hoping that no one will notice that the data series that produce hockey sticks are outliers that should probably be discarded especially since they cannot be correlated to temperature during the time when the series overlaps with the instrument record?

I do not intend this as any criticism of Ian Jolliffe, who seems to be a really good chap, but am I the only one who finds it very telling that one of the world’s leading authorities on PCA, who is also deeply involved in research into climate and weather, had before last week taken so little interest in MBH98?

When the paper came out in 1998 it was an amazing finding – a complete reconstruction of the temperature record back 1500 years using PCA – which made dramatic claims about the unprecedented nature of recent warming. Yet IJ was not curious enough about this to read the paper to find out how it had been done.

Then in 2003 and 2004 when MM was published there was a big storm in the climate world. Again IJ did not take much notice nor did he put his skills to deciding for himself whether the criticisms of MBH98 were justified.

When Wegman, whom he would surely recognise as a distinguished figure, gave his report in 2006, once more this did not lead IJ to look closely at MBH98 and work out what they had done and whether Wegman was right.

It seems that it is only after his own name was called in aid of the paper by Tamino that Professor Jolliffe has actually looked at MBH98 in detail and made a public comment about it.

He tells us that not only does he not support the use of decentred PCAs, but that given the data appear to be non-stationary it is arguable whether you should be using any kind of PCA.

Ian Jolliffe is part of the climate change consensus, but he did not see it as part of his remit to look for himself into something as high profile as the MBH hockey stick even though it was right in his area of expertise. The only possible explanation is that he had total faith that Mann et al. must have got everything important right. He gave his consent to MBH98 and the hockey stick because everybody else in the consensus also consented to it. I think that tells us something pretty important about the way the consensus works.

“It seems that it is only after his own name was called in aid of the paper by Tamino that Professor Jolliffe has actually looked at MBH98 in detail and made a public comment about it….Ian Jolliffe is part of the climate change consensus, but he did not see it as part of his remit to look for himself…”

Indeed. My point earlier was that if this is a common phenomenon it might be appropriate to find some way of bringing other controversial features of AGW to the attention of the world experts in the appropriate field.

I assume that experts such as Dr Jolliffe are going to be very busy, and, unless they are actually co-opted by a research organization to look at a piece of AGW work there is no reason why they should. Of course, we know that in practice a member of the team is always picked to do the ‘difficult’ calculations!

It is now very hard for a scientist to make a finding which contradicts AGW. Ababneh (2002) is a good example of a ‘lost’ study. The only people who can safely contradict the consensus now are world experts, and they have no time or reason to do so, unless they see their work being cited in a context which they can readily see is incorrect.

It is instructive to note Dr Jolliffe’s comments. He does not say Mann’s work is incorrect – he says he cannot understand what is being done. He says that an assertion that his presentation supports Mann’s particular decentered PCA trick is incorrect, but that it is conceivable that it could have value – just that he can’t see it. He wishes that these ‘dubious statistics’ could just be dropped. These are careful words, and indicate to me that the speaker does not want to be drawn into a political battle, but has to respond to Tamino’s assertions. I think that unless independent experts are ‘put on the spot’ in this way, even they will prefer to hide from this political battle. Therefore one approach which should be considered is that of trying to encourage or force proper science to be done. This blog is an oasis where Steve is able to do this – I wonder how we can expand its influence…?

Thanks Patrick for putting into words my thoughts on the lack of critical judgments from climate scientists on the matter of the HS as provided in this example. I do not think that one has to look for specific motives in these matters. I think it is more important to note that the observation should encourage those who are critical of these efforts from within and outside the climate science community to not be discouraged by lack of approval within the community.

As a layperson, there are numerous climate science papers of a technical nature that are presented such that I can grasp fairly readily how the authors arrived at their conclusions. Even some co-authored Mann papers (on TC activity) are not that difficult for me to get my mind around, but the papers on the HS have always been elusive for me. Part of that is, no doubt, my layperson’s technical shortfall, but I am also somewhat encouraged to hear experts in the field seemingly having problems completely understanding what the Team has actually done in these related papers.

Re: Kenneth Fritsch (#124), To expand upon this comment: if an expert in the field such as Jolliffe cannot understand what you are doing with de-un-non-centered-whatever PCA, then either your writing is really vague or you have made up some new method that is outside the bounds of being tested, evaluated, and analyzed as such methods need to be (or both).

Re: Steve McIntyre (#120),
Perhaps each of us reveals most of himself by what he does after he discovers that he has made a mistake. In your case, your honour shines through. If I could, I’d like to raise a pint to you and propose a short little toast. But since I can’t (at least right now), would you please take my contribution to your tip jar and do it for me?

I would like to say, first and foremost, that I have been fairly close to the subject of this next statement and so my objectivity may be impaired.

In my considered opinion, the statement by Professor Jolliffe regarding PCA and the Mann Hockey Stick should be seen as the final vindication of McIntyre and McKitrick’s entire thesis – that the HS and its derivatives are without doubt bad math, bad statistics and bad science.

We have now had two eminent and independent statisticians agree fully with McIntyre and McKitrick on the Hockey Stick, with one testifying that the HS is simply “bad science” and the M&M criticisms “valid and compelling” and the other stating that the HS rests upon “dubious statistics” and that PCA should not be used on non-stationary data as this study clearly uses.

and that PCA should not be used on non-stationary data as this study clearly uses.

That PCA should not be used on non-stationary data is included in probably every tutorial/text on the subject of PCA. That IJ identified the data as being non-stationary (apparently, and obviously) is the vindication.

We have now had two eminent and independent statisticians agree fully with McIntyre and McKitrick on the Hockey Stick.

LOL 🙂

What’s so funny is invoking opinions to support M&M; M&M proved their case with mathematics.

Truth, always, is indifferent to opinion. Even if everyone disagreed, M&M would still be right. That’s how math works.

On the other hand, maybe I need to think more broadly. 🙂

I have heard some argue we live in a post-modern era where context and personal experience define and alter both meaning and reality. Undoubtedly some people find opinions more convincing than theorems, and with respect to this group, you are likely right: Opinions do matter.

With respect to the HS, this may present little or no conflict. The opinion evidence, like the mathematics, is overwhelming. Every trained statistician who has seriously considered MBH has reached the same conclusion and rejected Mann’s “decentered PCA” approach. This includes heavyweights like Jolliffe and Wegman (who, incidentally, had his work reviewed by the Board of the American Statistical Association), as well as countless lesser luminaries. Even the NAS’s North Committee, while finding “plausible” the warming of the last 4 centuries, endorsed M&M’s position on every technical point that it considered.

At the same time, I do not like where this is headed. Am I going to have to get used to the idea that 2+2=5 so long as a consensus of experts says so?

Re: TAC (#130), I’m sorry but for those of us who can’t follow the math, the opinions of the experts are the only thing that matters. The difficulty arises when the experts disagree. Which expert do you believe? In these cases, the decision has to be necessarily subjective and based on criteria other than the actual facts of the matter.

For me, having one side say that 1) Dr. Jolliffe is an authority in this area and 2) Dr. Jolliffe’s work supports our claims, only to have Dr. Jolliffe himself come out reluctantly and say “no, that’s not what I said at all”, speaks volumes as to which side should be believed.

I have to give credit to Mann and his group though. They really seem to have picked exactly the right algorithm to demonstrate their conclusion. I wonder what those research meetings were like when they were making these calculations.

A fresh analysis of climate indicators shows that the Northern Hemisphere is warmer now than it has been in at least 1,300 years.

Previous analyses of climatic history by Michael Mann of Pennsylvania State University in University Park and his colleagues produced a distinctive ‘hockey stick’ shape; but some of this analysis, and the tree-ring data it used, came under attack.

The latest work by Mann and his co-workers involves various climate proxies, including corals, ice cores, historical records and marine sediments. The authors show that current warming is anomalous even if all tree-ring data are eschewed.

I wrote a pithy post, trying to get a bit more press for this significant event. My post, I feel is lacking in detail but it is supposed to be because I want to catch the eye of people who are not climatologists. I did my best to explain the significance of this thread and I intend to advertise it around the web in hopes of getting some more notice.

Thanks to Timothy for attempting to provide some references to decentred principal (not principle) component analysis, but it’s not clear to me that any of those provided deal with decentred PCA. Despite what I said in my last posting (‘Uncentred PCA and decentred PCA are completely different animals’) the confusion persists. It’s not always possible to tell from the abstracts, but as far as I can see all the references supplied pertain to uncentred PCA (i.e. no centring), not to decentred PCA, where centring is done, but using the mean of a subset of the data. I am well aware of instances of uncentred PCA – Section 14.2.3 of my book gives references in ecology, geology and chemistry, and it is currently used for genetic microarray data – but I have yet to see a pre-MBH example of centring using something other than column means, row means (or neither or both). I have no real excuse for not reading MBH when I first heard of it, but my lame excuse is that I assumed from reports I’d seen that it was another instance of uncentred PCA. So if any of the references supplied, or any others, use a different centring as in MBH, I’d be delighted to hear of them – if a third edition of my book ever appears I’d like to include the topic and attempt to clear up some of the confusion that clearly exists. Incidentally, thanks to george for pointing out that I hadn’t read the McIntyre document as carefully as I should and was myself guilty of confusing decentred (not a name I particularly like, but I’d seen it used, so adopted it to distinguish it from uncentred) with uncentred, so some clear account is certainly needed.

Timothy also referred to Tamino’s 5-part PCA tutorial. I’d previously skimmed 1-3 and looked in detail at 4, but I have to confess that I hadn’t really looked at 5. Although I didn’t find any decentred PCA references, there is some interesting algebra there that I’d like to digest and discuss. However, discussing algebra is hardly an exciting thing for a forum like this, so I’d be really pleased if Tamino would identify himself so that we can conduct a dialogue on this.

In response to pough and L Miller, distinguishing between the hockey stick and the MBH hockey stick is the key issue. The latter is where the problem lies because of what I deemed ‘dubious statistics’. It is this one particular paper, and in particular the defence of the technique used as recently as this year, which has caused so much grief. I agree with the quote from Wegman:

“We do agree with Dr. Mann on one key point: that MBH98/99 were not the only evidence of global warming. As we said in our report, “In a real sense the paleoclimate results of MBH98/99 are essentially irrelevant to the consensus on climate change. The instrumented temperature record since 1850 clearly indicates an increase in temperature.” We certainly agree that modern global warming is real. We have never disputed this point. We think it is time to put the “hockey stick” controversy behind us and move on.”

The only reason I got involved is because the ‘dubious statistics’ were still being defended this year and my name was being used in support. If there now are people out there claiming that my first post undermines the whole global warming argument, tell me where and I’ll refute this misrepresentation as well. Almost any decent statistical model-fitting will give the upward trend at the end of the series, but more importantly there are all the climate models, based mainly on physics rather than statistics, that provide convincing evidence of climate change and the reasons for it. As a statistician, on principle I don’t believe anything is absolutely certain, but my view is that the chance of all the climate models having got things completely wrong and that by 2030 the Earth is cooler than in 1950 is of the same order of magnitude as the chance that the USA will decide that independence was a bad idea and ask to be taken back as a British colony by the same date. Not impossible, but I personally wouldn’t bet on it.

but more importantly there are all the climate models, based mainly on physics rather than statistics, that provide convincing evidence of climate change and the reasons for it. As a statistician, on principle I don’t believe anything is absolutely certain, but my view is that the chance of all the climate models having got things completely wrong and that by 2030 the Earth is cooler than in 1950 is of the same order of magnitude as the chance that the USA will decide that independence was a bad idea and ask to be taken back as a British colony by the same date.

Sigh… I wonder if “as a statistician” he calculated these remote odds or not? He should not comment with his weight “as a statistician” on things he clearly has not investigated. Propagate uncertainties, something I’m certain he knows how to do, something that Pat Frank has done, and this statement looks about the same as most of Mann’s bluster.

Btw, last time I checked, models in no way can provide “convincing evidence” of anything.

The only reason I got involved is because the ‘dubious statistics’ were still being defended this year and my name was being used in support. If there now are people out there claiming that my first post undermines the whole global warming argument, tell me where and I’ll refute this misrepresentation as well. Almost any decent statistical model-fitting will give the upward trend at the end of the series, but more importantly there are all the climate models, based mainly on physics rather than statistics, that provide convincing evidence of climate change and the reasons for it.

When Ian Jolliffe evidently feels he needs to profess his view of GW in the process of criticizing the methodology used in reaching a widely held result of the AGW consensus, and at the same time to minimize what he is criticizing in the context of the bigger picture of AGW, it speaks volumes about the current state of climate science.

I can accept Jolliffe’s and Wegman’s judgments on the statistics in these instances, but what they think about the other aspects of AGW has little bearing on the particular issues at hand and carries no more weight on those other aspects than that I could obtain from any other non-expert, albeit intelligent, commenter. Again I say one has to consider these issues of AGW and climate science one at a time. I think Wegman and Jolliffe commit errors when they find A wrong, but feel compelled to generalize about having arguments B and C in reserve. Maybe A was wrongly accepted for so long (and probably to the present time) because others were generalizing about it just like Wegman and Jolliffe generalize about B and C.

The only reason I got involved is because the ‘dubious statistics’ were still being defended this year…

That’s the motivation for most of us, and — unless I’m wildly mistaken — specifically for Steve and Ross.

Why do the flaws in MBH matter so much? Well, they really shouldn’t. The AGW tapestry consists of many threads, and MBH never amounted to more than one of them. It should never have been central to the AGW debate, and it would not be except for a tiny little detail: The community that holds responsibility for ensuring quality of all threads in the tapestry has doggedly continued to trumpet MBH and steadfastly denied its problems despite overwhelming evidence of flaws. That leaves many of us concerned — not about MBH, since we know the truth about that — but about the rest of the threads in the tapestry, the ones that have not been checked so thoroughly. Is it possible all the other threads are equally flawed? Would anyone know if they were?

I find the auditor metaphor useful. When a financial audit reveals gross errors in a sample of documents, the presumption is that there are pervasive problems throughout the corporation. Until a more thorough review is performed, and systematic problems identified and corrected, the integrity of the entire enterprise remains in doubt.

In such cases, it often turns out that errors were due to sloppy bookkeeping resulting from laziness or incompetence. But that explanation loses its appeal when access to critical documents is denied, when records are found to be altered, when relevant information is hidden, or when other steps are taken to interfere with or suppress further auditing. That’s when words like “deliberate misrepresentation” and “misconduct” begin to appear.

FWIW, I share Jolliffe’s view on climate change. There is overwhelming evidence that the planet has warmed over the past century.

FWIW, I share Jolliffe’s view on climate change. There is overwhelming evidence that the planet has warmed over the past century.

TAC, I do not think that at the level of the generalization in your comment above that you would get much disagreement between the most alarmists amongst the AGW consensus and most of those skeptics who would tend to minimize the dire consequences of future climate change, or at least point to the uncertainty in the predictions of the extent and consequences of climate change. That TAC, Jolliffe, Wegman and I are not denialists says little about our views at the margins of these issues and of the separate issues involved.

Minimizing the importance of the original Mann temperature reconstruction and its progeny, I think, runs into the problem of separating how it was used to promote AGW mitigation from what one could reasonably conclude from it as a scientific document (in contrast to what the authors of these reconstruction papers attempted to conclude). The HS was just what the doctor ordered in terms of showing that current temperatures were unprecedented and not matched in the time that modern civilization has had to deal with a climate change of the magnitude of the current one. It provides corroborating evidence for the climate models. The original HS took most of the wiggles out of the past climate, which, in contrast with the overlay of the instrumental temperature record, made for a very dramatic and compelling display, and one that obviously has not gone unnoticed by those pushing immediate mitigation for AGW, e.g. the IPCC and Al Gore’s publication. Has the HS been rejected by the AGW consensus? I think not, and in fact I think it is being resurrected against what I think would be some rather compelling counter-evidence provided in the Mann et al. (2008) paper reviewed here at length.

FWIW, I share Jolliffe’s view on climate change. There is overwhelming evidence that the planet has warmed over the past century

Isn’t it the case that the blade of the hockey stick is not the important MBH finding? It is the flat shaft that indicated that the climate was stable before the CO2 concentration rose significantly. This is how Gore discussed it in AIT: there was a stable climate before CO2 rose, with no LIA and no MWP. That is why it was important to “get rid of the MWP”.

I find this assertion odd, since even the IPCC says that warming prior to 1950 was natural in origin. However, it is a standard assertion in AGW and is probably why Mann’s hockey stick is something that will not be given up.

My impression is that Dr. Jolliffe misunderstood a key point in the ‘climate change’ controversy. The discussion is not about whether there is global warming or not, but rather whether this global warming is mainly due to an increase in GHG concentrations in the atmosphere and whether this will have catastrophic consequences in the near future. Or maybe I am wrong.

Anyway, it would be nice to read a dialogue between Dr. Jolliffe and Tamino about decentred PCA. And even more so if he could contribute at CA.

Well said, Kenneth Fritsch. He does this (as did Wegman) because he wants his valid argument about the ‘dubious statistics’ present in the HS to be taken seriously by the “consensus” that might get offended that a piece of their religion is being attacked. It is seriously an unfortunate circumstance that one such as Ian Jolliffe has to prop himself up as a proponent of “the cause” in order to be taken seriously in an area he apparently dominates.

Larry Solomon, author of The Deniers, commented that as he interviewed experts working on bits and pieces of the AGW issue, he kept encountering this phenomenon. An expert would look at an area and conclude that, within that area, the evidence for AGW was weak or wrong. But they would take pains to endorse the “consensus view” nevertheless, based on all the evidence from other areas. Not that they had any specialized knowledge of those other areas, they were just picking it up in the press like everyone else. And then Larry would go talk to an expert in one of the other areas, and they’d do the same thing, pointing to the weakness of the evidence in their own field but appealing to all the rest. What he didn’t find was someone who was confident of the AGW hype based on the evidence in the area he or she was actually an expert in.

I hope that Ian will not lose his instincts as a statistician as he ponders the evidence from climate models, especially in the matter of whether different models, and different model runs, are truly independent draws from a random distribution whose moments are unbiased estimators of actual, known constants of nature.

Contrary to MM’s assertions, the use of non-centered PCA is well-established in the statistical literature, and in some cases is shown to give superior results to standard, centered PCA. See for example page 3 (middle paragraph) of this review. For specific applications of non-centered PCA to climate data, consider this presentation provided by statistical climatologist Ian Jolliffe, who specializes in applications of PCA in the atmospheric sciences, having written a widely used text book on PCA. In his presentation, Jolliffe explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand. In the case of the North American ITRDB data used by MBH98, the reference means were chosen to be the 20th century calibration period climatological means. Use of non-centered PCA thus emphasized, as was desired, changes in past centuries relative to the 20th century calibration period.

(Note: I fixed the broken link to Dr Jolliffe’s presentation.)

Perhaps the anonymous(?) writer of the RealClimate piece owes Dr Jolliffe an apology as well!

Uncentred PCA has no centring done at all.
Decentred PCA has centring done, but using the mean of only a subset of the data.

It seems either is only appropriate in a specialized case that is well understood by the person doing it, and well explained as to why and how to the people reading it. That both Wegman and Jolliffe, as well as others, say that it’s either not appropriate or inconclusive at best in the case of MBH98 is clear.
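The distinction drawn above can be made concrete. This is a minimal sketch on synthetic data (the “calibration period” slice here is an illustrative assumption, not MBH98’s actual data handling): conventional centring removes every column mean exactly, while subset-period (“decentred”) centring leaves a residual offset in any series whose subset mean differs from its long-term mean — an offset that then contributes to apparent variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 "years" x 5 series; the last 20 "years" play the role
# of a calibration period (hypothetical choice for illustration).
X = rng.normal(size=(100, 5))
calib = slice(80, 100)

X_centred   = X - X.mean(axis=0)           # conventional: full-column means
X_uncentred = X                            # uncentred: no centring at all
X_decentred = X - X[calib].mean(axis=0)    # "decentred": subset-period means

# After decentring, a series whose calibration-period mean differs
# from its long-term mean retains an offset; after full centring the
# residual means are exactly zero.
print("residual means after decentring:", np.round(X_decentred.mean(axis=0), 3))
print("residual means after centring:  ", np.round(X_centred.mean(axis=0), 3))
```

Nothing in this toy example speaks to when, if ever, such a centring is justified — it only shows mechanically what the subset-mean choice does to the matrix that the PCA then operates on.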

To the rest. There is no evidence rising to proof that:

A. There is such a thing as a global temperature on a meaningful physical basis.
B. The global mean temperature anomaly trend evidenced by GHCN-ERSST (.5 over the last 30 years, .6 over the last 60 years, .6 over the last 127 years) reflects an actual temperature rise; that is, an increase in the amount of energy present in the entire planet’s sea, land and atmosphere.
C. That a rise in carbon dioxide et al (CO2 only; 48.2 last 30, 73.42 last 60, 93.02 last 127) has any impact upon the anomaly trend, and if so, to what extent. Especially given that the anomaly trend is basically the same for all 3 periods.

I’m not even sure that’s scientific; is there a disproof?

I suppose there’s a D — is a .5ish trend meaningful anyway?

Now that said, does it make logical sense that putting more of the IR reacting gases into the atmosphere, specifically the lower troposphere, would result in a rise in air temperature in the lower troposphere, due to slowing the loss of heat from land and the absorption of heat by the oceans? Of course. Is there a rise in the land/sea anomaly? Sure. Do the lower troposphere satellite readings over the last 30 years show a similar pattern as the land/sea? Yes. So while the available data does match what one would logically expect, we still have the problem of quantifying the effects of thing A, thing B and so on. The key is determining what actual effect, if any, the non-water vapor greenhouse gases have in net in the system. Is the net anomaly trend simply measurement error, change in measurement methods, or caused by other things rather than long-lived well-mixed greenhouse gases?

If it is the long-lived well-mixed greenhouse gases, is it 5%? 95%? 50%?

Then you have what Ross mentioned; the experts each concluding that the evidence in their area was sketchy at best, but based upon all the other work, human activities are warming the planet. Expert A “Well, this is wrong, but all the other evidence!” Expert B “This is iffy. However, the other experts have better data, so I defer to their expertise in their areas.” Expert C “I’m not sure about this, it’s inconclusive. Yet the IPCC’s assesment is overwhelming on the subject.” Expert D “There is so much unknown in my field, but in areas I’m not familiar with, those experts have come to conclusions, so I must defer to them and go with the consensus.” And so on.

“What he didn’t find was someone who was confident of the AGW hype based on the evidence in the area he or she was actually an expert in.”

For those of you not following closely, I must state again that I agree with the IPCC that the human activities of burning fossil fuels and changing how land is used (and everything that goes with it) is impacting the climate, and that it’s probably warming. What I don’t agree with is the certainty of the specific causes, in what ratios, or the need to do anything about it other than what makes sense for practical reasons. A sort of a litmus-test limited pseudo precautionary principle.

So in the end, I agree with Steve; if I were a policy maker, I’d take the advice of the scientific community. But unlike Steve, a prerequisite would be more straight talk, fewer gobbledygook reports, and a requirement for the archiving of programs and data before taking action on anything that didn’t have an empirical, observable, practical basis beyond “AGW”.

Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story-and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

That is the Gell-Mann Amnesia effect. I’d point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all.

I wish I could remember just who he was, but one famous SF writer said several decades ago that he had an easy test for the quality of any new “Who’s who in SF” or “The History of SF” book which came out. He’d turn to his own Bio or era and see if the writer got it right. Then he’d know if it was worth reading the rest of the book or not. Basically the same principle.

“In a real sense the paleoclimate results of MBH98/99 are essentially irrelevant to the consensus on climate change. The instrumented temperature record since 1850 clearly indicates an increase in temperature.”

But does Jolliffe — or for that matter Wegman — understand that the purpose of the hockey stick was not to prove that there has been warming since 1850; its purpose was to prove that said warming is unprecedented?

So to say the flaws in MBH98 (and associated studies purporting to “verify” it) are “essentially irrelevant” because the instrument record proves the existence of warming is to miss the point completely. If the LIA and the MWP actually occurred, they were evidently caused by natural factors — natural factors which cannot be ruled out as the cause of the current warming.

That, at least, has always been my understanding of the importance of the hockey stick. No one that I know of ever interpreted it as an attempt to use tree rings to verify the instrument record since 1850.

Re: Michael Smith (#158),
Exactly. MBH98 is not irrelevant because paleoclimatology is not irrelevant. It is all about temperatures in the deep past, not 1850. Temperature and rate-of-rise. MBH98 claimed unprecedentedness on both parameters going back over 1000 years.

To some degree you always must trust your colleagues. The problem is that too much of this kind of faith can lead to a fairy ring of belief. Everyone thinks the other guy knows what he’s talking about, so no one questions the “consensus” – which is truly nothing more than the emergent property of the belief network.

It is important to ask how credible the computer models are on which the whole AGW proposition rests.

On that note, I have a very specific question for Dr. Jolliffe, if he is so inclined. What is the nature of the statistical model by which the strength of the various GMT forcings are deduced (solar, volcanoes, GHGs, etc)? Do you agree that overfitting parameters to relate the various GM time-series is a potential problem? Do you agree that the assumption of iid errors is a potential problem?

The Kiehl thread is a good place to start to familiarize oneself with the phenomenon of multiple GCM tunings leading to a common result.

I am pleased to be able to access the Yamal data again, thanks to Steve’s link, and have looked at it purely from the viewpoint of the numbers it contains.

First, I see that I originally analysed it in 2005, producing what I conjectured then to be some pertinent thoughts, with a note to myself to try to contact the authors, which I never did :-((

When plotted, my diagrams of the data seem to be exactly the same as Steve’s, and are thus equally uninformative as regards clear conclusion-drawing. For those of you who regularly work with climate-related data this will be no surprise. “Noise” frequently seems to dominate this type of scientific observation, making it “difficult to see the wood for the trees”, one might joke.

However, a simple transform often resolves (at least partially) this impasse. I have applied my usual technique of forming the cusum of the data, and this helps to clarify what can be gleaned from the Yamal data.

It is not lacking in structure, but can be resolved into periods of varying lengths that appear to contain stable (i.e. roughly constant) values, punctuated by points of remarkably abrupt change, usually to another stable regime at a lower or higher value, rather than to a period of steady change. This type of behaviour is typical of many types of climate data, something that seems to have generally gone unremarked.

This is not the forum to go into detail, which in any case involves a considerable amount of graphical presentation, which I, being somewhat computer-illiterate, do not know how to do on a blog :-(( I can readily provide the graphics by email for those who are interested. However, the most obvious features can be described briefly as follows.

The period from about -2067 to -500 was roughly stable on the grand scale. From then to about -300 was stable at a lower value, when another abrupt change occurred to a roughly stable value, ending at around Year 0. A brief stable period (lasting perhaps 150 years) began very abruptly, at a markedly higher value, ending in a very abrupt change to a lower value, which seems to have recovered gradually and not entirely steadily right up to the end of the data (1996).

These comments (“analyses”?) are really taking a very broad view. /Much/ more detailed comments can readily be made concerning shorter periods. I can provide some numbers that illustrate the above comments, should anyone ask, but not tonight! It’s approaching midnight here!

I have applied my usual technique of forming the cusum of the data, and this helps to clarify what can be gleaned from the Yamal data.

What’s your procedure for generating a cusum? Calculate the average, subtract it from the individual values and sum the differences? Why not an exponential smooth (EWMA)? Nobody here seems to like or understand cusum techniques – too much autocorrelation, maybe.
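Since the question above is about mechanics, here is a minimal sketch of the construction the commenter describes (an assumption on my part: subtract the overall mean, then accumulate the residuals), with the EWMA alternative alongside. The data are synthetic, not the Yamal series.

```python
def cusum(series):
    """Cumulative sum of deviations from the overall mean.

    A stable regime appears as a straight-line segment (constant slope);
    an abrupt level shift appears as a kink."""
    mean = sum(series) / len(series)
    total, out = 0.0, []
    for x in series:
        total += x - mean
        out.append(total)
    return out

def ewma(series, alpha=0.2):
    """Exponentially weighted moving average, the alternative mentioned."""
    s = series[0]
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

# Two stable regimes separated by one abrupt shift:
data = [1.0] * 50 + [2.0] * 50
c = cusum(data)
# The cusum descends steadily, kinks at the change point, then ascends.
print(min(c), c[-1])  # -25.0 0.0
```

On this synthetic series the kink is exact; on real tree-ring data the same construction turns regime-like behaviour into roughly piecewise-linear segments, which is presumably what makes the structure described above easier to see.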

The following does not seem to have made it on to open thread at Tamino’s, so I thought I’d post it here.

I am relieved that Tamino wants to move on. We can finally accept that MBH 1998/99 were wrong. So now let’s look at the generality of proxies. Two points worry me.

First, if we choose proxies largely on the grounds of correlation with instrumental temperature, then how can we tell whether or not the correlation is a fluke and the so-called proxies are nothing of the sort?

Secondly, if the various proxy measurements have their own time series structure, which is roughly speaking red noise, then any that just happen to be rising at the end of the period will correlate well with temperature, especially using the RE statistic. But this might simply reflect the red noise structure, not any serious link to temperature.

Professor Jolliffe has made some more very interesting comments on Tamino, including a confirmation that Jeff Id was right in assuming that it was Dr Jolliffe who was a reviewer of the submission to Nature of MM in 2004. Among other points Jolliffe regrets that he had forgotten about this when he made his recent comment that he had not previously looked closely into MBH98.

Re: Patrick Hadley (#169),
Interesting. Since Jolliffe was the reviewer for MM — which in climate blog circles is the “debunk MBH” paper, it’s sort of ironic that Tamino happened to select his presentation as the one that explains why the MBH methods are good! (Or do I have this all wrong?)

Professor Jolliffe has made some more very interesting comments on Tamino, including a confirmation that Jeff Id was right in assuming that it was Dr Jolliffe who was a reviewer of the submission to Nature of MM in 2004. Among other points Jolliffe regrets that he had forgotten about this when he made his recent comment that he had not previously looked closely into MBH98.

Jolliffe’s credibility goes near to zero for me on this issue. His performance in this matter seems to bear out my impression of why some apparently glaring errors and weaknesses in climate science papers are never addressed head on – within the climate science community, that is. Does he really expect anyone to believe he does not remember making the MM 2004 review or that he missed the importance of the MBH original HS paper?

It is not important which side he comes down on in this matter. It is the eternal equivocation in these matters that makes me think less of the climate scientists who do it – sounding, in the process, much too much like politicians.

Steve: Note that Kenneth has withdrawn this comment below, after it was pointed out to him that he was misinterpreting Jolliffe’s remarks.

Does he really expect anyone to believe he does not remember making the MM 2004 review or that he missed the importance of the MBH original HS paper?

It would depend on how many reviews he has done in the last few years. I could see how someone would forget one review of many. I suspect the number of scientists actively involved in the details of climate science is quite small. The rest simply assume the climate scientists are doing their job and accept their findings as truth without further investigation.

It would depend on how many reviews he has done in the last few years. I could see how someone would forget one review of many. I suspect the number of scientists actively involved in the details of climate science is quite small. The rest simply assume the climate scientists are doing their job and accept their findings as truth without further investigation.

Raven, let me remind you that Jolliffe’s memory lapse was not a matter of merely not remembering a paper review he had done. His memory had to be jogged when he first decided to make comments at Tamino’s Open Mind blog in reply to what he felt was Tamino’s misinterpretation of his PCA non-/de-/un-centered mean. If his memory was not jogged at that point what made him finally remember now?

Also, Dr. Jolliffe is not simply a pure statistician but has had some involvement with climate science as in the book:

We all have been presented visions of the absent-minded professor, but I think we all know that those are exaggerations – and perhaps some of us might even expect a wink indicating something less innocent.

Re: MarkR (#173), SteveMc. Having read the reviews mentioned above from Nature 2004, and remembering the reviews of Juckes (apart from your goodself’s), and also the IPCC reviewers’ comments we have seen, I’m afraid that the scientific system of reviewing as a quality-control process is next to useless. In Nature’s case, Jolliffe says openly that he didn’t have sufficient time, and he is mysteriously unable to comment on the matter of spurious correlation and RE v R2, which, for a respected statistician/theoretician, I find shocking. I also find Jolliffe’s gratuitous comment that statistically speaking it is unlikely that all the models are wrong puzzling.

The other reviewer was writing in his non-native language, and I am afraid it was not clear to me what he meant, nor clear whether he even understood the question or the source material. The Juckes reviews here, apart from yours and Willis Eschenbach’s, were poor.

The only professional review that has been any good was Wegman’s. Even the NAS “winged it”. Not good enough. In fact the general standard of reviewing is disgraceful, and I have no reason to believe that this is an unrepresentative sample. If I had paid for these reviews, I would want my money back. The honest, thoughtful blogs do a far better job.

Dr. Jolliffe has convinced me that applying decentered PCA invalidates the selection rules which are applied when choosing which PCs to include in one’s model. But the “relevant” (hockey-stick shaped) PC would have been included anyway, applying valid selection rules to centered PCA. And the PCs which are omitted (because they’re suppressed by the method rather than the statistics) don’t seem to correlate with temperature in the calibration interval. Therefore it seems to me that the method is flawed, but the flaw has little or no impact on the final result.

I would also agree with Dr. Jolliffe that the impact of “different centering” on PCA is not yet perfectly clear. There may be disadvantages — or advantages — yet to be discovered.

Ken and Raven,
I wouldn’t be surprised by the idea that Jolliffe could forget the review or details about the method.

Tamino now says the method was flawed. But it seems to him that if it were done right, they’d get the same answer. If so, then the task still remains of doing the work right, to determine whether the answer Tamino anticipates actually pans out.

If the hockey stick is the right answer, and someone knows how to show that using a totally defensible method, they should do it. Once it’s done, then this whole thing can be behind us. In the meantime, that a statistician tells you what he thinks the answer will be after the analysis is done only means we don’t actually know what the answer will be.

I wouldn’t be surprised by the idea that Jolliffe could forget the review or details about the method.

In that case I should ask you if you would like to buy some AIG stock. As for the second part of your post, it is probably the lateness of the night, but I am not at all sure what your point is.

I read what Tamino said and it sounded to me like he was saying, in effect, that Jolliffe’s answer left sufficient wiggle room for eternal equivocation on the PCA proposition and maybe decentering would make things better or maybe worse, but whatever it would not affect the HS shape of the reconstruction. What I see here is Tamino’s equivocations feeding off Jolliffe’s equivocations.

This example reminds me of what I see in the results presented in the Mann et al. (2008) reconstruction –something for everyone and not having to say the original stick was wrong.

Re: Kenneth Fritsch (#181)
I see what you are saying but I think it is worth giving Jolliffe the benefit of the doubt on this one since it is impossible to prove one way or another. I am just happy he finally spoke up at all. Perhaps it is a sign that the pressure to defend the faith among scientists is starting to weaken. Or perhaps he felt he could safely trash MBH 1998 now that Mann 2008 is out.

Steve: He was looking at this prior to Mann 2008 which had nothing to do with him speaking up.

Re: lucia (#180)
I believe Steve Mc. already re-did the analysis using centered PCA, which resulted in the hockey stick showing up in PC4 instead of PC1. I believe Tamino is on record claiming that having the “signal” show up in PC4 does not alter the significance of the results, which, of course, depends on what the meaning of “significance” is.

I believe Steve Mc. already re-did the analysis using centered PCA, which resulted in the hockey stick showing up in PC4 instead of PC1.

This is an important point, which RC also failed to point out when they first went after MM. What would it take for either Tamino or RC to clearly explain this reordering of PCs and then attempt to explain how they decided a priori to use the first 5 PCs – after the fact of the MM analysis? When I see these kinds of replies, I think disingenuous at best and intentionally misleading at worst.
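For readers wanting to see the mechanism rather than argue it, here is a toy illustration (my own synthetic construction, not MBH’s data or code) of why centring on a late “calibration” sub-period promotes hockey-stick-shaped series to PC1, while ordinary full-series centring demotes them to a lower-order PC.

```python
import numpy as np

rng = np.random.default_rng(42)
n_time = 100
cal = slice(80, 100)  # hypothetical late "calibration" window

# 40 white-noise pseudo-proxies plus 3 hockey-stick-shaped ones.
noise = rng.normal(0, 1, (40, n_time))
hs_shape = np.concatenate([np.zeros(80), np.linspace(0, 3, 20)])
sticks = hs_shape + rng.normal(0, 0.3, (3, n_time))
data = np.vstack([noise, sticks])

def leading_pc(X, centering):
    """First right singular vector (leading PC time series) after
    the chosen row-wise centring."""
    if centering == "full":
        Xc = X - X.mean(axis=1, keepdims=True)
    else:  # "decentred": subtract only the calibration-period mean
        Xc = X - X[:, cal].mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[0]

def hs_similarity(pc):
    """|correlation| of a PC with the hockey-stick shape."""
    return abs(np.corrcoef(pc, hs_shape)[0, 1])

# Decentring inflates the apparent variance of the stick series (their long
# flat portion sits far from the calibration mean), so they dominate PC1.
print("decentred:", round(hs_similarity(leading_pc(data, "decentred")), 2))
print("centred:  ", round(hs_similarity(leading_pc(data, "full")), 2))
```

Under full centring the stick pattern does not vanish; it simply drops to a lower-order PC, which is the PC1-versus-PC4 reordering discussed above.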

As far as Jolliffe’s memory lapse is concerned I put it down as possible, but also note that in raising three teenagers several years ago I never remember the dog actually eating their homework, but if we would have had one he might have.

What I don’t understand is the lag between when Jean S called Tamino’s work rubbish and now. Surely someone told Jolliffe his name was being used in vain. If no one did, that’s even worse.
========================================================

Jolliffe’s credibility goes near to zero for me on this issue. His performance in this matter seems to bear out my impression of why some apparently glaring errors and weaknesses in climate science papers are never addressed head on – within the climate science community, that is. Does he really expect anyone to believe he does not remember making the MM 2004 review or that he missed the importance of the MBH original HS paper?

Raven, let me remind you that Jolliffe’s memory lapse was not a matter of merely not remembering a paper review he had done. His memory had to be jogged when he first decided to make comments at Tamino’s Open Mind blog in reply to what he felt was Tamino’s misinterpretation of his PCA non-/de-/un-centered mean. If his memory was not jogged at that point what made him finally remember now?

I just reread all of Ian Jolliffe’s posts on Open Mind. Nowhere in it did I see Jolliffe say he did “not remember making the MM 2004 review.” The only thing he said coming close to this was a comment saying he forgot the things he learned when doing that review (corresponding to what Kenneth Fritsch referenced in the second quote). The criticisms of Jolliffe make little sense with this in mind, as a person forgetting a lesson learned four years ago is hardly surprising.

This is oddly reminiscent of Tamino’s remark in the post that started all of this: “Did MM really not get this?” It seems this sort of tone only comes up when the people discussed are being misrepresented.

It certainly does not endorse decentred PCA. Indeed I had not understood what MBH had done until a few months ago.

Ian Jolliffe // September 12, 2008 at 10:44 am

I have no real excuse for not reading MBH when I first heard of it, but my lame excuse is that I assumed from reports I’d seen that it was another instance of uncentred PCA.

After Outing as reviewer of MM 2004

Ian Jolliffe // September 17, 2008 at 8:01 am

Looking back, I was interested to note the chronology. The first review was written in February 2004 and it is clear that I didn’t understand what MBH were doing at that time. The second review was in July 2004 when I said I thought I did understand, but the notorious Powerpoint presentation was in May 2004 when I had yet to see the light.

Now for the scary memory bit. In July 2004 I clearly thought I understood what MBH were doing, though not why or how to interpret it. However, I must have felt that other things, involving less investment of time, were more interesting and moved on. Apart from another reviewing task a year later, I was unaware of the fierce controversy raging, until earlier this year when a co-worker and I started investigating algebraic relationships between PCAs with different centrings. Looking for examples, we revisited MBH, and I was genuinely surprised when my co-worker told me that MBH had not done an uncentred analysis, but something else. I had forgotten what I learnt four years earlier. So I was wrong in saying in an earlier posting ‘it was only fairly recently that I realised the exact nature of decentred PCA’; rather it was a case of being reminded. As I said, scary!

The statements that Jolliffe makes at Tamino’s blog before outing himself seem “inconsistent” with his refereeing the MM 2004 paper and being reminded by a coworker about the decentering versus noncentering issue a few months ago. I said that I did not believe that he forgot he refereed the paper and he actually did not say that – after the fact of the outing.

He was being very coy in my mind about admitting knowing much about the MBH 1998 paper except in the recent past before the outing. I also noted that I doubted he was unaware of controversy involved with the MBH 1998 paper and that would take a lot of forgetting once he refereed MM 2004.

I think I understand better than before what the MBH98 PCA is doing, namely centering the data about the mean of the 1902-80 period rather than of the whole series. The question is why, and what properties and interpretation does such a procedure have? Given the non-stationarity of the series, it is certainly not successively maximising variance as in PCA, and talking about ‘explained variance’ therefore makes little sense. I don’t feel I can comment on whether or not this procedure is appropriate without understanding its properties and interpretation.

Does he really expect anyone to believe he does not remember making the MM 2004 review…?

He never said he had forgotten such, to which you just agreed. I do not see any reason to doubt a word of Ian Jolliffe’s comments. Your implications are extremely hollow, especially as you did not admit what is a glaring mistake.

Jolliffe’s remarks struck me as completely honest, unlike what you have posted.

L&C, this is something I wonder about a lot. I was a little struck by Jolliffe’s disclaimer in his early posts that he still believed CO2=AGW (I paraphrase) was valid from the mass of evidence elsewhere. As to Jolliffe’s general awareness of the whole hockey stick controversy, I suspect we need to figure out just where he generally applies his great knowledge of statistics. In what field are his oeuvres?
============================================================

Ken–
It seems to me Tamino is using the word “seems” when prognosticating what the results would be if the problem were done correctly, right? So he has not actually re-done it. He is predicting what the results would be if it were done in some acceptable way.

Whether Jolliffe would agree with an implementation of a method Tamino has not yet described is unknown. But, if I understand correctly, the method as used by Mann is admitted to be flawed.

Lucia, if I understand your point correctly now, I think it is important that you understand what Steve M has referenced above. Tamino and RC and Mann will tell you that doing it right with regard to centering will result in 5 PCs that can be selected with a new criterion, and the bristlecones will dominate to give the HS. Actually, they usually will not tell you all that, but simply that they can get the HS with the proper centering. Then, further, the sensitivity of the HS result to the bristlecones and the reason for including them becomes another issue, not only with MBH 98 but with subsequently published reconstructions.

Then after all this they will tell you that even without tree rings and PCA they can get a HS. What they will not discuss in the same paragraph is the divergence problem, how sensitive the result is to proxy selection, a reasonable a priori criteria for proxy selection and the graphical display of the instrumental record over that of the proxies into recent times without actually splicing it onto the reconstruction.

But one need go no further than the Mann et al. (2008) reconstruction to compare the much less HSish reconstructions, that while continuing to have many of the warts noted above, do not allow the authors to make any claims about the recent temperatures being unprecedented in the last 1600 years for the SH or the globe.

These issues were discussed in MM2005 (EE), which I suggest that people re-read. It’s nice that after over 3 years, people are finally beginning to understand some of these things.

First, Mann had argued that his PC1 was the “dominant component of variance”. We observed that, in addition to mining HS shapes, Mann’s method overvalued the first eigenvalue and that what he thought was the “dominant component of variance” is a lower-order feature.

Using the number of retained PCs in MBH98, the PC4 would not be retained and you get a different answer, which is one of many permutations that we discussed in MM05 (EE).

They argued that the “right” number of retained PCs is 5, originally in their Nature reply and then in an RC post in Dec 2004, preempting our 2005 articles using information that had been provided to them by Nature on a confidential basis, only for reply. We discussed these permutations in MM05 (EE), and the issues are discussed in Wahl and Ammann without reference to any of the prior discussions.

Wegman dismissed changing the number of PCs as an after the fact method change that had “no statistical integrity”. Mann coopered up a rule for including 5 PCs, but when I attempted to apply the “rule” to other networks, it was impossible to obtain the number of retained PCs. Mann’s archived code notably failed to show this step.
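The retention rule usually cited in this connection is Preisendorfer’s Rule N: keep a PC only if its eigenvalue exceeds what same-sized noise would produce. Since, as noted above, the archived code does not show the step, the following is only a generic sketch of the rule on synthetic data, not a reconstruction of what Mann actually did.

```python
import numpy as np

rng = np.random.default_rng(7)

def rule_n(X, n_sims=200, quantile=0.95):
    """Count leading eigenvalues (as variance fractions) of X that exceed the
    given quantile of eigenvalues from white-noise matrices of the same shape."""
    p, n = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    eig = np.linalg.svd(Xc, compute_uv=False) ** 2
    eig = eig / eig.sum()
    sims = np.empty((n_sims, len(eig)))
    for i in range(n_sims):
        N = rng.normal(0, 1, (p, n))
        Nc = N - N.mean(axis=1, keepdims=True)
        s = np.linalg.svd(Nc, compute_uv=False) ** 2
        sims[i] = s / s.sum()
    thresh = np.quantile(sims, quantile, axis=0)
    k = 0
    while k < len(eig) and eig[k] > thresh[k]:
        k += 1
    return k

# Pure noise should retain roughly zero PCs; planting two strong common
# signals should lead Rule N to retain about two.
noise = rng.normal(0, 1, (30, 80))
t = np.arange(80.0)
signals = np.vstack([np.sin(t / 5), np.cos(t / 9)])
signal_data = noise + 3 * signals[rng.integers(0, 2, 30)]
print(rule_n(noise), rule_n(signal_data))
```

The sticking point in the dispute is not the rule itself but, as Wegman put it, that changing the retention count after seeing the result has “no statistical integrity”.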

In any event, all of this turns on gambits to include Graybill bristlecone chronologies, which NAS recommended be “avoided” and which Ababneh didn’t replicate. So there’s a very large degree of stupid effort in trying to invent statistical rules for including Graybill bristlecones – far better to see if they actually are magical antennae for world temperature.

Also remember that doing PC analysis “correctly” is no guarantee that the result has any meaning as a temperature proxy. All it is doing is identifying a distinct pattern in the data – a different thing entirely. For the bristlecones to survive as a PC4 in a fairly distinct form (as they do) means that they are orthogonal to 3 higher order patterns. Does it make sense that these are a unique thermometer? Our position in MM2005 (EE) was that of sensible data analysis – double and triple check the validity of “key” data. Cross-examine the bristlecone data from all possible angles. Every Mannian “rule” as to why you “had” to do something a certain way – declared as some sort of Mannian fiat – has fallen apart so far.

Jolliffe’s intervention is long overdue, but his point was also made by Wegman and in an obscure way by NAS.

I want to put into perspective my comments on Jolliffe’s MM 2004 review and his conversation at Tamino’s blog.

Firstly, my problem with Jolliffe’s comments at Tamino’s blog has to do with his claim that he was unaware of the decentering of MBH 98, and by implication the controversy surrounding MBH 98, until a few months ago. I find this very hard to believe, and I will let the conversation I excerpted above speak for itself. And I was wrong to imply that Jolliffe had claimed he had forgotten reviewing MM 2004. He did not, and it was I who, from his conversation at Tamino’s, assumed he had little or no memory of MBH 98 before a few months ago.

On the other hand, now that a name can be attached to a reviewer of MM 2004, it is important to compare what Jolliffe and Wegman found lacking in MBH 98. I find the only difference between Jolliffe and Wegman was that Wegman’s statements appeared less equivocal and more to the point. The following is not meant to be all inclusive:

1. The clarity of presentation in MBH was a problem.
2. The shouting louder and longer of MBH in replies was a problem.
3. The decentering was a major problem and a poor/invalid statistical practice.
4. Ignoring r^2 and using an obscure RE statistical test was very poor statistical practice.
5. Mann et al ignoring the importance of the red noise tests of MM was a problem.

Wegman commented on the sensitivity of the strip bark inclusion to the findings in MBH and its questionable use, whereas I do not recall Jolliffe commenting on this issue.

I think the combination of these two reviews puts the MM criticisms of MBH 98 in good stead.

It was interesting to review in detail the comments at Tamino’s blog on the Jolliffe thread. Tamino appeared to want to shift the conversation to the use of the terms decentered and uncentered and what this meant in terms of what MM were describing in their criticisms of MBH 98. He also tentatively put up the notion of appeal to authority when his initial take on Jolliffe’s position appeared to be wrong. I believe it was RC and Tamino that used Jolliffe as an authority on this issue – when they thought they could understand him as rationalizing what MBH did.

As the evidence grew against Tamino’s stand on decentering in MBH, and as the implications that MBH had flaws were pointed out to some of the posters at Tamino’s, they instinctively wanted to move on to more recent work, with what I see as a tactic that says: OK, the original results that we once accepted as valid may have flaws, but it is the newer work that we now accept as valid that we want to point to. I guess this second step could be repeated with a third and a fourth ad infinitum. That this can be sustained is certainly evidenced by the lack of any general adverse reaction arising from the criticisms of MBH 98 by MM, Wegman and now Jolliffe in the climate science world.

Unfortunately, in my view, I see many ways to cherry pick temperature proxies in reconstructions to obtain the “correct” answer and that forces one to make an analysis on each reconstruction. That the analysis of MBH 98 has progressed to this point is heartening but that climate science has moved on several times and that obtaining a reasonable consensus that MBH is flawed has taken so long is more than a bit disheartening to me.

Ken–
Here is how I see it:
The groups you mention tell us they think they could show the HS remains if they did the problem correctly using whatever proxies everyone accepts as “good”. If this were so, they could just do the problem correctly and show the world that this result could be obtained doing the problem in a way that convinces everyone they did it correctly.

However, instead of doing the problem in a way that resolves everyone’s concerns as to correctness, they persist in publishing analyses where the shape is obtained using a method that has some serious, identifiable flaw. (That is, uses proxies deemed unreliable or methods known to be flawed.)

When the new improved methods are proven to be just as flawed as the past ones, they go back to speculating that – if and when they do the problem right – they will get the answer they are convinced is true.

What’s Steve’s role? Steve has done a lot to demonstrate the flaws and show how the inappropriate choices affect the results. Often, when the flaws are uncovered, we find the HS disappears.

So, in other words: although some insist they would get the HS if they did the problem “right”, they won’t break down and actually do the problem “right”. The HS is often the result of the flaws in the current method.

In that regard: Tamino is, like others, speculating that if the problem were done right he would get the HS answer. If Tamino is correct, then, in short order, he should be able to demonstrate that his guess is correct. Otherwise, he will continue to speculate, speculate, speculate etc.

Tamino’s speculations are not proof of his guess about what the answer will be – that the HS will appear if and when the problem is done correctly.

It remains for those who think the HS appears if the problem is done correctly to do the problem correctly and get the HS result!

Given the history, many will doubt the result until such time that SteveM or some external third party who wants to do a lot of work repeats the analysis and says it was done correctly!

Lucia, we’ve done the calculations under various permutations and combinations of PC methods a long time ago. Under some permutations, you get HS-shaped results; under others, you don’t. It all depends on how much weight the bristlecones get.

Once you know that the results are really only bristlecone pines and the rest of the network is pretty much white noise/low order red noise, then the issue is: are bristlecones magic proxies? As I’ve observed endlessly, this requires confirmation sampling and updating and ordinary science; there’s little point inventing statistical rules merely to inflate their weights.

The other issue that often gets overlooked – various representations that were important to the widespread acceptance of the original study – were simply untrue. Most notably, the well publicized verification r2 issue. They now argue that verification r2 is not a relevant test and Mann denied to the NAS panel that he had even calculated it. Leaving another mystery – how did MBH98 Figure 3, showing verification r2 for the AD1820 step, get there? And if Mann wasn’t responsible for the source code where the verification r2 was calculated in the same step as the RE statistic, who was responsible for it?
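The substance of the verification-r2 point is easy to demonstrate with toy numbers (mine, not MBH’s): a “reconstruction” that merely reproduces the level shift between the calibration and verification periods scores a strong RE while its verification r2 stays near zero.

```python
import random

random.seed(1)

def re_stat(obs, pred, calib_mean):
    """Reduction of Error: 1 - SSE(prediction) / SSE(calibration-mean benchmark)."""
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sse_bench = sum((o - calib_mean) ** 2 for o in obs)
    return 1 - sse / sse_bench

def r2(obs, pred):
    """Squared Pearson correlation over the verification period."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

calib_mean = 0.0
# The verification period runs about 1 degree warmer than the calibration mean.
obs = [1.0 + random.gauss(0, 0.3) for _ in range(50)]
# The "reconstruction" captures the level shift but is otherwise pure noise:
pred = [1.0 + random.gauss(0, 0.3) for _ in range(50)]

re_val = re_stat(obs, pred, calib_mean)
r2_val = r2(obs, pred)
print("RE:", round(re_val, 2), " r2:", round(r2_val, 2))  # high RE, negligible r2
```

This is why reporting RE while withholding verification r2 matters: RE rewards getting the mean offset right, while r2 asks whether the reconstruction tracks anything at all within the verification period.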

If your point is that Tamino’s speculations seem like stubborn desire to insist on a predetermined result despite pretty strong evidence suggesting we won’t get the HS shape if the problem is done “right”: Sure.

What you say and have shown all indicates that if you do the problem correctly, and don’t use bristlecone pines one does not get the HS shape. The case for the HS shape keeps going through iterations like this:

Iteration “N”.

1) Research group “A” does the problem incorrectly with flaw type “N”
2) Steve finds flaw in method. Shows HS shape vanishes if flaw corrected. Noted third party statisticians concur.
3) Members of research group “A” eventually admit the flaw in their own method, but don’t concede Steve’s method is right. So they speculate that there is some true, correct method out there, and that if the flaw were removed and some correct method (other than Steve’s) were developed, the HS would reappear.

A year or so later, iteration “N+1” begins, starting at step 1.

We are now on iteration “N= what?” Four? Three? Tamino is at step 3 of some cycle.

One of the confusing things for anyone following the debate is that at any given time, points from all iterations are discussed.

If the past repeats itself, we will soon see another iteration, with yet another paper that “resolves” the issues. There will be claims this proves the “speculations” in (3). Then, you’ll look at it – step “2” – and we’ll see some important aspects of the HS were due to problems with the method. The problems may be newly invented ones, or old ones hidden, or a combination of both.

I don’t know why Mann and his supporters want so desperately to show the HS is created. I don’t know why they want to speculate that if the problem were done “right” they’d get the answer they fervently believe to be true. I don’t know why they persist in believing this despite trying to get the “right” answer and only getting the HS after doing the problem using newly invented methods that statisticians consider dubious.

All I’m observing is Tamino appears to be at “stage 3”. By observing this, I don’t mean to suggest his speculations are likely to pan out. If he wants to do more than speculate, he will move on to step “1” in a new iteration — which is necessary to show his speculations are meritorious.

I’m also saying that, given the history, there are a number of people who will adopt a “wait and see” attitude on any new papers coming out of step 1. You happen to be the most likely person to find the specific flaws. So, basically, until you endorse the new result, lots of people aren’t going to consider the new method solid.

(Alternatively, a new third party could show up. But that’s not likely to happen, as this sort of diligent beavering is counter to the professional interests of loads of people.)

This dilemma also faces me with the drought prediction. That is, one could look at the quality of the statistics and say they are inadequate, and even if that were acknowledged, they could still argue that if it was done right, you would see that droughts will increase with AGW. OK, the impression I get is that statistical insufficiency is not enough. It’s like winning a world boxing title on points. Without a KO, there is always argument.

The other approach, which Steve alludes to, is to argue that the representation of the data and analysis was misleading. This more legalistic perspective has its own issues. There is a question of provability about it. There are standards within other fields, like finance, that could be used as yardsticks. But it’s prone to collateral damage IMO, particularly if there is no knockout.

The reality is that if there is no hockey stick, to AGW’ers it’s not the correct method! There is no a priori way to bless a particular method outside of the result it gives! That’s because there are too many unknowns in the system being analyzed.

Remember that the HS shape is not enough. They need to get the smallest possible uncertainties. Minimum uncertainties, as the IPCC states it. That’s why the Brown & Sundberg articles are not in fashion at RC.

When one starts listing the weaknesses/flaws one sees with MBH98 and its progeny reconstructions, and looks at the individual analyses, it can become difficult to get one’s hands around the overall poor quality of these reconstructions. I left out of my list above the issue of uncertainty in the reconstructions that UC has discussed at length at CA.

Why does the consensus in the climate science community keep missing/ignoring all these weaknesses? Is it because they do not understand the statistical inventions used by the authors of these reconstructions, or the potential for cherry picking proxies that these inventions can produce, and further do not have the time or will to make the effort to understand them? Could some of the reviewers of papers and commenters on the consensus simply have convinced themselves that, since there is a consensus on this matter of AGW that includes many areas of investigation, the effort to challenge and understand one part of it is not all that critical to the overall picture/consensus, and therefore why waste valuable time doing the heavy lifting of a detailed analysis on a single aspect of it?

All of the climate scientists coming to CA with the consensus view have by my recollection been curiously very deferential when it came to making definitive statements/judgments about MBH98 and progeny reconstructions.

Also, when these scientists claim that the consensus view does not depend on any one area of these studies purporting to give evidence for the consensus view, why are they invariably vague about what that other evidence is and how one would interpret it to give a consensus view? To take this vagueness a step further, I am sure that each individual scientist has a differing view of what the consensus view is, even when allowing for +/- temperature/climate and adverse versus beneficial effects of the climate change.

These issues are far beyond my areas of expertise, but I’ll comment anyway.

What is needed, first of all, is physical causality for selection of proxies. Validated mathematical models for the physical phenomena and processes important for each proxy would go a long way in this regard.

Verification of the equations that will be used in analyses of the data prior to coding, verification of the equations and methods following coding, and verification of a calculation following application are all essential to ensuring correctness of the numbers. These simple procedures, applied to an enormous number of analyses, have proven track records of improving the quality of the numbers.
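To make the "verification following coding" step concrete, here is a minimal hypothetical sketch (not from any of the papers under discussion): a hand-coded OLS slope estimator is checked against a case with a known exact answer, and then against an independent implementation. The function name and tolerances are my own invention for illustration.

```python
import numpy as np

def ols_slope(x, y):
    """Slope of y regressed on x by ordinary least squares."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc = x - x.mean()
    return np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)

# Verification step 1: an exact analytic case. For the noiseless line
# y = 3x + 1, the coded estimator must return 3 to machine precision.
x = np.arange(10.0)
assert abs(ols_slope(x, 3 * x + 1) - 3.0) < 1e-12

# Verification step 2: agreement with an independent implementation
# (numpy.polyfit) on arbitrary random data.
rng = np.random.default_rng(1)
xr, yr = rng.normal(size=50), rng.normal(size=50)
assert abs(ols_slope(xr, yr) - np.polyfit(xr, yr, 1)[0]) < 1e-10

print("verification checks passed")
```

The point is not the regression itself but the habit: every coded equation gets at least one test against a case whose answer is known independently of the code.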

MarkR–
So, sure, it may be MBH’s friends peer reviewing. That bothers me less than the probable likelihood that MBH’s friends are peer reviewing the papers that discuss flaws in MBH. The fact that peers don’t (and can’t) really detect true errors or weirdness means that preconceived notions of “the correct answer”, or personal bias, can enter the review process.

#213. Lucia, if you look at the review process of our Nature submission, you see something like this. Jolliffe, who had no horse in the race, gave our first submission to Nature a good review, as did the other reviewer (who is non-Mannian). I think that Nature was a little taken aback by the favorable reception, to say the least, and got a third reviewer from Mann’s side (Reviewer #1 in the 2nd review process) who gave a savage and irrelevant review rejecting the article, arguing that we’d already published on the topic (though everything in the Nature submission was new). It was a “good” example of POV reviewing, as opposed to QC reviewing.

SteveM–
Yep. I hadn’t read the reviews before last week, and that’s how the reviews read; that’s how the process reads.

Also, reading Jolliffe’s review, it’s clear he is cautious. It is also clear he states that many of the specific claims in papers cannot be confirmed or rebutted by peer reviewers. (That is, reviewers don’t check computer programs, etc.)

What is lacking in all the reconstructions is the simple matter of demonstrating that your PCA or regression or whatever method would work under various conditions. For example, take some simple time course of temperature with noise, grow trees under your assumed growth model, take this data as input to your method–can you reconstruct the input temperature history?

Re: Craig Loehle (#220),
I have been advocating that approach from day one. For a tree that is jointly limited by growing-season temperature and precip, and whose survival is limited by drought-related dieback and decline (the MWP megadroughts were hugely intense), I doubt very much that you recover much of the annual temperature signal. Especially if there are nonlinearities and non-additivities that cannot be recovered through linear, univariate calibration.
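The simulation test Craig Loehle describes can be sketched in a few lines. Everything below is a toy assumption of my own: a made-up temperature history, a deliberately simple linear growth model for the synthetic proxies, and a naive calibrate-then-average reconstruction. The question is only whether the method recovers the known input outside the calibration window.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 500, 20

# Hypothetical "true" temperature history: slow trend, a multidecadal
# cycle, and interannual noise (all parameters invented for this toy).
t = np.arange(n_years)
temp = 0.002 * t + 0.3 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.2, n_years)

# "Grow trees": each synthetic proxy responds linearly to temperature
# plus its own noise. Real trees also respond to precip, CO2, etc.
betas = rng.uniform(0.3, 1.0, n_proxies)
proxies = temp[:, None] * betas + rng.normal(0, 0.5, (n_years, n_proxies))

# Calibrate each proxy against the last 100 "instrumental" years,
# then reconstruct by averaging the calibrated proxies.
cal = slice(n_years - 100, n_years)
coef = np.array([np.polyfit(proxies[cal, j], temp[cal], 1)
                 for j in range(n_proxies)])
recon = np.mean(proxies * coef[:, 0] + coef[:, 1], axis=1)

# Verification: does the reconstruction track the known input over the
# 400 pre-calibration years it never saw?
r = np.corrcoef(temp[:400], recon[:400])[0, 1]
print(f"verification correlation: {r:.2f}")
```

With a linear, temperature-only growth model the recovery is good; Jim Bouldin's point is that once the growth model is jointly limited by temperature and precipitation, or nonlinear, the same pipeline can fail badly, and that is exactly what such a test would expose.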

Click at the “shameful self-promotion”. Grant Foster signs the 2001 post with his name and promotes their band, “Bedlam Boys”. He signs himself with a tamino Hotmail address. By the way, the music of the Bedlam Boys is OK – search Bedlam Boys at YouTube – but I am not really “thrilled” by it. The duo “Bedlam Boys” was created in Boston in 1998 where Foster roughly worked at that time and 2001.

Sounds pretty definitive, though I only had the one hit, so perhaps it’s just an outlier and should be ignored.

BTW, I know English isn’t your first language, but your last sentence should be something like:

The duo “Bedlam Boys” was created in Boston in 1998 where Foster worked roughly at that time through 2001.

Putting “roughly” first indicates the work was done roughly, not that the date is an approximation. Likewise, saying “and” rather than “through” would imply two separate times when Foster lived there. Neither, I assume, are what you meant.

The link to the comment at Tamino no longer works. All the discussion of Jolliffe’s views, including Jolliffe’s own comments, seems to have disappeared. This seems a pity. Did anyone keep any of the comments for the record?

I think that ends any discussion as to whether Tamino has an “open mind”. It’s very firmly shut, even to the extent of deleting evidence that Tamino is ever wrong.

Have you any evidence to support this accusation?

Steve: Enough on this. Tamino has explained the problem with the past comments and this does not pertain to the question of whether or not he has an “open mind” – which is not something that I wish to waste bandwidth here discussing. I note that the past comments still remain unavailable on a number of threads that he undertook to fix.

Tamino says here that a WordPress update caused comments not to be displayed on threads where comments were closed, which (in addition to past Open Threads) includes the Principal Components thread and, among others, a thread where Tamino got pummeled on his explanations of why there was no post-1989 data for several Peruvian and Bolivian sites (notwithstanding the fact that the data was easily available online).

2 Trackbacks

[…] That is the Gell-Mann Amnesia effect. I’d point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. climate audit […]