For example, the world of complementary and alternative medicine (CAM) divides the medical community. Orthodox medicine mostly rejects papers about reflexology, iridology, and acupuncture treatment that invokes invisible pathways (meridians) of qi. CAM is served by a separate class of journals that have little overlap with the more mainstream medical literature. In this instance, ideas are incommensurable.

Unfortunately, the climate science community has been far more accommodating, admitting the paleoclimate equivalent of alternative statistics into orthodox journals. The wider climate science “community” is placed in the awkward position of trying to reassure the public that other parts of their field are, in fact, based on science, while, at the same time, not only not disavowing, but actively defending paleo-phrenologists and the meridians of qi converging on bristlecones in California and the magic larches in Yamal. Given that strip bark bulges, which are most likely merely mechanical, are interpreted as expressions not just of local temperature and precipitation but of world “instrumental training patterns” or “climate fields”, phrenology is a surprisingly apt term.

The problems of strip bark standardization were being discussed in the thread where Climategate was first mentioned – a thread which contains relevant illustrations of the problems in trying to fit strip bark bulges into any statistical framework, let alone the statistical framework stated to underpin MBH98. The picture below shows the sort of phrenological bulge that underpins the strip bark Hockey Stick (see here for further discussion). There is convincing evidence that such bulges are present in strip bark chronologies – one of the reasons why the NAS panel said that they should be avoided in temperature reconstructions. Gavin and Tamino can huff and puff all they want about the 4th principal component, but this is the sort of data that they are importing under the guise of the “right” number of principal components.

All their talk about the “right” number of principal components is simply sleight-of-hand to confuse you – when you watch the pea, the entire purpose of the high-falutin talk about principal components is to “get” strip bark bulges into the reconstruction.

This is an old debate, but the only thing that the Team moves is the pea under the thimble.

MBH98 stated:

Implicit in our approach are at least three fundamental assumptions. (1) The indicators in our multiproxy trainee network are linearly related to one or more of the instrumental training patterns. In the relatively unlikely event that a proxy indicator represents a truly local climate phenomenon which is uncorrelated with larger scale climate variations, or represents a highly nonlinear response to climate variations, this assumption will not be satisfied.

This, of course, is a large part of the problem with strip bark bristlecones (and YAD061 and its cousins). Actually, the problem with strip bark trees looks even worse – it seems very possible, even likely, that the 6-sigma bulges in strip bark widths are purely mechanical, arising from the formation of strip bark itself. However, these 6-sigma bulges become proof in the hands of paleo-phrenologists using their own alternative statistics.

The failure of the most critical MBH proxies – strip-bark bristlecones – to meet the assumptions of their statistical model was stated as early as our 2004 Nature submission (there is compelling evidence that Jones was the third and very antagonistic reviewer), where we stated:

The NOAMER PC1 thus gets its hockey stick shape from the Graybill-Idso sites, which exhibit a nonclimatic response and/or a nonlinear response to 20th century temperature. Since MBH98 states (p. 780) that their method requires the assumption that proxies exhibit a linear response to temperature, the Graybill-Idso sites, explicitly acknowledged as problematic in Mann et al (1999) (ref. [13]), should have been disqualified as contributors to the NOAMER PC1 in MBH98, let alone as the main determinants of its shape.

Much effort has been spent by paleo-phrenologists to frame the issue as the “right” number of principal components to retain – as opposed to the underlying issue as we had framed it – whether the assumptions of the underlying statistical model had been satisfied. Indeed, we noted that MBH99 had even acknowledged the failure of stripbark bristlecones to satisfy the assumptions of their model:

Mann et al. (1999) themselves pointed out, with reference to these proxies: “A number of the highest elevation chronologies in the western U.S. do appear, however, to have exhibited long-term growth increases that are more dramatic than can be explained by instrumental temperature trends in these regions.”

With the inconsistency that so characterizes the field, after conceding that bristlecones do not meet the assumptions of their statistical model, Mann proceeded to use them anyway. (Despite statements in MBH98 that the reconstruction was “robust” to the presence/absence of all dendro proxies, MBH98 was not “robust” to the presence/absence of bristlecones.) Thus, instead of not using bristlecones because they failed to satisfy the assumptions of the statistical model, Mann purported to “adjust” the strip bark bristlecone chronologies – an adjustment convincingly criticized by Jean S last year. Mann’s methodology, here as elsewhere, belongs to what can only be described as alternative statistics, a discipline that, as noted above, has found a home in the climate science sections of otherwise orthodox journals.

Mann responded to our observation that Graybill strip bark bristlecones did not meet the fundamental assumption of his methodology by invoking a supposed relationship to “instrumental training patterns” as opposed to local temperature and precipitation:

MM04 demonstrate their failure to understand our methods by claiming that we required that “proxies follow a linear temperature response”. In fact we specified (MBH98) that indicators should be “linearly related to one or more of the instrumental training patterns”, not local temperatures.

(Update – Jul 25-26: the criticism of Mannian teleconnections is not refuted by pointing to ENSO. Individual trees respond to local temperature and precipitation etc.; they do not respond to abstractions like a PC3. Further, the problematic 6-sigma strip bark bulges that characterize Team reconstructions are not a linear response to climate at all.) Roman Mureika expresses the point in a comment as follows:

What the climate scientists don’t seem to understand is that for teleconnections to be usable in a scientific fashion, there must be a specific real identifiable physical effect which operates at the proxy location. This effect is clearly not local temperature since the proxy has not responded to that. To further assume that this unidentified effect is related in an appropriate equivalent quantitative form to the proxy measurements is a fiction which lends itself to the cherry picking of spuriously correlated series.

[end – update]
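Mureika’s point about the “cherry picking of spuriously correlated series” is easy to demonstrate numerically. The sketch below is entirely synthetic – the “proxies” are plain random walks carrying no signal whatsoever, not any actual chronology – yet screening them against a target on correlation alone reliably turns up an apparently “skillful” series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: a "target" series and 1000 random-walk
# "proxies" that, by construction, contain no signal at all.
n_years = 100
target = np.cumsum(rng.normal(size=n_years))
proxies = np.cumsum(rng.normal(size=(1000, n_years)), axis=1)

# Correlate every proxy with the target and keep the best one.
r = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
best = np.abs(r).max()

# Screening on correlation alone "finds" a spuriously skillful proxy.
print(f"best |r| among 1000 signal-free proxies: {best:.2f}")
```

Persistent (trending or autocorrelated) series correlate spuriously with one another far more often than white noise does, which is why selection by correlation, without a physical mechanism at the proxy location, proves nothing.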

In my opinion, if climate scientists in other parts of the community took pains to disavow paleoclimate meridians of qi and alternative statistical methods used to buttress them – which, after all, are an important part of the public face of climate science – there would have been less fall-out for the rest of the discipline in the wake of Climategate.

When the NAS panel said that strip bark bristlecones should not be used in temperature reconstructions, this should have put an end to the use of Graybill bristlecones. However, this didn’t happen. Wahl and Ammann totally ignored the recommendations of the NAS panel, even though their paper was not finally published until a year after the panel reported; the companion paper, Ammann and Wahl 2007, wasn’t even submitted until after the NAS panel. Other members of the Team also continued to use strip bark after the NAS panel, e.g. Hegerl et al 2007, Juckes et al 2007 and Mann et al 2008.

Wahl and Ammann, as discussed in past CA posts, is a sustained exercise in Texas sharpshooting. Their efforts to benchmark RE significance were, of course, a singular contribution to the Texas sharpshooting literature. But most of the rest of their article consists of variations on the theme.

Even the longstanding issue of 2 or 5 PCs comes down to Texas sharpshooting. As Jean S reminded readers at Bishop Hill, there was no evidence that Mann used Preisendorfer’s Rule N in determining the number of retained PCs in MBH98. Indeed, the explicit language of the article indicates another rule. Mann has refused to provide source code evidencing the use of this rule in MBH98. Invoking Rule N after the fact is simply one more example of Texas sharpshooting – what Wegman called “no statistical integrity”.
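For readers unfamiliar with it, Preisendorfer’s Rule N is a Monte Carlo selection rule: retain only those principal components whose eigenvalues exceed what random data of the same dimensions would produce. The sketch below is a generic illustration on synthetic data – the toy series, noise levels and parameters are illustrative assumptions, not the MBH98 configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def rule_n(X, n_sims=500, quantile=0.95):
    """Generic sketch of Preisendorfer's Rule N: retain the leading PCs
    whose normalized eigenvalues exceed the chosen quantile of
    eigenvalues obtained from random data of the same shape."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    eig = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]
    eig = eig / eig.sum()

    # Eigenvalue spectra from pure-noise data of the same dimensions.
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.normal(size=(n, p))
        e = np.sort(np.linalg.eigvalsh(np.cov(R, rowvar=False)))[::-1]
        sims[i] = e / e.sum()
    thresh = np.quantile(sims, quantile, axis=0)

    # Count leading eigenvalues that stand above the noise threshold.
    k = 0
    while k < p and eig[k] > thresh[k]:
        k += 1
    return k

# Toy data: genuine patterns buried in noise among 10 series.
t = np.linspace(0, 1, 200)
signal = (np.outer(np.sin(2 * np.pi * t), rng.normal(size=10))
          + np.outer(t, rng.normal(size=10)))
X = signal + 0.5 * rng.normal(size=(200, 10))
print("PCs retained by Rule N:", rule_n(X))
```

The rule is a selection criterion, nothing more; applying it (or any other retention rule) only after seeing which choice delivers the desired answer is exactly the after-the-fact target-painting described above.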

Gavin Schmidt’s inline responses to Judy Curry here rely heavily on Wahl and Ammann (2004, 2005, 2006, 2007) and include a complaint that we haven’t published a rebuttal of Wahl and Ammann in the peer-reviewed litchurchur.

Obviously I’ve commented on Wahl and Ammann at length at Climate Audit. I recognize that these comments haven’t been peer reviewed by Jones, Santer, Mann and their associates, but they are still comments that I believe to be thoughtful and ones that are worth reading by someone interested in the topic. There is a separate left-frame category for Wahl and Ammann.

Second, it is very much my belief that, if the points made in these threads and elsewhere are correct (and I believe them to be), then these are the sorts of things that specialists in the field, employed to do these sorts of studies, should be responsible for knowing whether or not I’d written the threads. That I’ve commented should be an assistance to them, but surely not a prerequisite.

Third, although Schmidt complains that we haven’t rebutted Wahl and Ammann in the litchurchur, this is not entirely true. McIntyre and McKitrick (E&E 2005) rebutted many, if not most, of the points at issue in Wahl and Ammann. This may seem a little surprising given that MM2005 (EE) was published prior to Wahl and Ammann. Nonetheless, it is so.

All the key arguments of Wahl and Ammann 2007 – bristlecones in a lower-order PC4, two versus 5 PCs, Mannian inverse regression without PCs – were first put forward in the Mann response to our re-submission, which I’ve placed online here – see, in particular, Mann’s cover letter.

These arguments from Mann’s 2004 response to our Nature submission featured prominently in multiple threads in the opening of realclimate. (It was these pre-emptive attacks on us that led to the opening of climateaudit as a blog in late January 2005, thanks to the suggestion and initiative of John A.)

Although Wahl and Ammann did not cite either the Mann submission to Nature or the realclimate posts (and conspicuously do not even acknowledge Mann), virtually all the main arguments in Wahl and Ammann derive from these prior publications by Mann. Isn’t the failure to acknowledge such priority a form of plagiarism?

In MM2005 (EE), we reported on our examination of the various permutations and combinations of correlation and covariance PCs, the impact of 2 or 5 PCs, etc, that had been previously raised, plus a few others. If you go through the salient cases of Wahl and Ammann, you’ll find that they are already considered in MM2005 (EE). Of course, this isn’t reported either. (Wahl’s awareness of this priority is demonstrated in his Climategate correspondence with Briffa.)

Doubtless it would have made things easier for people if we’d responded to Wahl and Ammann/Ammann and Wahl (the SI to which only became available in summer 2008) and it’s on my list of things to do. But the fact that I haven’t attempted to run the gauntlet of Team reviewers in the litchurchur doesn’t mean that I haven’t responded to Wahl and Ammann. The points have been responded to at considerable length.

Schmidt also grasped the verification r2 nettle – a nettle that he would have been better off leaving ungrasped. This was a battleground issue in 2005. Judy Curry had written:

just because no single significance test is objectively the best in all circumstances does not mean that you can cherry pick significance tests until you find one you like and ignore R2.

Gavin Schmidt replied:

[Response: This is simply insulting. You have absolutely no evidence that this was the case. The RE/CE statistics are perfectly fine at describing what the authors thought were relevant and have a long history in that field (Fritts, 1976) and as we have seen the PCA issue is moot. The idea that people went looking for ‘bad statistics’ to fix their results is without merit whatsoever. Please withdraw that claim.]

Well, it may be insulting, but the evidence is what it is.

Fritts, 1976 does not stand as an authority for not using verification r2, as it is a test that Fritts recommends prior to doing the RE test. Secondly, Schmidt’s claim that Mann reported an RE/CE pair is untrue. Mann did not report CE results for MBH98. They were first reported in MM2005 (GRL), where we observed that the AD1400 step failed the CE test, as it had the verification r2 test.
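For reference, the three verification statistics at issue can be written down in a few lines, using the conventional definitions from the dendroclimatic literature (e.g. Cook, Briffa and Jones 1994): RE benchmarks squared error against the calibration-period mean, CE against the verification-period mean, and verification r2 is the squared correlation over the verification period. The toy series below are synthetic and chosen only to show how a “reconstruction” that merely tracks a shift in mean can post a positive RE while its verification r2 is negligible:

```python
import numpy as np

def verification_stats(obs_ver, est_ver, cal_mean):
    """RE, CE and verification r2, conventionally defined.
    obs_ver / est_ver: observed and reconstructed values over the
    verification period; cal_mean: calibration-period mean of the
    observations."""
    sse = np.sum((obs_ver - est_ver) ** 2)
    re = 1 - sse / np.sum((obs_ver - cal_mean) ** 2)
    ce = 1 - sse / np.sum((obs_ver - obs_ver.mean()) ** 2)
    r2 = np.corrcoef(obs_ver, est_ver)[0, 1] ** 2
    return re, ce, r2

# Synthetic example: observations shifted ~1 unit above the
# calibration mean, and an "estimate" that captures the shift but is
# otherwise uncorrelated with the observations.
rng = np.random.default_rng(2)
obs = 1.0 + rng.normal(size=50)
est = 1.0 + 0.1 * rng.normal(size=50)
re, ce, r2 = verification_stats(obs, est, cal_mean=0.0)
print(f"RE={re:.2f}  CE={ce:.2f}  r2={r2:.2f}")
```

This is why the three statistics are complementary rather than interchangeable: RE rewards getting the mean level right, while CE and r2 test whether the reconstruction tracks the year-to-year variation at all.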

However, the most compelling evidence of Mann reporting a verification r2 in a step where it was favorable was, of course, in MBH98 itself, where Figure 3b is clearly labeled “verification r2” – see below:

Figure 1. Excerpt from MBH98.

While the verification r2 is illustrated geographically in the above graphic, and MBH98 stated that they considered r2 statistics, the SI to MBH98 showed only the RE results and not verification r2 statistics. Mann’s source code, archived in response to the House committee, showed that he calculated verification r2 in the same step as verification RE, a point made at the time and later presented to the NAS panel.

The original Wahl and Ammann submission likewise did not include verification r2 results (even though they had issued a press release that our results were “unfounded”). Our codes and the Wahl-Ammann code reconciled – Wegman waggishly observed that it was more correct to say that Wahl and Ammann had replicated our results than Mann’s. As a reviewer of Wahl and Ammann, I asked that they include verification r2 results. They refused, citing their GRL article as authority (without disclosing to Schneider that their GRL article had already been rejected).

In December 2005, I suggested to Ammann that we write a joint paper clearly summarizing points of agreement and disagreement. He refused, saying that it would be “bad for his career”. This has led to a great deal of wasted time on everybody’s part. Again, he refused to report verification r2 results. These were reported only after an academic misconduct complaint was filed against Ammann. Needless to say, Ammann got the same negligible verification r2 results that we had.

The NAS offered to examine the verification r2 issue, but Cicerone removed it from the terms of reference of the NAS panel. Nonetheless, panelist Christy asked Mann whether he had calculated verification r2 for the 1400 step and what the result was. Mann denied calculating the verification r2, saying that this would be a “foolish and incorrect” thing to do. Of course, it was known at the time that he had calculated verification r2 statistics, since it was in his code and illustrated for the AD1820 step.

The “dirty laundry” email (in which Mann sent to Briffa and Osborn the residuals that he later refused to send me) had not been available to the NAS panel. With these residuals in hand (or even if the actual reconstruction steps had been made available), it was child’s play to see the failed verification r2, CE and other results.

As it was, the NAS panel was seemingly dumbfounded by Mann’s bald-faced answer and did not follow up. There was supposed to be an opportunity for public discussion after presentations. However, Mann fled the room before anyone from the public (e.g. me) had an opportunity to ask. I sharply criticized the NAS panel for sitting like bumps on a log and not following up. Nychka came up to me afterwards and said that, just because they didn’t say anything didn’t mean that they didn’t notice. But they didn’t say anything in their report on the topic either, so they might as well not have noticed.

The Wahl and Ammann attempt to justify the failed verification r2 test was itself one more instance of the Texas sharpshooting. Once the failed verification r2 was exposed (and only after its exposure by third parties), they attempted to re-frame the question by now arguing that verification r2 wasn’t a relevant statistic – notwithstanding their use of the statistic in the illustration when it was to their advantage. Schmidt may find this impolite, but facts are sometimes stubborn.

The failure of the MBH verification r2 results was not as small a result at the time as now portrayed by the Team. Eduardo Zorita told me that his view on MBH changed once he knew of the failed verification r2. If Mann wanted to argue that the failed verification r2 didn’t “matter”, the failed results should have been reported and discussed in the original article.

Thus, while reconstructions relying on strip bark bulges of California bristlecones and magic Yamal larches have been published in orthodox scientific journals, this does not change the fact that the underlying analyses do not rise above phrenology.

Re: Kevin Lohse (Jul 25 14:44), this remark is irrelevant to the subject of this thread, and is a cheap pot-shot at another issue (homeopathy) that is being ridiculed as unfairly as Steve is ridiculed by the Team and their supporters in high places. The unfairness becomes clear if you examine the evidence for homeopathy, not just the case set up by its detractors. So too with MM.

Caspar Ammann (NCAR page) seems to be right up there with the Team leaders in refractory, self-destructive bull-headedness. A fine poster-child for what Clive Crook of the Atlantic describes as:
“The climate-science establishment … seems entirely incapable of understanding, let alone repairing, the harm it has done to its own cause.” — Climategate essay (recommended)

This post represents the very best of Steve. Clear, concise but precise, and to the point. It continues to amaze me that, on the FACTS, the team have managed to snow the mainstream media and many serious people. They have managed this despite the publication of the emails (whose existence and substance could not have come as a shock to long-time readers at CA). Steve’s work does not necessarily bring into question AGW: it merely raises serious doubts in regard to one class of proxies. The team’s persistent hostility to Steve, and inability to deal seriously with the questions that he raises, on the other hand, creates doubts about their own integrity, particularly given the background of the emails. By extension, all scientists who propose theories around AGW are tarred with this brush (and many of them seem to want to give the impression that they deserve it!) I believe Climategate has led to a global skepticism that has hardened, despite the placatory findings of the various inquiries. Even the most sympathetic observer (and there are many in the media) must have a queasiness about the whole process. This situation cannot last much longer. It’s time for these folk to come clean and deal seriously with the issues that Steve has raised.

Thank you again Steve, and also Dr Curry. The comments at RC regarding Dr Curry’s remarks have been very insulting and personal. Considering the team’s snarky responses to everyone who chooses to ask questions and voice doubts (Dr Curry in this case), they seem to think slowly alienating everyone is in their best interest.

It needs to be said every time this issue comes up that the “RE statistic” is an invention of climate science. If you ask a professional statistician about the RE statistic, she will say she has never heard of it. In all other branches of science, r2 is used, but for climate science, apparently, r2 is not appropriate.
Phrenology is an appropriate analogy.

Gavin Schmidt’s inline responses to Judy Curry here rely heavily on Wahl and Ammann … 2007 and include a complaint that we haven’t published a rebuttal of Wahl and Ammann in the peer-reviewed litchurchur. … Doubtless it would have made things easier for people if we’d responded to Wahl and Ammann/Ammann and Wahl (the SI to which only became available in summer 2008) and it’s on my list of things to do.

It’s on my list of things done a long time ago, which might eventually see the light of day if my coauthor would get around to returning the draft. Ahem.

Ross, this stuff is killing you guys in the PR department. I was wondering why you guys never replied to that paper, come on.

Steve: I tend to work on things that interest me. The Wahl and Ammann paper doesn’t interest me because (a) it is so mendacious and (b) MM2005 (EE) already pre-buts Wahl and Ammann and, in that sense, we’ve already responded to it. The reason that MM2005 (EE) pre-buts WA is that they plagiarized Mann’s 2004 Nature response, which was presented in Mann’s early realclimate posts – posts that were discussed in MM2005 (EE).

Wegman observed that Wahl and Ammann 2007 had “no statistical integrity” and one would have hoped that people in the field would be able to understand something on their own but, as you observe, they seem incapable of understanding the issues without outside assistance. And so I guess that we’ll have to do something about it.

I am finally starting to understand why you have such difficulty getting traction on the most important issues.

You appear to still, after all this time, be under the impression that what we are participating in here is a scientific debate.

In a scientific debate, each false point has only to be disproven once. After that, the debate moves on to other points that are not dependent on the one that is disproven.

Many of the points you have called into question have been disproven, both by you and others, multiple times and (in some cases) in multiple ways.

If this were a scientific debate, that would be the end of it.

But this is not a scientific debate. It is a form of verbal war.

In a war, if you fire an artillery shell at the enemy — even if it’s perfectly aimed and hits its target — if the enemy position is still intact afterward, you don’t leave the field at that point. Nor do you stand there and say, “Well that was a perfect hit. Mission accomplished,” and wait for the enemy to fire back at you.

You reload, and fire again. Or you order a charge of the position and try to take it over.

In other words, if the enemy is still standing and he refuses to voluntarily relinquish the position you are attacking, then you have to find a way to force him off that position.

Otherwise, you will lose the war. Remember, you’re the attacker, he’s the defender. All he has to do to win is to avoid being dislodged.


The struggle that we are witnessing between two sides, both of which seek above all to somehow get the opponent to admit error, is taking place according to the rules of warfare. This is because there is one side that refuses to adhere to the rules of science.

If you are Steve, and you want this issue to end with you being acknowledged as the victor, you have a problem. You have “won” the scientific game. But the opponents will not concede. They are continuing to try to play the game. They are knocking all your pieces around on the chess board. The rules of science do not allow this.

But the rules of war do.

Thus, you have two options. Get up, take your chessboard home, and refuse to talk about their errors any more. They will still be playing the game, and in the mind of most scientists (as well as the public) they will have won, because most will perceive that a forfeiture has occurred. Again, they do not acknowledge the rules that you believe to apply to both sides.

Or, keep blasting away with maximum firepower, thereby (hopefully) progressively forcing them from their positions, against their will. This has happened in science before; it’s not good or pretty, but it’s nothing new. The stakes for the public are higher than usual, this time.

BTW, I said “hopefully” in the last paragraph because even though your shots may be on the mark, they can bring up resources from the rear to try to fortify their (perhaps unwisely) occupied position. Thus, even though they may be making a mistake by holding their ground right now, they also perceive that their attackers are making a mistake by refusing to keep firing after they’ve found a soft spot.

With due respect, I fear you may have understood even less than Steve has about this.

I am no stranger to political wars. One thing that has shaped my view on this is that I have watched many good experts lose their moral compass as they started trading their conscience for influence.

If you are Steve, and you want this issue to end with you being acknowledged as the victor, you have a problem.

I guess this would be the motivation for Steve’s work that I had missed then. My assumption was that so far, Steve has been more interested in checking that people adhere to sound scientific methods than proving a particular point. I doubt that well will run dry anytime soon. :)

You would probably argue that Von Clausewitz is more on your side than on mine, but he also said this:

We must never lack calmness and firmness, which are so hard to preserve in time of war. Without them the most brilliant qualities of mind are wasted. We must therefore familiarize ourselves with the thought of an honorable defeat. We must always nourish this thought within ourselves, and we must get completely used to it. Be convinced, Most Gracious Master, that without this firm resolution no great results can be achieved in the most successful war, let alone in the most unsuccessful.

All I’m saying is that even in a war (or especially then!), auditors are desperately needed, and their function is decidedly different from that of the warrior.

Steve: I’ve done things because I was interested in them, not because I am trying to change things. I have never had expectations in that direction and don’t worry much about such things.

As to any personal recognition, I’ve always been guided in my expectations by the following on the “5 stages of a project” in any institution:
1. Euphoria
2. Disenchantment
3. Search for the Guilty
4. Persecution of the Innocent
5. Rewarding of the Uninvolved

I don’t think Steve thinks or acts in terms of winners or losers. From my reading, over the past 9 months only, he’s interested in getting those who make declarations about AGW “proof” (or disproof) to release their facts and methods for audit. If the worst scenarios can be supported, bring them on, but if the data and methods are kept secret, I really doubt that ANY open-minded person would accept them. Good discussions on religion / politics / environment on Jeff ID’s blog – you may identify something which sheds light on your dilemma.

It looks like in Fig 1 above that the red includes everything above 0.2. Really. You can get an r-squared of 0.3 by fitting a linear regression to random data (check “Applied Regression Analysis”). I guess it “doesn’t matter” that your verification is bunk when your answer is virtuous.
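The point is easy to check by simulation. In this sketch (pure synthetic random walks, no climate data), a substantial share of regressions between completely independent series clears an r2 of 0.3:

```python
import numpy as np

rng = np.random.default_rng(3)

# Regress one random walk on another, independent one, many times,
# and record the r2 from each trial.
n_trials, n_obs = 1000, 50
r2 = np.empty(n_trials)
for i in range(n_trials):
    x = np.cumsum(rng.normal(size=n_obs))
    y = np.cumsum(rng.normal(size=n_obs))
    r2[i] = np.corrcoef(x, y)[0, 1] ** 2

# A non-trivial fraction of purely random pairs clears r2 = 0.3.
print(f"share of trials with r2 > 0.3: {(r2 > 0.3).mean():.0%}")
```

This is the classic spurious-regression effect for trending series: with persistent data, a seemingly respectable r2 on its own is weak evidence of any real relationship, which is why a binned map with everything above 0.2 shown in one color conveys so little.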

Bear in mind that Fig 1 presented here (Fig 3 of MBH98) is for the 1820-onwards step – which includes many long instrumental records. So some of these are regressing instrumental data against instrumental data… and yet he is unwilling to show any gradation between 0.2 and 1.0! Well, who knows, perhaps the figures are incredibly good and Mike is just being modest. But then again…

Love the “meridians of qi”. What I find so curious is how many of these guys are self-taught (nothing wrong with that at all) but then claim the insider’s robe and sneer at any new self-taught “outsiders”. Take Steve Schneider, Hansen, and others who had no training in climatology but picked it up. Or the dendros with no statistical training. Or Hansen again claiming he has an inside track on predicting species extinctions when he is a physicist. Of course in reality any real scientist must be self-taught or he will quickly fall behind, so the premise that these guys are special initiates and CA readers (or, say, Steve or Ross) are unwashed and have cooties is just so… immature.

“In my opinion, if climate scientists in other parts of the community took pains to disavow paleoclimate meridians of qi and alternative statistical methods used to buttress them – which, after all, are an important part of the public face of climate science – there would have been less fall-out for the rest of the discipline in the wake of Climategate.”

touches on an important point that I am very sensitive to. Yes, there are a host of issues with the IPCC, but it is this paleoclimate reconstruction issue that is dragging the entire field of climate science into the mud.

While I am not an expert on this subject, I have read many of the original papers plus the North and Wegman reports. Your arguments and those of Montford make much more sense to me than the Dummies Guide to the Hockeystick, Tamino’s review, and Gavin’s response. If they can’t do better than this, then they have definitely lost the credibility war. This is the point I tried to make in my little review over at RC, but after reading the rest of their thread and now your post, well, they are now deeper in the hole than they were prior to Tamino’s review.

I would agree strongly and point out that one’s attempt to explain a topic reveals whether one really knows what one is talking about or not. If the explanation sounds like gobbledygook, this is not a good sign. Every time these guys try to defend the hockeystick, they sound irrational, and there is so much handwaving they look like they are trying to hail a taxi at rush hour.

You have decided to have a public fight in an area of something much less than your expertise. With your credentials, why would you decide to fight on denier blogs? Wouldn’t it be more appropriate for you to be having this conversation amongst your professional colleagues?

Gavin Schmidt knows the climate science field down cold and you showed that you know much less than he does in this conversation.

The article shows that there are several other reconstructions generally coming to the same conclusion. Wouldn’t a professional of your background understand the significance of this? It isn’t just Mann’s work coming to this conclusion.

So why are you publicly fighting on blogs? Get yourself organized to make a valid argument before taking on the experts in the field.

Steve: Again, you have to watch out for disinformation from the Team. Gavin’s knowledge of reconstructions is not reliable. Of the nine reconstructions in the IPCC spaghetti graph, all but one include strip bark or Yamal or both. In standard Team methods, one such chronology is sufficient to imprint the reconstruction.

You also have to watch Tamino’s examples. One was Kaufman et al 2009 – where Yamal has a very marked impact. The other was Mann et al 2008, which continues to use Graybill bristlecones. Its non-dendro reconstruction uses the upside-down and construction-contaminated Tiljander series – they purport to show that Tiljander doesn’t matter by doing a reconstruction with bristlecones, and vice versa. Hardly convincing.

Jeff, you misinterpret what I am doing. I am not arguing for one side or the other. I am trying to remind both sides of the debate of the following, which i further explained on a post on climateprogress, which i reproduce here:

——-

Consensus on a scientific issue is established as science evolves through the following successive stages (Funtowicz and Ravetz, 1990):
1. no opinion with no peer acceptance;
2. an embryonic field attracting low acceptance by peers;
3. competing schools of thought, with medium peer acceptance;
4. a dominant school of thought accepted by all but rebels;
5. an established theory accepted by all but cranks.

At the time of the TAR, MBH reflected an embryonic field (level 2). There was very little justification for any kind of consensus statements with “likely” and “very likely”, even by the standards of IPCC’s guidelines. By the time of AR4, the field had arguably matured to level 3, a more established field with competing schools of thought. The conflict that has ensued over the high confidence levels in the IPCC conclusions and the attempts to establish a premature consensus is described in Montford’s book.

The response of a rational person considering the evidence from both sides (a necessity for level 3 science) is to weigh the evidence from both sides, make each side aware of the other’s arguments, and emphasize the need to refute the other side’s arguments in justifying one’s own thesis.

The response of an irrational person is to declare level 2 or level 3 science as “settled science”, “a fact on par with the theory of infrared radiative transfer of gases.”

———–

The people at climateprogress seem very puzzled by this. It’s called science.

If you go to the link that I have provided there are several reconstructions that seem to reach the same conclusion as MBH. Why not argue about their methods also?

So again you are picking a fight on a blog, of all things, in an area not of your strength. As far as I can tell, you aren’t really qualified to write a critique in a professional scientific manner. It would possibly take you several years to pull it off.

What are you doing here?

Why not go through the professional channels?

Gavin has to be professionally accurate in his own postings or his fellow professional scientists will call him on it. If a lot of your statements are proven inaccurate, as he says they are, will you publicly retract them?

“If you go to the link that I have provided there are several reconstructions that seem to reach the same conclusion that MBH have. Why not argue about their methods also?”

Why don’t you learn how to read the citations? They are listed below. You can search CA and see a couple of things. The series in question are NOT fully independent. If several reconstructions use the same bogus data, then they hardly support each other; rather, they support the same bogus answer. There are questions of data and questions of method.

Further, WHY try to change the conversation to other topics? The topic is the methods of Mann. Please stick to the topic, like a good professional.

To me, the tragedy of the paleoclimate reconstruction issue is that, for all the sound and fury, it is largely irrelevant and takes away a lot of time and resources from what is relevant. Don’t get me wrong: having been raised and pushed forward with such confidence as smoking gun proof of AGW, it had to be addressed.

But in the end it does not tell us what we need to know –what has caused the late 20th century warming, is it likely to continue, and at what pace? That natural variability *could* have produced x% of the current warming is not proof that it *did* produce x% of the warming *this time*.

I do not think that I will ever find algebraic solutions that attempt to solve for the CO2 influence persuasive. Do it the other way around: get all the other stuff down convincingly and solve for natural variability instead, and I might be convinced.

The claim is made that past climate was stable, and recent warming is unusually fast – thus it MUST be due to humans. That is reason 1 for the fight.
There is something else we want to know – is current warming unprecedented or not (in magnitude, not rate)? If the MWP was warmer than today, and the effects on humanity were positive (as history seems to indicate, at least in Europe and China where records are good) and polar bears lived through it, then why are we panicking? This is the real reason the defense of hockey sticks is so desperate.

“One more note of importance. The manuscript actually started out as a completely different work, in which I developed a new detrending method that resolves a number of the existing problems. However, when that work was essentially done, I realized that it made very little sense to propose a new and better method when it wasn’t even clear to most people that some of the problems I was attempting to fix existed in the first place. You must clearly define/describe an existing problem before proposing a sophisticated fix for it. It would take me one entire paper, at least, just to lay out exactly what these problems were, their magnitude, their cause, and so forth; I had to write that paper first. However, science in general is not too keen on this approach; it doesn’t look very good frankly. In this case, doing so involves admitting the failure to address–or even recognize–certain fundamental problems that have led to many papers over the years–including some very high profile ones–that range from questionable to entirely worthless. This situation is just not acceptable, period.”

The cracks are appearing and the team vessel is holed beneath the water line. In my opinion they have picked a fight with (very probably) one of the most patient and methodical of adversaries. I also think that “they ain’t seen nothin’ yet”. By electing to defend the indefensible they dig themselves deeper and deeper into the mire. By underestimating their opponent and not fully realising (or accepting) just how damaging the e-mails really are, they continue to make errors of judgement. Long may they continue.

“Gavin Schmidt [complained] that we haven’t published a rebuttal of Wahl and Ammann in the peer-reviewed litchurchur.”

Of course, if you had, you might have received a response such as that given recently by Rutherford et al. [ http://holocene.meteo.psu.edu/shared/articles/RMWAcomment_2010_jclim_smerdonetal.pdf ]
“we are puzzled as to why, given the minor impact the issues raised actually have, the matter wasn’t dealt with in the format of a comment/reply. Alternatively, had Smerdon et al. taken the more collegial route of bringing the issue directly to our attention, we would have acknowledged their contribution in a prompt corrigendum. We feel it unfortunate that neither of these two alternative courses of action were taken.”

“Gavin Schmidt’s inline responses to Judy Curry here rely heavily on Wahl and Ammann (2004, 2005, 2006, 2007) and include a complaint that we haven’t published a rebuttal of Wahl and Ammann in the peer-reviewed litchurchur.”

“… we are puzzled as to why, given the minor impact the issues raised actually have, the matter wasn’t dealt with in the format of a comment/reply. Alternatively, had Smerdon et al. taken the more collegial route of bringing the issue directly to our attention, we would have acknowledged their contribution in a prompt corrigendum. We feel it unfortunate that neither of these two alternative courses of action were taken.”

I guess I’m puzzled as to why, in this case, the Team has not acknowledged Steve’s comments and published a prompt corrigendum, and instead are insisting that Steve publish his comments in a peer reviewed journal before they pay any attention to them. Always playing the victim, these boys are.

I urge CA readers to actually read all of Tamino’s Realclimate post. He’s not a pit bull or professional blogger, he’s a serious and well published climate scientist. The tree rings are a non issue in terms of verifying the extent of temperature increase: note that when the graph with the two tree ring studies that M&M don’t like is removed and the other 20 proxies and temperature readings remain, the change is almost imperceptible.

Briffa and bristlecones were never central to the conclusions of IPCC, since the data concerning temperature increase comes from many sources and is robust.

But then, some people like to deny that the glaciers are retreating, that Arctic ice is decreasing, and that species are moving north. Maybe some readers here will keep an open mind and accept what the science actually tells us. This debate is over, and it’s time to recognize the facts and move on. Then perhaps our grandchildren will have a chance.

Steve: you have to watch the pea under the thimble with Tamino. His example uses a 1983 Gaspe version. An updated version of Gaspe (unpublished but sent to me by one of their associates) did not have a HS shape. Nor is the shape of the Gaspe series replicated in other cedar chronologies. Instead of doing the calculation without the controversial strip bark PC1 and Gaspe, Tamino’s trick was to do it without the strip bark PC1 and the Stahle PC1 – while leaving the Gaspe artifact in. Sleight-of-hand.

If you are worried about climate change, I urge you to focus on what you believe to be the best arguments and discard arguments relying on this sort of bilge.

Mike, you were doing ok until the last paragraph, at which point I’m afraid you probably lost credibility with this audience. Few people here dispute that there is evidence of warming in the later half of the 20th century. The issue has always been whether that warming is statistically significant. To answer that question, you must know something about past variation over a few thousand years. That’s why the hockey stick is important and why it featured so large in IPCC.

In general, the issue with Mann-like reconstructions is that (1) the PCA method that they use is not supported by the statistical literature, (2) the criteria for accepting or rejecting proxies from the mix are inadequate, and (3) the divergence issue raises serious questions about tree rings as temperature proxies. Simply removing one or two series and re-running the remaining series through the short-centered PCA process is not persuasive.

Steve: the short-centered PCA introduced an interesting bias, but the issue is more than that. The underlying problem is the bristlecones – PCA results in them being kept segregated as a separate factor. In inverse regression, they then imprint the reconstruction as long as they are included in the regression as a segregated factor.

It may help also to understand the nature of the science. It relies on statistical techniques to establish results. These, crucially, depend for their validity on the elimination of bias. If a researcher has faulty data, then the results are undermined. If the researcher is selective in the data, then the results are undermined. If outliers are not eliminated, then the results are undermined. This all means that where there is extremely complex data, and problems of measurement, it is extremely difficult to establish conclusions that cannot be overturned. This is true both of economics and of paleoclimate.

I would contend that climate scientists, as practitioners of a junior science, need to learn from other disciplines.
– From accountancy, about sense-checking the data (see link below)
– From research into new drugs, about the necessity for more technicians to collect and collate data, and to experience dead-ends.
– From law, to distinguish between levels of evidence and to distinguish baseless rhetoric from cogent argument
– Most of all from statistical theory, where you will find your results have no validity unless you take active steps to eliminate bias. Even then, with complex data, your results may still be later undermined, despite passing a battery of tests.

For all of these reasons, we should accept that the results of research are tentative. We should recognise the limits of our knowledge. In recognising the boundaries, and establishing procedures to quickly identify error, paleoclimate may be able to move forward.

In addition to your excellent points, a further way to make progress it to recognize that when experiments are not possible per se, it may still be possible to evaluate the necessary assumptions. You can test how mineral composition in forams or other creatures changes with temp & salinity & nutrients, for example, or evaluate how long it takes for air bubbles in ice to become sealed off. Ignoring these possibilities leaves open uncertainties that can completely demolish one’s results.

Speaking of Michael Mann’s psychology, “meridians of qi” are dragon-currents, telluric ley-lines similar to Tantric chakras, invested with much mystical-magical significance. Given that objective scientific inquiry is not Prof. Mann’s forte, is it possible he reverts to necromancy unawares?

It appears to me that Gavin (#278) and Tamino (#302) have raised some specific technical issues that seem interesting on the surface and may or may not address some of the statistical issues raised in MM05. Their lack of specificity suggests hand waving – but conceptually they seem to raise legitimate issues. At least they have put something on the table that can be looked at by the resident CA experts. Addressing these issues makes sense in order to move the discussion forward.
Of course this leaves open the more fundamental issues of the proxies that are known to be poor or misleading, e.g., bcps, and meaningful physical interpretations of all the PCs that are extracted.

#278 – Gavin: There are obviously circumstances in which any correlative method (such as MBH or any other method that has been used) will fail. No-one is claiming that any method will work regardless of what the input data is! The question that we have to assess is how well they work given the input data that there actually is. To give an example, let’s imagine that all of the proxy data have non-climatic effects that cause millennial long trends that have nothing to do with what happened to temperatures or climate. Let’s further suppose that the high frequency components are absent or have relatively small amplitude. In such cases, you will get spurious correlation on the century scale between proxies even if there is no climate signal. This will produce erroneous structures in the reconstruction. Now, no-one thinks that tree ring records have millennial scale non-climatic trends (their problem is precisely the opposite, that the multi-century scale climate trends might be damped), the same is true for ice cores etc. So what is the persistence of the non-climatic noise? Since the actual proxies have a sample auto-correlation that depends on the actual climate signal as well as the non-climatic noise, using the sample auto-correlation overestimates the persistence due to the non-climatic part. It is not that climate related auto-correlation invalidates correlative methods (it doesn’t !), but that over-estimating the non-climatic persistence gives you an overly pessimistic view of the method skill under real world circumstances.

#302 – Tamino: I’m hardly the last word on this (!) but by my calculations yes, you can get a hockey-stick shape in the first PC by applying short-centered PCA to red noise. Actually there’s a tendency to get a “step-function”-like shape, but many would still call that a hockey stick. It even seems to me that PC#1 of short-centered red noise is likely to be hockey-stick shaped (especially if one calls step changes a hockey stick).

BUT — and this is a big one — how strong that PC#1 is likely to be (how much of the variance it accounts for) depends on the autocorrelation we impose on the red noise; the whiter the noise the weaker is PC#1. Yet even when I “jack up” the autocorrelation to ridiculously high values, the hockey-stick-shaped PC#1 still doesn’t come close to matching the strength of PC#1 from the MBH98 analysis of the NoAmer ITRDB proxies. By this criterion, the hockey-stick PC#1 for NoAmer ITRDB in MBH98 is demonstrably NOT from “mining” that pattern from red noise.
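The claim in the preceding paragraphs – that short-centered PCA applied to red noise tends to produce a hockey-stick-shaped PC#1 – can be checked with a small simulation. The sketch below is my own toy version, not anyone’s actual code: dimensions are loosely modeled on the MBH98 North American network, and the AR(1) coefficient is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_series, n_years, n_cal = 25, 50, 581, 79  # loosely after MBH98
phi = 0.7  # assumed AR(1) coefficient for the red noise

def pc1(X, mean_vec):
    """First principal component of X after subtracting mean_vec from each column."""
    U, S, _ = np.linalg.svd(X - mean_vec, full_matrices=False)
    return U[:, 0] * S[0]

def hs_index(pc):
    """Departure of the calibration-period mean from the rest, in std units."""
    return abs(pc[-n_cal:].mean() - pc[:-n_cal].mean()) / pc.std()

short_scores, full_scores = [], []
for _ in range(n_trials):
    # independent AR(1) red-noise pseudo-proxies, one per column
    X = np.zeros((n_years, n_series))
    X[0] = rng.standard_normal(n_series)
    for t in range(1, n_years):
        X[t] = phi * X[t - 1] + rng.standard_normal(n_series)
    full_scores.append(hs_index(pc1(X, X.mean(axis=0))))            # full centering
    short_scores.append(hs_index(pc1(X, X[-n_cal:].mean(axis=0))))  # "short" centering
print(round(np.mean(short_scores), 2), round(np.mean(full_scores), 2))
```

On average over the trials, the short-centered PC#1 shows a markedly larger calibration-period excursion than the fully centered one, which is the “mining” effect under discussion; note the simulation says nothing about the variance-fraction comparison Tamino raises next.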

It’s also clear that the data contains both noise and signal, so the autocorrelation of the data is greater than that of the noise. Hence the noise series used by MM were forged with autocorrelation higher than representative of tree-ring noise. But as I said, even when I jack up the noise autocorrelation it still doesn’t give a strong enough PC#1 to come close to that of MBH98.
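The point that the data’s autocorrelation exceeds that of the noise alone is easy to illustrate. In the toy series below (my own construction, purely for illustration), the “proxy” is a slowly varying signal plus white non-climatic noise; the sample lag-1 autocorrelation of the proxy is much higher than that of the noise component.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
t = np.arange(n)

# Toy model: a slowly varying climate signal plus white non-climatic noise.
signal = np.sin(2 * np.pi * t / 200)   # low-frequency "climate" component
noise = rng.standard_normal(n)         # white "non-climatic" noise
proxy = signal + noise

def lag1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

print(round(lag1(noise), 2), round(lag1(proxy), 2))
```

So fitting a noise model to the proxy’s sample autocorrelation attributes the signal’s persistence to the noise, which is the over-estimation Gavin describes.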

Finally: none of that has anything to do with being robust. What makes it robust is that you get essentially the same PC for NoAmer ITRDB data using the MBH98 procedure, or the MM PCA procedure, or fully normalized PCA (a la Huybers). Hence that PC is insensitive to changes in the chosen method of PCA, i.e., it’s robust.

That it would have been a better idea to use full-centered and normalized PCA (as Huybers recommends) is my opinion. That the result would have been the same is a fact.

And the point is doubly moot since recent work (Mann et al. 2008) uses a method that doesn’t involve any data reduction step for representing regional proxy networks.

So I noticed by glancing at the RC comment thread that rejecting “teleconnections” made some of them immediately reject Steve’s post as incompetent and rude; apparently, “teleconnections” are a well-known and accepted feature in climate science.

I’m not sure I fully understand, though. I am asking the question here rather than on RC, since I have absolutely no interest in being insulted with ad-hom attacks from moderators.

I accept that local events can have far-reaching and sometimes non-obvious effects on climate in other regions, and if that is what is meant by “teleconnections”, then I’m willing to buy it, but I don’t really see how one gets from there to accepting that proxies that locally do not seem to show a linear response to temperature, somehow would respond linearly to larger temperature patterns. Surely that assumption would at least have to be verified? One way to verify it would of course be to check if it holds for the period where we have a reliable instrumental record…

Is there some publication where this assumption is put to a rigorous test and found to be true?
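The verification proposed above – checking whether a proxy actually tracks local temperature over the instrumental period before admitting it – might look like the following in outline. The data here are entirely synthetic, invented for illustration; `passes_screening` and the 0.5 threshold are my own hypothetical names and choices, not any published protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 130  # assumed years of overlap with the instrumental record

# Synthetic local temperature: a trend plus weather noise.
local_temp = np.linspace(-1.0, 1.0, n) + 0.2 * rng.standard_normal(n)

# A proxy that responds to local temperature, and one that does not.
good_proxy = local_temp + 0.3 * rng.standard_normal(n)
bad_proxy = rng.standard_normal(n)

def passes_screening(proxy, temp, r_min=0.5):
    """Admit a proxy only if it tracks local temperature over the overlap."""
    return abs(float(np.corrcoef(proxy, temp)[0, 1])) >= r_min

print(passes_screening(good_proxy, local_temp),
      passes_screening(bad_proxy, local_temp))
```

The check is trivial to state, which is rather the commenter’s point: if a proxy fails it, an appeal to “teleconnections” has to explain why a series unresponsive to local temperature should respond to a remote one.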

I accept that local events can have far-reaching and sometimes non-obvious effects on climate in other regions, and if that is what is meant by “teleconnections”, then I’m willing to buy it, but I don’t really see how one gets from there to accepting that proxies that locally do not seem to show a linear response to temperature, somehow would respond linearly to larger temperature patterns.

Your take on the subject is pretty accurate on all points.

What the climate scientists don’t seem to understand is that for teleconnections to be usable in a scientific fashion, there must be a specific real identifiable physical effect which operates at the proxy location. This effect is clearly not local temperature since the proxy has not responded to that. To further assume that this unidentified effect is related in an appropriate equivalent quantitative form to the proxy measurements is a fiction which lends itself to the cherry picking of spuriously correlated series.

I have never seen any situation where the use of such a quantitative effect has been properly identified and scientifically justified. If any mention is made of a possible link, it is usually a hand-waving statement which starts out “It could be …” with no further evidence.

They argue that local temperature is a high frequency effect whereas global warming is a low frequency effect. Teleconnection is a new-agey kind of word meant to convey the idea that the low frequency global climate signal can be recovered from local growth records. This might or might not be true, but it seems to me that you would still need good spatial coverage (which they don’t have) to be able to identify and filter localized high frequency signals.

Umm, enough has survived from my old MSEE training that I’m quite comfortable with the idea of applying a low-pass filter to a signal, but we didn’t use to call that teleconnections. :)

I would have thought that smoothing is part of the normal procedure for going “from meteorology to climatology” – it seems obvious that the whole reconstruction thing is an exercise in extracting a low-frequency signal from series with quite a lot of noise – but that teleconnection deals with the notion of influences over great distances – in this particular case, that tree rings in one part of the world would somehow be a better proxy for regional or global climate than for local climate. I can accept long-range influences in atmospheric and oceanic processes, but how exactly does teleconnection work with trees?

Temperature records in the US are among the best and most comprehensive we have for the last 160 years or so. They show considerable variation over that period. But as the US represents only 2% of the earth’s surface we cannot assume they mean anything in global terms.

Meanwhile, through teleconnection, a few trees in a very remote, small area of the US can tell us all we need to know about the temperature over the last 1000 years. Pure magic!

Imagine that all trees on the planet are bathed in the low frequency global signal. Their growth pattern would then be influenced by both their local, high frequency signal (local weather) as well as the global low frequency signal. Since every tree on the planet would be subjected to the global signal it’s then a “simple” matter of extracting this signal from the composite signal and noise. The challenge, of course, is that you inevitably have low frequency variation going on at a local level (for example, a slow growing neighboring tree blocks some of the sunlight on the subject tree). You have the situation where tree growth is influenced by a large number of variables (many of them inter-related) where small perturbations to any one input variable can lead to relatively large changes in output (growth). So it’s overly simplistic to say that ‘signals in the range of x to y are global’.
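The “extract the common low-frequency signal” idea can be sketched with a toy simulation, under the idealized assumption the commenter flags as unrealistic: every tree sees the same global signal plus independent local noise. All names and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trees, n_years = 100, 500
t = np.arange(n_years)

# Hypothetical common low-frequency "global" signal.
global_sig = 0.5 * np.sin(2 * np.pi * t / 250)

# Each tree: the global signal plus its own high-frequency local noise.
trees = global_sig + rng.standard_normal((n_trees, n_years))

# Average across trees, then low-pass with a simple 21-year moving average.
mean_series = trees.mean(axis=0)
window = 21
recovered = np.convolve(mean_series, np.ones(window) / window, mode="same")

# Correlation with the true signal, ignoring the filter's edge effects.
r = float(np.corrcoef(recovered[window:-window], global_sig[window:-window])[0, 1])
print(round(r, 2))
```

Under these assumptions the recovery is nearly perfect; the real-world difficulty raised above is precisely that local low-frequency effects (a shading neighbor, disturbance, strip bark mechanics) violate the “independent high-frequency noise” assumption that makes this averaging work.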

The simplest example of a teleconnection involves rainfall: temperature at location A affects evaporation at location A which then influences precipitation at location B. Tree growth at location B responds to local precipitation.

Yes, I understand how one might use teleconnections to explain how local climate is affected by changes elsewhere. Still, in order for a series to be a reliable proxy for temperature, it would have to be established that it responds in a predictable way to local temperature changes. If not, it would have to be a proxy for some other local phenomenon, which in its turn is a proxy for temperature – local or otherwise.

I understand Steve as saying that a handwavy suggestion is made that bristlecones somehow are a good proxy for global temps, even though they do not represent local temps well (and even that how you sample them can make a huge difference to the result). Either Steve is wrong, or there is a proven link somewhere, which has a bit more substance than “stranger things have been known to happen”…

That’s not a teleconnection. That’s local climate at location A and local climate at location B. If the winds blow in a different direction what happens then? And are you saying that teleconnection would extend right around the globe, totally ignoring local factors for a given site?

What they seem to be doing is correlation hunting… the trees don’t correlate well to local conditions, so they go out a bit further to see if they can match a more remote station. If so, then presto, you have teleconnections. This is of course bogus. Weather can operate over hundreds of miles, for instance storms or high pressure areas. But the most important influences on trees are the local conditions: the local rainfall, sunshine, absolute humidity, temperature, plus insect infestations, grazing, avalanches, snowfall, etc. The one area where you can have a remote connection is when you have a tree growing along a river bank fed from snowmelt far off in the mountains. In this case, there can be both a time delay, in that the snow may fall in January but the peak run-off is in June, and also geographic separation between the two sites. But in this case there is an understandable physical connection between the two locations. There ain’t no qi.

Steve: the situation with strip bark bulges is really much worse. These appear to be mechanical in origin, highly nonlinear, and have nothing much to do even with local climate – other than perhaps high snowfall in the 1840s.

The RE/CE statistics are perfectly fine at describing what the authors thought were relevant and have a long history in that field….”

Oh Gav, why oh why are you so aggressive? I told you before, when I was in school my teacher would say, “Where’s your homework?” I would say, “I thought it should be in tomorrow, Sir.” He would say, “Form Oneers shouldn’t think.”

The meme here continues to be that proxy records are unreliable, and that paleo records have been fudged. These arguments have been addressed in detail by climate scientists, but maybe CA readers should just look at the instrumental record:

It’s getting warmer, at a virtually unprecedented pace, and natural influences such as the sun and water vapor have been found by peer reviewed science to be minimal. A commenter here responded to my prior post by saying that the denier argument has now become whether this warming is “statistically significant”. Again, what part of melting glaciers, retreating icecaps, and migrating plant and animal species do you not get?

As for the 1983 study you mentioned, Steve… I consider these kinds of statements to be rhetorical, not factual. Nothing you have ever said, including use of obscure jargon about a subject you have no training in (paleoclimatology) changes the fact that the upward march of temperature and CO2 emissions is leading us to dangerous territory.

Many find you fascinating- a cuddly bear of a polite Canadian, capable of detailed research and interesting statistical analysis. This doesn’t make you correct, and it certainly doesn’t make the consequences of your contributions any less destructive.

Steve: where have I suggested that paleo measurements have been “fudged”? I’ve strongly criticized the statistical methodologies and the appropriateness of proxies, but I don’t presume that Graybill “fudged” his ring width measurements. I think that I’m qualified to carry out statistical analyses of paleoclimate reconstructions and I submit that this view would be conceded by people in the field. As to the “big picture”, it is very much my belief that people should focus on their best arguments if they wish to accomplish major changes. I’ve urged people who are concerned about climate change to focus on explaining water cycle feedbacks to the public, rather than the litany of Hockey Stick-type studies. If people who are worried about climate change had actually listened to my advice on how to present their case, they’d have presented it more effectively.

If the temperature reconstructions are inaccurate (or, more precisely, not known to be accurate, since they are derived by unsound methods), how is it possible to know this?

Secondarily, one must be quite careful in making such statements, since the currently reported warming trend is much smaller than the warming trend that occurred at the end of the last glaciation. I assume that your statement refers to Dr. Mann’s pronouncements about warming in the last 1000 years or so.

As for the 1983 study you mentioned, Steve… I consider these kinds of statements to be rhetorical, not factual.

What do you mean “rhetorical, not factual”?

Two proxy records are available for the Gaspe; one shows a hockey stick and the other does not. Why is the hockey stick one preferred over the other? This leads to the broader question of why certain proxies appear in a majority of the reconstructions. This in turn leads to the very important question: if the reconstructions are not independent, and they use proxies selected by unknown criteria, are these reconstructions of any use to policy makers?

Mike,
The issue is the one you allude to; is the recent temperature increase a “virtually unprecedented pace”? Since your link does not show temperatures that go back 1000+ years, that link is not especially helpful, in terms of proving your proposition that “It’s getting warmer, at a virtually unprecedented pace”.

If the standard paleo reconstructions are not reliable (as Steve’s analysis pretty clearly demonstrates), then the best we can say is, we don’t know if recent temperature rises are happening at a “virtually unprecedented pace”.

For those who may place some credence in Gavin’s assertions that Fritts and his followers relied on RE/CE to the exclusion of verification r2, I’ve posted up excerpts from a Fritts 1991 article and Cook et al 1995 (cited by MBH98) showing that verification r2 (or its equivalent, verification r) was part of the standard dendro verification regimen (not that these opinions have any high statistical authority, but they are interesting if only to rebut Gavin’s claim). See http://www.climateaudit.info/pdf/tree – look for the Cook and Fritts excerpts.

This is quite separate from the incontestable fact that Mann illustrated verification r2 in MBH98 and said that verification r2 was considered.
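For readers following the RE/CE vs. verification r2 dispute, the standard definitions are easy to state in code. The toy data below are my own construction (no real proxy series), chosen to show how the two families of statistics can disagree: a “reconstruction” that captures only the level shift between periods scores a healthy RE while its verification r2 is near zero.

```python
import numpy as np

def verification_stats(obs_cal, obs_ver, pred_ver):
    """Standard RE, CE and verification r^2.

    RE benchmarks squared error against the calibration-period mean;
    CE benchmarks it against the verification-period mean."""
    sse = np.sum((obs_ver - pred_ver) ** 2)
    re = 1.0 - sse / np.sum((obs_ver - obs_cal.mean()) ** 2)
    ce = 1.0 - sse / np.sum((obs_ver - obs_ver.mean()) ** 2)
    r2 = float(np.corrcoef(obs_ver, pred_ver)[0, 1]) ** 2
    return re, ce, r2

rng = np.random.default_rng(3)
obs_cal = rng.standard_normal(50) + 1.0                    # warm calibration period
obs_ver = rng.standard_normal(50) - 1.0                    # cool verification period
pred_ver = obs_ver.mean() + 0.1 * rng.standard_normal(50)  # right level, no skill
re, ce, r2 = verification_stats(obs_cal, obs_ver, pred_ver)
print(round(re, 2), round(ce, 2), round(r2, 2))
```

Because RE’s benchmark is the calibration mean, any prediction that merely lands at the right average level in a shifted verification period earns a positive RE; r2, insensitive to levels, reports whether the prediction tracks the year-to-year variations at all. Which statistic is the right test is exactly what the two sides here dispute.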

If you are talking about the comments to the recent articles over at RC, I can’t find the source for your claim about Gavin’s assertions. Are you sure you are rebutting something that he really claimed?

Gavin: “no single metric tells you everything interesting about a reconstruction’s validity” (based on NAS report that references Fritts 1976, no claim of exclusion)

Gavin: “The RE/CE statistics are perfectly fine at describing what the authors thought were relevant and have a long history in that field (Fritts, 1976)” (no claim of exclusion, nor a claim that Fritts 1976 would have supported exclusion)

Gavin: “WA07 did not invent ‘RE’ – that was discussed in the NAS report (along with the r2 issue, page 94 I think) and dates back to at least Fritts (1976) in this context, and it is a useful metric for how well a reconstruction does in the verification interval.” (no claim of exclusion by Fritts or followers here either, just of inclusion as “useful”)

Mikael:
Steve will speak for himself. My reading, like Steve’s, was that Gavin implied the RE/CE was a “better” alternative to the R2. Gavin is pretty explicit that the R2 is not relevant in his response to ThinkingScientist at #92:
“[Response: the metric you look at for any particular application depends on what it is you are trying to assess. The low r2 values are associated with year to year variability which is not really what is being looked for, rather you want a statistic that works at capturing the general level. The RE score does that and demonstrates that there is skill (which obviously decreases as you go back in time). The way you should look at this is that the metric you use defines what you can infer from the reconstruction. So at 1450 say, you can’t trust the year-to-year variability, but the longer term average is more skillful. -gavin]”

It’s revealing that people at RC and here seem so puzzled by what I am writing and why. I am making no attempt to take sides regarding the scientific arguments in this particular debate (Gavin Schmidt shouldn’t either; he doesn’t know enough about this particular topic). I am trying to remind people how science is supposed to be done and how we should assess justification for a thesis, something that too many people in the blogosphere and, sadly, the scientists forget in this highly politicized environment.

A thesis should not be considered justified until substantial efforts have been made to challenge the thesis and then rebut the challenges. It is logically absurd to claim that an embryonic thesis has been justified. This requires time for challenges to be mounted. The cycle of peer reviewed journal publications, comments and rebuttals is too slow given the alleged urgency of policy decisions and the 6 year cycle of the IPCC reports; hence much of the more salient discussion of controversial and important issues is increasingly being conducted in the blogosphere. I have challenged the RC group to rebut these critiques, which, if done effectively, should bolster their thesis regarding the paleoreconstructions.

The journal peer review process does not guarantee the “correctness” of a publication. Further, on this particular topic the journal peer review process is perceived as somewhat of a joke, with supporting evidence for this in the CRU emails and further evidence provided in Montford’s book. That said, there are plenty of publications that critique the papers by Mann and others on this topic, including papers by Loehle, Zorita, Huybers, McKitrick and McIntyre, and most recently Smerdon et al., which is discussed at Klimazwiebel http://klimazwiebel.blogspot.com/2010/07/mistake-with-consequences.html. And none of these papers discuss what to my mind is the elephant in the room: the issue of grossly inadequate sampling in this attempt to produce a global or even hemispheric average.

So this field is quite immature. Discussion in the blogosphere could in principle speed up the maturity of the field beyond what the peer reviewed literature can accommodate, if there is serious discussion about the issues, such as calibration and assessment of the individual proxies, error analysis, significance testing, and the sampling issue. Retorts such as Tamino’s and Gavin’s on the RC thread do nothing to move this along.

No matter how hard a group of scientists attempts to impose a premature consensus or declare the debate to be over, science will eventually work this out. I predict (and hope) that the AR5 will result in a further backtracking of the confidence levels on this subject relative to AR4 (to a more realistic assessment of the individual proxies and a focus on regional temperature change), in the same manner that confidence levels in AR4 pulled back from AR3. This is the only topic in the IPCC reports where confidence levels in the conclusions are diminishing with time. This should all give us pause, since a continued pull back in confidence on this subject reduces the credibility of the IPCC.

There is nothing to be gained by declaring a premature consensus on a scientific topic, other than momentary political advantage. Science is the loser, and policy makers will start discounting scientific information in their decision making. A more thorough and honest accounting of uncertainties is required.

Judith:
Clearly stated. In your comment on Tamino’s review of Montford’s book I took you to be asking for a substantive discussion of the issues by those equipped to do so at RC and CA. This would fit nicely into what should be happening at this stage of the scientific discussion. For a moment there when Gavin was replying to ThinkingScientist I thought a real discussion was going to break out – but alas no.
I am glad to see that you are optimistic about AR5. I think you may be right, but only if the topic is not paleoclimatology. As you noted there is too little data and apparently very little new data. I suspect that the breakthrough to greater realism and more realistic levels of certainty will, somewhat paradoxically, come from GCMs and from the more realistic parameterization of cloud cover. At least those involved in discussions are actually doing observations.
Your patience and civility in all that has happened in the last few days are models for us all.

Judith,
Re journal peer review process in climate science. A major problem, apart from the slowness, is that the current peer review process operates as a closed shop in which only climatologists are deemed capable of reviewing other climatologists’ work.
Similar logic would dictate that an End of Year report compiled by the Guild of Village Idiots could only be convincingly reviewed by other village idiots. Whilst this might be pleasing to village idiots, such a contingency would be unlikely to improve the quality of the report.

The other problem with peer-review is that many journals are not interested in audits or critiques of past work. They immediately “move on”. Even when they allow it, they often limit comments to a few pages.

Mr. Loehle – What were the general Team criticisms of your paper submitted to E&E? My understanding was that you used only previously peer-reviewed proxy series (i.e., those published and accepted by climate science) and combined them in a more simple averaging scheme….I’m curious what the general criticisms were as I assume this work will not be cited in the upcoming IPCC report (or will it?)

Dollars to donuts it will not be cited, even though my result is similar to the Moberg 2005 Nature paper. Here are the types of criticisms: he isn’t a geologist, he isn’t a climate scientist, there were “only” 18 series (though the China composite had 8 series and the Viau North American pollen composite had something like 100), it was published in E&E, he is a bad person, the record doesn’t go up to 2007 (of course the data don’t) – and many insisted on smoothing right up to the end of the data in about 1995, which would require future padding like the Team does, rather than what I did, which was to smooth only to the point where data supported the smooth. It is also interesting that they objected to particular series I used even though there is no such critical scrutiny of their own recon series. Finally, they really wanted to weight the “best” series more heavily, which is of course what Mann does and how one gets hockey sticks. As for Mann himself, in the Climategate emails he simply refers to my paper as “awful, of course”, which is a real compliment coming from someone who doesn’t understand spurious correlation, doesn’t correct for the rain in Maine, and uses upside down Tiljander. Whatever mistakes I made (which I then corrected) were trivial compared to standard dendro practice, like using bristlecones in the face of obvious problems with them, etc.

As for Mann himself, in the Climategate emails he simply refers to my paper as “awful, of course”, which is a real compliment coming from someone who doesn’t understand spurious correlation, doesn’t correct for the rain in Maine, and uses upside down Tiljander.

I think it’s a giant leap to say he doesn’t understand. I think he knew/knows exactly what he’s doing, and that’s the real problem.

7. The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.

gavin gave an answer:

[Response: Absolutely untrue in all respects. No, really, have you even read these papers? There is no PCA data reduction step used in that paper at all. And this figure shows the difference between reconstructions without any tree ring data (dark and light blue) compared to the full reconstruction (black). (This is a modified figure from the SI in Mann et al (2008) to show the impact of removing 7 questionable proxies and tree ring data together). In addition, there are many papers that deal with issues raised by MM – Huybers (2005), von Storch et al, (2005), Rutherford et al (2005), Wahl and Amman (2007), Amman and Wahl (2007), Berger (2006) etc.

Judith, I implore you to do some work for yourself instead of just repeating things you read in blogs. (Hint, not everything on the Internet is reliable). ]

After denying the Tiljander problem in their response to Ross’ and my submission, Mann did post up another version some time last year. It’s useful that he placed this series on his website, but it was a year or so after publication. I haven’t examined this new iteration yet. This is the sort of thing that they should have tried to do right the first time.

So chapter 14 in Montford’s book lists a number of issues with Mann08. Are there specific errors in the book?

Given that Judith wrote her comment “from memory” and specifically listed a number of things in the book that Tamino’s review left out, I would say that she was close enough. The HSI book did note that Mann08 used some of the “usual suspects”, and that the hockey stick shape came either from the upside-down Tiljander or the bristlecones, since he always had at least one in there (p. 367), and that he employed a selection filter that favored 20th century upticks (p. 370).

The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.

how can this be close enough, when the paper does include a non-tree ring reconstruction and does not use “centered PCA”?

Re: sod (Jul 27 00:40),
Chapter 14 of HSI describes a whole series of things that are bad science in Mann 2008. For instance, NAS advised climatologists NOT to use bristlecone pine – but Mann 2008 did include the “obsolete hockey stick version” [HSI p367] of bristlecone data from Sheep Mountain. In addition, the suspect Yamal larch treering series was used (similar strip-bark type problems). Also, statistically unacceptable methods were used to pad the smoothed instrumental record up to the present. But the real coup was the introduction of sedimentation records collected by Tiljander, which also appeared to show a sharp temperature rise in the twentieth century. With this, Mann could show that the hockey stick appeared without the tree rings.

Now Tiljander’s work appeared to provide independent verification for the Hockey Stick shape. But it turns out to be unacceptable verification for two rather big reasons. First, the recent rise was almost certainly due to contamination from land use changes. And second, it had been used upside down – the true Tiljander record (shorn of the recent suspect data) clearly shows the Medieval Warm Period.

Mann 2008 only shows a hockey stick if all these unacceptable records are included. The other records (which may or may not be acceptable in their own right) pretty well cancel each other out and thus provided the Hockey-Stick shaft, which now shows a MWP that is depressed in size because of ALL the rogue data.

Mann 2008 appears not to cure Mann 98 but to compound its bad science: ignoring NAS recommendations while paying lip-service; using data known and accepted to be suspect; bringing in new data that is even more suspect with the use of even more bad stats and elementary bad maths; and justifying the data with thimble-rigging tactics.

Ulf asked if Chapter 14 was badly flawed. Sod responded to this question with a deflecting comment about Judith Curry’s admittedly incorrect report. Curry is, however, correct in her observation of a thoroughly shoddy practice of science and Scientific Method here, which is bringing the whole of climate science into bad repute. And this is her main point. For Gavin to say she was “absolutely untrue in all respects” simply reflects badly on Gavin.

Tiljander collected data on four characteristics of the varved lakebed sediments from Lake Korttajarvi. All appear to be uncalibratable to the instrumental temperature record, due to increasing contamination since 1720 from local factors (farming, peat cutting, roadbuilding, lake eutrophication).

Mann08 established correlations between these proxies and the calculated 5×5 gridcell temperature, 1850-1995. As it happened, Darksum was in the same orientation as proposed by Tiljander03, while Lightsum and XRD were in the inverted orientation. Tiljander03 hadn’t proposed a climate-relevant interpretation for Thickness.

But if non-climate signals indeed progressively overwhelmed climate signals during the 1850-1995 period, these correlations are necessarily spurious.
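The logic of that “necessarily spurious” claim is easy to illustrate: if a proxy carries a non-climate ramp (farming, roadbuilding) over the same interval in which temperature also trends upward, the two will “calibrate” against each other regardless of any real relationship. A toy sketch with synthetic numbers, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 146                                  # e.g. "years" 1850-1995
t = np.linspace(0.0, 1.0, n)

temperature = t + 0.2 * rng.standard_normal(n)   # warming trend + noise
climate_signal = 0.2 * rng.standard_normal(n)    # the proxy's real (trendless) climate part
contamination = 2.0 * t                          # monotonic non-climate ramp (land use etc.)
proxy = climate_signal + contamination           # what actually gets measured

r_contaminated = np.corrcoef(temperature, proxy)[0, 1]
r_signal_only = np.corrcoef(temperature, climate_signal)[0, 1]
print(r_contaminated)  # strong "calibration", driven entirely by the two unrelated ramps
print(r_signal_only)   # near zero: the climate part carries no relationship at all
```

The high correlation exists only because both series trend over 1850-1995; it says nothing about whether the proxy tracked temperature before 1720, which is the whole point of a reconstruction.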

Work in 2008 by Jeff Id at “the Air Vent” suggests that these four proxies make only modest contributions to the Mann08 reconstructions. To my knowledge, there has been no thorough test of this question.

Mann 2008 only shows a hockey stick if all these unacceptable records are included.

but should have said

Mann 2008 only shows a hockey stick if at least one of these mutually back-scratching HS datasets is included.

I’ve owned up to my error, and am willing to own up to any further errors in my words. Judith Curry has shown willingness to do likewise. Which is all good scientific practice. But will Gavin or Tamino or Mike Mann ever do this?

It will be interesting to get some back and forth on how Mann08 handled proxy data to generate paleotemperature reconstructions, via their CPS and EIV methods. Gavin may well be right.

Be that as it may, it’s a bit jarring to see Tamino and now Gavin hold up this particular paper as an exemplar of what’s right with reconstruction literature. McIntyre has looked at its methods extensively, and found them wanting. See the “Mann et al 2008” listing under “Other multiproxy studies” in the “Categories” pulldown menu on the left-hand sidebar, near the top of this page. Jeff Id at “the Air Vent” has additional analyses.

On a narrow issue, Mann08’s authors used the uncalibratable Tiljander proxies in their reconstructions. While an error of this sort is understandable, the failure to correct it stands as a reminder of the polarized and tribal nature of Hockey Stick Science. Unlike PCA, the Tiljander issue is accessible to any scientifically-literate layperson (e.g. this walk-through). It does not generate confidence in the rigor of the methods used in Mann08.

My understanding from The Hockey Stick Illusion is that Mann’s PC method not only mines for hockey stick shapes with an uptick at the end but also for inverted hockey sticks with a final downtick. The method gives a weighting +X to the former and -X to the latter, effectively flipping it upside down. This could explain why Mann is adamant that flipping the series upside down makes no difference – the analysis simply flips it back.

On the one hand, I can see some merit in this. If a series is anti-correlated to temperature, then that should give useful information. (Analogy: if a stockbroker is _always_ wrong when he advises readers to buy or sell a company’s shares, we’ll be able to make lots of money by doing the exact opposite!)

On the other hand, you’d expect a series of random numbers to have a few going up at the end and a few going down at the end, whilst still having an arithmetical mean of roughly zero. If the analysis method says “these ones going up at the end are correct but these others going down at the end must be upside down so let’s flip them”, suddenly everything pulls in the same direction. Heads Mann wins, tails you lose!
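A toy simulation makes the point concrete. To be clear, this is not Mann’s actual algorithm – it is just correlation screening with sign-flipping applied to pure red noise (random walks), which is the generic mechanism being described:

```python
import numpy as np

rng = np.random.default_rng(2)
n_proxies, n_years, n_calib = 200, 1000, 100

# 200 "proxies" of pure red noise (random walks): no climate signal whatsoever
proxies = np.cumsum(rng.standard_normal((n_proxies, n_years)), axis=1)
ramp = np.arange(n_calib, dtype=float)     # rising "instrumental" target

# correlate each proxy with the target over the calibration window
r = np.array([np.corrcoef(ramp, row)[0, 1] for row in proxies[:, -n_calib:]])
print((r < 0).sum(), "of", n_proxies, "series point the 'wrong' way")

# flip the wrong-way series, as a sign-indifferent fitting step effectively does
flipped = proxies * np.sign(r)[:, None]
composite = flipped.mean(axis=0)

r_composite = np.corrcoef(ramp, composite[-n_calib:])[0, 1]
print(r_composite)  # strongly positive: a modern uptick manufactured from noise
```

Roughly half the random series start out “upside down”; after flipping, every one of them correlates positively with the calibration ramp by construction, so the composite acquires a coherent modern uptick out of data containing no climate signal at all.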

On the one hand, I can see some merit in this. If a series is anti-correlated to temperature, then that should give useful information.

…but surely only after it has been established that there is in fact such a relationship? In this case, Tiljander had specifically warned of increasing contamination. From her Boreas (2003) abstract: “Natural variability in the sediment record was disrupted by increased human impact in the catchment area at AD 1720.”

So presumably the series was included in the first place because varves, barring contamination, are regarded as good proxies for temperature? Given the large number of proxies used in Mann08, it makes sense that one would have to use an automatic selection algorithm of sorts, but one could also consider a continuously updated database where studies are marked according to their quality as temperature proxies – after someone has done a manual check of the purpose and conclusions of the study. In this case, simply reading the abstract would have been sufficient grounds to exclude it from the reconstruction, much as the Graybill bristlecones might have been a candidate for early exclusion, since they were specifically sampled as candidate CO2-fertilized trees, not necessarily proxies for temperature.

I freely admit to only having glanced at Mann08 and am only going by recollection from the HSI on Graybill (I’m spending far too much time on this already), so I stand ready to be set straight if I have missed something fundamental here.

I do agree with you – Tiljander does not look to be a suitable proxy, given the serious doubts about the last 300 years or so (as declared by the author).

Similarly Graybill seems to have wandered around looking for funky-shaped strip-bark bristlecone pines out of scientific curiosity, whereas an attempt to map out an area using tree rings as a serious proxy for local temperature would at least attempt to sample weird-looking trees in the area in proportion to their frequency – or even ignore them altogether! [I remember reading somewhere that Graybill’s co-author Idso was appalled to learn that Mann used these trees in MBH98, given the concerns about their reliability.]

MM05 (EE) has some interesting pages about Graybill – see page 13 of the 32-page document (page number = 81; link at top left of this web page). Steve also wrote about some of the strip-bark trees he found during his field trip a few years ago, starting with breakfast in Starbucks then taking about 30 tree cores in just the one day. (Link please, someone?)

If all one is doing is picking stocks, then it does not matter which way your proxies go when fed into your regression. If skirt length predicts the stock market, fine. But if we are doing science and our framework says trees respond positively to temperature, and we then ex post weight some tree sites negatively, this makes no sense. Further, one can observe that in Mann08 some sediment proxies got a + and some a –, which again makes no sense. However, it seems to me that Mann for sure, and some others, are simply treating tree rings as “data”: if it can match the instrumental data and spit out a regression then it is ok (which also explains teleconnections). This then assumes that exactly the same relations held in the past for him to do the reconstruction: this pond always got a – sign and that pond always a +. But if it varies from pond to pond and forest to forest (or even tree to tree) in the 20th Century, why would it be constant at any location for 1000 yrs? Yet they seem unperturbed by unverifiable assumptions.

Tamino seems to have scrubbed his site of old posts, so the Jolliffe correction is gone. I noticed this while searching for his silly error of calling a circle 1-dimensional.
From Tamino:
The n-sphere is the n-dimensional surface of an n+1-dimensional ball. The 1-sphere is just a circle; the 2-sphere is the ordinary sphere we’re all familiar with; the 3-sphere is the 3-dimensional surface of a 4-dimensional ball, etc.

From Wolfram – “Extreme caution is therefore advised when consulting the literature. Following the literature, both conventions are used in this work, depending on context, which is stated explicitly wherever it might be ambiguous.”

SOD, my statements on the RC thread were in the context of a summary of Montford’s main points, that I drew from memory of having read the book two months ago because I don’t have a copy of Montford’s book with me. These were not my personal arguments.

I am avoiding involving myself in the technical details of this debate, and am leaving this to others who have dug deeper into it.

My main personal interest is in the evolution and escalation of the conflict portrayed by Montford. And how unnecessary it was. And how the RC crowd should read the book to learn how they can avoid such unnecessary conflicts in the future. In fact, HSI should be required reading right alongside “Merchants of Doubt”, so they can figure out the difference. By confusing Steve Mc with a “merchant of doubt”, applying the “hide the uncertainty” strategy used for merchants of doubt, and making ad hom and appeal-to-motive attacks against Steve Mc, HSI is a story of how this kind of strategy backfires when used on a watchdog auditor type, which is a completely different species.

SOD, my statements on the RC thread were in the context of a summary of Montford’s main points, that I drew from memory of having read the book two months ago because I don’t have a copy of Montford’s book with me. These were not my personal arguments.

I am avoiding involving myself in the technical details of this debate, and am leaving this to others who have dug deeper into it.

you made a pretty technical claim about Mann 2008. your claim was false. you obviously should correct your error.

you made the claim, that Tamino made errors in his approach to the book. why should we accept your opinion, when you make such obvious errors as the one above?

In fact, HSI should be required reading right alongside “Merchants of Doubt”, so they can figure out the difference. By confusing Steve Mc with a “merchant of doubt”, applying the “hide the uncertainty” strategy used for merchants of doubt, and making ad hom and appeal-to-motive attacks against Steve Mc, HSI is a story of how this kind of strategy backfires when used on a watchdog auditor type, which is a completely different species.

you are completely wrong on this point as well. Naomi Oreskes is an expert in her field. she has written peer reviewed articles about the subject she is dealing with in her book.

if Montford has done anything similar, you should quote and link it.

the two books can’t be compared. one is a study by an expert, the other one is at best of dubious value, as the title shows.

My efforts to try to avoid such unnecessary conflicts are understood over here, but not over at RC and CP.

there is a simple reason for this. you basically endorse everything that is written here, while you are throwing mud at RC and the scientists they work with.

people at RC fully understood your false claims about Mann 2008.

your false claims obviously are part of the cause of this “unnecessary conflict”.

I don’t have a copy of the book with me, so I can’t check. Someone else will have to check for me. But if there is indeed an error in my statement that there are no reconstructions in Mann et al. 08 that are free of both tree rings and the disputed PCA method, you should not automatically attribute that same error to Montford. I stated that this review was written without access to the book, which I read several months ago.

If I make a mistake, I admit it. I simply don’t have the information with me to check it.

Sod – you have correctly highlighted the role of an auditor to point out errors of fact. It is then up to others to form their opinions of the consequences of these errors according to their significance.

Judith, isn’t part of the problem here that discussions in general, and most climate change discussions in particular, often take place at different levels at the same time; when arguing the bigger picture, hand-waving tends to be a common and necessary ingredient (on both sides)?

Me, I’m a hand-waving, big-picture kind of guy. I’m also a programmer, so I’m well aware of the need to be exact at times (compilers are real sticklers for detail). For many years, one of my closest colleagues was an extremely methodical analyst, and we used to clash spectacularly whenever I came into his room throwing ideas on the whiteboard in search of an interesting connection, abusing (in my mind, merely glossing over irrelevant) details as I went along. It used to take several iterations before we arrived at a common understanding. We didn’t quite speak the same language, although we were natives of the same country and experts in the same field.

We were able to work through it because we held each other in high regard. We also eventually worked out a signal system to quickly agree on what kind of discussion we were having.

I agree with your point that Montford’s book painted a very convincing and disturbing picture – so much so that refuting some (or even several) details in it hardly seems enough to render it irrelevant.

Ulf, I agree. Argument and debate should be the spice of academic life and are absolutely necessary for science to progress. This has been stifled by the climate community, since they mistakenly think that acknowledging uncertainty or admitting a mistake feeds the merchants of doubt and thwarts the policies that their science is supposed to be supporting. This is insane logic for a scientist, and if this doesn’t change, they will destroy science in the process. The endless appeals to motive and ad homs combined with hide the uncertainty (and the data) are destroying science and have necessitated the rise of the watchdog/auditor contingent that is tormenting them endlessly. Rather than trying to understand the error of their ways, they keep slinging mud, dragging the rest of climate science (and increasingly all of science) into the mud with them. I am beyond disgusted. And they won’t even read the HSI because it’s written by someone from the wrong tribe. At least Tamino read it, even though he either missed the main points or chose to ignore them in the review.

Judith, your comment on “argument and debate” is elegant and very germane. The temptation of inerrancy is Medieval, and contrary to scientific norms. The standard shouldn’t be “perfection” — in the peer-reviewed literature, in books or their reviews, or in blog comments submitted via airport WiFi networks. Constant fear of making a mistake will stifle creativity, the lifeblood of science.

The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.

Then sod quoted Gavin’s riposte at RC (I’ve stripped out the insults and snark) –

Response: Absolutely untrue in all respects… There is no PCA data reduction step used in that paper at all. And this figure shows the difference between reconstructions without any tree ring data (dark and light blue) compared to the full reconstruction (black). (This is a modified figure from the SI in Mann et al (2008) to show the impact of removing 7 questionable proxies and tree ring data together).

sod then remarks –

i am pretty sure, that the paper does contain a reconstruction, that “is free of questioned tree rings” so it looks like gavin was right, and Judith was wrong.

Gavin refers to the third revision of Mann08’s SF8a with “This is a modified figure from the SI in Mann et al (2008)…” As Gavin should know (because the point was raised on his Collide-a-scape thread), the linked figure postdates the publication of Mann08 by two months, thus was not a part of that paper and didn’t undergo peer review.

Still, Gavin asserts that there is no PCA (principal component analysis) used in Mann08. In the Mann08 SI, the CPS approach is described as being based on “variance matching”. The EIV approach works by using RegEM as “an iterative regularized variant on multivariate regression.”
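For what it’s worth, the “variance matching” step of CPS is simple to state: average the proxies into a composite, then rescale that composite so its calibration-period mean and standard deviation equal those of the instrumental record. A minimal sketch with synthetic data (not Mann08’s actual code – the paper’s screening and gridding steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 600
signal = 0.3 * np.sin(np.linspace(0.0, 6.0, n))          # the shared "climate" history
proxies = signal + 0.4 * rng.standard_normal((10, n))    # 10 noisy copies of it

calib = slice(500, 600)                                  # instrumental overlap period
instr = signal[calib] + 0.1 * rng.standard_normal(100)   # "instrumental" record

composite = proxies.mean(axis=0)
# variance matching: rescale the composite so its calibration-period
# mean and standard deviation equal those of the instrumental record
scaled = (composite - composite[calib].mean()) / composite[calib].std()
recon = scaled * instr.std() + instr.mean()

print(np.corrcoef(signal, recon)[0, 1])  # the rescaled composite tracks the signal
```

Note that this rescaling is not a PCA data-reduction step – which is the narrow sense in which Gavin’s claim may be defensible, whatever one makes of the broader methodological argument.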

I don’t know the language well enough to say whether Gavin is correct, in broad or narrow terms. If you come to believe that he is, I hope you will revise your thinking on that point, and communicate it.

This is relevant to something I have become familiar with: the use of the Tiljander varve series in Mann08. In my opinion, the ferociousness of that debate is not due to effects that these proxies have on the Mann08 reconstructions, which may well be fairly modest. Rather, I suspect the issue is Inerrancy. “If we are forced to admit that our vaunted methods of proxy selection were flawed in this particular instance, that would open us up to more general criticism of our methods.”

In allowing Mann08’s authors to lead them down this road, many climatologists have set a negative example as far as how to perform science. However, the same sort of trap looms for those who are able to criticize Prof. Mann’s work, I think.

Look again. The linked figure is from November 2009 – it postdated the publication of Mann08 by 14 months – and this doesn’t contradict Judith’s comment about the published article.

The main problem with any of these reconstructions is the one that Ross and I highlighted in our PNAS comment on Mann et al 2008 – the inconsistency of the proxies – a point that I also made to Revkin when asked why I didn’t offer my own reconstruction.

Your reply to Ulf here is very well-written and, I believe, correct, and I welcome it.

I do wonder if the risk you took in discussing the topic of the hockey stick at RC was worth it.

I also want to say that even if you made an error about the content of Mann et al. 2008, the main point at this juncture should be that Mann apparently cannot “get” a HS if he puts Tiljander the right way up (taking out the disturbed varves if he likes) and removes the Bristlecone pines.

This is one key topic that RC and its supporters apparently don’t want to talk or hear about. I, for one, have not bothered posting about it over there, because I assume the post will be disappeared.

Strikes me this would be a good fight to pick, academically that is. Even though there are a host of papers on it in climate science, its seems to me likely that:

1. It’s just a fancy name for large scale patterns
2. It’s undetectable by PCA in any real way anyway (RyanO’s point)

While I initially thought Steve perhaps overdid the snark, now I am not so sure. Has the notion of teleconnection been exposed to any rigorous analysis that would show it to have greater substance than “invisible pathways (meridians of qi)”?

How *COULD* global temperatures affect a local site, while local temperatures do not? The only potentially reasonable explanation is that global temperatures somehow tell you something about the local site that isn’t temperature (e.g. precipitation). If you think that the temperatures in Egypt and Mongolia are more correlated with local non-temperature effects than local temperature, and that they are correlated in a coherent, linear fashion throughout the past 1000+ years, then you might be a believer in teleconnections.

Needless to say, it takes a leap of faith and then some to subscribe to such nonsense.

There can, of course, be persistent relationships between well-separated regions. But you’ve touched upon the key concept that debunks standard notions of climatic “teleconnections”: lack of coherence. Not only with local or remote temperatures, but with other proxies in the immediate area. This “divergence problem” is most highly evident at the lowest frequencies, which are precisely those of greatest interest in climatology. Whenever I hear the term, I run and look for a forest to hide in.

A teleconnection IS possible in a sense, but must, as you say, be mediated by something. Let us posit that the ocean cycles or something moves the ITCZ (Intertropical Convergence Zone) north and south somewhat, shifting the zone of highest rainfall over Brazil (there is a paper that posits this, in fact). Then rainfall would be a proxy for a large-scale weather system. This proposition can be tested. However, what Mann does is posit teleconnections so he can use them in regressions but is uninterested in testing them. If teleconnections affect bristlecone pines, then something about the local climate is connected or correlated to something. Since the whole hockey stick ends up depending on this teleconnection, it would certainly be good science to check if bcp are responding to rainfall (not temp) and if so to establish how this could be a proxy for global temp (maybe it correlates with the PDO–not unlikely in fact given the location). To just stop with a hypothesis of teleconnection is not science, it is data mining.

Craig:
Even if they explored the hypothesis and found that it had some explanatory value, this is still data mining as far as I can see, because the hypothesis comes after the statistical relationship is identified. These types of hypotheses need to be stated before the analysis of the data. It is essentially the same issue as removing outliers once you have identified them as outliers. It is not good practice.

It’s worthwhile to read the original Graybill and Idso paper on the stripbark bristlecone pines. They did try to correlate the tree ring data against all possibly relevant local weather/climate data, including precipitation (they summarize this nicely in a table).

IIRC, the only statistically significant correlation they reported was a moderate negative correlation to the previous winter’s temperature. The whole point of the paper was that local conditions could not explain the growth patterns, so they were looking for other explanations.

(I downloaded the paper several years ago. I can’t find a live link now, and CA apparently lost its copy when it updated last year.)

Once you explore the possible causes for the teleconnection, and they come up empty locally (no local corr to anything), then it is spurious, in this case probably due to the strip bark response to damage (since Graybill set out to sample strip barks for some reason), and you should drop it from the analysis. “Teleconnections” is not a magic wand.

I remain puzzled by the endless confidence in the reconstructions displayed by Gavin over at RC. Especially since it isn’t even his field.

I read over “A surrogate ensemble study of climate reconstruction methods: Stochasticity and robustness” by Christiansen, Schmith and Thejll, J. Climate, 23, 2832-2838, 2010, as well as Rutherford et al.’s comment, and Christiansen et al.’s reply to the comment. Christiansen et al have laid out a very strong argument that RegEM, in all forms, grossly underestimates long term variation in reconstructions, and so would likely miss a MWP that was warmer than the 20th century, yet Gavin and company carry on as if everything about the reconstructions is certain and robust, and it is nearly certain the MWP was cooler than the 20th century.

If I were Gavin, I would be a lot more cautious… he really risks making a fool of himself defending the results of what could well turn out to be a faulty reconstruction method.

All the heat that I get in the blogosphere is worth it to me because of the many thoughtful emails I receive, offering support, ideas and information. I just received this in via email, referring to an interview with Stephen Chu in the Financial Times, Feb 17, 2010 (registration required): http://www.ft.com/cms/s/0/a71cf176-1bff-11df-a5e1-00144feab49a.html

FT: On the climate threat, do you think there is legitimate concern now about the fact that some of the science, even if it’s not flawed, it’s been misrepresented, which has undermined the case in many people’s eyes.

SC: First, the main findings of IPC over the years, have they been seriously cast in doubt? No. I think that if one research group didn’t understand some tree ring data and they chose to admit part of that data. In all honesty they should have thrown out the whole data set. But science has a wonderful way of self-correcting on things like that. What the public doesn’t understand is that as you go forward there will be these things and they will self correct. On balance if you look at all the things the IPCC [Intergovernmental Panel on Climate Change, the body of experts convened by the United Nations to advise governments in responding to global warming] has been doing over the last number of years, they were trying very hard to put in all the peer-reviewed serious stuff. I’ve actually always felt that they were taking a somewhat conservative stand on many issues and for justifiable reasons.

In all honesty, they should have thrown out the whole data set. And here I was trying to be polite . . .

No. I think that if one research group didn’t understand some tree ring data and they chose to admit part of that data. In all honesty they should have thrown out the whole data set.

Hmmm, I think that could be a mistranscription by the journalist, seeing as how the first sentence does not make sense as written. A more sensible reading is “if one research group didn’t understand some tree ring data and they chose to admit part of that data, [then] in all honesty they should have thrown out the whole data set.”

Whatever. A press interview is nearly as ephemeral as a blog post.

Question: Dr Curry’s claim that Mann et al 2008 contained no non-dendro reconstructions and used the ‘discredited’ PCA centering is false in both parts, as can be demonstrated by the simple precaution of actually reading the paper in question. For those of us who have not read the book, are these false claims from the book, or are they rather an artifact of a flawed memory?

Phil, your question is discussed in some detail throughout the thread. The punchline is that my statement was correct in spirit but incorrect in detail, but Gavin’s specific rebuttal to my statement was also incorrect. Thank goodness for the auditors :) .

Thanks, so the answer to my question [is the same ‘incorrect detail’ about PCA and a non-dendro reconstruction in the Montford book or not] would be? I understand you were speaking from memory and possibly do not have access to the text but so far nobody else has been able unambiguously to clear up what seems to me a fairly simple and factual point.

Gavin’s response actually managed to be correct in both spirit and detail, btw. The SI published alongside the paper clearly contained several no-dendro versions of the reconstruction. Gavin pointed us towards a figure produced later, showing the effect of removing the problematic proxies (including Tiljander), and tree-rings. We could argue all day about whether the overall shape is suggestive of an item of sporting equipment, however to me, it seems utterly consistent with the conclusions of the paper viz:

“Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used. If tree-ring data are used, the conclusion can be extended to at least the past 1,700 years, but with additional strong caveats. The reconstructed amplitude of change over past centuries is greater than hitherto reported, with somewhat greater Medieval warmth in the Northern Hemisphere, albeit still not reaching recent levels. “

You conveniently disregard the point I had made, which is that Mann has not reported on what he gets if he removes bristlecones and puts Tiljander’s data in RIGHT-SIDE-UP and removes the disturbed varves. Not removes — inverts it back to the correct orientation.

A few have already made this point before me, but not enough compared with the number who keep talking about taking the Tiljander data out entirely.

I am way out of my comfort zone here, however my understanding is that the validation step used by Mann et al requires that the proxy be used in the orientation it has, or else one must conclude that it is contaminated by the modern climate signal and exclude it altogether. Mann did both, finding that his conclusions are not sensitive to its inclusion.

I also am no expert on this, but I can say that it’s been sliced and diced all the ways you could ever want here on CA.

Before I go read your link I want to say that I am not disputing your characterization of Mann’s methodology. I am saying that it’s been reported extensively (NOT at RC, of course) that if Mann excludes Tiljander, he INcludes bristlecones, and vice versa.

And I am saying that there appears to me to be a climate signal in Tiljander data pre-18th century, which if they are included in the average, may augment both the MWP and the LIA.

Having read the comment by AMac that you link to, and the response by Gavin Schmidt, I see nothing that contradicts what I have written.

Schmidt, in his statement:

“There is no other possible reconstruction that would use the proxy in another orientation. It is either in the way it was, or it isn’t included at all. Both options were published together in the PNAS paper. [Therefore, ] [n]o correction needed.” [1]

as usual fails to address the real issue, which is whether there is useful data prior to the period of contamination which would regress in the other direction (I believe there is) and whether it should have been possible for Mann to include this earlier data while excluding the later data (I believe it should have been.)

If either of my beliefs is wrong, then your point which I was criticizing might deserve a second look.

However, as I suggested, these two issues have been analyzed ad nauseam by CA and some of its regular commenters, whereas I have not seen any indication that RC moderators have thus far been willing to allow it to be given fair treatment on their blog.

So, I am comfortable with my beliefs at this time. I recognize that this is not the most scientific approach, but since I am not a professional researcher, I have that luxury, whereas I do not always have the luxury of mounting a major project to try to falsify such beliefs. That is why I am critical of Mann and his collaborators who have put out what is obviously an insufficient effort. And I consider myself at liberty to conclude that a sufficient effort by Mann would apparently show results that are at odds with what he would like to show.

RTF

Steve: when we looked at the Mann algorithm in late 2008, we noticed that the algorithm permitted some proxies to be used in opposite orientations. Mann does versions calibrated on both 1850-1949 and 1896-1995. If a given proxy in a certain class is positively correlated to temperature in one period and negatively correlated in the other (and this happens for some “proxies”), it has opposite orientations in the two reconstructions.

Do you think it’s possible that there could be a ‘trading of hostages’ here..? Where you agree that you mis-spoke in that you were looking for a reconstruction that was free of tree-rings AND Tiljander, etc. in exchange for Gavin agreeing that he jumped on your omission to grandstand a little bit (perhaps avoiding what he knew you really meant?)..?

I realize that last part is going to be an impossible journey, as I suppose no scientist would ever want to respond to a question that he is forced to improve before answering (especially one that’s less likely to be favorable rhetorically).

But, at the very least…there could be some progression down the line of the idea that maybe there isn’t a reconstruction out there of the entire Earth’s temperature anomaly profile out to 1000 years that omits these genuinely contested data sets. (which, the discovery of one would be a BIG help in trying to settle some of these things).

Steve: Judy, here is my emulation of Mann’s no-Tiljander no-dendro reconstruction in its AD800 step. I do not in any sense argue that this is “right”. However, I don’t think that this can be reasonably described as a Hockey Stick or that it provides convincing evidence on medieval-modern differences. And this is before parsing the proxies that make up the figure, some of which have hair on them.

“However, I don’t think that this can be reasonably described as a Hockey Stick or that it provides convincing evidence on medieval-modern differences.”

Perhaps if you actually plotted modern temperatures (i.e. off the chart)? Hiding the incline? ;-)

Steve: we’re talking about the performance of the proxies here. I’m not suggesting that any Mannian squiggles actually mean anything. If temperatures are pasted on, then you will get, as you are aware, a divergence problem.

I thought the divergence problem was more a characteristic of dendro proxies? It is not the most hi-res graph in the world, however Mann’s cyan line does not seem to diverge significantly from his red instrumentals.

I see nothing wrong, if performance of the proxies is indeed the topic, with displaying the proxy alongside the quantity it is meant to be a proxy for, in this case the NH temperature, currently showing an anomaly of >1C in the Hadley crutem3nh used by the paper…

Steve: there are so many problems in the methodology that it’s hard to describe them all. Look at contemporary posts on Mann 2008. If you do ex post screening and other tailoring on enough proxies, you can get decent matches in an instrumental period with red noise. There is no statistical analysis in Mann 2008 of a recognizable type. It is all “alternative” statistics and is very laborious to parse. If you doubt that it is total bilge, I commend the 2008 threads.
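The red-noise screening claim can be demonstrated directly. The sketch below is a generic illustration, not Mann’s actual procedure: screen a large set of pure AR(1) noise series against a rising “instrumental” target, keep the passers, and their average tracks the target through the calibration period despite containing no climate signal at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, calib = 1000, 600, 100   # last 100 "years" = calibration
phi = 0.9                                    # AR(1) persistence

# Generate AR(1) red-noise "proxies"
noise = rng.normal(size=(n_series, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]

target = np.linspace(0.0, 1.0, calib)        # rising "temperature"

# Ex post screening: keep only series that happen to correlate with the target
r = np.array([np.corrcoef(p[-calib:], target)[0, 1] for p in proxies])
passed = proxies[r > 0.3]

recon = passed.mean(axis=0)
r_recon = np.corrcoef(recon[-calib:], target)[0, 1]
print(f"{len(passed)} of {n_series} red-noise series passed screening")
print(f"calibration-period correlation of their mean: {r_recon:.2f}")
```

The selected subset “calibrates” well by construction; outside the calibration window the same average reverts toward zero, which is why screening statistics computed on the calibration period alone prove nothing.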

Further down you exhort readers not to make too much of this graph. Nevertheless, it shows a cold Dark Ages, a warm Mediaeval period, a cold Little Ice Age, and a warm Modern period. So it at least has a basic credibility based on corroborating evidence.

Still, if you tell me how little data the graph depends on, and that it’s not enough, then I might agree that it’s a mere bagatelle.

sod says: “[R]emoving multiple proxies from a reconstruction might have more effects, than just changing the shape of the curve.”

Again a detractor conveniently disregards my point that the Tiljander proxy did not have to be completely removed.

Mann and supporters are falling all over themselves to claim that it was impossible to include part of it, because the part that would need excluding was needed to do a calibration.

Totally absent is any explanation for why they need to calibrate the Tiljander data to the temperature record.

AMac has repeatedly pointed out that with the varves, the question of which direction of the measurement (up or down) corresponds to higher temperatures is straightforward and not controversial. Thus, this suggests to me that Mann and co-authors should not have any problem with running a correlation between the Tiljander proxies and other proxies which have already passed the researchers’ temperature-correlation tests.

I see no reason to think that such an approach is not the best one.

The fact that Mann’s supporters talk around this question and try to deflect attention from it suggests to me that my thinking is correct.

[I was in the middle of typing when Clarke responded…I didn’t see that particular image that omitted all those contested data sets, I must have missed it somehow.]

I stand corrected, then, that this exists, though I really wish we could have a full-earth reconstruction going back this far (especially given all the present-day arguments surrounding Arctic ice loss vs. Antarctic ice gain).

It’s a good thing there are ‘auditors’ out there like MM. It appears some of these papers get the ‘review they never had’ and probably should have, given the later figures and revisions in subsequent papers that might not have been as strongly motivated were it not for their evaluation.

And no, I don’t think auditors should be required to suggest their own alternatives or reconstructions, but it would certainly be nice if someone would…because you don’t want a situation where one group is just working on what IS, and the other is just taking that info, and trying to figure out what IS NOT.

A mistranscription by the journalist, or misspeaking by the interviewee?

I’m not sure that I agree that a press interview is nearly as ephemeral as a blog post. The FT is a pretty serious newspaper, read by a lot of key players in commerce and in government. It is an unwise and ill-prepared interviewee who does not recognise this and frame their remarks accordingly.

Where blog posts may miraculously disappear, the FT has a wide paper-based circulation, and once remarks are committed to this medium they tend to stick.

I’m sure that nobody in climate science would commit such elementary misunderstandings of the terms of their public utterances.

But then again I did hear Watson say at the Climategate debate that he had not read the relevant e-mails..and Davies caught out by the chronology of an inquiry he had commissioned and was there to defend…….

Bishop Hill notes a review of Hockey Stick Illusion just out. Many commenters are raving about it and I agree, it’s by far the best I’ve seen, incisive and informative. It clarifies many of the issues raised on this thread, including the rubbish content of Mann 2008. Judith Curry do go and look at it!

There’s not a lot of factual content here to check, however Dawson utterly misrepresents the NRC (North) panel as performing a fatal hatchet job on MBH. So where North wrote …

“It can be said with a high level of confidence that global mean surface temperature was higher during the last few decades of the 20th century than during any comparable period during the preceding four centuries. This statement is justified by the consistency of the evidence from a wide variety of geographically diverse proxies. Less confidence can be placed in large-scale surface temperature reconstructions for the period from A.D. 900 to 1600.” (Which is consistent with the grey error-bars in MBH.) Dawson exaggerates this into

“it acknowledged that the Hockey Stick’s depiction of temperatures before 1600 was invalid”.

Huh? Dawson then selectively quotes the NRC report on uncertainties and incorrectly asserts that it found MBH’s “reliance on single validation statistics was unacceptable”, but he skips over the central conclusion:

“The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on ice caps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years.”

(from the Summary)

Dawson then repeats the false straw man that the Hockey Stick is a result of the decentred PCA condemned by Wegman, when using ‘conventional’ PCA makes no significant difference. Dawson repeats the falsehood that ‘The Hockey Stick graph that had appeared in the 2001 report was conspicuous by its absence’ in AR4, when it’s right there alongside more recent studies. On Mann 2008 Dawson writes “It was noted that the R2 statistic was absent, despite the fact that the North committee had agreed that it was required to verify the reliability of a reconstruction”. Here is what the North report actually says about r2: “However, r2 measures how well some linear function of the predictions matches the data, not how well the predictions themselves perform. The coefficients in that linear function cannot be calculated without knowing the values being predicted, so it is not in itself a useful indication of merit”

and so on and so forth. If this really is an abridgement of the book, and the book contains the same errors, then it is hardly a strong recommendation.

Still no answer to my simple question.

Steve: Eduardo Zorita characterized the NAS Panel report as the most severe criticism of MBH that was possible for them. At the House subcommittee, North was asked whether he disagreed with anything in the Wegman Report. (As was Bloomfield.) Under oath, they said no.

“In January 2009, Windschuttle was hoaxed into publishing an article in Quadrant. The stated aim of the hoax was to expose Windschuttle’s alleged right wing bias by proving he would publish an inaccurate article and not check its footnotes or authenticity if it met his preconceptions”

Re: Phil Clarke (Jul 28 03:51),
Phil, it appears your simple question is about the inaccuracy of Judith’s statement vs the accuracy of Gavin’s response, and which version is present in the book. And your claim about Gavin’s response is that he’s correct, and that the no-dendro, no-Tiljander version is “utterly consistent with” this conclusion of the paper:

“Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used. If tree-ring data are used, the conclusion can be extended to at least the past 1,700 years, but with additional strong caveats. The reconstructed amplitude of change over past centuries is greater than hitherto reported, with somewhat greater Medieval warmth in the Northern Hemisphere, albeit still not reaching recent levels. “

Steve M has shown that your basic assertion of the data being “utterly consistent with” the paper’s conclusion is itself wrong. Steve understands the data better than Judith, better than Gavin, better than Mann and better than you.

Doesn’t appear helpful to me to be beating people up about whether they got this or that detail incorrect, when almost everybody got the big picture wrong.

The lessons to me in this:
* Remember to be humble, willing to admit mistakes. I don’t see the Team doing that. I do see folks here doing that.
* It’s hard to keep track of the pea under the thimble. In this case, it’s dendro+tiljander that leads to hockeystickness. Steve got this right. You still do not appear to have seen it.
* Be cautious when people pose “gotcha” questions. It’s like asking “when did you stop beating your wife?” :)

2. The graph excludes modern instrumental readings (aka ‘recent warmth’), so far the NH average for 2010 is >1C, well off the scale. No version of the MWP so far plotted comes close. Consistent, then.

Steve: I think that your question’s been answered. In respect to the published article, Mann et al 2008, it is my view that the published graphics in the original article and SI, including amendments in 2008, did not include a reconstruction showing a hockey stick that did not involve either strip bark bristlecones or contaminated Tiljander sediments. Andrew Montford’s point in HSI – a point previously made at CA – was that the bristlecone rebuttal used Tiljander sediments (contaminated and upside down); and that, to show that contamination of Tiljander sediments “didn’t matter”, they used strip bark bristlecones. The point in the book was right. Judith’s point is consistent with the book.

In my opinion, knowing of the contamination of the Tiljander sediments and the problems with strip bark (e.g. NAS panel recommendation), it was the responsibility of the authors not to use either, not the responsibility of critics to spend weeks trying to figure out all the weird things in the Mann 2008 algorithm in order to calculate a non-dendro non-Tiljander result if that was what was of interest.

In November 2009, just before Climategate, Mann placed a non-Tiljander non-dendro reconstruction on his website. He did not issue a Corrigendum at PNAS nor did he publish a notice of the new information at realclimate. That Mann did so in late 2009 long after the fact did not refute the claim in respect to Mann et al PNAS 2008.

It’s very misleading for Gavin to pretend that a website addition in November 2009 was part of the corpus of Mann et al 2008 that should have been considered in CA commentary on Mann et al 2008 in late 2008 (which was what Montford was reviewing).

This website version came just before Climategate and has not been analysed other than a quick note at the time. As I observed previously, the no-Tiljander no-dendro version does not have a classic HS shape, instead having a pronounced MWP – though I place little weight on any of these squiggles.

“I think that your question’s been answered …. Judith’s point is consistent with the book. ”

Pea and thimble time. Dr Curry stated “The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.”

As Gavin pointed out, reading the paper shows that both points are demonstrably wrong, so I was curious as to whether this was a reflection of the book or not. Apparently not, and Montford in fact deploys the argument that there was no reconstruction without Tiljander AND tree-rings, which is true of the original paper. Thanks for clearing that up.

Dr Curry’s claims were not her personal arguments, were correct in spirit but incorrect in detail and ‘consistent with the book’ (as opposed to say, actually in the book?). Is this that ‘post-normal science’ I’ve been reading about? ;-)

So for what they’re worth, we have the various NH reconstructions in Mann 2008, we have his no-dendro, no-dubious-proxies posted later, and the no-dendro, no-Tiljander version constructed here, for which thanks. I think I am correct in saying that the reconstructed temperatures for the last 1,000 years in none of these plots rises above about 0.6C below the current NH anomaly of about 1C?

Steve: Judith’s comment mentioning PCA in connection with Mann et al 2008 was an error that derived from her recollection, not from the book. Her point that there was something wrong with both alternatives in Mann et al 2008 was, I think, “correct in spirit”, but in these sorts of debates, it is important to be correct in letter, as any such missteps are pounced on to divert attention from the beam in the Team’s eye, as happened here.

There is no basis for your statement that Mann’s later version contains “no dubious proxies”; it still contains dubious proxies, and the method will produce hockey sticks from red noise (though for different reasons than short-centered PCA – Jeff Id has written lots about this). So there is little meaning in any of these squiggles. BTW, Mann’s Nov 2009 version still uses the Baker speleothem reconstruction upside-down, for example.

Too bad Tamino’s review was posted during a period when I don’t have time to put into blogging. I felt obliged to pipe in since I challenged RC to do the review. My mistake has been an unfortunate distraction. Which wouldn’t have been a distraction if this mistake hadn’t been used to mischaracterize and discredit my broader points.

Mistakes happen, and they shouldn’t be a big deal when they are identified, acknowledged, fixed. However, the politics of expertise that is the basis of the consensus demands that the experts be oracles and never admit mistakes. Ravetz’s statements about the “radical implications of the blogosphere” are challenging the power politics of expertise that these guys are playing. Here’s hoping that a saner environment for dialogue and argument can evolve in the blogosphere.

Judith, I’m not as sold as you are on Ravetz’ “post-normal science”. Or I’d be inclined to frame it differently. In non-controversial fields where there are practical applications, extensive analyses often occur in the period post normal-science. It’s called engineering. If you’re building a chemical plant, you don’t go directly from little articles in Nature and Science, regardless of their merit, to construction of a plant. There are lots of parameters to be checked and engineered.

It seems to me that the AGW dispute – as between say Lindzen and others – is on the amount of water cycle feedback and that study of this phenomenon would benefit from fewer articles in Nature and Science and more technical reports of the type that engineers do. Post normal science.

Steve, I don’t buy postnormal science either. You might argue that there is a postnormal environment, but science isn’t postnormal. But I think the idea of extended peer communities is very relevant for scientific issues that have policy relevance, and preceded the postnormal science idea. As for the radical implications of the blogosphere, well maybe from where you sit you can see it, but what you have done has radically changed the politics of expertise.

Here’s a blurb on this from a paper that i’m writing, that may clarify my point:

When the stakes are high and uncertainties are large, Funtowicz and Ravetz (1993) point out that there is a public demand to participate and assess quality, which they refer to as the extended peer community. The extended peer community consists not only of those with traditional institutional accreditation who are creating the technical work, but also those with much broader expertise who are capable of doing quality assessment and control on that work.
New information technology and the open source movement are enabling extended peer communities through the rapid diffusion of information and sharing of expertise, giving hitherto unrealized power to the peer communities. This new-found power has challenged the politics of expertise (Fisher 1989), and the “radical implications of the blogosphere” (Ravetz) are just beginning to be understood. Climategate illustrated the importance of the blogosphere as an empowerment of the extended peer community, whereby “criticism and a sense of probity were injected into the system by the extended peer community from the blogosphere” (Ravetz WUWT post).

Too bad Tamino’s review was posted during a period when I don’t have time to put into blogging. I felt obliged to pipe in since I challenged RC to do the review. My mistake has been an unfortunate distraction. Which wouldn’t have been a distraction if this mistake hadn’t been used to mischaracterize and discredit my broader points.

Sorry Judith, but it is you alone who discredited your broader points.

There are multiple other errors in your posts on RC. The Mann 08 story simply was the most obvious one.

People did focus on your error because you failed to admit it immediately. You are still being vague about it.

Your defence is also weak. Lack of time is a false claim: you have written a lot of further comments, mostly with very little real content.

Lack of access to sources is also a false excuse. Mann 08 is available on the web.

You could have checked it within minutes and just admitted your error.

And instead of writing another post that does not focus on the facts at hand, you should do exactly that now!

PS: the biggest other error in your posts was that you did not detail any error by Tamino, just vague claims without any substance. This also is something that you should address as soon as your work allows!

Mistakes happen, and they shouldn’t be a big deal when they are identified, acknowledged, fixed. However, the politics of expertise that is the basis of the consensus demands that the experts be oracles and never admit mistakes. Ravetz’s statements about the “radical implications of the blogosphere” are challenging the power politics of expertise that these guys are playing. Here’s hoping that a saner environment for dialogue and argument can evolve in the blogosphere.

The only real mistake that we saw was your mistake, and you have failed to acknowledge and fix it right up till now.

I don’t understand why you write general stuff like this. It is a perfect description of your own errors.

sod, my point is that the whole situation, the insane hockey stick conflict, is a general problem that deserves general discussion.

The technical details of the science are of much less relevance in terms of “truth”, because they change monthly as new science comes out (this is a very young field). The other statements in my review are apparently accurate in terms of representing Montford’s points, which was the point of my review: a summary of Montford’s points and what the book is about. Tamino failed to do that, and said things that had nothing to do with what was in the book, and no, I have no intention of cataloguing each of these and going back through the book, which I don’t have with me, to make sure that I don’t make a “mistake” in this mostly senseless and irrelevant exercise, in my opinion. Each of these points, in terms of the actual science or “truth”, is being debated by both sides, as evidenced by the traffic here and at RC. I did provide clarification re the AR4 pullback, which was related to warmest year, warmest decade.

My point about integrity and Feynman is that integrity isn’t about scientific “truth”, which is still being argued in this field, or about who made mistakes. Rather, integrity is about the process of science. Science isn’t a result, it’s a process. On the RC thread, I have been subjected to ad hominem, appeal to motive, twisting of my arguments, etc. Which reinforces the point of Montford’s book that this kind of behavior escalates conflict and doesn’t move the science forward.

I am still checking here and engaging here a bit, but I really do not have time to wade through all that stuff at RC and CP, where there is way too much noise relative to any possible signal. I have told people at RC and CP to email me if they have a substantive point or question, and I will get to it.

If you have a specific technical point that you want me to address, let me know. If it is about some nuance of some hockey stick answer, I will let gavin/tamino argue it out with McIntyre. My main point is that people should read the book. Hopefully this rather insane exchange will motivate people to do that.

“My main point is that people should read the book. Hopefully this rather insane exchange will motivate people to do that.”

Speaking just for myself, Tamino’s review, the highly revealing and error-strewn response to it, and the risible ‘Quadrant’ review moved the chances of me never reading this book from ‘likely’ to ‘very likely’.

Steve: Too bad that you are deterred by Tamino’s error-strewn review. Nor has anyone provided a substantive response to any of my criticisms or responses to Tamino’s nonsense.

The exchange _is_ rather insane, and it is so because Curry’s critics fixate on one ill-informed inaccuracy and deflect many attempts to move on from it.

Of course, if one wants to hold Curry’s feet to the fire to try to extract more from her, that is their right. But don’t let that be an excuse for avoiding discussion of other important topics! Otherwise, it eventually starts to look like just another misdirection.

Unfortunately, the masters of propaganda (who are the psychologists, the sociologists, the market scientists, the political scientists, and of course the lawyers) know all too well that misdirection, even voluminous misdirection, can be highly effective with a large percentage of one’s audience.

Do it enough, and you might be able to eventually prevail in the public mind in any dispute whatsoever, even without having any of the facts at all on your side!

The argument does not work on the typical skeptic who posts here. But, of course, there are many others who come here to read (just like I did for a couple years before posting). And many of these do not enter the debate equipped with a full briefing of the evidence on each side of a question. They are the prime targets for such tactics.

Then there are wavering academics who get the message that this is what will happen to them if they begin to voice concerns with the “party line”. And so the message has a chilling effect on them. This is the other main reason for trying to keep the focus on Curry’s mistake.

In a horrible sort of way, Curry is actually kind of helping them make their point (not intentionally, of course). What is needed is another scientist of her standing, or similar, who will likewise speak up for the truth and take the inevitable lumps and black eyes that result.

I’ve no doubt there will be another. But who will it be, and will they jump into the ring too late to make a meaningful difference to the outcome of the debate?

I have read the Bishop’s “The Hockey Stick Illusion” and it is a very good read. I am impressed by his command of the history of the IPCC process, the contributions of the various players, and Steve McIntyre’s involvement. Years from now, this book will be seen as one of the seminal works in the history of the science of climate change.

Sod, one other point. Montford’s book is a history-of-science tome, not a journal article. In reviewing it like a journal article or a blog post on climateaudit, Tamino’s review missed the mark, independent of whether his scientific arguments are somehow more “true” than those made by Montford. People are so caught up in this silly war that endless games of gotcha are played over nuances of points in a science that is highly uncertain. I’m saying this is a dumb game and I’m not going to waste my time playing it.

I finally understand what is going wrong in this exchange, illuminated by SOD’s queries at climateaudit. To those of you who haven’t read Montford’s book, it is a history-of-science tome. It starts out with the rollout of the TAR report and how McIntyre got interested in the problem. It ends with a hastily added chapter on Climategate, which broke just as the book was going to press. It’s sort of a who-said-and-did-what-when, which is well documented, with explanations of some of the technical details, along with a narrative that reflects on these events. The who-said-what-when is accurate as far as I can tell, as is the explanation of the scientific details. The narrative is of course open to some spin.

Tamino reviewed the book like it was a review of hockey stick science and another salvo in the RC vs CA war. This isn’t what the book is about, which is why I gave Tamino the C- grade for his review. So given what the book is about, it is not too hard to imagine what I meant when I said Tamino’s review was inaccurate: it simply did not portray what Montford said, nor did it catch what the book was all about. I was not in any way attempting to counter Tamino’s “review of the science”. Like I’ve said 10 times before, this topic is not my expertise; it is an immature field with many uncertainties, so I am not motivated to dig into any scientific nuances here and debate them publicly in a forum like RC that has a great deal of hostility on this topic owing to pent-up frustration, battle scars, whatever.

The point of this history of science is to understand how this happened and why. In reading it, I saw many points where I said, “if only something slightly different had happened, this would never have occurred.” This conflict is fundamentally different from a merchants-of-doubt conflict. Surely we all want to avoid such conflagrations in the future. So the issues that Montford raises, and that I have raised in my posts, are general issues about the integrity of science: how to avoid conflicts, how to deal with mistakes, how science should be conducted when there are a lot of uncertainties and the field is immature, when the situation is politicized, etc.

So I have no intention of debating any aspects of the science on this topic, in spite of the fact that most people on this thread thought the point of all this should be defending Mann’s science (and Ammann’s, etc.) and identifying scientific “truth” in all this. This is highly uncertain science in a young field. So get over it; we aren’t going to get “truth” on this anytime soon. The challenge is to avoid these crazy conflicts and move paleo reconstructions forward.

Jeff, so far I have avoided getting sea sick, and it isn’t easy. I am getting emails from some very serious and well connected people that are extremely concerned by what they are reading on CP and RC. Their strategy is backfiring I think, maybe rationality will prevail at some point. I recall a similar (but not as bad) reaction to my trust post over at WUWT, and many of them eventually became more reasonable. So this is a test: are the RC posters less rational than the WUWT crowd? So far the answer is yes, but it takes time to settle down. Unfortunately I ended up being the sacrificial victim in all this, but I have more lives than a cat :) Somebody who actually gets paid to be a scientist has to speak up in defense of science, I guess.

2. The graph excludes modern instrumental readings (aka ‘recent warmth’); so far the NH average for 2010 is >1°C, well off the scale. No version of the MWP so far plotted comes close. Consistent, then.

…is illogical. What gives us any ability to conflate proxy and instrumental temperature? What gives us confidence that proxies are able to predict instrumental readings with any significant accuracy at all, let alone recent readings? So far, we’ve seen very little ability to accomplish this.

your claim is flat-out false.

we have a period of overlap, between proxies and a very accurate temperature record.

but the more important point is a simple one: when scientists label the y-axis of their graphs with the values in “°C”, they can be compared to measured temperature. simple, eh?

A few questions that may show the issue is more difficult than labelling axes with the letter “C”.

What sort of smoothing has been applied to the instrumental data? How is the end-point smoothing problem taken care of? And, the elephant in the room: is the proxy response linear in the modern era? The divergence problem shows that this is not a problem of axis labelling.
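The end-point question raised here can be made concrete with a generic sketch (an illustration of the general problem, not any particular team's actual smoothing procedure, and with made-up numbers): the same centered moving average gives different values near the ends of a series depending on how the missing data beyond the boundary are padded.

```python
import numpy as np

def smooth(series, window, pad):
    """Centered moving average; `pad` picks the end-point treatment."""
    half = window // 2
    if pad == "reflect":      # mirror the series about each end point
        padded = np.concatenate([series[half:0:-1], series,
                                 series[-2:-half - 2:-1]])
    elif pad == "repeat":     # extend with the boundary value
        padded = np.concatenate([np.full(half, series[0]), series,
                                 np.full(half, series[-1])])
    else:
        raise ValueError(pad)
    return np.convolve(padded, np.ones(window) / window, mode="valid")

# A simple rising series: the interior of the smooth is identical under
# both paddings, but the final smoothed value depends on the choice.
s = np.arange(100, dtype=float)
r1, r2 = smooth(s, 11, "reflect"), smooth(s, 11, "repeat")
print(r1[-1], r2[-1])  # the two end-point treatments disagree
```

The interior points agree exactly; only the last `window // 2` values diverge, which is precisely where the instrumental and proxy records are being compared.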

we have a period of overlap, between proxies and a very accurate temperature record.

I’d take exception to the temperature records being “very accurate”, but I’ll let it pass for argument purposes. The fact is that there are some well known anomalies between (among?) the proxies and the temperature during the recent portions of the overlap. That’s what the whole “hide the decline” brouhaha was about. And that doesn’t even include the updated proxies which disagree with earlier proxies but have been withheld without explanation.

” MR. STUPAK. Thank you, Mr. Chairman. Dr. Wegman, in your
report you criticized Dr. Mann for not obtaining any
feedback or review from mainstream statisticians. In
compiling your report, did you obtain any feedback or
review from paleoclimatologists?

DR. WEGMAN. No, of course not, but we weren’t addressing
paleoclimate issues. We were addressing–”

And, as you know Gerald North later said of the session: “I was asked to appear before a small group of senators about 15 years ago and that was very encouraging. But this time proved to be a disappointment. This issue is so polarized politically that it is impossible to simply inform the elected representatives. I was definitely under the impression that they were twisting the scientific information for their own propaganda purposes. The hearing was not an information gathering operation, but rather a spin machine.”

“There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al., results were not ‘wrong’ and the science was not ‘bad’. They simply made choices in their analysis which were not precisely the ones we (in hindsight) might have made. It turns out that their choices led them to essentially the right answer (at least as compared with later studies which used perhaps better choices).”

Why would a statistician, called in to opine on statistical issues in a climate paper, need anybody other than himself to render a judgment or assessment?

North’s comment is both irrelevant and wrong. Other people using crude methods and arriving at the correct answer has nothing to do with Mann using the wrong method (and refusing to admit that he did so). North also assumes that Mann got to the right answer anyway because “later studies” perhaps used “better choices”; we know that virtually all other reconstructions had the same issues with the source data. This has been stated virtually hundreds of times here by Steve: there are no independent reconstructions; they all use overlapping data, specifically the only data that “matters” (bristlecones/Yamal/Tiljander).

Why would a statistician, called in to opine on statistical issues in a climate paper, need anybody other than himself to render a judgment or assessment?

Consulting statisticians would definitely wish to familiarize themselves with the background context of a statistical application which they were examining. This can often be done by reading background books and papers and in cases where an issue appeared to be unclear, they might choose to contact someone knowledgeable about the subject.

Wegman did background reading on the subject and paraphrased some of the relevant points from such readings in his report. In fact, Deep Climate ignorantly tried to pass this off in a series of posts as “plagiarism”(!) on Wegman’s part. The fact that the report clearly identified the problems with Mann’s work indicated that he did not need any interviews with paleoclimatologists.

North’s statement on “choices” made by Mann displays an astonishing naivete about the situation. Making choices with regard to proxies and their use is one thing. However, “choices” in the context of statistical applications are an entirely different matter. Unless one fully understands the mathematical background of a statistical procedure, making changes to the procedure (which has been derived under a set of assumptions within which it can provide valid results) will not produce any “great discoveries”. In this area, such discoveries come from capable mathematical thought, with proof of their efficacy justified analytically.

By his own admission (and by evidence in his papers), Mann is not a statistician. The choice to short-center the proxies was one of those “choices” that backfired, because of the propensity of the changed procedure to produce hockey-stick output: not a “great discovery”, but an “unpleasant surprise” which went unnoticed by Mann et al.
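The propensity of short-centering to manufacture hockey-stick-shaped leading PCs can be illustrated with a toy Monte Carlo in the spirit of McIntyre and McKitrick's 2005 red-noise experiments. This is a hedged sketch with invented sizes and a crude home-made "hockey stick score", not their actual code, data, or benchmark:

```python
import numpy as np

def leading_pc(data, center_slice):
    """First principal component after centering each series on center_slice."""
    centered = data - data[:, center_slice].mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def hs_index(pc, cal):
    """How far the calibration-era mean of the PC departs from the rest,
    in units of the PC's overall spread: a crude hockey-stick score."""
    return abs(pc[-cal:].mean() - pc[:-cal].mean()) / pc.std()

n_series, n_years, cal, phi = 50, 200, 50, 0.9  # toy sizes; AR(1) persistence
short_scores, full_scores = [], []
for seed in range(30):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_series, n_years))
    proxies = np.zeros_like(noise)
    proxies[:, 0] = noise[:, 0]
    for t in range(1, n_years):  # trendless red noise stands in for proxies
        proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]
    short_scores.append(hs_index(leading_pc(proxies, slice(-cal, None)), cal))
    full_scores.append(hs_index(leading_pc(proxies, slice(None)), cal))

# On average, centering only on the "calibration" window inflates the
# hockey-stick score of PC1 relative to conventional full centering.
print(np.mean(short_scores), np.mean(full_scores))
```

On pure noise containing no signal at all, the short-centered PC1 reliably scores higher on this index, because series whose calibration-window mean happens to depart from their long-term mean acquire inflated variance and dominate the leading component.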

…Dawson utterly misrepresents the NRC (North) panel as performing a fatal hatchet job on MBH… Dawson then selectively quotes… incorrectly asserts…skips over the central conclusion… repeats the false straw man… repeats the falsehood… and so on and so forth… hardly a strong recommendation.

Still no answer to my simple question.

Last first. Judith Curry said

The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.

If she had said

The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of the effects of questioned tree rings and centered PCA.

she would have been correct in detail as well as in spirit. See my answer to Sod upthread to see exactly why.

As to your string of damning epithets: others here might also like to refresh their memories as to what the North panel said, and to whom, and why two very different pictures of the North panel arose in consequence, which is what I decided to get to the bottom of.

No. Even with the addition of your extra words (!), the claim is still materially incorrect.

BTW I note that your website still carries the quote ‘Sir John Houghton, first co-chair of the IPCC, said, “Unless we announce disasters no one will listen” ‘

Now, as I informed you some time ago at WUWT, Houghton said no such thing, and he regards the repetition of this falsehood as defamatory. Some might find this willingness to continue to propagate false information fatal to your credibility, and the exact opposite of true scepticism.

In the context of the Montford book, I see (from Amazon’s ‘look inside’) that it repeats David Deming’s allegation that a major person working in the area of climate change confided in him that ‘we have to get rid of the medieval warm period’. That scientist was Jonathan Overpeck. In one of the illicitly published mails, Overpeck states that he had never before heard of Deming, that he had no memory of ever emailing him, that he had no knowledge of ever using the quoted phrase, that he never would use it in the context being ascribed to him, that it was bogus, and that he found the whole thing rather upsetting. A decent investigative journalist, certainly anyone wanting to be taken seriously as a science historian, would at least mention this mail alongside the ‘quote’. From the pages available via Amazon’s ‘look inside’, apparently Montford does not. It seems anyone going to the book for a complete and balanced history is making a category error.

A decent investigative journalist, certainly anyone wanting to be taken seriously as a science historian, would at least mention this mail alongside the ‘quote’. From the pages available via Amazon’s ‘look inside’, apparently Montford does not. It seems anyone going to the book for a complete and balanced history is making a category error.

Actually, he does mention it, and as I happen to have the book, let me help the discussion by accounting for how he does it.

On pages 420-422, in the final chapter, “The CRU Hack”, Montford briefly covers the emails, as they were made available after the completion of the book. The relevant part of Overpeck’s email is cited in the text, and followed up with Briffa’s email on “presenting a nice tidy story as regards ‘unprecedented recent warming[…]'” and Mann’s words on “a good point earlier that [Overpeck] made [with] regard to the memo, that it would be nice to try to ‘contain’ the putative [Medieval Warm Period]”.

Montford continues:

It seems clear then that there was outside pressure on the scientists to ‘get rid of the Medieval Warm Period’, a pressure that in some cases at least, was not entirely unwelcome. And if future developments turn out to show that Overpeck did not make the statement attributed to him, it seems clear that he at least had indicated to his Hockey Team colleagues that he would be happy to ‘contain’ evidence of past warming.

(1) Houghton’s actual words. Thank you for your reference which I checked out. I will amend my web page accordingly. But I think you should know that Bishop Hill seems to have the last word on this – so I shall reference that instead. If you can find something that refutes Bishop Hill please put it up here. Thanks.

(2) Overpeck… I will similarly amend my web page since you show that the issue is one person’s word against another person’s word. Unfortunately the whole context makes me trust Deming rather than Overpeck though Overpeck may well have simply forgotten.

(3) You assert that Judith Curry’s [amended] claim is still materially incorrect. I’m not convinced. This assertion goes flat against all the evidence here, and STILL ignores her main point of doing good science (in the course of which, incorrect or misleading details can be expected, and will be corrected… in due course, when the main issue is seen to be getting attended-to). I don’t want to argue further with you on this one.

I am afraid BH’s attempt to spin the fact that a completely fabricated quote had been prominently circulated, by pointing out that Sir John said something vaguely similar, just reinforces my already low opinion of his journalistic integrity (another example would be the selective and inventive listing of the purloined mails).

As pointed out here “The first one — a false one — has been used by deniers to charge that the IPCC knowingly exaggerates the risks of global warming in order to hype the issue and get attention. The second states that its human nature to ignore problems until they reach critical mass.”

He also runs a hearsay quote at the start of his book in the knowledge that the author denies having written it, then tucks the denial away in a chapter on the leaked mails. The quote would have been a zinger if true, so I guess it was just too good to lose. But one expects better of a ‘science historian’.

Re: Phil Clarke (Jul 29 12:45), Clearly you have not looked at the Bishop Hill reference I gave, which, if you take the head post together with the full comments thread, is highly nuanced: it generally advocates apology by skeptics, stands up for Houghton’s good points, and only in the context of all that notes his underlying alarmism, accurately quoted this time. BH notes carefully the difference, not the “vague similarity”: to wit, Houghton did NOT encourage people to lie.

So much for balance in your remark.

Your final paragraph has now been answered elsewhere here by the Bishop.

These tactics are common for merchants of doubt. They are relatively uncommon for the watchdog auditors. I suggest that you judge your posts and comments by these standards: scientists allegedly defending themselves against attacks on their integrity.

“It’s not dishonest; but the thing I’m talking about is not just a matter of not being dishonest, it’s a matter of scientific integrity, which is another level. . . [A]lthough you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. . . The first principle is that you must not fool yourself–and you are the easiest person to fool. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.”

Individuals conducting science at the policy interface can inadvertently compromise their scientific integrity for what they perceive to be good motives, such as supporting a worthwhile cause or preventing what they believe to be misinformation from being published, included in an assessment report, or featured in the media. Whereas such actions can provide temporary political advantages or bolster the influence of an individual scientist, the only antidote in the long run is to let the scientific process take its course and deal with uncertainty in an open and honest way. When scientists and defenders of the scientists employ the same tactics as the merchants of doubt, they lose the moral high ground (and weaken their stance in political battles that depend on consensus and the judgment of experts).

The reason Montford’s book is relevant is that it shows how conflict ensued from the overconfidence of the scientists. By incorrectly assuming that this was a merchants-of-doubt style attack, the scientists retaliated with merchants-of-doubt style tactics, which inflamed the situation into something that remains volatile to this day, as evidenced by these threads. So I am asking the scientists and the denizens of RC and CP to look in the mirror and evaluate their behavior by the standards of Gleick and Feynman. And the hockey stick conflict was avoidable; that is the real lesson of Montford’s book.

Very nice Judith. One can actually act properly by being a total narcissist–just ask 1)will this action make me look foolish, mean, illogical, or ignorant? and 2) did I possibly bias my result such that someone could see a bias or find a mistake?
That is, it is not necessary for a scientist to be an angel, and self-interest can lead to the high ground.

Dr. Curry,
I can only say that the responses to your posts at RC and CP have left me stunned. Totally stunned. The unbelievable nastiness towards you has made it very easy to raise doubts in anyone’s mind: just tell them to read the comments at RC and CP and then here and at BH.

Re: Kan (Jul 28 22:00),
It is no longer surprising when one realizes that Judith is correct about the implications of “conducting science at the policy interface” and “temporary political advantages.”
The challenge we face is the growing politicization and policyization of science. I don’t see structures in place to protect against this, nor against the inevitable consequences.
The fact that “consensus” is now an acceptable form of measure in some scientific arenas is troubling.

A decent investigative journalist, certainly anyone wanting to be taken seriously as a science historian, would at least mention this mail alongside the ‘quote’. From the pages available via Amazon’s ‘look inside’, apparently Montford does not. It seems anyone going to the book for a complete and balanced history is making a category error.

I’m not sure we should be paying any attention to someone who would question my integrity in this way without even having read the book. Overpeck’s denial of having sent the email is on page 421.

“I’m not sure we should be paying any attention to someone who would question my integrity in this way without even having read the book. Overpeck’s denial of having sent the email is on page 421.”

So not alongside then, not footnoted, not caveated? My error was in expecting that the quote and the denial by its author that he ever actually wrote the words would be adjacent, or footnoted. Or something. Not tucked away at the other end of the book.

Well, actually I guess I would expect that the quote would not be used at all in those circumstances. ….

Kan and Mr Pete, to follow up, here is my latest salvo to RC. Mr Pete’s message suggests another post on the politics of expertise.

————

At the request of the many emails I’m getting, here is one more salvo, trying to remind people of how science should be done, how arguments should be conducted, how disagreements can be resolved, and how conflicts can be avoided.

Charles Sanders Peirce (from the Wikipedia) outlines four methods of settling opinion and overcoming disagreements, ordered from least to most successful:
1. The method of tenacity (sticking with one’s initial belief and trying to ignore contrary information).
2. The method of authority, which overcomes disagreements but sometimes brutally.
3. The method of congruity or “what is agreeable to reason,” which depends on taste and fashion in paradigms.
4. The scientific method whereby inquiry regards itself as fallible and continually tests, criticizes, corrects, and improves itself.

Much of the disagreement is often about ambiguity of statements, and these ambiguities are easily resolved if the situation has not escalated into animosity and conflict. In the course of rapid-exchange blogospheric discourse, people tend not to present formal arguments with carefully crafted premises and conclusions drawn using a specified logic (including myself and the scientists that host this blog); it’s more in discussion mode. I would personally be interested in a blog that consisted only of formal arguments, queries about ambiguities, and formal rebuttals. But that is not what we have here. Further, when someone is trying to develop a specific thesis in a blog comment stream, other comments home in on ambiguities in one of the premises as an attempt to dismiss the entire thesis. Yes, let’s try to eliminate the ambiguities, but more importantly let’s try to understand the main thesis in someone’s points. In the blogosphere, when all this is laced with heavy snark and appeal-to-motive attacks, it is very difficult to get anywhere.

A critical element in avoiding conflict, justifying a thesis, and understanding someone’s statement or thesis is to ask the question “What would have to be the case such that this statement/thesis were true?” And then both the proponent and examiner should ask the reverse question: “What would have to be the case such that this statement/thesis were false?” The general idea is that the fewer positions supporting the idea that the statement/thesis is false, the higher its degree of justification. Verbal ambiguities can easily be resolved in this way. And it’s a good way to clarify scientific debates also. This is called looking at both sides of the argument and actually trying to understand them. Kudos to those of you who wandered over to climateaudit to try to see what was going on over there and understand their arguments.

When there is a great deal of uncertainty and ignorance on a scientific topic (paleo reconstructions certainly qualify here), the problem arises when we have conflicting “certainties” from two sides. Conflicting certainties arise from differences in chosen assumptions and the natural tendency to be overconfident about how well we know things. Most of this conflict can be resolved by acknowledging and understanding the uncertainties. Conflicts about methodology can be more easily resolved than conflicts about scientific hypotheses (e.g. 1998 was the warmest year in the last millennium), although methodological issues are a key element required to support a scientific hypothesis.

Uncertainty is a complex beast, with multiple locations, different natures (imperfections of knowledge vs inherent variability), and different levels, ranging from the unrealizable ideal of complete deterministic certainty to total ignorance. The IPCC has not done a very thorough job of characterizing uncertainties. In the first IPCC assessment reports, the executive summaries included lists of “we are certain of the following” and “we have confidence that”, and they included a list of four broad areas of uncertainty. For the IPCC third and fourth assessments, Moss and Schneider’s (2000) guidelines were followed, with a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts. The actual implementation of this guidance in the TAR and AR4 WG1 Reports adopted a subjective perspective or “judgmental estimates of confidence,” whereby a single term (e.g. “very likely”) characterizes the overall confidence. Now, there have been all sorts of critiques of this method in the published literature, but let’s accept the method for the sake of argument.

With this in mind, let’s examine the following aspects of my statement in #167:

“3. The NAS North et al. report found that the MBH conclusions, and the “likely” and “very likely” conclusions in the IPCC TAR report, were unsupported at those confidence levels. How the hockey team interpreted the North NAS report as vindicating MBH seems strange indeed.”

This statement is ambiguous; it can be interpreted in several ways. The strength of the arguments in MBH, the IPCC, and the North report were each described verbally in different ways; I attempted a generalization using words that readers would identify as confidence levels. The ambiguity could have been resolved with a longer statement that was more clearly worded. Remember, the point of my summary was to describe the overall content and arguments in Montford’s book, to support my earlier statement that Tamino had missed much of what the book is about. This statement was just one statement in a post that included many points to support my thesis regarding missing elements in Tamino’s review; my intent was not to reproduce these arguments in any detail or immerse myself in the technical battle on this issue. This ambiguity in my statement is easily clarified, does not detract from my overall thesis, and does not in any way reflect on the accuracy of Montford’s book.

Now on to the real point. My statement below is correct and unambiguous.

4. A direct consequence of the North NAS report is that the conclusions in the IPCC AR4 essentially retracted much of what was in the IPCC TAR regarding the paleo reconstructions. This is the only instance that I know of where the IPCC has reduced a confidence level or simply left out a conclusion that was in a previous IPCC report. This is discussed in the CRU emails.

A reminder, it is this statement in the TAR that is omitted from the AR4:
“It is also likely that, in the Northern Hemisphere, the 1990s was the warmest decade and 1998 the warmest year.” The word “likely” denotes a confidence level of 66-89%. Gavin and I both agree that this statement is unjustified. We disagree on the significance of this high level of confidence in the TAR and of its retraction in the AR4. This is the only instance of a retraction in confidence from the IPCC. The statements that Gavin cites from Lindzen regarding uncertainties in clouds and aerosols are correct; the IPCC continues to acknowledge a high level of uncertainty and low confidence in these areas. The IPCC has never presented a statement of confidence at the very likely or likely level that has the words “cloud” or “aerosol” in it. The cloud-aerosol forcing/feedback includes much at the border of ignorance, which is acknowledged by the IPCC. But I argue that the ignorance surrounding global/hemispheric temperature over the past millennia, as inferred from paleo proxies, is even greater than for the cloud-aerosol issue. I also suspect that there will be further retraction of the confidence levels in the AR5 regarding global/hemispheric temperature reconstructions. This overconfidence in the IPCC reports on this topic is at the heart of the conflict described in Montford’s book.

So, going back to Charles Sanders Peirce and how to overcome disagreement: on the subject of Montford’s book, Tamino’s review, and the larger issue of the state of the science of paleo reconstructions, what have we seen over the past few days in the blogosphere? CP relies almost exclusively on strategies #1 and #2; my statements rankle Romm so much because I am an “authority” that he previously referred to. At RC we have seen a mixture of all four strategies, with a heavy dose of appeals to motive and ad hominem attacks. Given that the RC moderators reject many comments, it is not to their credit that they have been letting these through. BH tends toward #3 (they are very polite by blogospheric standards, and Montford is amazingly snark-free, but the blog is not heavy on scientific arguments). CA scores highest on #4: they stick mostly to arguments, evidence, and the identification and clarification of ambiguities (you can agree or disagree with them, but theirs are mostly evidence-based arguments, and ad homs and appeals to motive are snipped). As for snark, it is more evident in the main posts at CA than at RC, although the inline comments from McIntyre are relatively snark-free, whereas those from Gavin are not. Snark is endemic to the blogosphere; it makes things entertaining, I guess, to some anyways. But it neither adds to nor detracts from scientific arguments; it merely distracts. So readers interested in the arguments should filter it out, and not keep tallies based on snarky gotchas that are minimally relevant to the argument.

And finally, since I am a glass-half-full kind of person, I am trying to see what we might have gained from this exchange across the four blogs. If I wanted friends, I would stick with Facebook. The insults are irrelevant to me and should be irrelevant to any thinking person’s assessment of the argument. I am prepared to declare victory if more people are looking at both sides of the argument, if there are any new readers for Montford’s book, if people have wandered over to Climate Audit to check it out, and if people (especially the RC principals) are starting to get it that the watchdog auditors (e.g. McIntyre) are different from the merchants of doubt (I’m sure that CP won’t cede this). And also if the dialogue regarding uncertainty can change: to acknowledge that there is a high level of uncertainty in level 3 science (as per my previous post on the Funtowicz and Ravetz classification), that there is still a significant level of uncertainty in level 4 science, and that not too much of climate science is actually at level 5. The rebels who dispute the level 4 consensus are not irrational, and you need to differentiate rebels from cranks. An interesting case of this is Roy Spencer (a rebel) currently taking on the cranks who deny that the greenhouse effect exists.

It would be much easier if the public could just trust the experts to be right. This doesn’t work, and again Feynman said it best: “Science is the belief in the ignorance of experts. When someone says, ‘Science teaches such and such,’ he is using the word incorrectly. Science doesn’t teach anything; experience teaches it. If they say to you, ‘Science has shown such and such,’ you should ask, ‘How does science show it? How did the scientists find out? How? What? Where?’ It should not be ‘science has shown.’ And you have as much right as anyone else, upon hearing about the experiments (but be patient and listen to all the evidence) to judge whether a sensible conclusion has been arrived at.”

Can I suggest that this comment by Judith deserves a post of its own rather than being buried deep in the comments here. I think she wonderfully sums up the ordeal and her comment needs to be prominent and visible.

Charles Sanders Peirce (as summarized on Wikipedia) outlines four methods of settling opinion and overcoming disagreement, ordered from least to most successful.

A similar attempt was made by programmer and angel investor Paul Graham, whose essays are quite popular, primarily in the programming community. In How To Disagree, he proposes a disagreement hierarchy intended to help people disagree well in the blogosphere. He defines the following levels, in order of increasing persuasive power:

Now we have a way of classifying forms of disagreement. What good is it? One thing the disagreement hierarchy doesn’t give us is a way of picking a winner. DH levels merely describe the form of a statement, not whether it’s correct. A DH6 response could still be completely mistaken.

But while DH levels don’t set a lower bound on the convincingness of a reply, they do set an upper bound. A DH6 response might be unconvincing, but a DH2 or lower response is always unconvincing.

The most obvious advantage of classifying the forms of disagreement is that it will help people to evaluate what they read. In particular, it will help them to see through intellectually dishonest arguments. An eloquent speaker or writer can give the impression of vanquishing an opponent merely by using forceful words. In fact that is probably the defining quality of a demagogue. By giving names to the different forms of disagreement, we give critical readers a pin for popping such balloons.

Sod, what the heck is your question at this point? And why do you think it is important, after all this? If it is anything I can’t answer without the book in hand, you will have to wait until Aug 5. If it is anything of a technical nature about the science, others can answer it better than I can. Listen to evidence from both sides and make up your mind about whether there is sufficient evidence on either side to call it in favor of one side or the other.

I think one of the things that degraded the room into salvos was defending the book itself without acknowledging that Montford’s very title manifests the very sort of ‘things to be avoided’ that therefore, ironically, aren’t.

For some, I guess ‘give’ has to come before ‘take’.

I would also suspect that non-specific apologies will only incite more furor. But, granted, specific apologies will not be accepted (or praised) either. C’est la guerre.

This covers both errors I actually made, plus errors that I didn’t make but someone thinks that I did. So everyone who thinks that an apology from me is relevant or useful or necessary will have one.

sorry Judith, but that is not how apologies work.

the idea is to correct your error.

here it is again. (your point #7)

7. The Mann et al. 2008 paper, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still does not include a single reconstruction that is free of questioned tree rings and centered PCA.

7. The Mann et al. 2008 paper, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still does not satisfy their critics because of problems with the proxies and analysis methods.

This is the only instance of a retraction in confidence from the IPCC.

In the AR4 Summary for Policymakers final draft (SPM_SOR_TSU_FINAL), on page 11, lines 23 through 34 state:

Proxy climate data and paleoclimate models have been used to increase confidence in understanding past and present influences on climate. [6.6,9.3]

A large fraction of Northern Hemisphere interdecadal variability in temperature reconstructions for the seven centuries before the mid-20th century is very likely attributable to natural external forcing, particularly to known volcanic eruptions, causing episodic cooling, and long term variations in solar irradiance. [6.6,9.3]

In Government Comments on the Final Draft of the SPM, for WG1 of AR4, the Government of Germany stated the following (I believe) in reference to the above excerpt:

Clarify to what extent the stated upper bounds are simply lower because a smaller sigma uncertainty range is provided. TAR gave 2 sigma uncertainty ranges, whereas AR4 states only 1.65 sigma uncertainty ranges (5%-95%) (see Chapter 10, page 65, line 23). Without clarification, the reader is misled into believing that only better scientific understanding caused smaller uncertainties, while in fact a large part is due to different terminology.
[Govt. of Germany (Reviewer’s comment ID #: 2011-35)]

I submit this not to nitpick Dr. Curry, but, if this is accurate, to show that appeals to the authority of the IPCC may be undermined by this apparent relaxation of standards.
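The German reviewer’s arithmetic point can be checked directly. Below is a minimal sketch using Python’s standard library, with a unit-sigma Gaussian assumed purely for illustration: a 5%-95% range corresponds to about ±1.645σ, while a ±2σ range covers about 95.4% of the distribution, so quoting 1.65σ ranges makes the stated bounds narrower by construction.

```python
from statistics import NormalDist

# Illustration of the reviewer's point: for the same Gaussian uncertainty
# (sigma = 1.0 here), a 5%-95% range corresponds to about +/-1.645 sigma,
# while a 2-sigma range covers about 95.4% of the distribution.
nd = NormalDist(mu=0.0, sigma=1.0)

z_90 = nd.inv_cdf(0.95)                      # half-width of central 90% interval
coverage_2sigma = nd.cdf(2.0) - nd.cdf(-2.0) # coverage of +/-2 sigma

print(f"90% interval half-width: {z_90:.3f} sigma")       # ~1.645
print(f"Coverage of +/-2 sigma:  {coverage_2sigma:.1%}")  # ~95.4%

# A "1.65 sigma" (5%-95%) range is narrower than a "2 sigma" range purely
# by construction, independent of any scientific improvement.
print(f"Width ratio (90% interval / 2-sigma interval): {2 * z_90 / 4.0:.2f}")
```

The roughly 18% narrowing of the interval thus follows from the change of convention alone, which is exactly the clarification the reviewer requested.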

Overpeck’s alleged quote was just too juicy to lose, so much so that usual journalistic standards had to be relaxed. How else to explain the use of a quote in the knowledge that the quotee denied using the words, with the reader left to continue another 400 pages before discovering this fact?

Incidentally, and apparently without embarrassment, our science historian cites the Journal of Scientific Exploration as the source of the non-Overpeck non-quote. You certainly can discover some fascinating science in that august journal; when they’re not publishing maverick climate heroes, they’re publishing the state of the art in modern sheep-strangling.

Deming claims to have gained the climate science community’s confidence and ‘An important person working in the field of climate change and global warming sent me an astonishing email with the words: ‘We must get rid of the Medieval Warm Period’. Montford cites an article in a fringe science publication.

This is a key money quote as it supports the narrative being spun about scientists wishing (for some reason) to flatten the blade of the stick. By contrast Overpeck writes that he had never heard of Deming, had no memory of mailing him, and would never use the phrase in the way Deming intends. Generally the person being (mis)-quoted is the best judge of whether his words are being used fairly. He cannot categorically deny ever writing the phrase, simply because he cannot remember verbatim every mail he sent to every distribution group, after several years have elapsed. But Overpeck’s version directly contradicts Deming’s portrayal of a mail sent personally in confidence.

This could all be cleared up by the production of the email in question, which to my knowledge has never happened. So the quote has the status of hearsay, strongly contested by the quotee. A historian would probably not use such evidence, or else would caveat it heavily, yet it is like crack cocaine in some quarters: here it is described as a ‘sudden flash of light’, and ‘getting rid of the MWP’ crops up frequently as a theme in Montford’s writing and elsewhere.

As for wanting to ‘contain’ the MWP, read the mail again. It is no more sinister than this year ‘containing’ February 2010.

Steve: Phil, this has nothing to do with Tamino’s misrepresentation discussed in the thread. I realize that you would rather change the topic, but it would be polite if you would either concede or dispute the point at issue in the thread. Deming’s claim was re-iterated in testimony to a Senate committee. Fred Pearce’s The Climate Files contains a recollection from the same period that bears on the attitude of scientists at the time, as the Deming quote does:

Tim Barnett…joined Phil Jones to form a small group within the IPCC to mine this data for signs of global warming. They planned to summarise the research in the panel’s next assessment due in 2001. They had an agenda. “What we hope is that the current patterns of temperature change prove quite distinctive, quite different from the patterns of natural variability in the past,” Barnett told me in 1996. Even then they were looking for a hockey stick.

Barnett was discussing looking for modern distinctive climatic patterns rather than manipulating paleo-evidence. The line about looking for sticks was added by the journalist.

I apologise for drifting off-topic, and for not conceding or disputing the technical point. Your blog, your rules – but I had not realised that this was a pre-requisite for posting. Dr Curry for example has specifically stated that she is not going to debate the technical details of the Hockey Stick and has posted several hundred words of quite general stuff without being accused of impoliteness.

As it happens, my conceding or disputing the point would not carry much weight; I am not up to speed on the details, so I am going to withdraw now and do some reading up, in order that any future contributions from me are on-topic, and correct both in detail and in spirit.

Bye for now.

Steve: “looking for modern distinctive climatic patterns” is surely in keeping with an attempt to “get rid of the MWP”. It doesn’t imply that the people in question intended to “manipulate” evidence or that they did so (or didn’t do so). However, when people go looking for a pattern, they have to be very conscious of confirmation bias and careful not to fall into phrenology. There are lots of issues in how this was done that do not imply or require the presence of “outright fraud”.

Re: sod (Jul 29 15:59), Have any of the critics of Andrew Montford’s book actually read it? Overpeck’s denial was only revealed after the leak of the Climategate emails. The FOI email leak occurred after the main text of the book was complete. Rather than re-edit the preceding 16 chapters with added quotes, chapter 17 (The CRU Hack) was added at the last minute and includes quotes that relate to the preceding chapters. Overpeck’s denial was included in this chapter, as were several other quotes.

This joke from the Soviet era may be appropriate, following Judith Curry’s visit to the Tamino thread at RC:

A foreign delegation came unexpectedly to a collective farm. There was no time to prepare. After they left, the Chairman of the collective farm called the District Party committee. “You didn’t warn me in advance, so they saw everything, the ruined cow sheds, and all the dirt, and all our misery and poverty.”

“Don’t worry,” the Party secretary said.

“But now they will tell about it all over the world.”

“So, let them indulge in their usual slander,” the Party secretary said.

…But I’m wondering why other papers are allowed to be published as ‘ground-breaking’ (despite a flawed methodology)… and why all subsequent addendum publications are declared an ‘evolution’ or ‘progression’ of the science, such that it is ‘irrelevant’ or ‘unhelpful’ to remark on any previous methodology that no opposing party has actually explicitly agreed was flawed, regardless of whether the results were ‘corroborated’ by ‘slightly better’ methodology.

Apparently Michael Mann has penned an opinion piece in the Star Tribune in the last few days, discussing Soon’s paper from 2003…

Prof. Mann must be taking a break from his serious work by writing letters to the editor refuting wrongthink in local newspapers far removed from his place of residence. ;) …

… or is there something more to it?

The letter looked vaguely familiar. Except for the changes to name a different “spread[er] [of] false information about science and scientists”, the submission was identical to one sent to the Fredericton Daily Gleaner – a small-city newspaper in eastern Canada a thousand miles away!

You don’t suppose that this could be a form letter being used as part of an organized campaign to restore the consensus. Naw, that would be too ridiculous to even consider …

4 Trackbacks

[…] f) There was considerable evidence of bias in the data selection for the proxies (along with small sample sizes); in the selection of the proxies in the reconstruction; and in the short-centring, which gave rise to hockey sticks on random data 99% of the time. Given this, any correlation statistic was rendered largely meaningless. McIntyre did not explore this. However, Montford provides evidence that the verification statistic used was highly irregular in disciplines outside of climate science. (e.g. p156-164) Latest – McIntyre shows evidence suggesting that the verification statistic was cherry-picked. […]
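The short-centring claim in the excerpt above can be illustrated with a toy experiment. The sketch below is a rough illustration in the spirit of McIntyre and McKitrick’s red-noise simulations, not their actual code; every parameter choice here (AR(1) coefficient 0.9, 70 pseudo-proxies, 600 years, a 100-year calibration window, 30 trials) is my own assumption. It compares conventional full-centred PCA with calibration-period-only centring on pure red noise, scoring how strongly PC1 separates the calibration period from the rest of the record:

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_matrix(n_years, n_proxies, phi=0.9):
    """Generate n_proxies independent AR(1) 'red noise' pseudo-proxies."""
    e = rng.standard_normal((n_years, n_proxies))
    x = np.zeros((n_years, n_proxies))
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + e[t]
    return x

def pc1(centred):
    """First principal-component time series of an already-centred matrix."""
    u, s, _ = np.linalg.svd(centred, full_matrices=False)
    return u[:, 0] * s[0]

def hs_index(series, calib):
    """Hockey-stick-ness: separation of the calibration-period mean from
    the rest of the series, in units of the series' overall std deviation."""
    return abs(series[-calib:].mean() - series[:-calib].mean()) / series.std()

n_years, n_proxies, calib, trials = 600, 70, 100, 30
full_scores, short_scores = [], []
for _ in range(trials):
    data = ar1_matrix(n_years, n_proxies)
    # conventional PCA: centre each series on its full-length mean
    full_scores.append(hs_index(pc1(data - data.mean(axis=0)), calib))
    # short centring: centre each series on its calibration-period mean only
    short_scores.append(hs_index(pc1(data - data[-calib:].mean(axis=0)), calib))

full_hs, short_hs = np.mean(full_scores), np.mean(short_scores)
print(f"mean HS index, full centring:  {full_hs:.2f}")
print(f"mean HS index, short centring: {short_hs:.2f}")
```

On random red noise, the short-centred PC1 scores higher on this index on average: centring on the calibration period alone preferentially weights series whose recent mean happens to depart from their long-term mean, which is the selection effect behind the “hockey sticks from random data” result described in the trackback.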