Sunday, February 13, 2011

a) In the first review Steig never suggested that ridge regression should be used, but he did have serious problems with using the values of kgnd that Team O’D wanted to favor and he expressed them rather clearly.

b) Steig’s statement was that the authors should justify their choice of kgnd.

c) In response Team O’D brought ridge regression into the paper as support for their using particular values of kgnd (although iridge led to somewhat different values of warming in W. Antarctica).

d) In his second review Steig said that since the problems with kgnd remained, maybe (as in perhaps; perhaps you don’t understand the English, Lucia) it would be better to use iridge WHICH HAD BEEN INTRODUCED BY THE KNUCKLEHEAD AUTHORS who are now confused about what they did.

e) The authors, in their second response, agree that the ridge regression results are the most likely and say they will move the TTLS/TSVD results to the supplemental information.
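(A technical aside for bunnies who don't speak RegEM: kgnd is a truncation parameter, i.e., how many leading modes the TTLS/TSVD regularization keeps, while ridge regression damps all modes smoothly instead of cutting them off, which is why the two can give somewhat different trends. A toy sketch of the difference via the SVD — illustrative only, not the RegEM code; the variable names are invented for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))          # predictors (think: station series)
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)  # noisy target

# Both regularizations are easiest to see through the SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tsvd_solution(k):
    """Truncated SVD/TLS-style regularization: keep only the k largest
    singular values (a hard cutoff, analogous to a truncation parameter
    such as kgnd)."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def ridge_solution(lam):
    """Ridge regression: shrink every mode by the smooth filter factor
    s/(s^2 + lam) instead of truncating."""
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ b))

x_tsvd = tsvd_solution(k=5)        # hard truncation at 5 modes
x_ridge = ridge_solution(lam=1.0)  # smooth damping of all 10 modes
```

With lam near 0 (or k equal to the full rank) both collapse to ordinary least squares; the argument above is, in effect, over where to put the cutoff versus how much to damp.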

(1) Steig did not bring up iridge (but then, I didn't say that he did).

(2) Once O'Donnell et al. included their iridge calculations in the manuscript, Steig did suggest that iridge would be better to show than their original TTLS results with kgnd=7, and did ask the editor to insist that something more likely to be correct than the original TTLS results with kgnd=7 be shown.

(3) Steig's statement in Review #3 that "The use of the 'iridge' procedure makes sense to me, and I suspect it really does give the best results" reads as a plain endorsement of iridge, and is inconsistent with Steig's comment above that he believed at the time that "iridge should not be used". Maybe he did believe as he said in his comment, but he certainly left the opposite impression with what he wrote in his review.

But it is important to read what John N-G wrote about the other reviewers (in summary):

Meanwhile, reviewer B states that he/she doesn't really understand the statistics, . . . Reviewer B has not seen any other reviews at this point, but is fully expecting that Steig or Mann ought to be one of the other reviewers.

Reviewer D, brought in for the second round, finally seems to have looked at O'Donnell et al. and Steig et al. side by side.

Now John's point about Steig changing his mind has some validity, but Eric did justify it, explaining clearly his reasons for doing so in the blog post and, in fact, in the third review stated them at least in part.

Yep, and the first statement is what you said in your reviews. Quite frankly I fail to see how raising the same (unaddressed) issues in your post as you raised in your review is duplicitous or any of the other nasty phrases which have been tossed your way.

Well, Eric's first statement is also not the same thing as "I SIMPLY DID NOT THINK I COULD ARGUE WITH THE KNUCKLE-HEADED REPUBLICANS that George W Bush will be a bad president", nor did he say he'd like to see more evidence on iridge before casting his vote, so I think it's understandable that some people failed to understand him properly, although that doesn't justify the rabid nature of some of the responses.

Eric please note the maybe in Eli's point d and the statement at the end that "Eric did justify it, explaining clearly his reasons for doing so in the blog post and, in fact, in the third review stated them at least in part."

John is perhaps a bit stronger, but he is not so used to dealing with asses.

The quote about the iridge procedure making sense should never be pasted _without_ the 2nd sentence starting "but"... otherwise, that leads to misinterpretation such as "nor did he say he'd like to see more evidence on iridge before casting his vote".

"The use of the 'iridge' procedure makes sense to me, and I suspect it really does give the best results. But O'Donnell et al. do not address the issue with this procedure raised by Mann et al., 2008, which Steig et al. cite as being the reason for using ttls in the regem algorithm. The reason given in Mann et al., is not computational efficiency -- as O'Donnell et al state -- but rather a bias that results when extrapolating ('reconstruction') rather than infilling is done."

In your post here and the post at Lucia's you use the phrase "...WHICH HAD BEEN INTRODUCED BY THE KNUCKLEHEAD AUTHORS...". Eric uses the term "...THE KNUCKLE-HEADED REVIEWERS...". Of course both points may be correct characterizations, but I fear there has been more confusion created. Or maybe I'm the confused one...

Although he has behaved very badly, I have some sympathy with O'Donnell, because I've been in a similar position. There is a prominent paper that pretty much everybody accepts, but you see a weakness in their methodology. Their conclusion could be wrong! So you do your own study, do it properly, and after a lot of effort, you end up concluding that their conclusions weren't wrong, or at least, not wrong enough to make much difference to anybody.

And when you try to publish your work, you find that you have to go to a less prominent journal than the one that published the original work, because even though you have a superior methodology, the conclusions are pretty much what everybody believes already. Indeed, people will probably continue to cite that first paper, even though you are the one who finally did it right! As far as you are concerned, they were just lucky that their inferior approach did not give them the wrong answer. You should get the credit, not them.

When you write your paper, there is a strong temptation to gloss over the fact that the ultimate conclusions are pretty similar, and to emphasize differences, no matter how minor. But there is a good chance that the editor will have sent your paper to the author of that first paper, or to somebody who knows that paper very well, and that reviewer won't let you get away with it. He'll insist that you prominently acknowledge the strong similarity in your conclusions. And since the conclusions actually are pretty similar, the editor will agree with him. It's frustrating. From your point of view, you've finally done it right, but from his point of view (and pretty much everybody else's), you've just confirmed his conclusions. Your paper (which in your mind, at least, is superior) ends up being a footnote to theirs.

Of course, complicating the situation is the political dimension. This sort of thing just keeps happening to the contrarians, going back to McIntyre et al's original critique of the hockey stick--they found a genuine flaw in the statistical analysis, but when others re-analyzed the data correcting the statistical error, the hockey blade didn't go away. And then there was Watts et al.'s surface station debacle, in which they identified genuine inadequacies in siting of surface stations, but others showed that when the bad stations were dropped out, the conclusions didn't change appreciably. On their blogs, they can pretend that identifying these technical flaws somehow invalidates the entire edifice of climate science, but if they try to break into a peer-reviewed publication, the darned reviewers keep insisting that they acknowledge that their corrections don't make any meaningful difference insofar as the conclusions that most everybody else cares about.

When it comes to extracting conclusions from sparse data, Tukey had clear views about methods.

The idea that method X was wrong and method Y finally got it right would have caused amusement.

Statisticians argue all the time about the relative merits of different approaches, as there are often tradeoffs (and remember, O10 was rather mismatched in at least one place. That doesn't make it wrong either.)

For example, perhaps someone can convincingly prove that one specific normality test is the RIGHT one.

"The payoff for a climate scientist is to learn something important about the climate. The payoff for O'Donnell is to prove publicly that Steig et al. didn't apply their method properly."

Posted by: n-g at February 14, 2011 01:43 AM

trrll, the problem with that accounting is that it assumes O'Donnell's intents and motivations were those of a scientist, whereas the elephant in the O'Donnellgate room is just how different they actually were.

This is to say that, as is obvious but to this point not discussed, O'Donnell et al was not scientific inquiry conducted with the aim of furthering science and scientific reputation. It was a rear guard action for a fallen denialist talking point about Antarctic cooling, (and an important one, given how visible and dramatic is the warming at the other pole) and a way to further the smear campaign against the villains of denialist propaganda, i.e. RealClimate scientists (and by infamous denialist extension, e.g. hockey stick=climate science, the IPCC) and scientific journals, (again given --> credibility one paper in Nature = credibility Nature = credibility climate science literature). It certainly had nothing to do with natural curiosity.

Now before the concern trolls cry out for proof beyond an unreasonable doubt, whether or not that accounting of intents and motivations is correct in a literal sense, or whether McIntyre, etc. are by now mainlining their own smack is not particularly relevant. They very clearly and without evidence imputed dishonesty and bias to Steig 09, they are very clearly deeply invested in the Heartland/industry narrative of climate science and policy, not least given how much personal fame and fortune they've received from it, and they very clearly saw their paper as advancing those interests. Either way their actions bespeak these truths.

Starting from there it's much easier to understand what happened here. From the beginning O'Donnell et al saw this as little more than the climate wars moved from blogs to journals and, cross their fingers, from there on to mainstream media. Therefore, clearly, reviews by the opposition 'team' weren't there to actually improve the paper, but to sandbag it, because that's the way the means get justified, and that's clearly what they would do in Eric's place (and what McIntyre has done). Hence Steig would have to be overcome and outwitted, but also despised for his dishonesty and ineptitude in preventing a refutation of his work from being published.

Some speculation is given as to how RealClimate and their warmist allies will spin this stick in their eye, and new blow to their case. A request by Steig for a final draft of the paper just goes to show how undignified the team's conceit is (meanwhile, the fact that by this time Dr. Steig has already confirmed to O'Donnell that he is reviewer A goes unnoticed by some of the smartest people on the planet).

And then.... Dr. Steig writes a post where he has the temerity to dissent with certain aspects of their approach and results. Cue righteous quantities of self-righteous bile and ensuing soap opera. The dimwitted lukewarmers are taken in, which their blog counters register as the smell of napalm in the morning (*eat my dust bitches!*). The bunnies break out their shovels to sift through an aftermath that makes a jackknifed manure truck look like lunch on the green.

In short, the original sin here goes back to this post-modern post-normal post-sanity thing. In a world without objective reality, he with the biggest megaphone wins.

Yes, Bart, I don’t think you understand the issue at all. Steig was trying to trick O’Donnell et al. into making a positive contribution to the science, something they were trying very hard to avoid. He may have succeeded which is why they are so upset.

He should have insisted on removal of the blog references and a few minor cosmetic points. And then when O'D et al was published ES et al should have replied in JoC. At which point, O'D et al would have pointed out that Steig was a referee on their paper and so why did he not raise these concerns at review.

One observes there are an awful lot of people running around in circles rather than doing something useful [useful not including pointing out the 8,275,447th occurrence of duplicity and doing lots of work to point it out].

Dano, say what you might, it doesn't get quite as bad defending science on the climatology front as it did on the evolutionary biology front. There people would argue day after day with Young Earth Creationists, trying to argue at a level creationists would understand - while creationists would try and see just how dumb they could play things and still get us to buy their act. Mind-numbing for both, but usually by degrees. A bit like boiling a frog, I suppose.

Hank, MapleLeaf recently asked Tamino whether he would write something up on a contrarian paper: "Anyhow, it seems the contrarians and denialists are giddy over this paper, so perhaps something needs to be said. Also, it won't be the first time that GRL published something by contrarians that was wrong ;)"

Tamino responded, "No, it won't be.

"Every few months the denialists get giddy over a new paper that they claim 'blows the lid off global warming.' A few months later, they have to find another one."

At an abstract level Tamino's argument doesn't seem that much different from Dano's.

I have heard it said on more than one occasion that, "Insanity is defined as doing the same thing over and over again but expecting different results." I have also heard it said that you shouldn't argue with trolls. It draws attention to them, gives their arguments an air of legitimacy and is usually futile.

Along these lines, you wondered a while back whether Gavin might have actually wanted us to respond to a troll since he didn't say otherwise. Alternatively he might have wanted to see whether we had enough common sense not to engage with the troll. I know that the contributors have expressed in the past a desire to focus more on the science and less on "debate."

I am not saying that arguing with "denialists" is necessarily the same thing as arguing with trolls. But I think we owe it to ourselves and those that we wish to help and protect to make sure that it doesn't become that. And I don't think there is any easy solution or answer.

In my view at least, good intentions aren't an end in themselves. Good intentions might make you feel good at the end of the day, but I believe we are all looking for something more.

The debate in the media has to be won in some way. Otherwise much of the research will not have had much meaning... IMHO, debunking "skeptic" papers in the literature takes less time than trying to tell the media that yes, they are published but wrong... It also needs to be done to make good assessments.

However debunking in the literature takes a lot of time for a few persons, with little payback.

(Why is there no list of Spencer's "mistakes" over the years that he has pushed? It would be a nice start in a letter to a reporter who is wondering what is going on... "Look here, Spencer has been pushing all kinds of wrong stuff around for years. It does not mean that all he does is wrong; however, don't take his word for anything. Regarding this new study...")

Magnus Westerstrand wrote, "(Why is there no list of Spencer's "mistakes" over the years that he has pushed? It would be a nice start in a letter to a reporter who is wondering what is going on... "Look here, Spencer has been pushing all kinds of wrong stuff around for years. It does not mean that all he does is wrong; however, don't take his word for anything. Regarding this new study...")"

Agreed. We should spend less time on defense, on reacting to the attacks on climate scientists, and wherever possible quickly shift to offense. Point out the history of the particular skeptic that is involved in a given controversy; point out what organizations are essentially devoted to PR or are libertarian think tanks and what campaigns they have been involved in in the past. Point out their funding. A while back the Competitive Enterprise Institute was going after Gavin. Their agenda? The very name of the organization is quite up front about it, and they have quite a history. And when they choose to use the term Climategate, point out what Watergate actually was: political operatives breaking in to acquire material to be used in an extensive smear campaign.

"My recommendation is that the editor insist that results showing the ‘mostly likely’ West Antarctic trends be shown in place of Figure 3. While the written text does acknowledge that the rate of warming in West Antarctica is probably greater than shown, it is the figures that provide the main visual ‘take home message’ that most readers will come away with. I am not suggesting here that kgnd = 5 will necessarily provide the best estimate, as I had thought was implied in the earlier version of the text. Perhaps, as the authors suggest, kgnd should not be used at all, but the results from the ‘iridge’ infilling should be used instead. The authors state that this “yields similar patterns of change as shown in Fig. 3, with less intense cooling on Ross, comparable verification statistics and a statistically significant average West Antarctic trend of 0.11 +/- 0.08 C/decade.” If that is the case, why not show it? I recognize that these results are relatively new – since they evidently result from suggestions made in my previous review – but this is not a compelling reason to leave this ‘future work’."

I like sunlight. Bunnies like to lurk in shadows and not see the whole picture. Eric "insists" and then at the end says these new results (iridge) are from suggestions by HIM in his previous review.

Eric then proceeds to attack the use of iridge on his blog while wanting to leverage his anonymity as Reviewer A.

The whole reason for the iridge exercise was to show it got similar results. Then Eric insists on its use!

I see from Bunny land you use the pendulum measure of responsibility in all conflicts and it is stuck on one side. You always seem to hold 0% responsibility on anyone that agrees with you.

"Second, in their main reconstruction, O’Donnell et al. choose to use a routine from Tapio Schneider’s ‘RegEM’ code known as ‘iridge’ (individual ridge regression). This implementation of RegEM has the advantage of having a built-in cross validation function, which is supposed to provide a datapoint-by-datapoint optimization of the truncation parameters used in the least-squares calibrations. Yet at least two independent groups who have tested the performance of RegEM with iridge have found that it is prone to the underestimation of trends, given sparse and noisy data (e.g. Mann et al, 2007a, Mann et al., 2007b, Smerdon and Kaplan, 2007) and this is precisely why more recent work has favored the use of TTLS, rather than iridge, as the regularization method in RegEM in such situations. It is not surprising that O’Donnell et al (2010), by using iridge, do indeed appear to have dramatically underestimated long-term trends—the Byrd comparison leaves no other possible conclusion."
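The "built-in cross validation function" mentioned in that quote is the crux: iridge chooses its own ridge parameter from the data. Here is a minimal sketch of that general idea using generalized cross-validation (GCV); it illustrates the technique only and is not Schneider's RegEM code (the function name and test data are invented):

```python
import numpy as np

def gcv_ridge(A, b, lams):
    """Choose the ridge parameter by minimizing the generalized
    cross-validation (GCV) score, a leave-one-out-style criterion.
    Toy version of the kind of built-in optimization 'iridge' performs."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    n = A.shape[0]
    best_lam, best_score = None, np.inf
    for lam in lams:
        f = s**2 / (s**2 + lam)                 # ridge filter factors
        # squared residual: damped modes plus the component of b
        # lying outside the column space of A
        rss = np.sum(((1.0 - f) * Utb) ** 2) + (b @ b - Utb @ Utb)
        gcv = n * rss / (n - np.sum(f)) ** 2    # sum(f) = effective dof
        if gcv < best_score:
            best_lam, best_score = lam, gcv
    return best_lam

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 8))
b = A @ rng.standard_normal(8) + 0.2 * rng.standard_normal(40)
lam = gcv_ridge(A, b, np.logspace(-4, 2, 25))   # the data picks the damping
```

The dispute quoted above is not about this machinery as such, but about whether the automatically chosen damping systematically underestimates trends when extrapolating from sparse, noisy data.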

Wait, in his review he insisted that they switch from TTLS to iridge; then on RC he says they should have used TTLS instead of iridge. Yep, perfectly acceptable, and he would have had no problem acting this way, because no one from the team would EVER criticize him for that.

And here you are saving your best ethical scorn for O'Donnell. If Steig is your reference for ethical behavior... well, what more needs to be said.

In his earlier review, he pushed RyanO et al pretty hard on problems with their use of TTLS, and on the fact that using kgnd=7 was not objectively the best result based on verification statistics, but that that was the value they chose to use in their analysis and graphics.

He suggested they deal with those problems. Not that they use iridge - rather, that they properly deal with the issue with their use of TTLS.

RyanO et al chose, in response, to add the iridge results to the paper, but not to show them. The authors chose to do that, not Steig. In fact, in Ryan's first slanderous post at CA, he makes a point of saying that Steig takes too much credit when he says "since they evidently result from suggestions made in my previous review," since the authors had already done that work anyway. IOW, RyanO acknowledges that their iridge analysis predates any suggestions from Steig.

In your quoted portion of this review, Steig is saying two things:

1. O et al are still showing graphics from an analysis that is not objectively the best analysis, and the editor should insist they show the best one.

AND

2. if the authors are going to use iridge to justify their argument, then they need to properly include the iridge results in this paper, and get it re-reviewed.

Properly including iridge, and getting it through review, would necessarily mean also discussing known problems with iridge - such as the well-known problem that it underestimates trends in exactly the kinds of situations that O et al were using it to address. It is not the reviewer's job to write the paper for the authors - that's why Steig suggested another round of reviews of the revised paper, so that it could be seen whether O et al properly dealt with the iridge issues.

O et al did end up using iridge, but Steig never got to re-review the paper, and O et al managed to show the iridge result without properly dealing with the underestimation of trends issue - and it was this problem that Steig pointed out in his RealClimate post.

Nowhere in there does Steig SUGGEST that O et al use iridge, and nowhere in there does he INSIST they use iridge instead of TTLS.

Y'all keep making this argument, that Steig set them up on iridge. He simply did not - the authors chose to use iridge, and the authors didn't properly deal with the underestimation issue. As far as I can see, the argument either demonstrates that all y'all simply can't read, or it presupposes that none of us can read.

That would be great, Lee, if Ryan had used iridge for his paper; he did not. He used it to show another method would yield a similar result. He said he would do more work with it in a future paper. Steig then insisted it be the main focus of the paper instead of TTLS. IOW: Steig complains of the usage of TTLS; Ryan et al. show here is another method that shows similar results; Steig (seeing the trends are closer to his own paper) INSISTS they use that method and relegate TTLS to the back; Ryan says they may do that in a future paper, even though both agree iridge INITIAL results are more likely. Steig obviously got pretty defensive and petty with his reviews of O'Donnell, but I get your point: whatever Steig says, you believe; whatever O'Donnell or anyone else says, you do not. Keep spinning; no one is buying it. Both of these men committed wrongs in this, at least I see that. And you?

Thanks for the link Sou, it perfectly supports my summaries of the events above.

"Steig complains of the usage of TTLS; Ryan et al. show here is another method that shows similar results; Steig (seeing the trends are closer to his own paper) INSISTS they use that method and relegate TTLS to the back; Ryan says they may do that in a future paper, even though both agree iridge INITIAL results are more likely."

Steig did not insist that they use iridge; he insisted on their reporting it, since they had replied to the first review using iridge results INSTEAD of addressing his point about justifying the value chosen for a parameter in TTLS. This is an O'Donnell et al. fail, 100%, in response to review 1. Review 2 or 3 says: if you use iridge, make sure you address reported problems with iridge. O'Donnell et al. failed AGAIN.

CE is trying to learn duplicity, not quoting from O'Donnell himself let alone from the review notes that have since been made public, preferring to make stuff up. As she or he knows, Steig did not insist on any particular method, only that the most likely results be shown in the figure. Perhaps CE prefers to only be told about the least likely results? Odd that.

This is part of what O'Donnell wrote:"Some people have read selected excerpts and come to the conclusion that Eric actually proposed iRidge independently of us, and have thus used this as a defense of my actions. This is not true. We were the first to mention iRidge. Eric recommended that our "most likely" results - which he seemed to think [accurately] would be the iRidge results - should be what appeared in the main paper. These are different things."

I don't think even O'Donnell's response there gets it right. Nowhere do I read Steig saying that he seems to think that iridge would be the 'most likely' results. He had spent the first several pages of the review rehashing the argument over the kgnd setting, and arguing forcefully that using kgnd=7 was NOT the best result, that among the kgnd settings that O et al report, a different setting would be the best.

It seems to me that he acknowledges that "perhaps" O'Donnell's use of iridge might actually be even better "as the authors suggest" but he wants that justified. He did not insist that the authors change from TTLS to iridge - he asked that the editor insist that they use the best of their results and explain it.

O'Donnell et al decided that iridge was best, and they rewrote the paper - they decided, not Steig, to rewrite it - but they screwed it up and failed to discuss a major shortcoming of iridge that was likely to have a direct impact on their analysis.

Even worse, they managed to get the editor to bypass the additional round of review that Steig asked for after the authors responded to the 'best results' problem - so Steig didn't have a chance to catch that error in review, and the new reviewer D apparently wasn't up to speed enough in the field to catch it.

But it is clear - nowhere does Steig "insist" that O et al use iridge. Celery eater says again "Steig (seeing the trends are closer to his own paper) INSISTS they use that method and relegate TTLS to the back," but nowhere is there a cite or quote where Steig says this.

OMG you people are hopeless. Steig never mentioned the possible problems with iridge until the 3rd review. The only "most likely" results were the iridge ones at the time Steig said that. wow just wow.

Still not a single critical word from any of the team members; talk about religion.

What "most likely" method do you think Steig was referring to? TTLS? iridge? Some other mystery method not mentioned?

Oh, I get it: Steig was saying to the editor that he insists they use the most likely method, and the editor calls for a rewrite, and O'Donnell et al. are supposed to go out and try every possible known method, find the most likely result, and come back.

Celery eater wants clairvoyance from Steig: "Steig never mentioned the possible problems with iridge until the 3rd review. The only "most likely" results were the iridge ones at the time Steig said that. wow just wow."

Iridge was not in version 2; that is why Steig asked for it to be shown (in version 3). It was put in version 3, but with no caveats. Steig mentions in review 3 that O'Donnell should consider published shortcomings, but no, O'Donnell appeals to the editor, ignores Steig, and then gets upset when Steig points out in a public post that their iridge-based analysis has problems. EPIC FAIL.

"What "most likely" method do you think Steig was referring to? TTLS? iridge? Some other mystery method not mentioned?"

Oh, good god...

Steig doesn't say which is best - he's not the fricking author! It's the authors' job to figure that out - it is their analysis.

He spent much of the previous review, and much of the first part of this review, showing that TTLS with kgnd=7 was the WORST of the results that O et al report. He doesn't want them to base their analysis on the worst result, but on the best. In context, clearly he wants them to use the best of what they have reported in their paper.

Steig doesn't tell them what the best is - it is the authors' job to figure that out, and then to justify that choice in the paper. Perhaps the best might be TTLS with a different value of kgnd - Steig had argued for that already. Perhaps several of the values of kgnd give comparable results; perhaps iridge is best, or comparable.

Steig doesn't say which is best - he accepts, since the authors have mentioned iridge, that perhaps it might be iridge, but he doesn't insist on it. What he is insisting on is that the authors NOT use the WORST results, and he asks that the editor require the authors to use the BEST results in their graphics and analysis - which would perforce be determined by the authors and justified in the rewrite.

The best might have been, for example, TTLS with kgnd = 5, and perhaps the authors could have done the analysis and shown that this was representative of several equally good (based on verification statistics) results.

I suspect that the authors didn't want to use kgnd = (anything except 7) because doing so would show much better agreement with Steig than they wanted to imply. I suspect that, because of that, they were already committed in their own minds to using either kgnd = 7 or iridge, because these gave the result they wanted to show - so that when Steig nixed kgnd = 7, they read it that their only option was iridge.

Steig didn't insist on iridge - Steig was clearly willing to accept a value of kgnd with better verification statistics, preferably with the best verification statistics. But the authors were not, so they (and you, it seems) read that as requiring that they use iridge.

Hamilton should go read the versions of the paper. iRidge was in the second version but TTLS was still the feature and main part of the paper.

Lee, that's pretty funny speculation on your part. Steig seemed pretty confident in his public comments about what was best. He in fact set himself up pretty well to attack this paper in public, and that clearly shows Steig's results are worthless.

From Steig's 2nd Review (again):

"My recommendation is that the editor insist that results showing the ‘mostly likely’ West Antarctic trends be shown in place of Figure 3. While the written text does acknowledge that the rate of warming in West Antarctica is probably greater than shown, it is the figures that provide the main visual ‘take home message’ that most readers will come away with. I am not suggesting here that kgnd = 5 will necessarily provide the best estimate, as I had thought was implied in the earlier version of the text. Perhaps, as the authors suggest, kgnd should not be used at all, but the results from the ‘iridge’ infilling should be used instead. The authors state that this “yields similar patterns of change as shown in Fig. 3, with less intense cooling on Ross, comparable verification statistics and a statistically significant average West Antarctic trend of 0.11 +/- 0.08 C/decade.” If that is the case, why not show it? I recognize that these results are relatively new – since they evidently result from suggestions made in my previous review – but this is not a compelling reason to leave this ‘future work’."

Notice in the beginning he says the "best results" should be used, then refers to these as "new results," says "why not show it," and says there is "no compelling reason" to leave this as "future work."

Remember O'Donnell stated that iridge would be in a future work.

Nice box you are in, Lee: either you are right and Steig does not know how to properly construct a paragraph capturing his thoughts, or you are mistaken and, when Steig insisted that the most likely results be shown, he was indeed referring to the iridge results.

I know you will not yield as your faith prevents this, but I offer the exercise nonetheless.

"My recommendation is that the editor insist that results showing the ‘mostly likely’ West Antarctic trends be shown in place of Figure 3."

i.e., replace ONE FIGURE with a different figure showing the most likely trends.

"While the written text does acknowledge that the rate of warming in West Antarctica is probably greater than shown, it is the figures that provide the main visual ‘take home message’ that most readers will come away with."

This is the reason why the authors should do so.

"I am not suggesting here that kgnd = 5 will necessarily provide the best estimate, as I had thought was implied in the earlier version of the text."

I'm not arguing that any particular value of kgnd is best, even though it seemed in the earlier draft that the authors imply that kgnd = 5 gives the most likely results.

"Perhaps, as the authors suggest, kgnd should not be used at all, but the results from the ‘iridge’ infilling should be used instead."

And it might even be that the iridge results that authors refer to in this newest draft might be best.

"The authors state that this “yields similar patterns of change as shown in Fig. 3, with less intense cooling on Ross, comparable verification statistics and a statistically significant average West Antarctic trend of 0.11 +/- 0.08 C/decade.”"

See, the authors make a plausible case.

"If that is the case, why not show it? I recognize that these results are relatively new – since they evidently result from suggestions made in my previous review – but this is not a compelling reason to leave this ‘future work’."

If they decide it is important to use iridge to make that plausible case, show it - don't just refer to it.

-----That, celery eater, is how to read this as a coherent paragraph. And there is not one place in it where Steig insists on any given technique. He is pointing out that the authors have a lot of choices, and that the choice they had made, of kgnd = 7, is not an acceptable choice to use - but that one of these others will be.

This is dead simple, celery eater - if you aren't trying to find phrases that you can cherry pick out to make the case you want to make.

Celery Eater sez: "Hamilton should go read the versions of the paper. iRidge was in the second version but TTLS was still the feature and main part of the paper."

No shit, Sherlock, because:
1. O'Donnell put it there
2. O'Donnell claimed it was better

A referee is going to ask: if you have what you claim is a better method, feature it in your paper. Steig did not claim iridge was better; he merely did not disagree with O'Donnell's claim that it was better. After they rewrote the paper to feature the method O'Donnell claimed was better, Steig pointed out that they needed to address issues brought up by Mann, since they seemed unaware of them.

How nasty of a referee, to insist that people use what they claim is best, and to warn them of pitfalls!

Hamilton now says iridge was in the 2nd version, when before he said it wasn't. It was in the 2nd version ONLY to show that TTLS was probably good, as it yielded similar results. The claim was not that it was a better method, only that results in one area may be in better agreement with Steig 09 (which is crap).

celery eater: "Hamilton now says iridge was in the 2nd version, when before he said it wasn't."

I misremembered: iridge results were put into the text of version 2 by O'Donnell, and Eric wanted it put into a figure also, THE BASTARD! Still Steig's fault for tricking them into using iridge, by making them use this as an argument rather than justifying the parameters used for TTLS.

Funny how a self proclaimed Auditor was unable to anticipate criticisms in spite of the reviewer telling them the very same things in review. Perhaps the blog should be called Clouseau Audit.

celery eater continues: "It was in the 2nd version ONLY to show that TTLS was probably good, as it yielded similar results."

Yes, and if used inappropriately it would give results just as poor as a suboptimal TTLS. Steig asked for justification in the first review (refused - FAIL), and warned them to investigate the literature to make sure there were no problems with the second (refused - FAIL - and blamed Steig). The sword cuts both ways when you use a second method and get a similar answer.
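For bunnies who haven't followed the statistics: the kgnd fight is about how aggressively to regularize a nearly rank-deficient regression, and ridge shrinkage and TTLS-style truncation are close cousins. A toy NumPy sketch (purely illustrative, NOT the RegEM code either side actually used) of why the truncation level matters when predictors are nearly collinear:

```python
import numpy as np

# Toy problem: y = X b with two nearly identical predictor columns,
# so X has one tiny singular value. Ridge regression damps the unstable
# direction smoothly; a truncated-SVD fit (TTLS-like) either keeps it
# (noise amplified) or drops it entirely, depending on the truncation
# level k -- the analogue of the disputed kgnd parameter.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.001 * rng.normal(size=n)  # near-collinear pair
b_true = np.zeros(p)
b_true[0] = 1.0
y = X @ b_true + 0.1 * rng.normal(size=n)

def ridge(X, y, lam):
    """Ridge estimate: each SVD component shrunk by s^2 / (s^2 + lam)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def tsvd(X, y, k):
    """Truncated-SVD estimate: keep only the k largest singular directions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

b_ols = tsvd(X, y, 10)        # full rank: the tiny direction blows up noise
b_trunc = tsvd(X, y, 9)       # drops the unstable direction
b_ridge = ridge(X, y, 1.0)    # shrinks it smoothly instead

print(np.linalg.norm(b_ridge - b_trunc))  # sensible truncation tracks ridge
print(np.linalg.norm(b_ridge - b_ols))    # keeping the bad direction does not
```

Both regularizers suppress the same unstable direction; the point of contention was that truncation forces a discrete, hard-to-justify choice of k, while ridge picks the damping from the data.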

Celery eater finishes: "The claim was not that it was a better method, only that results in one area may be in better agreement with Steig 09 (which is crap)."

O'Donnell certainly does not qualify it as better only in one area, nor because it agrees with Steig:

“Eric recommends that we replace our TTLS results with the ridge regression ones (which required a major rewrite of both the paper and the SI) and then agrees with us that the iRidge results are likely to be better . . . and promptly attempts to turn his own recommendation against us.”

"Mind you: I have not thought about the idea long enough to know for sure that it is the best possible way to determine the “optimum” method. Perhaps there is some difficulty presented by the fact that if the surface stations really had warmed, we might have expected the satellite data for those stations to shift also. So, there may be something a little artificial about adding synthetic warming to the peninsula stations without tweaking the satellite data."
http://rankexploits.com/musings/2011/sensitivity-test-odonnells-idea/

Rabett Run
