Climategate Documents Confirm Wegman’s Hypothesis

Lost in the recent controversy over Said et al 2008 is that the Climategate documents provided conclusive evidence of the hypothesis originally advanced in the Wegman Report about paleoclimate peer review – that members of the Mann “clique” had been “reviewing other members of the same clique”.

In today’s post, I’ll examine the origin of this hypothesis in the Wegman Report, its consideration in Said et al 2008 and how Climategate documents provided the supporting evidence that neither the Wegman Report nor Said et al 2008 had been able to provide.

I won’t attempt to analyze the plagiarism issues today (I will return to this on another occasion), other than to say that some recent literature on the topic attempts to distinguish between degrees of plagiarism e.g. Bouville, Clarke and Loui.

In addition, contrary to recent false claims by USA Today, Said et al 2008 was not “a federally funded study that condemned scientific support for global warming”. It does not mention global warming nor even climate. Nor is Said et al 2008 a “cornerstone” of criticisms of either Mann or IPCC as Joe Romm falsely claimed. For example, it has never been referred to or discussed at Climate Audit even in comments. Nor at any other climate blog, to my knowledge. (Update May 24 – Nor is the “cornerstone” included on PopTech’s list of 900+ “skeptic” papers.)

Wegman Report 2006

In 2005 and 2006, there had been considerable controversy over our criticism of Mann et al 1998-99. Although we had placed at least equal (and perhaps greater) weight on other aspects of our critique (e.g. that the important early steps of the Mann reconstruction did not have the claimed ‘statistical skill’ and that the Mann reconstruction unduly weighted Graybill’s bristlecone chronologies, which were known to be problematic), particular controversy attached to our observation that Mannian principal components mined datasets for hockey stick shaped data.

Rather than conceding even seemingly indisputable points, Mann and his associates contested every single issue – even the elementary observation that Mannian principal components mined datasets for hockey stick shaped data. To this day, neither Mann nor any of his associates has conceded the point.

As we and others observed at the time, other reconstructions do not use Mannian principal components and demonstrating defects in the MBH reconstruction does not per se demonstrate the invalidity of the other methodologies. However, as discussed in extensive commentary at Climate Audit, there are serious defects in respect to the “other” reconstructions as well and none provide a “safe haven”. There are also serious defects with the variations on MBH methodology proposed in Mann’s response to our 2004 submission to Nature, variations that were subsequently plagiarized by Mann’s associates in Wahl and Ammann 2007.

When the Wegman Report came out (in July 2006), it observed that the dispute about Mannian methodology had been waged for what then already seemed like an interminable time without resolution by the climate science community of even the most elementary questions, such as the validity of Mannian principal components methodology. In light of this failure, the Wegman Report observed that it was timely for a third party to consider the matter. Wegman came down entirely on our side of the question of whether Mannian principal components mined for hockey-stick shaped data. This issue fell well within the professional expertise of the members of the Wegman panel. The NAS panel also came down entirely on our side of this particular issue, though you’d never know it from the fantasizing commentary of Gerry North and other climate scientists. At the time, Eduardo Zorita observed that the NAS panel was as severe as possible under the then circumstances. One member of the NAS panel (not Christy) told me, under the condition that I not reveal his identity, that we had effectively killed the enterprise of trying to reconstruct temperatures from lousy data and that it would take brand new data to resolve the questions – something that he thought might take 20 years.

After agreeing that Mannian principal components was an objectively flawed methodology, the two panels went in different directions.

The NAS panel attempted to opine on the scientific question of the 1000-year reconstructions, but, unfortunately, attempted to do so without doing any due diligence on other candidate reconstructions. As Gerry North subsequently told a Texas A&M seminar, they just “winged it”. They observed that reconstructions by other academics also yielded hockey stick shaped reconstructions – a point never at issue. However, they did not check whether these other reconstructions used strip bark bristlecones, proxies which the NAS panel said should be “avoided” in temperature reconstructions, and ended up illustrating reconstructions using strip bark bristlecones in their own summary.

After the Wegman Report (like the NAS panel) determined that Mannian principal components was an erroneous methodology, the Wegman Report considered a different question: given the defects of Mannian principal components, how did the methodology pass peer review and then remain unchallenged by specialists in the field? Wegman’s question pertained only to statistical methodology. Whether you could “get” a similar answer by a different method had no bearing on the failure of specialists to call an invalid statistical methodology to account.

The Wegman Report hypothesized that this failure was due to the inter-connectedness of climate scientists through co-authorship and, in particular, to the extent of Mann’s network of coauthorship, a level of inter-connectedness that the Wegman Report seemed to regard as not existing in their own field. Wegman speculated that members of Mann’s closest circle (“clique” in network terminology) reviewed papers of other members of the clique, resulting in non-independent and weak peer review, which, in turn, had resulted in the failure to identify the incorrectness of Mannian principal components in both the original article and subsequently. This was expressed in the 2006 Wegman Report as follows:

One of the interesting questions associated with the ‘hockey stick controversy’ are the relationships among the authors and consequently how confident one can be in the peer review process. In particular, if there is a tight relationship among the authors and there are not a large number of individuals engaged in a particular topic area, then one may suspect that the peer review process does not fully vet papers before they are published. Indeed, a common practice among associate editors for scholarly journals is to look in the list of references for a submitted paper to see who else is writing in a given area and thus who might legitimately be called on to provide knowledgeable peer review. Of course, if a given discipline area is small and the authors in the area are tightly coupled, then this process is likely to turn up very sympathetic referees. These referees may have coauthored other papers with a given author. They may believe they know that author’s other writings well enough that errors can continue to propagate and indeed be reinforced.

From their analysis of Mann’s co-authorship network, the Wegman Report stated:

it is immediately clear that the Mann, Rutherford, Jones, Osborn, Briffa, Bradley and Hughes form a clique, each interacting with all of the others.
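In network terminology, a “clique” is simply a set of nodes in which every pair is directly connected – here, a set of authors every pair of whom has co-authored. As a minimal sketch (in Python, with hypothetical co-authorship ties standing in for the Wegman Report’s actual network data), the test reduces to checking every pair:

```python
from itertools import combinations

# Hypothetical co-authorship ties for illustration only -- the actual edge
# data comes from the Wegman Report's analysis of Mann's co-author network.
authors = ["Mann", "Rutherford", "Jones", "Osborn", "Briffa", "Bradley", "Hughes"]
coauthored = {frozenset(p) for p in combinations(authors, 2)}
coauthored.add(frozenset(["Jones", "Wigley"]))  # a tie outside the clique

def is_clique(names, ties):
    """A set of authors forms a clique iff every pair has co-authored."""
    return all(frozenset(p) in ties for p in combinations(names, 2))

print(is_clique(authors, coauthored))                       # True
print(is_clique(["Mann", "Wigley", "Hughes"], coauthored))  # False: no Mann-Wigley tie
```

With real referee assignments, the same pairwise test could in principle be applied to author–reviewer ties – exactly the data that the anonymity of peer review withholds.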

In follow-up questions, Wegman was pointedly asked whether social networks analysis “proved” his “hypothesis” about peer review (an issue that also arose in recent commentary about Said et al 2008):

You stated in your testimony that the social networking analysis that you did concerning Dr. Mann and his co-authors represented a “hypothesis” about the relationships of paleoclimatologists. You said that the “tight relationship” among the authors could lead one to “suspect that the peer review process does not fully vet papers before they are published.” Please describe what steps you took that proved or disproved this hypothesis.

Wegman answered that the anonymity of peer review prevented him from showing that members of Mann’s clique had reviewed one another’s papers, observing somewhat pointedly that the Committee had missed an ideal opportunity to shed light on this question:

Obviously because peer review is typically anonymous, we cannot prove or disprove the fact that there are reviewers in one clique that are reviewing other members of the same clique. However, the subcommittee did miss the opportunity to ask that question during the testimony, a question I surely would have asked if I were in a position to ask questions.[my bold]

Here, I cannot help but make the same point about the Muir Russell review, which, despite far more time, attention and expenditure, also failed to ask the same question of the three CRU members of the clique (Jones, Briffa, Osborn).

The Wegman Report’s use of social network methods to examine intra-clique peer reviewing occasioned only limited response at the time, even in the climate blogs. Among blogs for the faithful, it was briefly considered by Realclimate and Crooked Timber, both of whom sneered at the methodology as yielding nothing more than tautologies.

At Climate Audit, Wegman’s speculation about social networks attracted even less attention. I didn’t post at the time on social networks. To my knowledge, I only commented on the topic in passing twice (both much afterwards), each time being mildly critical, observing that both Jones and Bradley seemed far more central in the network as of 1998 than Mann. (Mashey recently criticized Said et al 2008 for using data up to 2006 rather than data up to 1998.) The topic was discussed by CA readers in one 2006 thread, but I myself didn’t participate in the discussion. Afterwards, the topic more or less disappeared from view here. Said et al 2008 was never mentioned at CA either in a post or even in a reader comment and, until yesterday, I’d never even read Said et al 2008. I don’t think that I even knew of its existence.

Said et al 2008
In the following comments, I’m going to discuss the “substantive” sections of Said et al 2008, reserving the discussion of section 1 for another occasion. Said et al 2008 explicitly linked back to the brief controversy arising from the social network analysis of the Wegman Report. They set out a typology of four styles of co-authorship: “solo, entrepreneurial, mentor, and laboratory”:

Co-authorship establishes a linkage or tie between two individuals. These linkages can be examined as a social network and patterns exhibited in the social network of an individual and his co-authors can shed considerable light on how an author works and deals with his colleagues. Using the block model analysis outlined in the previous section, we can cluster the set of co-authors….

Wegman et al. (2006) undertook a social network analysis of a segment of the paleoclimate research community. This analysis met with considerable criticism in some circles, but it did clearly point out a style of co-authorship that led to intriguing speculation about implications of peer review. Based on this analysis and the concomitant criticism, we undertook to examine a number of author–coauthor networks in order to see if there are other styles of authorship. Based on our analysis we identify four basic styles of co-authorship, which we label, respectively, solo, entrepreneurial, mentor, and laboratory. The individuals we have chosen to represent the styles of co-authorship all have outstanding reputations as publishing scholars. Because of potential for awkwardness in social relationships, we do not identify any of the individuals or their co-authors.

Said et al 2008 published block model diagrams for four authors supposedly representing each of these styles. Although they did not identify the authors or their cliques, precursors for two of the figures can be readily identified from the Wegman Report. The block model (their Figure 1) used to represent the “entrepreneurial” coauthorship pattern is the figure for Mann from the Wegman Report (with names removed). The names removed from Mann’s “clique” (in the top left corner) are Jones, Briffa, Osborn, Hughes, Bradley and Rutherford – all names familiar to CA readers. Similarly the block model that supposedly represents the “mentor” coauthorship pattern is the figure for Wegman himself, taken from the Follow-up Questions to the Wegman Report.

After presenting social network analyses of these coauthorship styles, Said et al 2008 observed that the premise of the peer review system was the existence of “independent, unbiased, and knowledgeable referees”:

Wegman et al. (2006) suggested that the entrepreneurial style could potentially lead to peer review abuse. Many took umbrage at this suggestion. Nonetheless, there is some merit to this idea. Peer review is usually regarded as a gold standard for scientific publication. Clearly it is desirable that the peer reviewer have three important traits: independent, unbiased, and knowledgeable in the field. As any hard-working editor or associate editor knows, finding independent, unbiased, and knowledgeable referees for a paper or proposal is a difficult chore. This is especially true in a rather narrow field where there are not many experts so that issues of independence arise quickly. Clearly as a field becomes increasingly specialized, there are not as many independent experts. Thus finding someone who is both independent and knowledgeable is difficult. In the past, when many more authors adopted a solo style of authorship, finding someone who was not a co-author was relatively easy. Nonetheless, the issue of unbiasedness still was an issue.

Said et al argued that the overlapping co-authorships arising from “entrepreneurial” co-authorship, in effect, drained the pool of potential “independent, unbiased and knowledgeable” referees in a small discipline:

The social network analysis of an entrepreneurial style suggests the following. There are many tightly coupled groups working closely together in a relatively narrow field. It is clear that closely coupled groups have a common perspective. Thus it is very hard to find a referee that is both knowledgeable and independent. Because of the common perspective, in addition it is very hard to find an unbiased referee. Thus this style of co-authorship makes it more likely that peer review will be compromised. One mechanism for selecting referees is to look at papers referenced by the paper in question. This possibility means that a naive associate editor might actually pick someone from the social network of co-authors, who is not obviously a co-author. Indeed, the paleoclimate discussion in Wegman et al. (2006), while showing no hard evidence, does suggest that the papers were refereed with a positive, less-than-critical bias.

As Wegman had done previously in the follow-up answers to the Wegman Report, Said et al 2008 conceded that, given the anonymity of peer review, it was impossible to be more than “suggestive” on this point:

Of course because referees are not identified, getting hard evidence of independence, unbiasedness and knowledgeable expertise is not readily available. The social network analysis can therefore only be suggestive. It is our contention, however, that safeguards such as double blind refereeing and not identifying referees invariably lead to the conclusion that peer review is at best an imperfect system. Anyone with a long history of publication in their heart of hearts knows that they have benefited or have been penalized, probably both, by imperfect peer review

The final sentence of this paragraph is particularly ironic in view of the evidence of cursory peer review of Said et al 2008 itself, perhaps due to the longstanding association between editor Stanley Azen and coauthor Wegman. (While the short acceptance time is evidence of this cursoriness, I have seen little evidence that peer review is made more effective by aging on a reviewer’s to-do list.) The cursoriness is evident in the failure of the peer review process to require attributions for section 1. No reasonable reviewer could have regarded the definitions in section 1 as original to Said et al 2008, regardless of whether the text was paraphrased or verbatim. That failure is itself evidence that Said et al 2008 was accepted with nothing more than the most cursory review.

Like the Wegman Report, Said et al 2008 could only suggest that “entrepreneurial” co-authorship resulted in a situation where members of a clique reviewed one another’s papers, but fell short of actually demonstrating that this happened in any actual cases.

Recent Commentary
Little recent commentary on Said et al 2008 has addressed the substance of the article (most commentary showing little evidence of the parties having actually read the article). However, two social network specialists have commented – Kathleen Carley in a recent USA Today interview and Garry Robins (page 151 here).

Carley characterized Said et al 2008 as an “opinion piece”, observing that the authors had not been able to provide data to “support their argument” (a point clearly acknowledged by the authors, who pointed to the anonymity of peer review). Although Carley would have recommended a major revision, she didn’t characterize anything within the article as ‘wrong’:

Q: (So is what is said in the study wrong?)
A: Is what is said wrong? As an opinion piece – not really. Is what is said new results? Not really. Perhaps the main “novelty claim” are the definitions of the 4 co-authorship styles. But they haven’t shown what fraction of the data these four account for.

Earlier, Garry Robins of the University of Melbourne had observed that network analysis by itself could not demonstrate that members of the Mann clique had reviewed one another’s work, conceding that, given peer review anonymity, ‘this data would be difficult to obtain’.

The implications for peer review in section 4 of the article present inferences that go too far beyond the data and the analysis. The argument is that because there are many tightly coupled groups working closely together they will have a common perspective, so unbiased reviewing is compromised. Of course, in regard to a given article, a set of co-authors (i.e. within the one group) will have a common perspective at least in regard to that article. That does not mean that the other groups necessarily share that same perspective (even if they share one co-author). The literature on network entrepreneurship, including work by Granovetter and Burt, gives plenty of theoretical and empirical reasons to suggest that such groups, linked by one entrepreneur (in this case the central author), may indeed have different opinions or approaches. In other words, a common perspective within groups does not imply a common perspective across groups. Because it is not possible a priori to infer perspectives from network structure, the issue is an empirical question that requires additional data before conclusions can be drawn.

Moreover, the analysis is essentially an egonet strategy, focusing on the co-authors of one central author. Even if there were a common perspective within the egonet, this does not imply a paucity of other reviewers outside the egonet (i.e. who are not co-authors) to provide different perspectives. It is highly risky to draw conclusions about a complete network based on a single egonet. In summary, there may or may not be compromised reviewing in various research domains but this network analysis cannot provide sufficient leverage to show it.
A more complete network analysis involving editors’ and reviewers’ links, in the context of a domain-wide co-authorship network, together with data on individual positions taken on controversial research issues, would be one way to proceed, although admittedly some of this data would be difficult to obtain.
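Robins’s egonet point can be made concrete with a toy sketch (hypothetical names only; not a claim about the real paleoclimate network): even if everyone inside a central author’s egonet shared a perspective, authors outside the egonet may remain available as reviewers.

```python
# Toy sketch of the egonet concept: the egonet is the central author plus
# direct co-authors; the rest of the field lies outside it.

def egonet(center, edges):
    """Node set of the egonet: the central author and every direct co-author."""
    nodes = {center}
    for a, b in edges:
        if a == center:
            nodes.add(b)
        elif b == center:
            nodes.add(a)
    return nodes

# Illustrative edges only (not real data).
edges = [("Mann", "Jones"), ("Mann", "Briffa"), ("Jones", "Briffa"),
         ("Smith", "Lee"), ("Lee", "Chen")]
ego = egonet("Mann", edges)
everyone = {n for e in edges for n in e}
print(sorted(everyone - ego))  # authors outside the egonet -> ['Chen', 'Lee', 'Smith']
```

As Robins notes, conclusions about the complete network cannot safely be drawn from the egonet alone: the nodes outside it are invisible to an egonet-only analysis.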

Climategate Social Network Diagrams
Soon after the release of the Climategate dossier, social network-type diagrams were published – though no one at the time thought to compare them to the social network diagrams in the Wegman Report and Said et al 2008.

The main nodes of the Climategate social network diagram were Mann, Jones, Briffa and Osborn, with Santer, Schmidt, Wigley, Overpeck and Jansen next in prominence, as shown in the figure below:

Other interesting social network diagrams for Climategate were presented at the time here and here.

The Climategate Dossier
Wegman had stated in 2006 that “obviously because peer review is typically anonymous, we cannot prove or disprove the fact that there are reviewers in one clique that are reviewing other members of the same clique.” Although the Climategate dossier offers only a small sample, it provides the evidence missing from the Wegman Report and Said et al 2008 – conclusive evidence that members of Mann’s clique had reviewed papers by other members of the clique. Not only does the Climategate dossier contain emails about peer review, it contains a number of actual reviews by Phil Jones (a member of Mann’s clique) of articles by other members of the clique.

In each case, as Wegman had hypothesized, Jones’ reviews were short and favourable – invariably “acceptance subject to minor revisions” – recommendations that were entirely opposite in tone to his reviews of critics of the clique. Moreover, in important cases, Jones’ reviews evaluated articles in terms of how they would contribute to the forthcoming 2007 IPCC assessment report – in which he and other clique members (e.g. Briffa, Osborn, Trenberth) were Lead Authors and Coordinating Lead Authors responsible for assessing articles written by members of the clique and reviewed by other members.

For example, the Climategate documents contain Jones’ review of Santer et al 2005, whose co-authors included not only Santer but other regular Climategate correspondents: Wigley, Schmidt and Thorne. Jones’ cursory review was less than a page and contained only minor comments. Santer et al 2005 was submitted on May 16, 2005; Jones’ short review was dated June 4, 2005; the article was accepted on July 27, 2005.

One of the more controversial Climategate emails was Jones’ email saying that he would keep a paper by McKitrick and Michaels criticizing CRUTEM out of the forthcoming IPCC report “even if we have to redefine what the peer-review literature is!” Jones made good on this threat through the first two drafts, the only versions sent to external reviewers. In the final draft, Jones and his fellow IPCC authors grudgingly commented on the McKitrick and Michaels paper, but only accompanied by disparaging editorial comments which, as McKitrick later observed, had no support in the academic literature at the time. In 2008, Gavin Schmidt of realclimate submitted an article entitled “Spurious correlations between recent warming and indices of local economic activity” to the International Journal of Climatology (Tim Osborn of CRU being on the editorial board) that purported to provide the support missing at the time of the 2007 IPCC Report. Once again, Phil Jones was a friendly reviewer for a submission by another member of the clique. No such indulgence was extended to McKitrick when he attempted to respond to Schmidt’s criticism at the same journal. On this occasion, Jones was doubly conflicted, since Schmidt 2008 criticized the critics of Jones’ own temperature index. As on other occasions of intra-clique reviewing, Jones recommended acceptance with minor revisions and the article was speedily accepted. Jones’ review stated:

it will be good to have another paper to refer to when reviewing any more papers like dML06 and MM07. There is really no excuse for these sorts of mistakes to be made, that lead to erroneous claims about problems with the surface temperature record.

The Climategate documents even contain a review by Jones of a lengthy submission by Mann (one that I am presently unable to identify), with Jones once again recommending acceptance subject only to minor revisions.

Jones’ Review of Wahl and Ammann
Wegman’s “hypothesis” was vividly confirmed with the evidence in the Climategate dossier that Jones had even been a reviewer of Wahl and Ammann 2007, the official Team response to the MM criticisms of Mann et al 1998-99, that had been at issue in the Wegman Report. Wahl and Ammann had been submitted on May 11, 2005; Jones’ review was only 11 days later.

Unsurprisingly, Jones’ review was limited to “minor” comments, with his only ‘major suggestion’ being to include a map in the supplementary information. Jones’ review did not even object to Wahl and Ammann’s reliance on a rejected submission (Ammann and Wahl, submitted to GRL) for essential results – the long story of “Caspar and the Jesus Paper” is wittily told by Bishop Hill. Jones appraised Wahl and Ammann in terms of its potential contribution to the forthcoming IPCC report (in which it would be assessed by Keith Briffa), stating that it was to be “thoroughly welcomed and is particularly timely with the next IPCC assessment coming along in 2007”, adding (inaccurately as it turned out) that it would ‘go a long way to silencing the critics’.

As CA readers are aware, there were numerous substantive issues about Wahl and Ammann. Many of these were discussed in Wegman’s answers to follow-up questions to the Wegman Report, in which Wegman was asked detailed questions about Wahl and Ammann. Wegman observed that Wahl and Ammann had actually validated our results and calculations. Wegman stated caustically that Wahl and Ammann’s methodology, in which they altered MBH methodology to “get” a similar answer, had “no statistical integrity”. See Wegman’s discussion of Wahl and Ammann at question 10 here. Even though there were numerous substantive issues on the table, Jones did not ask about or seek clarification on any of these issues.

I won’t re-litigate these various issues in this post. My point here is that instead of Wahl and Ammann being peer reviewed by someone independent, it was reviewed by Phil Jones, Mann’s closest correspondent, with the subsequent IPCC assessment being conducted by fellow clique member Keith Briffa of CRU.

Given the concerns about plagiarism in Said et al 2008 (which is limited to failure to attribute introductory material only), it is important for readers to realize that Wahl and Ammann 2007 thoroughly plagiarized Mann’s 2004 submission to Nature (responding to our 2004 submission), not just in background words, but in important ideas and concepts. Virtually every argument advanced in Wahl and Ammann had previously been proposed in Mann’s 2004 submission to Nature (some of which had also been summarized in posts by Mann at realclimate in late 2004 and early 2005). Despite their obvious use of Mann’s prior work, Wahl and Ammann contained no reference to Mann’s prior work (nor even an acknowledgment to Mann.)

Conclusion
Unfortunately, section 1 of Said et al 2008 (and the corresponding section of the Wegman Report) contained background material from standard texts without attribution. This breached academic protocols and requires a proportional response – a point that I will discuss in a forthcoming post.

However, within this controversy, it should not be overlooked that the Wegman Report was vindicated on its hypothesis about peer review within the Mann “clique”. The Wegman Report (and Said et al 2008) hypothesized, but were unable to prove, that reviewers in the Mann “clique” had been “reviewing other members of the same clique”. Climategate provided the missing evidence: the Climategate documents showed that clique member Phil Jones had reviewed papers by other members of the clique, including some of the articles most in controversy – confirming what the Wegman Report had only hypothesized.

212 Comments

Every time I read one of these posts, I picture a coffin, already full of nails, with a new nail being pounded in between in the tiniest possible space. Eventually the coffin will collapse under the weight of all the nails.

Re: Jeff Alberts (May 23 23:49), The danger is that it won’t collapse and will eventually be clad in steel. Just what does one hammer into a steel coffin? Never mind… It will sink much easier when finally eased over the gunwale.

Steve,
I think you continually misunderstand the role of journal referees. Editors choose referees who can give them a good range of advice. They will typically include one whom the editor judges to be “friendly” and one whom he expects to be critical, and he will assess what they say accordingly. Of course, the friendly referee is likely to come from the “social network” of the author. The critical referee will often be one who has written a paper in disagreement. If the paper itself criticises other work, the author of that work is a likely referee.

Many journals formalise this. For example, CSDA, where Said et al appeared, invites authors to submit the names of their preferred referees: “Please submit, with the manuscript, the names, addresses and e-mail addresses of 3 potential referees. Note that the editor retains the sole right to decide whether or not the suggested reviewers are used.”
This is common practice. Of course, authors nominate their friends.

They fight two notions: 1. that pals review. 2. that pals let certain things through that should not get through.

So. You have agreed with the first Wegman claim. There is pal review. And so you think that DC, Eli and Mashey are wrong.

Second. You think Steig was fair in his fight to keep O’Donnell from being put through as published. And you think that the other reviewers were friendly and too easy. So, you agree with Wegman’s second conclusion. Here too you disagree with DC, Mashey and Eli.

What we lack is an adequate theory to explain how bad science gets through review. Wegman has a theory that purports to explain some of that: pal review. Absent a better explanation (like leprechauns), I think you agree with Wegman.

No, I didn’t say any of that. I said nothing about Steig. And I didn’t say anything about reviewers being too easy. Rev C had a lot of recommendations. But I thought he qualified as friendly.

And I don’t agree with Wegman, and didn’t say so. I think his stuff is nonsense. I doubt that the “bad science” you have in mind is really so. But any review system will have human error.

And you’re stretching the term “pal review”. I’ve said that an editor will usually try to have one reviewer who could be seen as sharing the viewpoint of the author. And a critic. And then others. And he’ll weight their opinions accordingly. That gets the best range of investigation and advice. There’s nothing new about it, nor peculiar to climate science.

It’s true that any journal editor may have only a small group of people to select from, so it may be hard to avoid ‘pal review’.

However, we know that Jones and co did try to bring pressure on journals to keep articles out and ensure their own views dominated the area, going so far as to demand that some editors be sacked. When you add to that their past status in the community, you can see how a journal editor might find life ‘easier’ if he picked reviewers inclined to be supportive of the work of Jones and co, in the knowledge that a ‘you scratch my back, I’ll scratch yours’ system is at work. Add to that the fact that they have been shown to pass reviews between them, even when the person has no role within the review, and it can be seen how this ‘pal review’ can work even if the journal editor does not intend it.

A linkage that has thus far attracted little comment is the participation of the Jones-Mann clique on editorial boards of climate journals in this period. Mann was an editor of Journal of Climate; Briffa was on the editorial board of Holocene, Osborn on Int J of Climatology, Jones and Santer on Climatic Change. Hence their concern to “plug the leak” at GRL, where they were worried that they didn’t have someone on the editorial board.

1. there is a difference between having a reviewer who agrees with your position
and having a reviewer who is a friend, co-worker or co-author.

2. Wegman made a claim. His analysis suggested that Mann’s co-authors were likely
to be reviewers of his papers.

#2 happens to be true. Denying that sort of reality puts you in a class with the
folks who deny basic radiative physics.

That is the question on the table. Not whether the practice is allowed. Was Wegman right? You seem to be arguing that he was right and giving an explanation of why pal review happens. So, you agree with Wegman.

The issue isn’t that there is ‘pal review’ but whether or not it matters when (1) a new field emerges into prominence, and (2) the demands that public knowledge places upon the peer review system to screen out ‘bad ideas’ are not met.

My theory might well be similar to Ross McKitrick’s – namely, that the positive reputational effects of those on or friendly to the Hockey Team prevented all due diligence. Thus the value of Judy Curry and Richard Muller in re-examining the cheerleading appraisals from realclimate.org, and ferreting out the merits and demerits of ‘Hiding the decline.’

When science can’t screen out even the simplest and clearest of bad science, then the system is broken. And hence Steve’s re-examination of Wegman’s contribution towards the same.

As usual, Nick refuses to concede the slightest point and instead changes the subject. All too often, readers allow Nick to divert discussion from the topic of the post.

Nick says “I think you [Steve] continually misunderstand the role of journal referees.” This post was NOT an editorial about the role of journal referees. This is a large topic. Personally, I wish that journal referees would pay more attention to ensuring that articles contained complete disclosure, including archiving of all relevant data and results, as opposed to whatever they do now. From the outset of my inquiries, I’ve held the position that “journal peer review” constitutes a very limited form of due diligence. An opinion that is not shaken by examining Jones’ reviews of fellow members of the Team.

In the post at hand, the issue is Wegman’s surmise that members of Mann’s clique had reviewed papers by other members of the clique. Nick is free to argue whether this is or isn’t a “good” thing or whether there are practical alternatives. But first, Nick should at least agree that the practice took place. And that this confirms the surmise made in the Wegman Report.

Re: Steve McIntyre (May 24 09:04),
I’m sure the “practice” took place. I’m simply saying that editors try to assemble panels with a range of dispositions. And when for example Jones reviews Santer’s paper, there would have been others on the panel of less “friendly” disposition. It works like that in other fields too. On many journals Santer could have nominated Jones as a referee if he wanted.

Demonstrating that one member of a panel was “friendly” proves nothing. There’s usually such a person. What matters is the total composition of the panel.

At journals where I submit, it is forbidden to submit suggested names of reviewers with whom one has professional relationships such as being on a grant together or publishing together. This is called “conflict of interest” fyi.

In most of the ecology/conservation/forestry journals where I submit, it asks for names and says to please refrain from using those with whom you have a professional relationship (boss, student, coauthor). It is fairly standard. This is for my 130 peer-reviewed papers.

Re: Craig Loehle (May 24 11:16),
The google search that I indicated shows a large range of journals that request, in their guide for authors, that referees be nominated. They use a standard phrase. I haven’t yet come across one that made restrictions on who could be nominated. That’s why I would like to know if you could name a specific journal that does state such a restriction?

The journal “Ecology” says: “Suggested editors and reviewers should not have any conflict-of-interest in evaluating the paper.” and this holds for all journals of the Ecological Society of America and for most other journals in the field. You didn’t look very hard.

The phenomenon of errors appearing in scientific publication deserves an explanation. You’ll recall that at Lisbon we talked about the theory of “honest” error. One could say that errors just “naturally” appear in scientific publication. Or one could offer a hypothesis about what preconditions lead to errors. Wegman has proposed that pal review, or let’s call it friendly review, can contribute to the introduction of error.

Taking O’Donnell as an example, do you believe that the “unfriendly” review by Steig improved the paper? Do you think the friendly review was better or worse?

Now clearly, you can play the skeptic to the hypothesis, much as skeptics look at the climate and say there is nothing remarkable to explain.

Re: steven mosher (May 24 17:17),
I agree that energetic informed review often improves papers. That’s why editors aim for a range of reviewer views – the anti often does the hard work, even if driven by mean spirit. It’s why the Church had a Devil’s advocate – to make saints saintlier. Remember the hullabaloo re peer review a few months ago was about 88 pages!

But friendly review needn’t be empty. As someone said on the thread, friends don’t let friends publish mistakes. The Jones reviews cited were helpful, and some thought went into them.

Steve – since you binned my last post I will repeat the uncontentious parts:

If I wrote a paper on JET fusion processes and why I need to use beryllium-tungsten walls, who would review it – geologists? Research chemists? Joe Bloggs, the blogging king? Or would it be other researchers in the field of fusion reactions and materials scientists?
Would I know these others?
Yes.
I would be in email, telephonic, and even social contact with them. Some would even be friends! Pals (to you).

They would all be from a small number of people who hopefully understand the subject they are reviewing and I would hope they would be the most knowledgeable in the field. They may be people that I had reviewed. This would severely limit the number of reviewers. I suppose you could call this a clique. Is this objectionable??

the ford prefect.
Because your hypothetical paper would presumably cover several disciplines (let’s take chemistry as one), there could be value in a review by a chemist unknown to the author – a specialist chemist experienced with some of the materials and some mechanisms. It follows that, ceteris paribus, a proper review could be done by a cluster of unrelated, cross-disciplinary scientists not known to the author. Steve has shown many times over the years that climate papers later inspected by statisticians are often sub-optimal. Wegman has stated that climate researchers need to use professional statisticians more often. Surely you would have to concede that having pal review of your JET fusion process might not be the best way to go. You seem to underestimate the potential value of inputs from people outside the immediate field. I would be delighted to have strangers review my submission because a completely novel, less incestuous line of interest might emerge. Indeed, there might even be a higher probability that it would.

Re: Geoff Sherrington (May 24 18:43),
Geoff,
It’s great to theorize about clusters of unrelated, cross-disciplinary scientists. But where do editors find such people with time and willingness to spare? And get on top of new stuff, as they would have to do?

Wow, Nick. Your past coauthors, who probably were on a grant with you or were your student or prof and with whom you (in the Team case) exchange chatty emails and write a whole bunch of papers would not have a conflict of interest? Just wow. And the field isn’t that small that objective reviewers can’t be found.

So your complaint is that the peer review process is weak because of a shortage of reviewers. That is not news, nor relevant to this discussion unless you are conceding that poor review practices in climate peer review have deterred potential referees, who prefer not to enter a mud sling.

Would it not be easier to concede that Steve is correct in his header? It is a strong possibility. If you have inside knowledge that he is wrong, then you should present it, even at the risk of losing your anonymity, because few people are buying your arguments, which inter alia stray from the point too often. For your posts, Logic 8/10; Relevance 1/10.

REVIEWING AND PROMPTNESS OF PUBLICATION: All manuscripts submitted for publication will be immediately subjected to peer-reviewing, usually in consultation with the members of the Editorial Advisory Board and a number of external referees. Authors may, however, provide in their Covering Letter the contact details (including e-mail addresses) of four potential peer reviewers for their paper. Any peer reviewers suggested should not have recently published with any of the authors of the submitted manuscript and should not be members of the same research institution.
http://www.benthamscience.com/open/tofscij/MSandI.htm
This was found using google under this search “peer review policies for ecology/conservation/forestry journals”

Nick, it seems you don’t search well because the first search item indeed does include excluding language.
I wonder dear boy, how many more examples will I find where you say none exist?

Re: Gaelan Clark (May 24 16:55),
I didn’t say none exist – I said I couldn’t find any. And I did find many journals which invited suggestions with no restriction. However, I agree, you’ve found the Open Forest Science Journal which does have such a restriction.

Nick,
There is no evidence of critical reviews in this literature. Take the Schmidt paper. This contains two very obvious points which referees might have picked up on. The first was that he did not test for spatial autocorrelation but merely asserted that it was likely (and did not distinguish clearly between residual autocorrelation, which does matter and was not present in M&M, and spatial autocorrelation in the dependent variable, which is a feature of the data that needs to be explained). The second was that the climate model predictions left even more to be explained by socio-economic variables than the actual observations, so were of no help in explaining the McKitrick and Michaels results. If there had been a critical review I doubt the paper would have been published. A good reviewer (critical or not) would have pointed out these weaknesses and certainly asked for tests for autocorrelation of residuals, which are bog-standard things to ask for. Jones’ review merely shows how little he knows about the issues. Nor is there any evidence that Wahl and Ammann had a critical reviewer. When we see also the correspondence about the need for tough reviews of critical papers, your hypothesis of balanced reviewing in this area seems very unlikely.

Whoops! Try again
“This is common practice. Of course, authors nominate their friends.”
Do they, Nick, or do they nominate colleagues and other established researchers in the field who can give the paper a thorough going over so that when it is finally published it stands up to scrutiny and actually advances the science?
Anything else is purposeless except to aggrandise the researcher himself. Simply adding to the list of papers with one’s name attached, regardless of their scientific credibility, is a waste of everyone else’s time and, in the case of most climate research, taxpayers’ money.

It’s very simple. The editor is assembling a panel. He wants a range of advice. He asks authors who they would like on the panel. He asks for 3 names. Why 3? Because, and this is something people here seem to have trouble understanding, it’s really hard to get people to say yes to reviewing. He’ll ask the first, and if no, then the second, etc. He’s probably hoping the authors have done the hard work of getting their nominees to agree. But he’s not going to make a panel out of only author nominees.

Nick, again, if you can refrain momentarily from editorializing about the wonders of academic peer review, let’s review the statement that Wegman made in 2006 that is at issue here:

because peer review is typically anonymous, we cannot prove or disprove the fact that there are reviewers in one clique that are reviewing other members of the same clique

Both the Wegman Report and Said et al 2008 have been severely criticized for failing to prove this point.

Do you agree with Wegman’s original hypothesis: that members of Mann’s “clique” were reviewing other members of the same clique?

Whether this is or isn’t a “good” thing or whether it is or isn’t out of line with standard journal practices are different question, but please can you either agree or disagree on Wegman’s original hypothesis?

Re: Steve McIntyre (May 24 10:09),
Yes, I’m sure that climate scientists have reviewed papers co-written by scientists that they have coauthored papers with in the past. I’ve done that myself. I’m not a climate scientist. In my field, if that was forbidden, papers simply couldn’t be reviewed. In forty years, you accumulate a lot of past co-authors.

Wegman and Said were correctly criticised because they wrote a whole confection where they tried to make co-author patterns take the place of evidence of reviewing. They had no evidence of whether the practice was more or less extensive among the various groups that they spoke of.

Let me also comment that I have never argued that it is the function of journal peer review to root out all possible errors. At this time, we don’t know whether MBH98, MBH99 were peer reviewed by pals or not. But even if they were reviewed by independent reviewers, that the reviewer was independent would be no guarantee that he would have spotted a defect in Mannian principal components.

More serious issues with the peer review process of MBH in my opinion arose from the withholding of adverse verification results, and particularly the failed verification r2 statistics.

Re: Steve McIntyre (May 24 12:25),
Steve,
The order of posts is now so randomized that I don’t know if your comment is responding to something I wrote or not.

Adding to the randomness is that my comment of 11.25am, to which Roman replied, now seems to have gone into moderation.

However, to answer this one, it just isn’t known to be true that “the peer review was done by Phil Jones”. PJ was probably just one of several reviewers, likely with a range of views. You have no reason to say that Schneider did not find an “independent” reviewer.

Re: Steve McIntyre (May 24 12:25), No, he’s arguing that pal review is both necessary and insufficient. He’s arguing that the best practice is pal review. He’s arguing that pal review does not vitiate the independence we expect out of peer review.

I’ll put words in Nick’s mouth till he spits them out or disowns them.

Pal review is required. It’s the best way to do things and I would not change it if I could. The public should accept pal-reviewed science because I say so.

Just because YOU think they’re biased and dishonest doesn’t mean they are.

Peer review is about checking that the study was done correctly and that the results are sound. It isn’t perfect and some bad papers get through. Just saying that people are friends or that they have co-authored papers is insufficient to demonstrate flaws. You need to look at these case by case; you can’t rely on a syllogism to prove your case.

Check every other field of science and you will see that people who review papers will be in the same social network. Even in your precious Engineering Journals.

If people have integrity and an interest in the truth, they will nominate critical scientists – the people they would argue the science with at a conference, for example. If people are not interested in the truth, they will nominate suitably dishonest and friendly pals.

The original hypothesis of the Wegman article was a practical one – that “entrepreneurial” collaboration reduced the pool of potentially disinterested reviewers. I think that Wegman under-discussed the role of the IPCC here, which encouraged academics in the field to collaborate on a consensus. A result of this is that the pool of disinterested reviewers was reduced.

A key point that came out very clearly above. I do agree about almost everyone missing the point here. That critic stokes the wrong fire and everyone seems to jump into it. Take a bit more time and, if you want to talk to him at all, get Nick to answer the question: was the Wegman hypothesis about pal review right or wrong, based on the evidence of the Climategate emails? Then look at the same issue in the light of IPCC involvement of so many in the clique. Mega-helpful.

Steve: Nick, you still refuse to agree to the point in question. The question is not whether “climate scientists have reviewed papers co-written by scientists that they have coauthored papers with in the past”. The question is whether you agree that “members of Mann’s clique reviewed papers by other members of the clique”. Whether such practices are widespread in climate science is an interesting but different question. Please answer the question at issue and we can move on.

The concept does not appear in section 2.3, entitled **Background on Social Networks**, p. 17-22, where the core concepts (Actor, Relational Tie, Dyad, Triad, Subgroup, Group, Relation, and Social Network) are defined and some possible interpretations of network vertices (Partitions, Allegiance, Global View, Cohesion, Affiliations, Brokerage, and Centrality) are characterized.

Of course, the concept of clique should not be considered to connote anything. After all, it is a mathematical concept borrowed from graph theory:
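As a purely illustrative sketch of the standard graph-theoretic definition (the names and edges below are hypothetical, not drawn from any network in the Wegman Report): a clique is a set of vertices every pair of which is joined by an edge.

```python
from itertools import combinations

def is_clique(vertices, edges):
    """True if every pair of the given vertices shares an edge."""
    return all(frozenset(pair) in edges for pair in combinations(vertices, 2))

# Hypothetical co-authorship network: an edge means "have written together".
edges = {frozenset(e) for e in [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]}

print(is_clique({"A", "B", "C"}, edges))       # True: all three pairs connected
print(is_clique({"A", "B", "C", "D"}, edges))  # False: D is not linked to A or B
```

In this neutral sense, a “clique” carries no pejorative connotation; it merely names a fully interconnected subgroup.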

McIntyre,
Please define what you understand by clique.
Is it 3 people or 100 people? Climate scientists in general, or some in particular?

Is it those that have reviewed each other’s papers? Is there a time limit? Is there a clique involving distant relatives? Can you have a clique of people who passed through the same university?

Wegman’s hypothesis was nonsense, as the history of the paper itself shows. Said and Wegman 08 was sent to an editor who was part of Wegman’s clique. It is pretty clear that the editor, Wegman’s friend, was the reviewer (the dog ate the homework excuse is not to be believed). So here is an instance where the great man school of authorship produces a mess.

Now this is not to say that the editor should have picked up the copying (plagiarism), although Eli suspects we are not far from the time when all submitted papers will be run through TurnItIn or the like, as is now mandatory at many places for theses. Which, of course, would have been trouble for a number of theses from the Wegman group.

But it is to say that the editor let this very weak paper through because he had trust in Wegman, the great man. He admits that he had no knowledge of Social Network Analysis and no basis for his review other than a personal relation.

The real issue is the relationship of the editors with senior authors. Wegman’s taxonomy is a load of codswallop.

This is the key point that has been missed by everyone. It is the IPCC angle that makes comparisons to most other academic practice invalid. The IPCC involvement forces climate scientists to form cliques.

If “critical” means “will never accept”, then it would be foolish to nominate such scientists. If M&M were to nominate Phil Jones, for example, he’d have “gone to town”, and their paper might never have been published.

“Critical” as in “review objectively with a critical eye” ought to be a quality of all scientists. However, one of the issues with a hypothetical group of partisan peer reviewers is that it forces non-members of their group to change behaviour defensively also.

Nick, while I’m much chagrined to “pile on”, I think you’re being a bit disingenuous. In the common public, especially in the alarmist camp, there is indeed an expectation of unbiased and independent review. Even the phrase “peer reviewed paper” is a euphemism for valid, even though it is painfully obvious to people such as you and myself that nothing could be further from the truth.

What you are stating, may or may not be true, but this certainly isn’t the impression alarmists have left with the public. If the reviews are not to be unbiased and independent, then it would behoove the alarmist scientists and their advocates to clarify this point to the public. We are all about more open and honest communication, no?

Suyts,
Here is Nature on how their editors use referees advice. They emphasise that it is advice – the editors make the decision.

“Editorial decisions are not a matter of counting votes or numerical rank assessments, and we do not always follow the majority recommendation. We try to evaluate the strength of the arguments raised by each reviewer and by the authors, and we may also consider other information not available to either party. Our primary responsibilities are to our readers and to the scientific community at large, and in deciding how best to serve them, we must weigh the claims of each paper against the many others also under consideration.”

And here’s what they say about what they look for in referees. They don’t demand independent or unbiased. These are their stated policies.

“Reviewer selection is critical to the publication process, and we base our choice on many factors, including expertise, reputation, specific recommendations and our own previous experience of a reviewer’s characteristics. For instance, we avoid using people who are slow, careless, or do not provide reasoning for their views, whether harsh or lenient. “

Nick, that’s fine. Now all we have to do is inform the public that their perception of what peer-review means isn’t correct. I’m sure you’ll join me in support of such an endeavor. You and I can jointly make a statement, something to the effect…….

1. Peer-review isn’t a euphemism for validity, but rather, in some cases, simply like minded ideologues agreeing with each other. And,
2. Peer-review articles in climate science are very likely not independent nor unbiased.

James Sexton &

(Nick, feel free to copy and paste that part of the statement and add your name, you can even put yours in front of mine if you wish!)

From ‘Policy’
” All contributions submitted to Nature journals that are selected for peer-review are sent to at least one, but usually two or more, independent reviewers, selected by the editors. Authors are welcome to suggest suitable independent reviewers and may also request that the journal excludes one or two individuals or laboratories. The journal sympathetically considers such requests and usually honours them, but the editor’s decision on the choice of referees is final. ”

and from ‘System’

” Not only does peer review provide an independent assessment of the importance and technical accuracy of the results described, but the feedback from referees conveyed to authors with the editors’ advice frequently results in manuscripts being refined so that their structure and logic is more readily apparent to readers. ”

The parameters by which ‘independent’ is assessed are not stated.

It seems clear to me that collaborating under an active grant would be grounds for disqualifying ‘independence’.

Routine, mandatory disclosure of all such potential entanglements with prospective reviewers seems to be a prudent addition to the process.

Shared active grants should down-weight the selection of thus-linked reviewers, unless no other reviewers exist.

They’re at different institutions, in different countries. They didn’t collaborate on this project. So yes, they’re independent.
They are capable of independent thought. I know this is anathema to you, but you are hardly the authority. Sure, you can shout your opinion, but it’s not important to the way peer review works.

What you are doing is making a claim that they are not independent without actually saying what your criteria for independence are (and I stress this is all your own invention). Did you bother contacting the journal to ask how they determine who is and isn’t ‘independent’?

Steve: the use of the term “independent” reviewer is not my “invention”. The term is used, for example, by Nature. The onus is on Nature and other journals to define what they mean by an “independent” reviewer. Perhaps you can ask them.

You are inventing your own definition of ‘independent’ if you haven’t checked with the rules of the journals. YOU are claiming they are not independent. How can you do so, when you haven’t even bothered to find out what they mean?

From my perspective it is clear that Jones and Mann are independent. You do understand that you can be friends with people and be independent of them.
Steve: I discussed whether proxy studies with overlapping authors were “independent”. In that context, the meaning of “independent” is obvious: studies with overlapping authors are not independent. Nor are studies that recycle the same proxies. I did not comment on what an “independent reviewer” was in those discussions.

Re: Nathan (May 26 05:16), what are you going on about? What makes you believe that the publishers of the journal (not the “journal” itself which is not a sentient entity) would think that a separate definition of the term independent is either needed or effectively practicable?

I may be missing something in the concept, so I await your clarification as to your personal view of how such a definition might differ from the usual one as understood by most adults. Merely stating some examples illustrating the lack of independence in the instructions to the authors and reviewers is not a “definition” as far as I am concerned.

For the record, let me suggest my view of the definition of the term “independence” as it might apply in the procedures for reviewing of articles in academic journals:

The reviewer should not be involved in such a close personal or professional relationship with the authors that the presence of that relationship could reasonably be construed as placing the reviewer in a position where the objectivity of the reviewer could be affected.

Could you perhaps detail the extensive conflicts of interest that Jones had with these three authors prior to the reviews? Grants they had together? Evidence of a close working relationship, significant prior co-authorship perhaps? Just declaring that they are part of a ‘team’ and thus immediately conflicted doesn’t really count.

I am not simply “declaring” that Mann, Jones and Briffa are part of a “team”. Nor do I have time to survey the interconnectedness. There are many examples throughout this blog. You can read the Climategate letters and look through the literature. Sorry to be curt.

Is the assertion that ONLY Mann’s co-authors reviewed Mann’s papers, or that SOME of Mann’s co-authors were among the referees who reviewed Mann’s papers? Very different things are being mixed together.

Only in your world, Eli. No one could possibly know who all the “reviewers” were of all of Mann’s papers. And I’ve not seen that assertion. More, this conversation, in a larger sense, isn’t uniquely about Mann’s papers, but the papers of the clique. (Or the “team” as it is commonly known.)

Even if only SOME of Mann’s reviewers were his coauthors (and in Jones’ case close friend) this is problematic–isn’t it? I have sent back papers when the authors included close friends or my boss–doesn’t everyone?

I discussed reviewing with a philosophy professor at the University of Toronto a while ago. He said that, in their field, if they knew the author, they could not act as a reviewer and had to return the manuscript. Climategate attracted a little interest among philosophy professors a year or so ago, but none, to my recollection, opined on this aspect of Climategate.

1. Not related by kin or marriage
2. Not employed by the same institution
3. Not in business together (as in for $$)
4. Not thesis advisors or students
5. Not co-authors or grant collaborators in the last 48 months
6. Not co-editors in the last 24 months
7. General get out of town rule, if you think that you have too close a personal relationship you should step back. That is pretty much up to the person asked to review.

People sometimes invoke the last one to avoid giving a negative review. This is the only subjective criterion.
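For concreteness, Eli’s criteria 1–6 above could be sketched as a mechanical screen (the record fields below are purely illustrative and not any journal’s or agency’s actual schema); criterion 7, self-recusal, remains a judgement call:

```python
def has_conflict(rel):
    """Screen a reviewer-author relationship record against criteria 1-6.

    `rel` is a dict of facts about the pair; missing keys default to
    "no relationship". Criterion 7 (self-recusal) is subjective and not coded.
    """
    return any([
        rel.get("related_by_kin_or_marriage", False),        # criterion 1
        rel.get("same_institution", False),                  # criterion 2
        rel.get("in_business_together", False),              # criterion 3
        rel.get("advisor_or_advisee", False),                # criterion 4
        rel.get("months_since_coauthorship", 10**6) <= 48,   # criterion 5
        rel.get("months_since_coediting", 10**6) <= 24,      # criterion 6
    ])

print(has_conflict({"months_since_coauthorship": 12}))  # True: co-authors within 48 months
print(has_conflict({"months_since_coauthorship": 60}))  # False: outside the window
```

The point of coding it this way is that rules 1–6 are objective facts an editor could check mechanically, which is exactly why rule 7 stands apart as the only subjective one.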

Provide the names of all persons, participants and affiliates with potential conflicts of interest as specified in the NSF GPG. For each person, enter the first name, last name, and institution/organization. For each person listed on the project personnel list, include all co-authors/editors and collaborators (within the past 48 months); list all graduate advisors and advisees; list all subawardees who would receive funds through the Expeditions award.

Some comments on this. Many years ago, when everyone got sick of sixty-page CVs in proposals, NSF cut the allowed limit to two pages for PIs and Co-Is, with at most ten publications. They put in the list of collaborators and advisers so the program officers could avoid picking conflicted reviewers.

Notice that Eli said that these were the US rules for most agencies. Actually, NIH uses three years for co-authors. Each journal and each country has somewhat different standards. Many leave it to the judgement of the reviewers; Science, for example:
——————-
Conflict of Interest: If you cannot judge this paper impartially, please notify us immediately. If you have any financial or professional affiliations that may be perceived as a conflict of interest in reviewing the manuscript, please describe those in your confidential comments.
——————–

Steve: once again, can you provide me with a reference for the list of criteria in your previous comment. I presume that the matter has previously been dealt with in a professional way and that the climate community has not left it up to “Eli Rabett” to set out its policies on this important matter.

From your silence on the matter, I presume that you agree that Jones was NOT an “independent” reviewer of the four articles discussed in the post.

In a narrow field this might be difficult. Maybe close friends would be an acceptable criterion, but by attending conferences one gets to know just about everyone.

My own opinion and experience is that the opposite factor would be more important. If one writes a paper that is critical of a method favored by a significant member of the field, then one is always in danger of being reviewed by one of his/her acolytes (toadies). Even if you are right, you are wrong.

It is very clear from the CRU emails that while Mann, Jones, Briffa, etc may not have been actively writing papers with one another at the time, they were actively corresponding and talking about exactly what we’re discussing. So due to their secrecy, editors may not have known about their close relationship.

Not sufficient. As far as I can see, the only prior co-authorship of Jones with Schmidt was on a Santer et al 17-author paper in 2008. Hardly a close connection. I can’t see any co-authorships with Wahl at all. And Ammann is only a co-author with Jones on a Mann et al 2003 paper related to Soon and Baliunas. Again a many-author paper (13). None of this is indicative of any significant conflict.

Steve: Ammann was coauthor with Mann on multiple papers. Schmidt and Mann are both principals of realclimate.

Sorry, but we are talking about Jones here, not Mann. If Mann had reviewed the Schmidt or Wahl papers you might have had a point. But he didn’t. What does the Schmidt paper have to do with Mann in any case? You appear to be appealing to a COI twice removed, an argument that is nowhere to be found in any manual of ethics.

Re: Nick Stokes (May 24 09:37), Actually for Santer 2005 there were only two outside readers selected. With Santer, Wigley, Thorne, Hansen and Schmidt on the author list and Jones as the outside referee, the conflict of interest Jones has is pretty severe. He has close friends on the author list, employees on the list. You’d be hard-pressed to find a more conflicted reviewer. In fact if you look at all the author affiliations you’d be hard-pressed to find ANYBODY who wasn’t working in the same organization as the authors.

Re: steven mosher (May 24 17:10), “In fact, if you look at all the author affiliations, you’d be hard pressed to find ANYBODY who wasn’t working in the same organization as the authors.”
Indeed so. So what’s an editor to do?

How about, for starters, following Wegman’s suggestion (one that has been seconded on this site probably hundreds of times) and asking statistical experts to look at these papers, which are largely statistics-based, and do some real peer reviews.

Nick, if (as in many cases within climate science) a new statistical method was being used in a paper, or an existing one used in a new way, I have no doubt you would find numerous statisticians who were willing to review the methodology used.

Statistics is an ever-evolving field, and statisticians would be delighted to examine new methods which may assist them in their own work. Applying statistical methods to climate science is not unique; data is data (albeit with different underlying issues), and hence there are always many applications for any given method.

Re: slowjoe (May 24 11:50),
Well, presumably if they provide a list of nominees, they would wish for at least one of those to be selected (else why name them?). And the journal makes it clear that they will decide, and reputable is a likely requirement.

The current comments in the media still don’t disprove or really dispute the interpersonal/social networking conclusions, and their possible profound implications for the peer-review system used by most scientific and academic journals, as described in the original Wegman Report.

The Said et al 2008 Computational Statistics and Data Analysis paper being withdrawn for usages without proper attribution, a.k.a. plagiarism, in the introductory section does not invalidate the earlier Wegman Report of 2006.

By the way, you’ll notice those comments from Garry Robins, Associate Professor and Reader at the University of Melbourne, were regarding Said et al 2008 (“W.5.6.3 Comments on [SAI2008] from an expert”) and not the Wegman Report of 2006.

Robins will be leading a workshop and be the keynote speaker at the 4th Annual Political Networks Conference and Workshops this coming June 14-18, 2011, at the Gerald R. Ford School of Public Policy, University of Michigan, Ann Arbor.

Established procedures are usually defended as being merely imperfect when they are actually inadequate. The practice of making the person who would write the rebuttal a reviewer is bizarre in the extreme.

I used to work for a big organisation that tried the concept of ‘360 degree’ review as part of annual assessments. The idea was that you selected six people you had regular worktime contact with (co-workers, customers, suppliers, whatever) and asked them to write a couple of paragraphs about your strengths and weaknesses to assist in your assessment and personal development.

Though there is a nugget of a good idea somewhere there, this system rapidly became ‘corrupted’. Discreet personal gifts (alcoholic or not) were exchanged. Early drafts from colleague B of discussions about A’s performance somehow came into A’s hands in exchange for allowing B the same courtesy about A’s remarks on him. Colleagues who privately might be vociferous in their arguments suddenly managed to remember only the good things about their co-workers. A love-fest broke out and the entire point of the exercise was lost. The experiment lasted only two or three years before it was discarded as unfit for purpose.

I am hard pushed indeed to believe that such corruption does not occur in academe. Especially among a group who view themselves as having special and important insights unknown to the outside world. And who believe themselves to be under siege from frightening corporate bogeymen.

But perhaps the most telling is their continual unjustified reliance on using ‘peer review’ to imply complete scientific truth. It is not and never has been such a guarantee. I suspect that, like the 360 degree review process, it is no more than a back-scratching exercise.

It might be useful to seek the observations of journal editors. They will certainly know what happens in “peer” review: what careless and unfounded papers were denied publication, what ill-formulated efforts were brought to sharp focus.

It does not seem surprising to me that papers from a niche branch of scientific inquiry might be most effectively reviewed by other nichers.

It does seem a failure of the system not to have had a first-rate statistician as part of a review team where the analyses might include tormenting data. But what if the journal editor was insensitive to that possibility?

The real misdemeanor here is more the use of the term “peer reviewed” to suggest incontestable reliability.

“Here” was the wrong word. Maybe I should have said that “peer review” seems a term in wide use as a qualifier by people outside science to suggest incontrovertible truth. And it is this usage that I think a misdemeanor.

I do think it implicit in the Wegman Report and even “here” at Climate Audit that “peer review” at least should be a more reliable qualifier.

Shouldn’t the meaning of “peer review” instead be limited to “worth publication in this journal, no obvious internal inconsistencies, theses supported”?

To repeat. The editors surely know the effectiveness of the peer review sieve.

Almost any journalist covering the issue has quoted climate scientists talking about “peer review”. The terms “evidence based” and “peer reviewed” have entered the political lexicon. They mean that one’s views have been validated by the rigorous application of science, as opposed to the baseless prejudices that motivate one’s opponents’ proposals. Post-normal science is what is practiced today. Science by press release, touting the latest peer-reviewed paper showing some dramatic result, is all too common. Climate scientists, and others of course, know exactly what their listeners are hearing when they say “peer review”.

MarchesaRosa,
You said it better than I. I recently sat through a session where what I thought to be cogent observations on a subject were dismissed as ravings because the speaker had not published recently in a “peer reviewed” journal.

What’s been described appears to now be normal in the climate science field. Neither the journal editors nor the submitting scientists (provided they are saying the right things) have any problem with this. The hard ride given to sceptical papers is also considered normal. The scientists behave this way because the editors connive with them to do so. The journals have been protected by the anonymity.

If the journals behave this way over climate science, why should I believe they are any better in areas such as genetics, medicine, physics, or any other field? Why should we believe that the majority of peer reviewed science is not in fact mere shoddy pal-review?

The problem with peer review is the “peer” part. By limiting the population of reviewers you limit the chances of bad work being rejected, no matter how carefully you tinker with the mechanisms that supposedly assure fairness. American court trials require unanimity of votes on a jury of twelve “disinterested” persons. Although still imperfect in some judgements, juries of this size and standard give the public sufficient confidence to accept decisions in the main, even if there is grumbling about particular cases. Maybe it’s time for peer review to get more statistically rigorous by increasing the n size and formalizing the review criteria.

Nick, I think Steve probably understands the review process rather too well.
Perhaps with his persistence Steve is trying to have a higher standard applied to academic journals with potential for severe economic impacts. Perhaps similar to the higher standards applied to Canadian markets post NI 43-101.

Currently the low bar set in Australia for reporting standards to the ASX is made a mockery of by most companies, which continue to raise money off the back of hapless investors while able to cloak the most rudimentary yet serious technical or economic flaws of a project.

I hope that the science to be used for the basis for any legislation is in turn kept to a standard with the aid of legislation.

Was Climategate the result of a malicious hack, or an inside job, as Richard Muller believes?

Steve’s conclusion —

However, within this controversy, it should not be overlooked that the Wegman Report was vindicated on its hypothesis about peer review within the Mann “clique”. The Wegman Report (and Said et al 2008) hypothesized, but was unable to prove, that reviewers in the Mann “clique” had been “reviewing other members of the same clique”. Climategate provided the missing evidence: Climategate documents showed that clique member Phil Jones had reviewed papers by other members of the clique, including some of the articles most in controversy – confirming what the Wegman Report had only hypothesized.

–invites speculation that if it was an ‘inside job,’ perhaps this was one of his or her motives?

Now, our host typically discourages speculation about motives. But this one is different, since the presumed leaker has not come forward to claim credit and ignominy. And he may never.

Or will we end up with a true “Deep Throat”? At any rate, many mysteries about Climategate persist. Surely another one besides Fred Pearce’s effort will see the light of day before long.

An obvious part of the problem here is that the editors seem prone to picking the friendly (extra friendly, and conflicted) reviewers in the climate arena. And that reviewers don’t seem to notice that giving their friends a hand is undermining quality, credibility, and progress. This is corruption. PCA mines for hockey sticks whether one likes it or not, for example, and “getting the right answer” is not a criterion of truth. I recently reviewed a paper where I very much agreed with the conclusions but the authors made a fundamental mistake and I pointed this out.

It’s interesting that, despite Steve’s code being available, his cherry-picking still made it through peer-review.

Steve: I have no idea what allegation you’re disseminating here. The principal points of our article are unarguable: 1) Mannian principal components mined data sets for HS-shaped data; 2) Mann’s reconstruction failed standard verification tests including the verification r2 test illustrated for later reconstruction steps; 3) the Mann reconstruction was overweighted to bristlecones, known to be flawed proxies.

One of the reasons for providing code is to enable interested parties to examine methodology and see if there were any errors. It was my intention that the source code be available for interested parties, not just peer reviewers, so that the validity or invalidity of our results could be independently verified. It is certainly not my view that “making it through peer review” is a talisman of truth – quite the opposite.

Your MM05 code mines the simulated PC1s for hockeysticks by saving the 1% of simulations with the highest hockeystick index.

So what do you think Mann does by only choosing those proxies which display his prized “climate signal”?

A selection procedure which rejects or accepts proxies based solely on their correlation with the “blade” of modern temperatures is doing exactly that… or don’t you understand that aspect of it? His proxies mimic the blade because of the selection procedure, but then, without the presence of the “signal” in the past, tend to average out toward a flattened “stick”.
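The screening effect described in this exchange can be sketched with a toy simulation. To be clear, this is an illustration only, not Mann’s actual procedure: the series count, AR(1) coefficient and correlation threshold below are arbitrary assumptions. Pure red-noise “proxies” containing no signal are screened on their calibration-period correlation with a rising target, and the survivors are averaged:

```python
import numpy as np

rng = np.random.default_rng(0)

n_series, n_years, n_cal = 1000, 400, 100  # proxies, total years, calibration window
phi = 0.5                                  # AR(1) persistence (arbitrary choice)

# Red-noise "proxies" with no climate signal at all
noise = rng.standard_normal((n_series, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]

# "Instrumental" target: a simple rising trend in the calibration period
target = np.linspace(0.0, 1.0, n_cal)

# Screening step: keep only the proxies whose calibration-period
# correlation with the target exceeds an (arbitrary) threshold
cal = proxies[:, -n_cal:]
r = np.array([np.corrcoef(row, target)[0, 1] for row in cal])
selected = proxies[r > 0.3]

# Average the survivors into a "reconstruction"
composite = selected.mean(axis=0)
print(len(selected), "of", n_series, "noise proxies pass screening")
```

The composite is flat before the calibration window, where the uncorrelated noise averages out, and bends upward inside it: a “blade” manufactured entirely by the selection step, which is exactly the mechanism the comment describes.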

And exactly what ESP ability (and/or statistical insight) are you using to state so categorically that Prof. Wegman did not understand what he was doing?

And maybe Wegman understood what was done, maybe not. But in his Fig 4.4, when he says “The MBH98 algorithm found ‘hockey stick’ trend in each of the independent replications.” he isn’t telling Congress that he’s showing a selection from the top 1% of hockey-stickness.

And exactly what ESP ability (and/or statistical insight) are you using to state so categorically that Prof. Wegman did not understand what he was doing?

In Figure 4.4 he describes the red noise as “AR(1) with parameter = 0.2”. In fact the red noise is generated from a fractional ARIMA process.

The point is that Wegman never bothered to replicate MM’s work; he just copied and pasted, much like the rest of the report.

[RomanM: I have already stated that this is OT in this thread and I have said that I won’t continue the discussion here. This is Steve’s blog and he is entitled to discuss the topic he has raised and not what you may want in this case]

“The point is that Wegman never bothered to replicate MM’s work; he just copied and pasted, much like the rest of the report.”
I read Wegman’s report and know this to be inaccurate.

However, are you stating Wegman was retained to replicate MM? My impression is that he was to “review” the work of MBH and MM, then prepare a report and give testimony to Congress regarding the statistical accuracy of each.

Pete, are you the guy that posted on DeepClimate’s thread, or did you pick up this point somewhere else? You should have read a little further into the comments. Here is DeepClimate later on:

Even a median-HSI (about 1.6, I reckon) upward-bending simulated PC1 would recognizably be a hockey stick. The point, though, is that the 1% solution is just one more choice that exaggerates the “hockeystick” effect.

Now I clearly recognize that the “short-centered” PCA does promote “hockey stick” patterns to the first PC, and that this is not a proper methodology.
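For readers wondering what “short-centered” PCA does mechanically, here is a minimal sketch. The assumptions are mine: AR(1) noise with coefficient 0.9 rather than the fractional ARIMA noise of MM05, arbitrary series counts, and a simplified hockeystick index; this is not the MM05 or MBH code. Centering each series only on the calibration window makes any series whose calibration mean happens to differ from its long-term mean look like a high-variance pattern, so the first principal component preferentially loads on such series:

```python
import numpy as np

rng = np.random.default_rng(1)

n_series, n_years, n_cal = 70, 500, 80  # arbitrary illustration sizes
phi = 0.9                               # persistent AR(1) red noise

def red_noise():
    # Matrix of independent AR(1) series, one per row
    x = np.zeros((n_series, n_years))
    e = rng.standard_normal((n_series, n_years))
    for t in range(1, n_years):
        x[:, t] = phi * x[:, t - 1] + e[:, t]
    return x

def pc1(data, short=False):
    """First principal component (over time) of a (series x years) matrix.
    short=True centers each series on the calibration window only (the
    'short-centering' at issue); short=False uses the full-length mean."""
    mean = data[:, -n_cal:].mean(axis=1) if short else data.mean(axis=1)
    centered = data - mean[:, None]
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def hsi(series):
    """Simplified hockeystick index: departure of the calibration-period
    mean from the overall mean, in standard deviations."""
    return abs(series[-n_cal:].mean() - series.mean()) / series.std()

short_hsi, full_hsi = [], []
for _ in range(30):                     # 30 pure-noise simulations
    data = red_noise()
    short_hsi.append(hsi(pc1(data, short=True)))
    full_hsi.append(hsi(pc1(data, short=False)))

print("mean HSI, short-centered PC1:", round(float(np.mean(short_hsi)), 2))
print("mean HSI, fully centered PC1:", round(float(np.mean(full_hsi)), 2))
```

On pure red noise, the short-centered PC1 departs much further from its long-term mean over the calibration window, on average, than the conventionally centered PC1 does: that is the hockey-stick promotion at issue.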

I see a new subject being taught in the universities: climategatology. Statistical methods, PCA, network analysis, public domain data; and a commitment to scientific integrity. (Not to be confused with climatology).

You and Wegman state that Congress should have asked about peer-review. I think that’s out of line. Having Congress trampling academic freedom and demanding the names of reviewers is probably not a good headline.

Steve: Good point as to Congress. My comment was directed at the Muir Russell review where one of the issues in their terms of reference was peer review. I think that Muir Russell should have examined the question of pal review.

They seem to have the option of bringing in statistics experts for specialist review, as was done with the Wegman panel.

“Manuscripts judged to be of potential interest to our readership are sent for formal review, typically to two or three reviewers, but sometimes more if special advice is needed (for example on statistics or a particular technique).”

Although Carley would have recommended a major revision, she didn’t characterize anything within the article as ‘wrong’:
Q: (So is what is said in the study wrong?)
A: Is what is said wrong? As an opinion piece – not really. Is what is said new results? Not really. Perhaps the main “novelty claim” are the definitions of the 4 co-authorship styles. But they haven’t shown what fraction of the data these four account for.

In case people want to know what Carley thought, here’s the rest of the important part, Geez:

A: No – I would have given it a major revision needed.

Q: (How would you assess the data in this study?)

[A:] Data: Compared to many journal articles in the network area the description of the data is quite poor. That is the way the data was collected, the total number of papers, the time span, the method used for selecting articles and so on is not well described.

Steve: In my description of Carley, I said that she “would have recommended a major revision”. How can you accuse me of not mentioning this when I did? I also said that she criticized the authors because they “had not been able to provide data to ‘support their argument’”. I am not suggesting that Said et al 2008 was a great article. My point here is the narrow one that people are overlooking: Wegman’s original “hypothesis” – that members of Mann’s clique were reviewing one another’s papers – was proved correct by the Climategate documents. I clearly stated that Said et al 2008 had not proven this.

Wegman’s original “hypothesis” – that members of Mann’s clique were reviewing one another’s papers – was proved correct by the Climategate documents.

Steve – you keep using the term “clique” to describe Mann and his co-authors and fellow climate scientists. Are you claiming that it is unusual or out of the norm in small specialist fields like paleoclimatology that most publishing authors know each other on some level and often co-author and review each other’s work?

Are you implying this and if so, on what do you base this? Have you done a thorough investigation into the practice of peer review in other science fields and thus can speak from a position of authority or is this just a “guess”?

Are you claiming that it is unusual or out of the norm in small specialist fields like paleoclimatology that most publishing authors know each other on some level and often co-author and review each other’s work?

Are you implying this and if so, on what do you base this? Have you done a thorough investigation into the practice of peer review in other science fields and thus can speak from a position of authority or is this just a “guess”?

Nowhere in this post did I speculate on practices in other science fields. Nor did I imply anything about practices in other fields.

You don’t comment directly on whether this so-called “clique” behaviour Wegman described is abnormal for peer review in general and you don’t appear to have commented on whether this takes place in other fields and if this is common.

Similarly, Wegman “suspects” that tight relationships among authors may cause a problem with peer review in paleoclimate circles but does not provide any evidence that it does, nor does the WR provide a larger context for this social network analysis of Mann’s “clique” and how common it is or if it is a threat to peer review and science in general. This is clearly the implication and suggestion, however.

The WR states:

One of the interesting questions associated with the ‘hockey stick controversy’ are the relationships among the authors and consequently how confident one can be in the peer review process. In particular, if there is a tight relationship among the authors and there are not a large number of individuals engaged in a particular topic area, then one may suspect that the peer review process does not fully vet papers before they are published.

This statement seems rather ironic, given the ultimate fate of Said and Wegman’s paper, given the relationship between Wegman and Said and the friendship between Wegman and the journal’s editor, Azen, the lack of peer reviewers and the 6-day turnaround for acceptance.

This context and deeper analysis, which you admit you have not done, are lacking in the WR’s social network analysis and in references to it on websites such as CA and other so-called “skeptic” websites.

It would seem important to me to provide context and comparison to other fields in science in order to have an appreciation of whether this is unique to climate science or in fact common. I think it would also be important to actually provide evidence that this “clique” behaviour is somehow harmful rather than just suggest that it is. There is the clear implication that it is. But there is no evidence provided.

In other words, there is the assumption and suggestion that the connections between Mann and co-authors are tight, negative and abnormal, and somehow create a poisoned environment in climate science peer review.

There is no proof in the WR that there is an abnormal or unhealthy relationship between Mann and his co-authors that affects peer review but there is the implication and outright suggestion that it has. Without data and analysis, Wegman’s social network analysis is, as Carley suggests, mere opinion.

Indeed, despite pages of text devoted to social network analysis, there is nothing about it in the WR findings or recommendations, but much has been made of it nonetheless.

One might suggest it is akin to a smear of a whole field of science without substance.

This statement seems rather ironic, given the ultimate fate of Said and Wegman’s paper, given the relationship between Wegman and Said and the friendship between Wegman and the journal’s editor, Azen, the lack of peer reviewers and the 6-day turnaround for acceptance.

You might be courteous enough to observe that I had made precisely the same point in my post and that you were not, on this point, contradicting my post:

The final sentence of this paragraph is particularly ironic in view of the evidence of cursory peer review of Said et al 2008 itself, the cursoriness of which was perhaps due to longstanding association between editor Stanley Azen and coauthor Wegman.

Nor, as I stated in my post, have I “made much” of social networks analysis. Even in this post, I made a very limited observation that the Climategate documents provide evidence that was missing in the Wegman Report and Said et al 2008.

In practical terms, as the Wegman Report and Said et al 2008 observed, and as Robins agreed, given the anonymity of peer review, there is no obvious way of analysing peer review.

I observed that there was some actual data arising from the Climategate documents and that this data, limited though it was, showed that Wegman’s hypothesis – that members of the Mann clique were reviewing papers from other members of the clique – was true. This is data that arose not through social networks analysis, but from the Climategate documents.

As I mentioned in my post, I had negligible interest in the social networks analysis, as evidenced by the fact that I never did a post on the topic.

I think that one can say that “clique” behavior is a departure from the assumption of “independent” peer reviewers – which Nature, for one, seems to think important.

In practical cases, I don’t think that Jones was an “independent” peer reviewer of Wahl and Ammann or Schmidt 2008. Moreover, I think that his reviews reflected his bias.

As I’ve said on other occasions, I don’t regard peer review for an academic journal as a talisman of truth. Hence my emphasis on data archiving and source code, to ensure that verification by an interested party can be done efficiently. Also my emphasis on the obligation of academic authors to disclose adverse results.

You claim you don’t “make much” of SNA. If you don’t “make much” of social network analysis, why have you adopted its terminology with such apparent zeal?

I believe you used the terminology “clique” 30+ times in your post of 4,500 words. One might suspect that you have accepted SNA as legitimate given this repeated use of the terminology. Indeed, you use it more than even Wegman himself, who uses it only 11 times in the whole WR’s 30,000+ words…

Steve: in hundreds of previous posts, I had never considered “social networks”. Surely that constitutes overwhelming evidence of not “making much” of it. Is there any reason why you couldn’t simply agree with that point? The fact that I hadn’t taken an interest in the topic is logically distinct from whether or not I “accepted” it as “legitimate”. Nor does the post mean that I endorse or do not endorse any social networks analyses not directly commented on in the post.

> [I]n hundreds of previous posts, I had never considered “social networks”.

This might imply we stick to a strict interpretation of the operatives “posts” and “social networks” together with a lax interpretation of the quantifiers “hundreds” and “never”:

> I think that a more appropriate network analysis would have been based on the authorships of the various “independent” studies: Jones [Briffa et al], Briffa [Jones Schweingruber et al], Rutherford [Mann Bradley Hughes Osborn Briffa Jones], etc. Also Bradley and Jones, who go way back together and really stood at the center of the Team in 1998.

The “independence” between the quantifiers (e.g. never in which hundreds?) deserves due diligence.

Steve: In the present post, I stated:

At Climate Audit, Wegman’s speculation about using social networks attracted even less attention. I didn’t post at the time on social networks. To my knowledge, I only commented on the topic in passing twice (both much afterwards), each time being mildly critical, observing that both Jones and Bradley seemed far more central in the network as at 1998 than Mann.

> Your and Susan’s implication that ‘social networks” have been a major interest at Climate Audit are totally absurd.

If it would be absurd to implicate such a thing, perhaps a more plausible hypothesis is that there is no such implication. All depends on how a “major interest” appears on this very blog anyway.

Here’s a post questioning the “independence” of some studies:

> That the authors are not independent can be seen merely by inspecting the names of the coauthors of the Team studies in the usual spaghetti graph. Briffa et al [2001] with coauthor Jones is obviously not “independent” in authorship from Jones et al [1998] with coauthor Briffa. Jones and Mann [2004] and Mann and Jones [2004] are not independent of Briffa (Jones) et al 2001 or Jones (Briffa) et al (1998). MBH (Mann, Bradley and Hughes [1998, 1999]) is not independent of Bradley and Jones [1993], which in turn is not independent of Hughes and Diaz [1994] or Bradley, Hughes and Diaz [2003], etc etc. To say that these supposedly “independent research groups” are not in fact “independent” in any sense familiar to non-climate scientists is hardly “wild assertion/mudslinging” as Connolley claimed.

It might still be true that “social networks” have never been considered in hundreds of previous posts, but perhaps only if we adopt a strict meaning of “social network”.

In any case, that CA “never”, “in hundreds of posts”, “considered social networks”, does not answer Susan’s question in any relevant way.

Steve: My discussion of the non-independence of proxy selection and hockey team studies was expressed in similar terms prior to the term “social networks” being invoked. Nor was my discussion of these points motivated by social networks analyses by Wegman or anyone else. However, I suppose the non-independence of proxies is a sort of network thing and concede that point. However, my original point – that I hadn’t done a post on Wegman’s social network thing and had never mentioned Said et al 2008 – remains.

> I suppose the non-independence of proxies is a sort of network thing and concede that point.

Glad to acknowledge this concession, even though the quote was leading with “that the authors are not independent”.

> However, my original point – that I hadn’t done a post on Wegman’s social network thing and had never mentioned Said et al 2008 – remains.

We heartily agree: any reader that followed bender’s advice can concede that, before this post, there was never any mention of Said et al (2008), and that Wegman’s social network analysis has never been a main topic.

I observed that there was some actual data arising from the Climategate documents and that this data, limited though it was, showed that Wegman’s hypothesis – that members of the Mann clique were reviewing papers from other members of the clique – was true. This is data that arose not through social networks analysis, but from the Climategate documents.

Was Ray “excuse me while I puke” Bradley considered part of Mann’s “clique”?

Having studied under several scientists during my tenure as a science undergrad and listened to their casual conversations about colleagues in their fields, and having been a member of a university department as a sessional lecturer, I have seen rivalries exist between colleagues in the same university department who nonetheless have a drink together at events despite these academic rivalries.

Are you claiming that peer review in general is fine, but that in this particular case — the case of the MBH 98/99 papers — peer review didn’t work? Or is peer review as it exists in general just not up to the task?

From my limited experience, people who work in science tend to recognize that peer review isn’t perfect and that balance / objectivity is the gold standard, but they don’t want to replace the existing system because over the long run, even when a few duds and frauds get through, science based on peer review has been vastly successful and beneficial.

This focus on peer review in paleoclimate or climate science, based on a few stolen emails taken out of context, appears as a paradigmatic cherry pick and special pleading.

Susan, I think that my own comments on peer review have been careful. And I don’t agree with everything that every reader says, though I don’t have time to reply to every comment. On many occasions, I’ve observed that I wished reviewers paid attention to ensuring that authors’ results could be verified: that all relevant data was archived and that the methodology was meticulously described, preferably with the archiving of functional code. It also seems to me that the function of peer review is not well articulated by the journals – on the few occasions that I’ve reviewed, the instructions are not very clear.

I also think that one can be critical of peer review as presently practiced in climate science where the Team is engaged without necessarily throwing out the institution of peer review altogether. In Richard Horton’s comments for Muir Russell, he said that the Lancet now required statistical review of its publications as part of its peer review. This isn’t the case with climate science even though the articles that engage this site are statistical.

In addition, I think that it is reasonable to compare practices against ideal standards of “independent” peer review. Maybe the circumstances of the field prevent the use of truly “independent” reviewers, but I think that it’s not as hard as they think and that they’ve fallen into some poor practices. Whether or not the practice is avoidable, it remains my view that using Phil Jones to peer review things like Wahl and Ammann 2007 or Schmidt 2008 means that the assumptions of independent peer review represented to the public are not being upheld.

Indeed. Very careful. Not so for your “clique” who have felt free to make the allegations and innuendos. This appears, at least to me, to be a smear against the whole of paleoclimate and climate science peer review based on very sparse data and a not-very-scientific method.

Blog science, in other words.

Peer review by scientists who know each other isn’t in and of itself a negative thing – no one has shown this to be the case. Think of the small group of scientists working on the bomb during the 40s and 50s. Who peer reviewed their work but each other? Who else could? How did that science ever progress if such a small clique of scientists who were not independent and who worked closely together were responsible for checking each other’s work?

Scientists tend to want to make their own name and reputation and so are likely to be only too happy to find fault with colleagues when they can. I think it is expected, in the normal course of things.

On many occasions, I’ve observed that I wished reviewers paid attention to ensuring that authors’ results could be verified: that all relevant data was archived and that the methodology was meticulously described, preferably with the archiving of functional code.

Motherhood and apple pie. We all wish that everyone would be good citizens and staff members and all-around-humans, but we are just that — humans. No system we ever developed to monitor any human process or system has been perfect or perfectly implemented, monitored and enforced.

I agree that people should live up to their obligations and follow the rules. Trouble is that humans have foibles, they have conflicts of interest which get in the way of them behaving well at all times. Look at how politicos tried to edit out politically unpalatable evidence from congressional reports. The system is not perfect and can be made to work more effectively but to condemn a system as venerable as scientific peer review the way that is being done by your clique and others is a very very sad development for science and our civilization.

I see this as just another attack on science by forces who refuse to play — or realize they will fail if they play — the public policy game by the rules and so they turn to attacking science instead. These allegations of broken peer review, and the witch hunts being carried on by various politically-motivated bodies such as the ATI, are deplorable. If people could step back a moment from their positions of self-interest to see the larger danger of such attacks, they might think twice.

Yeah, right…

It also seems to me that the function of peer review is not well articulated by the journals – on the few occasions that I’ve reviewed, the instructions are not very clear.

I’m sure that those who regularly take part in the peer review system are pretty aware of the processes, functions and rules of peer review, Steve. Perhaps it appears confusing to an outsider who isn’t part of the culture.

From my experience, graduate students are introduced to the rules of peer review and become familiar with how the system works before they even get their PhDs. After several rounds of submitting papers and making revisions and resubmitting, they are likely very aware of the rules for the journals they will be publishing in. I only made it a small way into this world so I can only speak from the point of view of a grad student, not a regular contributor. One shouldn’t confuse one’s own lack of familiarity and knowledge with a particular process with the process being unclear or confusing in general.

I also think that one can be critical of peer review as presently practiced in climate science where the Team is engaged without necessarily throwing out the institution of peer review altogether.

I have yet to see a systematic review of peer review in climate science that could offer a knowledgeable critique of it. All I see is a bunch of speculation and innuendo passing itself off as analysis.

it remains my view that using Phil Jones to peer review things like Wahl and Ammann 2007 or Schmidt 2008 means that the assumptions of independent peer review represented to the public are not being upheld.

Opinion — no more. No one has proven that having Jones review W&A leads to a poor outcome in the short or long run. You can only speculate that it does. Without access to the whole field, or a representative sample, no one can speak in anything more than opinion.

Raising unfounded alarm based on one’s personal lack of familiarity with the way peer review works in the real world seems rather, well, self-absorbed.

The question is whether the existence of “cliques” in climate science or other sciences has harmed science itself in the long run, and I don’t mean the public’s perception of the science — I mean the actual validity of the science. Is the science made invalid by any peer review that does not meet the highest standard of independence? This reminds me of Watts’ claims about bad siting of thermometers used in climate studies. I don’t know nor have I seen evidence that it does. Just allegations and innuendo.

Look at the Said and Wegman article — despite the use of inadequate pal review, it was eventually seen to be flawed and thus the system works in the long run.

contrary to recent false claims by USA Today, Said et al 2008 was not “a federally funded study that condemned scientific support for global warming”.

A link to this quote might be nice, as it might be hard for readers to find.

Here is the top result if you just put the whole phrase into Google. It’s a pretty terrible article; if you didn’t already know what was going on it’d be hard to figure out by reading it.

The full quote might even be nicer.

Don’t really see the need, but the full quote is “Evidence of plagiarism and complaints about the peer-review process have led a statistics journal to retract a federally funded study that condemned scientific support for global warming.”

The plural “claims” deserve due diligence.

Does it? The quoted passage made two distinct claims: that (1) the study was federally funded, and that (2) it condemned scientific support for global warming. If either or both parts of that are false, the truth is contrary to the combined “claims”, plural, made in USA Today.

> Here is the top result [http://www.usatoday.com/weather/climate/globalwarming/2011-05-15-climate-study-plagiarism-Wegman_n.htm] if you just put the whole phrase into Google. It’s a pretty terrible article; if you didn’t already know what was going on it’d be hard to figure out by reading it.

I’d rather click on a link to an article than have to Google it, more so when this article serves as a “backstory” to a blog post. Not including a link to a resource that is being critiqued is quite intriguing, considering the many resources being handwaved: Bouville, Clarke, Loui, etc. Perhaps this resource is the-One-to-which-we-should-not-link?

Readers who consult the link above will see that the quote is not in the article itself, but in the standfirst. Incidentally, the conflation does not appear in the main body of the article. Here is what Vergano says:

> The study, which appeared in 2008 in the journal Computational Statistics and Data Analysis, was headed by statistician Edward Wegman of George Mason University in Fairfax, Va. Its analysis was an outgrowth of a controversial congressional report that Wegman headed in 2006. The “Wegman Report” suggested climate scientists colluded in their studies and questioned whether global warming was real. The report has since become a touchstone among climate change naysayers.

Considering that the article contradicts it (“being an outgrowth” not preserving identity), one must presume that somebody other than Vergano wrote it. That might explain why the false “claims” are said to be “by USA Today” and not “by Vergano”.

The “claims” are distinct only if we decompose the proposition itself, an analysis which was subjected to an update in this blog post. The extent of the update might deserve due diligence. In any case, if we’re to accept Glen Raphael’s analysis, it comes from only one mistake: conflating Said et al 2008 with the Wegman Report.

This should be enough to show that my questions were justified and that my point that the quote deserved due diligence stands.

Anyone who follows bender’s advice might appreciate the rhetorical effect of the lack of the complete quote, the absence of a link to the resource, the exactness of the term “USA Today” and the plural of “claims”.

I am a little disappointed that Nick Stokes thinks that a) it is ok to try to pack the suggested reviewers with your pals and b) everyone does it. I view it as a temptation to be resisted rather than a “clever way of doing things”, and as I mentioned above, journals talk about “independent” reviewers [Nature cited above] or request avoiding those with conflicts of interest [Ecology journal]. Submitting your buddies’ names is gaming the system. Sorry Nick.

None of this is new, BTW. When I was a student, long long ago, people calculated their Erdős number. I don’t think mine went below 6. Wegman seems to be laboriously developing a Mann number. And people then claim you need a Mann number of two or more to review a Mann paper. Or is it 3?
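For readers unfamiliar with the idea, a collaboration distance such as an Erdős (or “Mann”) number is just the shortest path in the coauthorship graph. A minimal sketch, using a hypothetical toy graph rather than any real coauthorship data:

```python
from collections import deque

# Hypothetical coauthorship graph: author -> set of coauthors.
# These names and links are placeholders, not real collaboration data.
coauthors = {
    "Erdos": {"X"},
    "X": {"Erdos", "Y"},
    "Y": {"X", "Z"},
    "Z": {"Y"},
}

def collaboration_number(graph, root, author):
    """Shortest coauthorship distance from `author` to `root` (BFS),
    or None if the two are not connected at all."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        cur = queue.popleft()
        if cur == author:
            return dist[cur]
        for nbr in graph.get(cur, ()):
            if nbr not in dist:
                dist[nbr] = dist[cur] + 1
                queue.append(nbr)
    return dist.get(author)

# In this toy graph, Z is three coauthorship steps from Erdos.
print(collaboration_number(coauthors, "Erdos", "Z"))  # → 3
```

A rule like “a Mann number of two or more to review a Mann paper” would then just be a threshold test on this distance.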

Sorry, Mr Nick Stokes… but do you have a positive IQ? The implications of the original post are very clear. Why do you feel the need for prevaricating, obfuscating and equivocating? Your every post makes your position seem more ridiculous.

Lurking within this post is the question of the value of peer review. The committed CAGW adherents always claim that peer review is a gold standard. If some opinion/fact/belief has not been through the peer review process, then it can automatically be discounted. The objective evidence presented in this post suggests that peer review is just a rubber-stamp process for people who are friendly with you – like going through immigration and getting waved through by your uncle although you are carrying 10 kilos of heroin in your bag.

It’s pretty well known that Nick is a water carrier for the hockey team. There’s no point in expecting a truthful or rational answer from him. His job is to obfuscate, spin, sidetrack, create strawmen and do everything to support the team’s shenanigans.

For those missing the point, a “clique” is a graph-theory term, meaning a set of points all of which are connected directly to each other. In this case, points are authors, and connections based on co-authoring papers.

Re: slowjoe (May 24 19:02),
Indeed so, and that’s why I don’t want to use the term. It doesn’t make sense here in its technical meaning, and I think Wegman used it for the obvious prejudicial effect. Any group of scientists who write just one paper together are defined as a clique. O’Donnell, Lewis, McIntyre and Condon are a clique. There’s even a risk I could find myself in a clique with Mosh. It really doesn’t mean what Congress and others think.

Nick Stokes – it is absurd for you to infer that Wegman used the term “clique” for a “prejudicial effect.”
You previously stated that the term “clique” as used “doesn’t make sense here in its technical meaning”.
You don’t make sense (http://en.wikipedia.org/wiki/Graph_theory). Have you even scanned pages 39-46 of Wegman’s report?
I guess the graphs just don’t have anything to do with graphs for you.

Re: Gaelan Clark (May 25 12:19),
The reason it doesn’t make sense is this. The real complaint is that a reviewer may have previously written a paper with one of the authors. In that case they are by this definition members of the same clique (even if only a clique of two). So saying clique members are reviewing each other’s papers doesn’t add anything. And the number in the “clique” is unimportant. If A reviews B and A has previously written a paper with B, why would it be worse if the AB paper had coauthors?

Let me say again, by this definition, O’Donnell, Lewis, McIntyre and Condon are a clique. Now Steve might correct me, but I doubt that his relationship with Nic Lewis, say, would normally be described as cliquey.

If you want to be technical about it, in terms of your link, graph theory models pairwise relations between objects. And if they happen to form a totally connected subset (clique), that may be interesting in terms of graph structure. But if the event that creates the links (coauthorship) automatically creates cliques, that isn’t interesting.
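Nick’s point — that the act of coauthorship automatically creates cliques in the coauthorship graph — can be sketched in a few lines of Python. The author sets below are hypothetical placeholders, not taken from the Wegman Report:

```python
from itertools import combinations

# Hypothetical data: each paper is the set of its authors.
papers = [
    {"A", "B", "C"},   # one paper with three coauthors
    {"A", "D"},        # a second paper, a "clique of two"
]

# Build the coauthorship graph: one edge for every pair who wrote together.
edges = set()
for authors in papers:
    for pair in combinations(sorted(authors), 2):
        edges.add(pair)

def is_clique(nodes):
    """A clique: every pair of nodes is directly connected."""
    return all(tuple(sorted(p)) in edges for p in combinations(nodes, 2))

# By construction, each paper's author list is automatically a clique,
# while authors linked only indirectly (e.g. B and D via A) are not.
print(is_clique(["A", "B", "C"]))  # → True
print(is_clique(["B", "D"]))       # → False
```

This is why finding cliques among coauthors carries no information beyond the coauthorship itself: the edge-creating event (writing a paper together) produces them by definition.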

Steve: Nick, please note that the terms of the discussion have changed. Previously, critics Carley and Robins had said that the Wegman Report hadn’t proved that members of Mann’s “clique” had reviewed one another’s papers – a point that they had only suggested. Now you’re saying that it doesn’t matter.

Re: Nick Stokes (May 25 18:35),
Steve,
Carley and Robins said nothing about cliques. It is Wegman and you who use that prejudicial terminology, misusing something from mathematical graph theory. And you have used it with gusto – 31 times in your post.

Clique–In social network theory, as well as in graph theory, a clique is a subset of individuals in which every person is connected to every other person. Graphically, it is a subset of points (or nodes) in which every possible pair of points is directly connected by a line, and which is not part of any other clique.
(source–http://en.wikipedia.org/wiki/Clique)

From page 40 of the Wegman Report – A clique is a fully connected subgraph, meaning everyone in the clique interacts with every one else in the clique.
(source–http://www.uoguelph.ca/~rmckitri/research/WegmanReport.pdf)

So, Mann is interconnected with at least 42 other researchers; he is most interconnected with Rutherford, Jones, Osborn, Briffa, Bradley and Hughes – where have I heard those names before? – all authors with the IPCC, maybe….

And, of those 42 interconnections listed in the Wegman Report on page 40, there are only ten instances where the authors wrote outside of their “clique” as identified in the Wegman Report. Interesting indeed.

With soooooooooooo many interconnections, one wonders whether Mann actually does any work on these other papers or just allows his name to be used.

But Nick’s inability to reconcile the graphs in the Wegman Report with graphs of “real” “cliques” – like schoolgirls gathering in the corner to spread some juicy gossip – makes his responses seem like idiotic semantic games.

In juxtaposition to the term “denier”, I find it hard to feel sympathy for anyone being lumped into a “clique”, when that is exactly what they have lumped themselves into.

BTW Nick, no response on a list of the journals that you have published within so that I can parse the terms and conditions of their respective peer review clauses to see if you are as blind as I think you are.

Nick, Since you are obviously opposed to the definitions cited above–you provide your definition of “clique” with references for such and we can then see where you are coming from.
As it stands, though, your prior claim that Wegman used the term “clique” incorrectly has been thoroughly repudiated.
And if it scares you to give up the name of any journal you have published in, then give us the general field. I would be willing to bet that I will find half a dozen examples of exclusionary language in the peer review sections on the first two Google pages.

What’s REALLY interesting is that by Wegman’s own criteria he is guilty of having ‘pal review’ and being part of a ‘clique’. If we continue this to the conclusion Steve is trying to imply – that such collusion leads to poor science – we must assume that Wegman’s science is poor. So therefore Wegman is wrong!

I was wondering what that sound was, as I clicked on the blog…. enough hair-splitting for a hundred blogs. Anyone that read the climategate e-mails knows that whatever used to be the review process was made subject to the whims and necessities of the team.

This post’s “discussion” is pointless even though the subject has merit and needs addressing. Hopefully this is a slow and somewhat painful step in that direction.

Regarding the friendly review by “pals”, was there anything revealed in the Climategate e-mails about a “pal” or “pals” looking over a paper prior to submission for publication, and then a subsequent e-mail indicating that the same “pal” or “pals” later acted as formal reviewers for the paper? In other words, acting as both pre and post submission reviewers?

Take a look at the correspondence regarding Santer et al 2008. Santer engaged in an email discussion with a group of authors including CRU’s Jones and Osborn. Osborn was on the editorial board of Int J of Climatology, which eventually published the paper. Osborn (left off the author list) told Santer (843. 1199988028.txt) that they could use one of the pals on the email list as a reviewer, but “for objectivity”, the other reviewer should actually be “independent”:

Obviously one reviewer could be someone who is already familiar with this discussion, because that would enable a fast review – i.e., someone on the email list you’ve been using – though I don’t know which of these people you will be asking to be co-authors and hence which won’t be available as possible reviewers. For objectivity, the other reviewer would need to be independent, but you could still suggest suitable names.

It seems that Myles Allen and Francis Zwiers were the eventual reviewers of the article. Both were involved in the Santer correspondence, i.e. not “independent” even by Osborn’s standards.

I’m piping in after all this time, because the scientific method has long been a theme of my past postings.

This discussion is getting off track. The definition of the word “Independent” is irrelevant. Let’s review the facts at hand:

1) Scientific truth is not determined by peer review.
2) Peer review is merely a QA process used by scientific magazines.
3) AGW advocates have been deceptive, directly implying that truth is determined by passing peer review.
4) Wegman offered the hypothesis that AGW advocates operate as a clique, self reviewing.
5) Steve M has pointed out that ClimateGate has provided empirical confirmation of Wegmans hypothesis.

Conclusion: AGW advocates have conspired to positively review their own papers and obstruct ‘skeptic’ papers. Note that #5 confirms #4. And whether or not reviewer independence matters, #1 still stands: peer review never determined scientific truth in the first place.

>> But perhaps the most telling is there continual unjustified reliance on using ‘peer-review’ to imply complete scientific truth.

If the goal is trying to avoid the scientific method, then it’s quite justified to push an alternate means to judge scientific truth, which is under the control of fellow AGW advocates.

>> MikeN: Having Congress trampling academic freedom and demanding the names of reviewers

There is absolutely no connection between anonymous review of papers for a magazine and “academic freedom”. Everyone is completely free to advance whatever scientific theories they wish, and research whatever they want to. Saying that reviewers have to be anonymous is like saying that software testers need to be anonymous. It’s an unsupportable assertion.

[…] And it isn’t mere peer review but as McIntyre points out, stacked journal editorial boards too: Mann was an editor of Journal of Climate; Briffa was on the editorial board of Holocene, Osborn on International Journal of Climatology, Jones and Santer on Climatic Change. Climategate Documents Confirm Wegman’s Hypothesis […]

[…] NGOs and activists, the IPCC is being uncovered piece by piece. Hidden declines, deleting emails, manufactured "peer review", conspiring to have scientists sacked, (and that is only from the supposed […]