Expert Credibility in Climate Change – Responses to Comments

Note: Before Stephen Schneider’s untimely passing, he and his co-authors were working on a response to the conversation sparked by their recent paper in the Proceedings of the National Academy of Sciences on climate change expertise. One of Dr. Schneider’s final interviews also addresses many of the issues covered here.

We accept and rely upon the judgment and opinions of experts in many areas of our lives. We seek out lawyers with specific expertise relevant to the situation; we trust the pronouncement of well-trained airplane mechanics that the plane is fit to fly. Indeed, the more technical the subject area, the more we rely on experts. Very few of us have the technical ability or time to read all of the primary literature on each cancer treatment’s biology, outcome probabilities, side-effects, and interactions with other treatments, and thus we follow the advice of oncologists. We trust the aggregate knowledge of experts – what do 97% of oncologists think about this cancer treatment? – more than that of any single expert. And we recognize the importance of relevant expertise – the opinion of vocal cardiologists matters much less in picking a cancer treatment than does that of oncologists.

Our paper Expert Credibility in Climate Change is predicated on this idea. It presents a broad picture of the landscape of expertise in climate science as a way to synthesize expert opinion for the broader discourse. It is, of course, only a first contribution and, as such, we hope it motivates discussion and future research. We encourage follow-up peer-reviewed research, as this is the mark of scientific progress. Nonetheless, some researchers have offered thoughtful critiques of our study and others have grossly mischaracterized the work. Thus, here we provide responses to the salient comments raised.

Definition of groups: The first of four broad comments about our study examines the relevance of our two studied groups – those Convinced of the Evidence that much of the warming of the last half century is due in large part to human emissions of greenhouse gases, as assessed by the IPCC, which we term “CE,” and those who are Unconvinced of the Evidence (“UE”). Some have claimed that such groups do not adequately capture the complexity of expert opinion and therefore lose meaning. To be sure, anthropogenic climate change (ACC) is an immensely multi-faceted and complex area and expert opinion mirrors this complexity. Nonetheless, society uses simplifications of complex opinion landscapes all the time (e.g. Democrat versus Republican for political views) that don’t “lose their meaning” by ignoring the complexity of nuanced differences on specific topics within these broad groups.

The central questions at hand are: are these groups (1) clearly defined, (2) different in their views of ACC, (3) reasonably discrete, and (4) in the main mutually exclusive? Our definition of the groups, which in the case of the UE group is based entirely on self-selected, voluntarily signed statements and petitions expressing various versions of skepticism about ACC, is clearly laid out in the published paper. The strongest evidence that our CE and UE groups satisfy the second and third criteria is that only three of 1,372 researchers fell into both groups – and in two of those cases, the researcher unwittingly added themselves to a statement they did not in fact support. Thus, if only one researcher of 1,372, or 0.07%, legitimately falls into both of our groups, this suggests that the two groups both differ starkly and are discrete. Any statistical analysis would be only trivially altered by having three redundant members of the cohort. Furthermore, the CE and UE groups are coherent, as around 35% of signers in each group also signed another statement in that set.

Another researcher suggests that his views have been “misclassified” by our inclusion of older public statements, as he signed a 1992 statement. Using a sweeping set of public statements covering a broad time period to define the UE group allows us to compile an extensive (i.e., as comprehensive as possible) dataset and to categorize a researcher’s opinion objectively. However, were we to reclassify this researcher, it would only strengthen our results, as then none of the top fifty researchers (rather than one researcher, or 2%) would fall in the UE camp.

Others have contended that the only experts we should have analyzed were those researchers involved specifically in detection and attribution of human-caused climate change. Importantly, much of the most convincing evidence for ACC comes from our understanding of the underlying physics of the greenhouse effect, illuminated long before the first detection/attribution studies, and these studies provide only one statistical line of evidence. The study could have been done in this manner, but let us follow that logic to its conclusion. Applying this stricter criterion to the CE list does cause it to dwindle substantially…but applying it to the UE list causes it to shrink to nearly zero researchers. To our knowledge, virtually no UE researchers publish research on detection and attribution. Following this logic, one would have to conclude that the UE group has functionally no credibility as experts on ACC. We would, however, argue that even this premise is suspect, as ecologists in the IPCC have done detection and attribution studies using plants and animals (e.g. Root et al. 2005). Finally, applying a criterion such as this would require subjective judgments of a researcher’s focus area. Our study quite purposefully avoids making such subjective determinations and uses only objective lists of researchers who are self-defined; they were not chosen by our assessment as to which groups they may or may not belong in.

Some have taken issue with our inclusion of IPCC AR4 WGI authors in the CE group, in that the IPCC Reports are explicitly policy-neutral while the four other CE policy statements/petitions are policy prescriptive. However, we believe our definition of the CE group is scientifically sound. Do IPCC AR4 WGI authors subscribe to the basic tenets of ACC? We acknowledge that this is an assumption, but we believe it is a very reasonable one, given the strength of the ultimate findings of the IPCC AR4 WGI report. We classify the AR4 WGI authors as CE because they authored a report in which they show that the evidence is convincing. Naturally, authors may not agree with everything in the report, but those who disagreed with its most fundamental conclusions would likely have stepped down and not signed their names. The presence of only one of 619 WGI contributors on a UE statement or petition, compared with 117 who signed a CE statement, provides further evidence for this assumption. Furthermore, repeating our analysis relying only on those who signed at least one of the four CE letters/petitions, and not on IPCC authorship, yields results similar to those published.

No grouping of scientists is perfect. We contend that ours is clear, meaningful, defensible, and scientifically sound. More importantly, it is based on the public behavior of the scientists involved, and not our subjective assignments based on our reading of individuals’ works. We believe it is far more objective for us to use choices by scientists (over which we have no influence) for our data instead of our subjective assessment of their opinions.

Scientists not counted: What about those scientists who have not been involved with the IPCC or signed a public statement? What is their opinion? Would it alter our finding that 97% of the leading researchers we studied endorse the broad consensus regarding ACC expressed in IPCC’s AR4? We openly acknowledge in the paper that this is a “credibility” study and only captures those researchers who have expressed their opinions explicitly, by signing letters/petitions or by signing their names as authors of the IPCC AR4 WGI report. Some employers explicitly preclude their employees from signing public statements of this sort, and some individuals may self-limit in the same way on principle, apart from employer rules. However, the undeclared are not necessarily undecided. Two recent studies tackle the same question with direct survey methods and arrive at the same conclusion as our study. First, Doran and Kendall-Zimmerman (2009) surveyed 3,146 AGU members and found that 97% of actively publishing climate researchers believe that “human activity is a significant factor in changing mean global temperatures.” A recently published study, Rosenberg et al. (2010), finds similar levels of support when surveying authors who published during 1995–2004 in peer-reviewed journals highlighting climate research. Yes, our study cannot answer for – and does not claim to – those who have not publicly expressed their opinions or worked with the IPCC, but other studies have, and their results indicate that our finding that an overwhelming percentage of publishing scientists agree with the consensus is robust. Perfection is not possible in such analyses, but we believe that the level of agreement across studies indicates a high degree of robustness.

Publication bias: A frequent response to our paper’s analysis attributes the patterns we found to a systematic, potentially conspiratorial suppression of peer-reviewed research from the UE group. As of yet, this is a totally unsupported assertion backed by no data, and it appears untenable given the vast range of journals that publish climate-related studies. Notably, our publication and citation figures were taken from Google Scholar, which is one of the broadest academic databases and indexes journals openly receptive to papers taking a different view from the mainstream on climate. Furthermore, a recently published analysis (Anderegg 2010) examines the PhD and research focus of a subsample of the UE group, compared to data collected by Rosenberg et al. 2010 for a portion of the climate science community publishing in peer-reviewed journals. If the two groups had similar background credentials and expertise (PhD topic and research focus – both non-publishing metrics), that might indicate a suppression of the UE group’s research. They don’t. The background credentials of the UE group differ starkly from those of the “mainstream” community. Thirty percent of the UE group sample either do not have a documented PhD or do not have a PhD in the natural sciences, as compared to an estimated 5% of the sample from Rosenberg et al.; and nearly half of the remaining sample have a research focus in geology (see the interview by Schneider as well).

“Blacklist”: The idea that our grouping of researchers comprises some sort of “blacklist” is the most absurd and tragic misframing of our study. Our response is two-fold:

Our study did not create any list. We simply compiled lists that were publicly available and created by people who voluntarily self-identified with the pronouncements on the statements/letters. We did not single out researchers, add researchers, or drop researchers; we only compiled individuals from a number of prominent and public lists and petitions that they themselves signed, and then used standard social science procedures to objectively test their relative credibility in the field of climate science.

No names were used in our study nor listed in any attachments. We were very aware of the pressure that would be on us to provide the raw data used in our study. In fact, many journalists we spoke with beforehand asked for the list of names and for specific names, which we did not provide. We decided to compromise by posting only the links to the source documents – the ‘raw data’ in effect (the broader website is not the paper data) – where interested parties can examine the publicly available statements and petitions themselves. It is ironic that many of those now complaining about the list of names are generally the same people who have claimed that scientists do not release their data. Implying that our list is comparable to that created by Marc Morano when he worked for Senator Inhofe is decidedly unconvincing and irresponsible, given that he selected individuals based on his subjective reading and misreading of their work. See here for a full discussion of this problematic claim, or read Schneider’s interview above.

In sum, the various comments and mischaracterizations discussed above do not in any way undermine the robust findings of our study. Furthermore, the vast majority of comments pertain to how the study could have been done differently. To the authors of such comments, we offer two words – do so! That’s the hallmark of science. We look forward to your scientific contributions – if and when they are peer-reviewed and published – and will be open to any such studies. Our study was subjected to two rounds of review by three social scientists, plus comments from the PNAS Board, and we prepared three drafts in response to those valuable peer comments, which greatly improved the paper. We hope that this response further advances the conversation.

People are missing the point. It is not that being on the “unconvinced” list makes a scientist a bad scientist or that the convinced experts are generally more skilled than the unconvinced experts.

Rather, it is that if your preconceptions require you to reject something as fundamental as anthropogenic climate change arising from increasing CO2, you are forced to reject very important, well-established science embodied in climate models.

Anthropogenic causation is a direct and unavoidable consequence of a significant forcing due to CO2. If you reject this, or if you are spending your time looking for chimerical negative feedback mechanisms, you will not be able to contribute significantly to understanding climate. Period. That is the proper interpretation of the study.

As far as publishing this type of criticism, what journal would you suggest? Statistics journals do not publish papers showing that someone misused a statistical technique.

This is a shame, especially after the whole MM/PCA brouhaha, since people seem to be saying that professional statisticians should be more involved in climate science, while you seem to be saying that they can’t be bothered.

My suggestion is that you publish it in the same publication as the original, not to advance statistics, but to advance climate science.

Also, you may not realize it, but the inappropriate Y2K and SARS analogies are frequently used by those in denial about AGW, so you trotting them out here does not do much to further your argument.

Someone trying to brush off the Y2K thing is like waving a red rag at a bull to me as well. It seems some people are all too willing to bet that everything they know so little about is a conspiracy, and unfortunately, the media makes the problem even worse with their “equal time for both sides” nonsense.

On another forum recently, I was trying to engage with an AGW denier when he tried to play down the Y2K problem. When I pushed him on it, he told me he was the manager for the whole NE U.S. of a large telecoms company at the time the millennium rolled over, and he was opining about how Y2K would have little impact on his customers because it was all overblown. He was basically calling these Fortune 500 companies that were his customers suckers for spending so much money on a ‘non-problem’.

Well, I happen to be an electrical engineer by education, and a professional programmer since the early 80’s. I did the Y2K due diligence for our company, and had all the paperwork locked in our attorney’s safe so we could prove we did everything we could if there was a problem and one of our customers took us to task for it. This guy couldn’t bullsh#t me.

So yeah, if they think Y2K was a conspiracy, they probably think there’s a conspiracy under every rock. And these are the kind of people we’re trying to have rational discussions about the science with?

1. If the consensus among experts was that the tetrachloroethylene in my tap water was safe, with only one in a million expected to die from it; while a few outlier experts said it was unsafe, with more like 1 in 973 likely to die from it, I would probably switch my drinking water — even if those outlier experts had less pubs and citations to their record.

2. If the consensus among experts was that there is not enough proof that AGW is happening, and only a few experts said it was real and very dangerous – well, I actually did start mitigating back in 1990 (before studies started reaching .05 in 1995), and have saved many $1000s ($2000 on that $6 lowflow showerhead with off/on soap-up switch alone) over the years since, reducing my GHGs about 60% below my 1990 level. And I’m left thinking: what if I’d known about this “not yet proven threat” a decade earlier – I could have saved even more money by reducing even more GHGs!

3. If there is a strong consensus among experts (like 97%) that AGW is happening and it’s dangerous, and only 2-3% of experts say either it’s not happening, or even if it is, it is not much and not dangerous, I’m wondering how did we ever get to this point?!! We should have been mitigating like crazy 20 or more years ago, halting this crazy experiment and destroying its evidence, so that there never would have been such a strong consensus among experts in 2010 about the reality and dangers of AGW.

Of course, affected industries who might lose profits might have a perspective different from mine … unless they are sincerely into diversifying. Beyond Petroleum.

RomanM (#197): I think RomanM’s suggestion that the best way to correct a flawed statistical technique used in a paper is a corrigendum from the authors in the original publication is correct. This is something that should be improved on, since I think part of the bad blood between skeptics and experts lies in the fact that skeptics will just throw up these criticisms on blogs and comment threads, while experts seem to see this method only as attacks on their credibility (which may be the case for some of the criticism, but I would argue that not all of it is, and some criticisms are valid). In particular, there are lots of experts in the use of statistical techniques, and while not all of that expertise would be directly applicable, it could help improve the use of statistics in climate science. Personally, I think this is something that affects all disciplines to some extent, and it has partly caused statistics’ reputation as a minor detail, alongside poor outreach by the statistics community in teaching the latest methods and clarifying existing misinterpretations.

I would love to see forums that allow criticism to be raised in a non-attacking manner, with the recipient of the criticism able to debate, reject, or accept it. This would all have to be based on honest debate, however, which I think most here would agree is not easy to foster.

As for RomanM’s specific comments here, his criticism seems to be a genuine attempt on his part to engage (notwithstanding his moral critiques). Judging strictly from his comments, he does seem to have a good understanding of statistics. At least from my perspective he knows more than me, which means I can’t judge whether he knows statistics more or less than, say, Gavin or the authors of the study. As for my understanding of statistics: more than some and less than others is all I would or could say.

This is a shame, especially after the whole MM/PCA brouhaha, since people seem to be saying that professional statisticians should be more involved in climate science, while you seem to be saying that they can’t be bothered.

Not at all. What I said is that “mistakes” do not merit publication in statistics journals because they should be fixed at the original point of publication.

I fully agree that statisticians in conjunction with climate scientists should be developing new methodology for use in climate research. Such methodology would definitely be published in good statistics journals because interactions with other statisticians would help move the subject forward.

Decades ago back in my junior high days, I did a science project and chose man made climate change as the topic. I compared and contrasted human caused warming vs cooling. I ended up choosing cooling because I thought the idea of glaciers knocking down cities was really cool and there seemed to be a lot of airplane contrails blocking the sun. I hereby recant my conclusion in light of overwhelming evidence to the contrary. Sorry for any confusion it caused the general public.

And still you think people are saying your method doesn’t do what you say it does!

Of course it does. And yet you continue to miss the point.

The paper puts it better than I can: We examined a subsample of the 50 most-published (highest-expertise) researchers from each group. They are comparing the very best with the very best, not bending over backwards to impose an equality that doesn’t exist.

If you start comparing the top 5%, then you have to add the caveat “but there are nearly 9 times more in the CE group, so not only are the best experts better, there are also more of them”. The same goes for your Monte-Carlo method.

And despite all this, every single method still shows a huge disparity between the two groups. You just keep picking on the one that highlights the difference best.

All of this highlights one of the dangers of statistics. It’s no use using statistical methods in the dark. You have to understand what the data means, and have sufficient domain knowledge to make sense of it and use statistics appropriately.

Contrary to what RomanM is claiming (that his qualms are falling on deaf ears), Prall has in fact said at #126 that:

“On the questions about methods raised by RomanM in #64 and #118, I’ve emailed our first author, Bill Anderegg, who did the statistical analysis. He’s the right person to address those comments.”

The authors of the paper are clearly receptive to critique.

Anyhow, what is lost on the contrarians is that the following claim made by them is simply not true, no matter how you manipulate the data:

“A vocal minority of researchers and other critics contest the conclusions of the mainstream scientific assessment, frequently citing large numbers of scientists whom they believe support their claims”

As for the accusations made by RomanM about people posting here, he needs to take a long, hard look in the mirror and reflect on how contrary views are dealt with at CA and WUWT. That does not justify what I or others may have said in frustration here, but it is simply noting the reality of blogs. What none of this bickering changes are the facts, and the facts point to the contrarians (UE group) having very little authority and experience, and having contributed much less to advancing climate science than those in the CE group.

RomanM @ 197. You are claiming the “top 50″ analysis will skew the results but the example you give is only true of populations of different size but the same underlying distribution. Look at the paper — the populations are clearly not equivalent. The “top 50″ in each population are clearly separated. In fact, one could probably run the numbers for equivalent populations, as you have done, and then use this to further prove that the underlying populations are not the same distribution. The median values are about 300 apart in the real data, not 60 as you expect from your analysis.

However, that is not the point of that part of the analysis, which is stated in absolute terms. Which population contains the more active, expert publishers in the field? Clearly this is the CE population. No question.
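The order-statistic effect under debate here is easy to see in a toy simulation: even when two groups are drawn from the very same distribution, the top 50 of the larger group comes out ahead. (The lognormal distribution and trial count below are illustrative assumptions, not the paper’s actual data; only the group sizes 472/903 echo the study.)

```python
import random
import statistics

random.seed(42)

def top50_median(n, trials=200):
    """Average (over trials) median publication count of the top 50,
    for samples of size n drawn from one shared distribution."""
    meds = []
    for _ in range(trials):
        # Illustrative stand-in for a publication-count distribution;
        # identical for both groups by construction.
        sample = [int(random.lognormvariate(3.5, 1.0)) for _ in range(n)]
        top50 = sorted(sample, reverse=True)[:50]
        meds.append(statistics.median(top50))
    return statistics.mean(meds)

# Sizes roughly matching the UE (472) and CE (903) groups in the paper
small = top50_median(472)
large = top50_median(903)
print(small, large)
# The larger sample's top 50 is systematically higher, even though both
# samples come from the *same* distribution: the top 50 of 903 sits at a
# higher quantile than the top 50 of 472.
```

So a gap between the top-50 curves is expected even under identical distributions; the empirical question is whether the observed gap is larger than this baseline effect.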

What I said is that “mistakes” do not merit publication in statistics journals because they should be fixed at the original point of publication.

But obviously mistakes can be made, and can get by the review process prior to publication, and often do. The initial review process before publication is just the first cut, but not the only or even the main cut. And when something erroneous is published, the way to “correct” the error is generally to have someone else publish an alternative study, either refuting or improving on the original work.

Wishing that what you perceive to be mistakes were never made doesn’t change anything. It seems you have three options:

1) Contact the authors, present your credentials, build a working relationship with them, sell your case to them, and help or at least convince them to produce a corrigendum.

2) Produce your own study, using different techniques and possibly producing different results and different conclusions, which will then either supplant their work, or require that they take their own effort a step further (and science marches on).

3) Post criticisms on a blog without any detail supporting your insights or credibility, which can never be challenged, but in the end can also have no effect on the course of science.

When does the signal of ACC become demonstrated in the weather? I mean, natural variability is so large that I doubt the monsoon in Pakistan or the fires in Russia can individually be attributed to it, but their intensity, frequency, and duration can be. So do we all have to wait for the next ones to come along before we can link it all to ACC?

Yeah, the consensus is not always a plausible defense, as in the case of rushed FDA approval of certain drugs with questionable industry publications as the main rationale. However, in this case there are multiple lines of evidence supporting AGW, not just one source among many, although in some cases data will converge at several points from one or two major processing centers. Still, it really is not a ‘debate’ as to whether we humans affect climate, because we know we do. We also see several outside reliability and validity analyses, including statistical analyses, showing that the climate scientists’ claims are accurate and the papers’ data robust. CA has been uncharacteristically cool this year – not just from my meager 3 years of living here, but from reports from the Weather Channel, all of the network forecasters/meteorologists, and some available data on Google Scholar as well. Of course this is weather and not climate. This is also a single year, and of course changing temperature gradients and resulting pressure changes are sure to cool some regions as many others become warmer.

A point of etiquette first: Some commenters are telling RomanM to stop complaining about the Anderegg paper and go publish his own. I think it odd to tell RomanM off for contributing to a critical discussion of the Anderegg et al. paper, when that is exactly what this post is about, and why comments are open.

But on substance, I think RomanM is missing the point about the “top 50″ approach. Martin Vermeer (#181) nailed it with the “best football nation” analogy, which I found rather clearer than the conclusion Anderegg et al. formulated on this point.

For my money, the most convincing step in the Anderegg paper was the 20-paper cutoff and what it did to the much-bandied-about group of UE scientists. *Poof*. Never mind the top 50; look at how the bottom 379 fall out of the 472 UE statement signers when they’re required to have established a track record in climate science.

I wanted to add a comment on Tom “CRUtape” Fuller having the nerve to pose here as a champion of scientists’ privacy (cf. Bob at #121), but on reflection, I’d better [edit myself].

Look at the whole picture. If the authors wish to summarize the top 50 in each group, they are completely within their rights to do so – no problem. Do all the graphs and calculate all of the statistics you want. However, when you are interpreting those statistics, remember that the differences are distorted by the imbalance in the sample sizes by amounts which you do not know.

However, the authors first stated they were doing a procedure which would allow them to perform a particular statistical test as part of their “evidence”. They then selected the top fifty from each group and followed up with the test, not realizing that the assumptions of the test were severely violated by the way the selections were made. The test result was used in the subsequent statement about the groups. This is supposed to be a scientific paper, and what they did was incorrect. If the same approach is used in another situation where the difference is not as large, the end result could be false. Scientists have the habit of reusing methods from previous papers, so I would think that a correction is in order.

#210 Gator. The relevant feature is that the samples are of a different size, not the population.

#211 Bob (S). If the authors had been members of my own academic environment, I might have tried approach 1.

Producing my own study – spending how many weeks or months gathering data to do something that I strongly feel should never have been done in the first place – not very realistic.

I posted this on a blog where the paper was being presented by the authors. I was not aggressive about it, nor have I said anything that could be interpreted as an attempt to embarrass anyone. Everyone here seems to place an inordinate importance on credentials. I did not flash a badge, nor did I feel it would be appropriate to stomp in announcing myself before posting my comment. Everything that I have written is on the record. If I am wrong, it is there to be pointed out by people who understand the situation. In the end it can still have an effect on the science, but that may depend on how the authors view it. I have offered to be part of the process away from the blog.

I have commitments which will prevent my continuing this discussion for a day or so at least. As it is, the discussion of this topic seems to be pretty much exhausted.

The paper could have been entirely correct and sufficiently damning if the results section only consisted of the parts

“The UE group comprises only 2% of the top 50 climate researchers as ranked by expertise (number of climate publications), 3% of researchers of the top 100, and 2.5% of the top 200″

and

“In addition to the striking difference in number of expert researchers between CE and UE groups, the distribution of expertise of the UE group is far below that of the CE group (Fig. 1). Mean expertise of the UE group was around half (60 publications) that of the CE group (119 publications; Mann–Whitney U test: W = 57,020; P < 10−14), as was median expertise (UE = 34 publications; CE = 84 publications)."

But I have to agree with RomanM that this part

"We examined a subsample of the 50 most-published (highest-expertise) researchers from each group. Such subsampling facilitates comparison of relative expertise between groups (normalizing differences between absolute numbers). This method reveals large differences in relative expertise between CE and UE groups (Fig. 2)"

is wrong for obvious reasons and should not have passed peer review. Also, RomanM's solution is the one that immediately came to my mind as well: Just take a proper random subsample from the CE group with the same size as the entire UE group and take the 50 most published from this subsample. Repeat this procedure until the results have converged in a trivial Monte Carlo application.

Or just skip this whole part because it doesn't add all that much beside confusion. But then there's probably not enough material for a whole paper, aside from the two statements of the obvious I quoted above…
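For concreteness, the size-matched resampling proposed above might look like the following minimal sketch. The publication counts are synthetic (the lognormal parameters are invented stand-ins; only the group sizes 903 CE / 472 UE follow the paper), since the real analysis would use the study’s Google Scholar counts:

```python
import random
import statistics

random.seed(0)

# Hypothetical publication counts standing in for the real Google
# Scholar data; CE is drawn with a somewhat higher typical count.
ce_pubs = [int(random.lognormvariate(4.0, 1.0)) for _ in range(903)]
ue_pubs = [int(random.lognormvariate(3.2, 1.0)) for _ in range(472)]

def top50(vals):
    return sorted(vals, reverse=True)[:50]

# Size-matched comparison: subsample CE down to the size of UE before
# taking the top 50, and average over many draws (trivial Monte Carlo).
draws = 500
ce_top50_medians = []
for _ in range(draws):
    sub = random.sample(ce_pubs, len(ue_pubs))
    ce_top50_medians.append(statistics.median(top50(sub)))

ce_median = statistics.mean(ce_top50_medians)
ue_median = statistics.median(top50(ue_pubs))
print(f"size-matched CE top-50 median ~ {ce_median:.0f}, "
      f"UE top-50 median = {ue_median}")
```

Because both top-50 sets now come from pools of equal size, any remaining gap reflects the underlying distributions rather than the sample-size artifact.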

Aside from the sample size issue, RomanM quickly goes where I can’t follow. I think a study of this kind can certainly be relevant for policy makers who have no time to dig into the science themselves and have to rely on what they are told by people they assume to be experts. Also, I see no problem whatsoever in publicly linking citation numbers etc. to people who have already publicly voiced an opinion on climate change, be it through signing open letters, giving interviews or whatever. I find it hypocritical and a double standard to first claim expertise (even if only implicitly) by voicing an opinion but then not allow your expertise to be examined (I am not talking about blog discussions now, where people can wish to remain anonymous for various reasons). It goes without saying that I condemn threats and violence in the strongest words, but really that is not the issue here, and I feel that when people bring it up it usually distracts from the more substantial discussion.

I think you misinterpret the attention you are getting. It may not be in agreement with you, and in climate discussions people tend to get testy, but you aren’t being dismissed or abused as far as I can see… only disagreed with. That’s what debates are about. You make points, they make points, and the dialogue keeps moving.

Partly, yes, maybe you didn’t flash a badge, but you didn’t contradict Steve Mosher when he claimed you were highly published in statistics. That raises expectations. Then there’s this from your blog:

I am a recently retired (after 40 years of teaching) professor whose academic interests have been stirred by the appallingly commonplace misuse of statistical methodology in climate science.

[Do you happen to have any statistics to back up that statement, by the way?]

All I’m saying is that it’s a chance to actually affect things instead of just complaining (or blogging) about it. Obviously not everyone here agrees with you, and at the same time no one seems to be able to shoot you down completely… although neither have you succeeded in shooting them down. I personally disagree with what you are saying, but I’m far from an expert in statistics, and in that event I would tend to bow to someone with more education… but not based purely on vague blog comments.

…spending how many weeks or months gathering data to do something that I strongly feel should never have been done in the first place…

What does that mean? That because you don’t approve of the techniques used, it shouldn’t even be attempted? That even attempting to quantify and qualify the participants is at its root flawed, or immoral? Or that because the end result will inevitably not shine favorably on the denial crowd, that you wouldn’t engage in such a study?

in sports, we could compare the top 50 athletes of one country with the top 50 of another. and the smaller country, or the country with fewer people active in the sport, could still score higher in this comparison.
it would be a relevant test to counter (or support) the argument that it is only the average that is higher, or just a few top guys being better.

in climate science, the UE group did not manage to score high in that category either. that is the relevant result of this part of the paper.

apart from that, we have a rather typical situation: “sceptics” want to dismiss a paper on a minor point, and they are unwilling to even show whether the result of any alternative approach would significantly change the outcome.

While I accept the argument from (RC) authority of your in-line response, I’m surprised to note that BPL has posted on the Spencer/Miskolczi blog (his post dated 7 Aug begins “You’re known by the company you keep”). Is it worth paying more attention to that thread?

RomanM @ 217. The point is you are assuming that the populations in fact do have a similar distribution of expertise. They do not. Although the top 50 is not meant to show this (it is a look at the “best team” analogy), one can estimate the effect of the sampling scheme on the end result. Your initial cut at estimating the effect of the sampling on the median citations of each population shows that the effect of the sampling is small compared to the actual data.

PS — as others are pointing out, no one would have questioned your credentials if Steve Mosher had not made a big point of it. No one here was supposed to question you because of your expertise. Funny appeal to authority, eh? It is interesting seeing your point of view and critique, so please continue to contribute!

A frequent response to our paper’s analysis consists of attributing the patterns we found to a systematic, potentially conspiratorial suppression of peer-reviewed research from the UE group. As yet, this is a totally unsupported assertion backed by no data.
I was reading the essay The Climategate Emails by John Costella, linked to here a while back, and from what I read it seemed to me that there was significant evidence of some degree of boycotting of journals that published research from climate sceptics.

Perhaps this was not done with quite the level of systematic organisation to merit the word conspiracy, but the email excerpts I read left me with no doubt that there *was* publication bias.

[Response: Decisions about which journals to support (by submissions, reviews etc.) are made all the time and are based on their reputation, area, reach and impact. Associating with journals that publish rubbish (Energy and Environment for instance) is not conducive to presenting your work in the best light (to real scientists at least). If you have a great paper where would you prefer it to be published? Nature? or the Journal of Random and Unsupported Claims? This is a perfectly natural state of affairs – journal reputation is a fragile and complicated thing. But it is not publication bias. – gavin]

RE: “Perhaps this was not done with quite the level of systematic organisation to merit the word conspiracy, but the email excerpts I read left me with no doubt that there *was* publication bias.”

I suppose you *could* use the term “publication bias” to refer to a bias on the part of *authors* about where to submit articles and where not to, and I think Gavin’s response deals with that. In general, you want to publish in the “best” journal that will print your work, where “best” is judged by factors such as reputation for printing high-quality research, reputation for printing envelope-pushing research, readership, etc. If you see that a particular journal has printed work that seems to you to be of low quality, it is entirely rational to not want your own work associated with that. No boycotting is required; it’s just the normal functioning of a marketplace–if one product or service seems problematic or inferior to your other options, you simply don’t use it. You could call it a bias in a strict definition of the word, but the connotation of ‘bias’ is that the judgment made is wrong or bad, and I don’t think that fits in this case.

On the other hand, in the context of “potentially conspiratorial suppression of peer-reviewed research” it sounds like “publication bias” refers to a bias on the part of the *publishers* regarding what to publish (i.e. for reasons other than legitimate ones such as scientific merit, relevance to the journal’s intended subject matter, etc.). That’s an altogether more serious accusation, and to my mind requires highly convincing evidence.

John McCone:
Problem 1 is the narrative of John P. Costella, which is based on “there is a conspiracy, so I will find and connect the dots that show there is a conspiracy”.
Problem 2 is that the “publication bias” you claim, as in some people not publishing in some journals, does not affect the data in Anderegg et al. They looked at number of publications and citations.
Problem 3 is that their method (Google Scholar) includes publications from Energy & Environment, which is a fringe journal that gladly publishes sub-par articles, as long as they are critical of (aspects of) AGW.

Several people have asked for a posting from economists.
Click the appropriate link in the right sidebar for such material.

“… debate, which involves economists and various federal agencies, is over the social cost of carbon (SCC).

The SCC may be the most important number you have never heard of. It asks, how much will each ton of carbon dioxide that we release into the atmosphere cost us in damages, both today and in the future? If the answer is a big number, then we ought to make great efforts to reduce greenhouse gas emissions. If the answer is a small number, then the case for reduction is weaker, and only easy or inexpensive changes seem warranted….”

http://realclimateeconomics.org/wp/archives/247
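As a side note on how such a number is computed: the quoted definition (damages “both today and in the future”) implies a discounted sum of future damages, and the result is extremely sensitive to the discount rate chosen. A minimal sketch, with a purely hypothetical damage stream of $1 per year per ton:

```python
def social_cost_of_carbon(annual_damage, discount_rate, horizon_years):
    """Present value of a stream of yearly damages from one extra ton of CO2.

    annual_damage:  damages per year attributable to the ton (hypothetical)
    discount_rate:  e.g. 0.03 for 3% per year
    horizon_years:  how far into the future damages are counted
    """
    return sum(
        annual_damage / (1 + discount_rate) ** t
        for t in range(1, horizon_years + 1)
    )

# Same toy damage stream, three different discount rates.
for r in (0.01, 0.03, 0.05):
    scc = social_cost_of_carbon(1.0, r, 100)
    print(f"discount rate {r:.0%}: present value = ${scc:.2f}")
```

Under these toy numbers the present value falls from roughly $63 at a 1% discount rate to roughly $20 at 5%, which is why so much of the real SCC debate among economists is about the choice of discount rate rather than the physical damage estimates.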

[Response: The M&W paper will likely take some time to look through (especially since it isn’t fully published and the SI does not seem to be available yet), but I’m sure people will indeed be looking. I note that one of their conclusions “If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years” is completely in line with the analogous IPCC AR4 statement. But this isn’t the thread for this, so let’s leave discussion for when there is a fuller appreciation for what’s been done. – gavin]

http://www.abc.net.au/news/stories/2010/08/16/2984597.htm says …
Sceptics to challenge climate science in court
Climate change sceptics in New Zealand are taking the government’s climate agency to court over the validity of its evidence on global warming. The New Zealand Climate Science Coalition has accused the National Institute of Water and Atmospheric Research of tampering with official weather records to make the case for global warming.

Response to 235. I read the M&W paper, and a few things bother me. First, the inclusion of a stochastic component, which is difficult to justify physically; this probably blows up the error bars. Nevertheless, the smoothed version of their model is smoother than any other reconstruction? Also, the fact that they could not match the 1990s with their model rings some bells; I find it difficult to accept a model that cannot catch the highest signal-to-noise part of the record. Finally, their model looks like a carbon copy of the Kaufman reconstruction for the polar regions. Maybe there is a problem of geographical weighting (Mercator versus spherical?).

The biggest flaw with the paper seems to be that it confuses science with politics. It assumes that facts compel certain political positions. If one is critical of the Kyoto protocol because one believes it is economically inefficient, one can end up in the UE group. There is clearly a hidden political agenda motivating the paper.

Several of the UE group members are clearly politically motivated, and some of the petitions they have signed are foremost political: we don't want to pay higher taxes, we don't want to slow down economic growth, this is not very dangerous according to our value system, and the like. That's the underlying message in several petitions, and it is explicit in some of them.

The flaw is, however, not as big as I thought: most if not all of the petitions include at least a sentence that contests the veracity of climate change, and many of them focus entirely on that. Nevertheless, the authors' confusion of science and politics is troublesome, for scientific as well as democratic reasons.

It’s good to be worrying about the world for once instead of countries, religions, nationalities, personal wealth… Different views will provide data from different sources and will be a platform to identify the reasons behind global warming and thus create solutions.

i had a conversation with a Dem. candidate for the Mi House of Reps today and, although he was no doubt concerned with environmental issues, he was hesitant to affirm his support for anything to do with global warming, stating that there was much debate on the issue – very frustrating to me as a rank layman who has studied this issue for ten years. my question – is what we now know of the detrimental effects of excess carbon in the atmosphere additive to Arrhenius' theory or in addition to it? in other words, can i correctly state that there is no question as to the science, as the theory is 100 years old? be gentle on me as i'm only trying to educate people

You may also enjoy my “life & times articles” on various scientists who worked on climate theory. The series actually starts with Fourier, but I’ll only link from Arrhenius on (you can easily work backwards to Tyndall, Pouillet & Fourier if you want to.)