Expert Credibility in Climate Change – Responses to Comments

Note: Before Stephen Schneider’s untimely passing, he and his co-authors were working on a response to the conversation sparked by their recent paper in the Proceedings of the National Academy of Sciences on climate change expertise. One of Dr. Schneider’s final interviews also addresses many of the issues covered here.

We accept and rely upon the judgment and opinions of experts in many areas of our lives. We seek out lawyers with specific expertise relevant to the situation; we trust the pronouncement of well-trained airplane mechanics that the plane is fit to fly. Indeed, the more technical the subject area, the more we rely on experts. Very few of us have the technical ability or time to read all of the primary literature on each cancer treatment’s biology, outcome probabilities, side-effects, and interactions with other treatments, and thus we follow the advice of oncologists. We trust the aggregate knowledge of experts – what do 97% of oncologists think about this cancer treatment? – more than that of any single expert. And we recognize the importance of relevant expertise – the opinion of vocal cardiologists matters much less in picking a cancer treatment than does that of oncologists.

Our paper Expert Credibility in Climate Change is predicated on this idea. It presents a broad picture of the landscape of expertise in climate science as a way to synthesize expert opinion for the broader discourse. It is, of course, only a first contribution and, as such, we hope it motivates discussion and future research. We encourage follow-up peer-reviewed research, as this is the mark of scientific progress. Nonetheless, some researchers have offered thoughtful critiques about our study and others have grossly mischaracterized the work. Thus, here we provide responses to the salient comments raised.

Definition of groups: The first of four broad comments about our study examines the relevancy of our two studied groups – those Convinced of the Evidence that much of the warming of the last half century is due in large part to human emissions of greenhouse gases, as assessed by the IPCC, which we term “CE,” and those who are Unconvinced of the Evidence (“UE”). Some have claimed that such groups do not adequately capture the complexity of expert opinion and therefore lose meaning. To be sure, anthropogenic climate change (ACC) is an immensely multi-faceted and complex area and expert opinion mirrors this complexity. Nonetheless, society uses simplifications of complex opinion landscapes all the time (e.g. Democrat versus Republican for political views) that don’t “lose their meaning” by ignoring the complexity of nuanced differences on specific topics within these broad groups.

The central questions at hand are: are these groups (1) clearly defined, (2) different in their views of ACC, (3) reasonably discrete, and (4) in the main mutually exclusive? Our definition of the groups, which for the UE group rests entirely on self-selected, voluntarily signed statements and petitions expressing various versions of skepticism about ACC, is laid out clearly in the published paper. The strongest evidence that our CE and UE groups satisfy the second and third criteria is that only three of 1,372 researchers fell into both groups, and in two of those cases the researcher had unwittingly added themselves to a statement they did not in fact support. Thus, if only one researcher of 1,372, or 0.07%, legitimately falls into both of our groups, this suggests that the two groups both differ starkly and are discrete. Any statistical analysis would be only trivially altered by three redundant members of the cohort. Furthermore, the CE and UE groups are internally coherent: around 35% of signers in each group also signed another statement in that set.

Another researcher suggests that his views have been “misclassified” by our inclusion of older public statements, as he signed a 1992 statement. Using a sweeping set of public statements that cover a broad time period to define the UE group allows us to compile an extensive (i.e. as comprehensive as possible) dataset and to categorize a researcher’s opinion objectively. However, were we to reclassify this researcher, it would only strengthen our results, as then none of the top fifty researchers (rather than one researcher, or 2%) would fall in the UE camp.

Others have contended that the only experts we should have analyzed were those researchers involved specifically in detection and attribution of human-caused climate change. Importantly, much of the most convincing evidence for ACC comes from our understanding of the underlying physics of the greenhouse effect, illuminated long before the first detection/attribution studies, and these studies provide only one statistical line of evidence. The study could have been done in this manner but let us follow that logic to its conclusion. Applying this stricter criterion to the CE list does cause it to dwindle substantially…but applying it to the UE list causes it to approach close to zero researchers. To our knowledge, there are virtually no UE researchers by this logic who publish research on detection and attribution. Following this logic one would have to conclude that the UE group has functionally no credibility as experts on ACC. We would, however, argue that even this premise is suspect, as ecologists in IPCC have done detection and attribution studies using plants and animals (e.g. Root et al. 2005). Finally, applying a criterion such as this would require subjective judgments of a researcher’s focus area. Our study quite purposefully avoids making such subjective determinations and uses only objective lists of researchers who are self-defined. They were not chosen by our assessment as to which groups they may or may not belong in.

Some have taken issue with our inclusion of IPCC AR4 WGI authors in the CE group, given that the IPCC Reports are explicitly policy-neutral while the four other CE policy statements/petitions are policy-prescriptive. However, we believe our definition of the CE group is scientifically sound. Do IPCC AR4 WGI authors subscribe to the basic tenets of ACC? We acknowledge that this is an assumption, but we believe it is a very reasonable one, given the strength of the ultimate findings of the IPCC AR4 WGI report. We classify the AR4 WGI authors as CE because they authored a report in which they show that the evidence is convincing. Naturally, authors may not agree with everything in the report, but those who disagreed with its most fundamental conclusions would likely have stepped down and not signed their names. That only one of the 619 WGI contributors appears on a UE statement or petition, compared with 117 who signed a CE statement, provides further support for this assumption. Furthermore, repeating our analysis relying only on those who signed at least one of the four CE letters/petitions, and not on IPCC authorship, yields results similar to those published.

No grouping of scientists is perfect. We contend that ours is clear, meaningful, defensible, and scientifically sound. More importantly, it is based on the public behavior of the scientists involved, and not our subjective assignments based on our reading of individuals’ works. We believe it is far more objective for us to use choices by scientists (over which we have no influence) for our data instead of our subjective assessment of their opinions.

Scientists not counted: What about those scientists who have not been involved with the IPCC or signed a public statement? What is their opinion? Would it alter our finding that 97% of the leading researchers we studied endorse the broad consensus regarding ACC expressed in IPCC’s AR4? We openly acknowledge in the paper that this is a “credibility” study that captures only those researchers who have expressed their opinions explicitly, either by signing letters/petitions or by signing their names as authors of the IPCC AR4 WGI report. Some employers explicitly preclude their employees from signing public statements of this sort, and some individuals may self-limit in the same way on principle, apart from employer rules. However, the undeclared are not necessarily undecided. Two recent studies tackle the same question with direct survey methods and arrive at the same conclusion as our study. First, Doran and Kendall-Zimmerman (2009) surveyed 3,146 AGU members and found that 97% of actively publishing climate researchers believe that “human activity is a significant factor in changing mean global temperatures.” A recently published study, Rosenberg et al. (2010), finds similar levels of support when surveying authors who published during 1995-2004 in peer-reviewed journals highlighting climate research. Our study cannot speak for those who have not publicly expressed their opinions or worked with the IPCC – and does not claim to – but other studies have, and their results indicate that our finding that an overwhelming percentage of publishing scientists agree with the consensus is robust. Perfection is not possible in such analyses, but the level of agreement across studies indicates a high degree of robustness.

Publication bias: A frequent response to our paper’s analysis attributes the patterns we found to a systematic, potentially conspiratorial suppression of peer-reviewed research from the UE group. As yet, this is a wholly unsupported assertion backed by no data, and it appears untenable given the vast range of journals that publish climate-related studies. Notably, our publication and citation figures were taken from Google Scholar, one of the broadest academic databases, whose indexing includes journals openly receptive to papers taking a different view from the mainstream on climate. Furthermore, a recently published analysis (Anderegg 2010) examines the PhD and research focus of a subsample of the UE group, compared with data collected by Rosenberg et al. (2010) for a portion of the climate science community publishing in peer-reviewed journals. If the two groups had similar background credentials and expertise (PhD topic and research focus – both non-publishing metrics), it might indicate a suppression of the UE group’s research. They don’t. The background credentials of the UE group differ starkly from those of the “mainstream” community: thirty percent of the UE group sample either do not have a documented PhD or do not have a PhD in the natural sciences, compared with an estimated 5% of the sample from Rosenberg et al., and nearly half of the remaining sample have a research focus in geology (see the interview by Schneider as well).

“Blacklist”: The idea that our grouping of researchers comprises some sort of “blacklist” is the most absurd and tragic misframing of our study. Our response is two-fold:

Our study did not create any list. We simply compiled lists that were publicly available and created by people who voluntarily self-identified with the pronouncements on the statements/letters. We did not single out researchers, add researchers, drop researchers; we have only compiled individuals from a number of prominent and public lists and petitions that they themselves signed, and then used standard social science procedure to objectively test their relative credibility in the field of climate science.

No names were used in our study nor listed in any attachments. We were very aware of the pressure that would be on us to provide the raw data used in our study. In fact, many journalists we spoke with beforehand asked for the list of names and for specific names, which we did not provide. We decided to compromise by posting only the links to the source documents – the ‘raw data’ in effect (the broader website is not the paper data) – where interested parties can examine the publicly available statements and petitions themselves. It is ironic that many of those now complaining about the list of names are generally the same people who have claimed that scientists do not release their data. Implying that our list is comparable to that created by Marc Morano when he worked for Senator Inhofe is decidedly unconvincing and irresponsible, given that he selected individuals based on his subjective reading and misreading of their work. See here for a full discussion of this problematic claim, or read Schneider’s interview above.

In sum, the various comments and mischaracterizations discussed above do not in any way undermine the robust findings of our study. Furthermore, the vast majority of comments pertain to how the study could have been done differently. To the authors of such comments, we offer two words – do so! That’s the hallmark of science. We look forward to your scientific contributions – if and when they are peer-reviewed and published – and will be open to any such studies. In our study we were subjected to two rounds of reviews by three social scientists and in addition comments from the PNAS Board, causing us to prepare three drafts in response to those valuable peer comments that greatly improved the paper. We hope that this response further advances the conversation.

“We trust the aggregate knowledge of experts.” The “knowledge of experts” doesn’t aggregate. Sounds like Wittgenstein’s “totality of the facts”. He rejected that and reminded us how difficult it is to understand what other people mean.

Will the dirty oil used in shipping cool or heat the Earth over the next 50 years? The answer seems to depend on the expert you choose. On that topic, can someone tell me if Wired.com got it right when they say:

“Sulfate, unlike black carbon, reflects incoming warmth rather than absorbing it. On its own, sulfate could help cool the atmosphere, but when mixed with black carbon, it acts as a net for all the warmth that would otherwise pass through. A plume with lots of sulfate appears to give the surrounding black carbon a second chance to absorb that heat.”

They appear to be reporting Ramana et al., “Warming influenced by the ratio of black carbon to sulphate and the black-carbon source”. Is there a difference between this and Unger et al., “Attribution of climate forcing to economic sectors”?

[Response: You’re mixing up different types of topics. The paper in question is addressing one that draws on many lines of evidence, and which is addressed in exhaustive synthesis reports, to reach a broad, general conclusion regarding the cause of warming over the last 1/2 century, whereas your example is about a much narrower topic (aside from questions of the weights of the evidence thereon). Also, I utterly disagree that knowledge doesn’t aggregate–climate science is a classic example of not only the existence of, but the need for, aggregated knowledge from many subdisciplines–Jim]

“Names are sorted in descending order of number of published works that match the word ‘climate’ in Google Scholar. This total could include non-refereed pieces such as commentary, editorials, or letters to the editor.”

Koss is correct that the “Climate Total” column is not citations, but rather matches to “climate” in Google Scholar. One could certainly argue that this is not the best way to count publications, but in order to show that it is fatally flawed one would have to show bias or inconsistency that favors climate scientists. Koss can check on Google Scholar that the totals are roughly correct:

This method does include the possibility of increasing publication #s where a 2nd author with climate related publications shares the same last name and first initial. However, unless for some reason climate scientists have common names and contrarians have unique names, this is a non-bias type of error. (with M-Collins I will note that on the 10th page of google scholar results, “Intestinal nematode infection ameliorates experimental colitis in mice” is written by a different M-Collins) (I searched my own publication record this way: the total # is about 4x my peer-reviewed publication count, because it includes some AGU talks, my PhD thesis, a webpage for a course that I co-taught, and a couple of hits that were clearly erroneous but were tagged by Google with my name and the word climate. But again: consistency, lack of bias, etc. My “real” # might be somewhere between peer-reviewed publications and the total # reported, but as long as on average anyone’s total # is similarly inflated over their real #, it is a usable metric for comparison).

In addition, both the paper and the website _also_ use the number of citations to the author’s 4th highest cited paper, which is probably a somewhat more rigorous method of comparing researchers (and is also a consistent methodology).

Gavin, Your selective moderation of Tom Fuller is pretty telling. You allow rhetoric every bit as heated as Fuller’s when it’s coming from a position closer to your own.

[edit]

Let’s be honest for a minute: AGW doesn’t cease to be a reality if we confess that this “study” was a sloppy, ill-considered exercise in polemics. And the study itself doesn’t stand or fall based on your unwise, selective moderation in defense of it.

[Response: There is a big difference between criticising a study because they used Google scholar rather than IoS and speculating about what difference it made, and accusing people of breaking rules or being unethical because of such a choice. The former is legitimate, the latter not. Whatever happens to the study or further work has nothing to do with this thread, but them’s the rules. Fuller is free to fulminate at his own site. If you don’t get that there is a difference, then I’m sorry, but that’s the way it is. Further discussion is OT. – gavin]

Hank Roberts wrote: “Tom Fuller confuses the number of publications with the number of citations …”

Gavin wrote: “Most of this comment and the previous one was simply a list of accusations and insults, and not appropriate in this forum.”

Tom Fuller is well known on the web as a deliberate purveyor of what he well knows to be falsehoods. His standard MO is to first assert that he “believes” the basic scientific basis of anthropogenic global warming, “but” … and the “but” is followed with the usual, predictable, scripted litany of denialist bunk.

And whatever “insults” he may have lobbed at RC in his comment were no doubt mild-mannered compared to the sneering invective that he directs at this site, its maintainers, and its frequent commenters in his posts elsewhere.

Bob Koss is wrong above in writing
> … Mann the Climatologist is nowhere to be found

Sorry, Bob. Look at the other columns, you’ve missed the obvious.

The info listed there is for Mann the climatologist (RC, IPCC, etc.); the error is a bad link to the UCLA instead of the Penn State scientist by the same name.

You found one error. Got another one? This is how it’s done, finding errors in other people’s work is part of the process:

“That’s how science works. It’s not a hippie love-in; it’s rugby. Every time you put out a paper, the guy you pissed off at last year’s Houston conference is gonna be laying in wait…. This is how it works: you put your model out there in the coliseum, and a bunch of guys in white coats kick the shit out of it. If it’s still alive when the dust clears, your brainchild receives conditional acceptance. It does not get rejected. This time.” http://www.rifters.com/crawl/?p=886

Since publication counts aren’t accurate, I have no reason to think citation counts are any better. They get listed by Google in the publication search. Did they tediously go through each publication to ensure they had attributed the citations to the correct person? I think not.

It’s also interesting to look at polls and statements by various scientific organizations and to try to decide what their financial interests are. In some ways the 47% of petroleum geologists who believed that human activity was warming the planet in a poll published in EOS was more impressive than the 97% of climatologists who felt that way given the financial interests and questions of conscience involved.

This is an important insight. To explain that 47% as self-interested, you’d have to claim someone (Al Gore, maybe?) was bribing them. That might actually be the case, but I’d want to see the evidence.

One issue with Google Scholar is that it matches any middle initial when none is specified, so it seems M Collins has matched many other authors. This is less of an issue for more unusual names, or where authors have one or more middle initials.

Any inaccuracy on the CE side will, on average, be matched on the UE side, since the names are presumably independent of their climate positions. Adding the “climate” term was designed to minimise interference from similarly named authors, but I don’t think it was very effective, at least for common names.

The citation count doesn’t suffer from this problem, because the authorship of the relevant papers was verified:

Overall number of publications was not used because it was not possible to provide accurate publication counts in all cases because of similarly named researchers. We verified, however, author identity for the four top-cited papers by each author.

Unfortunately, there seem to be a number of technical shortcomings in the paper. Several of these induce biases in virtually every aspect of the analysis of the data.

The samples were not a random selection from a larger population; rather, the selection included the AR4 working group (over 600 of the 903 CE-group subjects), who themselves had been chosen for their prolific publication and citation records. The UE group was chosen from individuals who had expressed opinions regarding the evidence for global warming. The numbers in the two groups do not properly represent the relative numbers in the population.

There was no control for the actual number of authors on each paper. Thus, if there were 10 authors on a particular publication with 100 citations, each author received credit for both the publication (a total of 10 publication credits) and the citations (a total of 1,000) that the paper received. If the mean number of authors is higher for one of the groups, this biases the results in favor of that group and exaggerates the extreme high values of the most prolific authors. Furthermore, because the counts are not independent, it puts into question the validity of using the Mann-Whitney test for analyzing the data.
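The author-count effect described here can be made concrete with a toy example (hypothetical papers and numbers; `whole_credit` and `fractional_credit` are illustrative names, not the paper’s method): whole counting inflates totals in proportion to author-list length, while once-per-paper counting does not.

```python
# Hypothetical two-paper example (illustrative numbers only): under whole
# counting, every co-author receives full credit for a paper's citations,
# so a 100-citation paper with 10 authors contributes 1,000 author-citations.
papers = [
    {"citations": 100, "authors": 10},
    {"citations": 100, "authors": 2},
]

# Every author gets full credit: 100*10 + 100*2
whole_credit = sum(p["citations"] * p["authors"] for p in papers)
# Each paper counted once, credit shared among its authors: 100 + 100
fractional_credit = sum(p["citations"] for p in papers)

print(whole_credit, fractional_credit)  # 1200 200
```

If one group’s papers systematically carry longer author lists, whole counting alone would raise that group’s totals, which is the bias being alleged.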

Figures 1 and 3 reflect the disparity in size of the two samples by graphing counts rather than percentages.

However, there is also a major statistical error on page 2:

We examined a subsample of the 50 most-published (highest expertise) researchers from each group. Such subsampling facilitates comparison of relative expertise between groups (normalizing differences between absolute numbers). This method reveals large differences in relative expertise between CE and UE groups (Fig. 2). Though the top-published researchers in the CE group have an average of 408 climate publications (median = 344), the top UE researchers average only 89 publications (median = 68; Mann–Whitney U test: W = 2,455; P < 10⁻¹⁵). Thus, this suggests that not all experts are equal, and top CE researchers have much stronger expertise in climate science than those in the top UE group.

If one were to take two samples of sizes 903 and 472 randomly from the same population, order them by size and then take the largest fifty from each, it is a virtual certainty that the average (and/or median) of the 50 from the larger, 903-subject group will be greater than that for the smaller sample group. The exact amount (which may be very large) will depend on the distribution of the values from which the samples were selected. The analysis, the MW test statistic and p-value are meaningless here, and the conclusion is unwarranted.
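The order-statistics effect claimed here can be checked with a quick Monte Carlo sketch (a standard-normal stand-in population; the function name, sample sizes, and trial count are our assumptions, not from the paper):

```python
import random

def top50_mean_gap(n_large=903, n_small=472, top=50, trials=200, seed=42):
    """Fraction of trials in which the mean of the top `top` values from the
    larger sample exceeds that from the smaller sample, with both samples
    drawn from the SAME standard-normal 'population'."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        large = sorted((rng.gauss(0, 1) for _ in range(n_large)), reverse=True)[:top]
        small = sorted((rng.gauss(0, 1) for _ in range(n_small)), reverse=True)[:top]
        if sum(large) / top > sum(small) / top:
            wins += 1
    return wins / trials

# The larger sample's top 50 wins nearly every trial despite identical
# underlying distributions; with equal sample sizes the rate drops to ~0.5.
print(top50_mean_gap())
```

The magnitude of the gap depends on the tail of the chosen distribution, but its direction does not, which is the commenter’s point about comparing top-50 subsamples of unequal groups.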

“I’d be very concerned, if, for example, I read that airline X had just laid off all their senior mechanics in order to save money.”

You would not be concerned because the airline would not be flying. Aircraft are not allowed to take off unless their maintenance/repair is current and in order.

Anne van der Bom @33 says:

“the example of the airplane mechanic was a metaphor. Metaphors are imperfect by definition”

It does not work for me as a metaphor and after reading what it was supposed to convey I think it is a very bad one.

Ray Ladbury @37 says:

“I’m left wondering not only whether you’ve ever talked to a scientist, but whether you’ve ever even talked to a mechanic!”

I have a BSc, am a product manager and my brother verifies aircraft maintenance histories when they get bought and sold.

Bob (Sphaerica) @40 says:

“People are…..not capable of determining whether or not the plane is capable of flying.”

Correct. That’s why there is a system of regulations and compliance.

Also you say:

“knowledge and expertise” in place of “judgment and opinion.” It reads the same.

Not in my book. We are talking about compliance to given standards.

Donna @43 says:

“When judgement/opinion is based on the evidence then its use reflects well on the expertise of the person”

This is true. However, it is not the case when complying with the given standards that an aircraft engineer works to.

Bob (Sphaerica) @45 says:

“My guess is that he’s an engineer.”

I’m more an engineer than a scientist. So you are correct. As a product manager I cannot get out of R&D and into design/engineering quick enough. Both disciplines are needed; however, you can trust engineers and respect scientists. They operate in entirely different ways and need to.

Bob (Sphaerica) @47 says:

“(like Titus) who have fallen for the Denial Team bag of “tricks” (there’s that word again!) by twisting “expert” into a dirty word.”

I have not knowingly twisted anything into a dirty word. I respect opinions and I use them to make judgments of my own. I respect scientists, but I do not trust their work until it gets to design/engineering. Both are needed in this community; they just have different roles to fulfill.

That’s all I’ve got time for. Might get on late this evening if anyone follows up.
Thanks
Tim

“knowledge and expertise” in place of “judgment and opinion.” It reads the same.

Not in my book. We are talking about compliance to given standards.

No, “we” are not talking about compliance to given standards, you are.

We are talking about the need to recognize and respect the statements made by experts in fields that are too complex for any individual outside of that field to properly evaluate.

You’ve tied yourself in knots over picayune phrasing, and keep missing the point. You cannot claim to be open minded, and then refuse to read something after the first sentence offends your sensibilities. That is insanity. For once in your life act like a true “skeptic” and try to learn and understand before forming an opinion.

Our commenter #1 comes in, basically says “It’s all crap and I don’t believe you”, leaves and does not respond to any comments on his post.

I’ve been seeing more of this recently. Rush to be the first to comment on a climate science post. Deny it out of hand. Make semi-plausible statements which are really throwing dirt. Then leave and make sure your position sits as the first comment anyone will read. Obviously anyone who has a clue about science (and specifically climate science) will know that it’s BS; but what about the millions of others?

I have decided that this can be categorised as CDS, “Climate Disinformation SPAM”. My contention is that you treat it as spam: remove it, and only put it back up for consideration if the author comes back to challenge the responses (which the author won’t, because they know it’s SPAM).

That’s my take on it. This is an interesting article which is trying to communicate something very important. Namely that a very small, select, community of people are trying to discredit some of the most important science that humanity has ever seen. And the whole thread descends into chaos because that same community is the first to post on it using the very tactics the post is trying to highlight.

OK the study may have flaws and I’m sure they’ll be addressed. But the conclusion is valid and even obvious to anyone who is looking with open eyes.

Not on those grounds, anyway! The disparity in the sizes of the CE and UE groups is a result of the study (and supported by other similar studies, I recall). Did you not even read the article at the top of the page before jumping in? They said “Furthermore, repeating our analysis relying only on those who signed at least one of the four CE letters/petitions and not on IPCC authorship yields similar results to those published.”

There are more technical arguments against your complaint, but we don’t need to get into that. It falls at the first hurdle.

But, if you disagree, then by all means go hunting for more UE candidates that have published highly cited papers. Good luck….

Chris Colose — I opine there are physical limits on the climate response to forcing; by energy balance alone, the sensitivity in K/(W/m²) cannot be very large. However, I don’t know a similar principle to apply to the low end.

Titus – your initial comment – “‘Judgement and opinion’ cannot be compared to ‘airplane mechanics’. You stopped me dead from further reading at that point.

Aircraft are repaired and maintained to strict rules and standards which get signed off, documented and audited as a matter of process. No room for judgement and opinion here.”

I would ask you to consider how the rules and standards you cite are created. Groups of knowledgeable persons get together, debate, and decide based on their judgements of risk and their opinions as to what language in the rules and standards will be best understood.
There is a lot of judgement and opinion going into those rules and standards that you seem to think have none.
Also, I’ll make you a bet that there are differences of opinion as to what exactly the words in the rules and standards mean. People fight citations by government regulators or other auditors because their opinion of what the rules and standards mean does not match what a particular auditor/regulator thinks, and their judgement of what the rules and standards actually require also differs. Auditing would be far less necessary if everyone read the rules and standards exactly the same way. Rules and standards get revised in part because those responsible for them recognize that honest differences in judgement and opinion can result in people not doing exactly what the authors had in mind.

End result – the simple use of an analogy not intended to be perfect is hardly a reason not to read past a few lines. And to try and claim that judgement and opinion aren’t used in “airplane mechanics” seems to ignore a lot of evidence to the contrary.

We examined a subsample of the 50 most-published (highest expertise) researchers from each group. Such subsampling facilitates comparison of relative expertise between groups (normalizing differences between absolute numbers).

This claims that selecting in this fashion somehow “equalizes” the imbalance in sample sizes, and that is patently false. The statistics and subsequent analysis based on this sub-selection are meaningless and wrong. By all means, let’s hear your technical complaints on this and other issues.

Yes, I read the head post. Did you actually view the results of “repeated analysis” or are you simply repeating the arm waving?

I think if you check RomanM’s publication history in statistics, you’d probably value his expert judgement over that of any of the authors or most commenters here. I’m failing to find any citations for Didactylos.

[Response: We can well have a discussion on statistical issues, but keep it strictly to that please.–Jim]

The paper Expert Credibility in Climate Change, published in PNAS by Anderegg, the late Stephen Schneider, James Prall and Jacob Harold attempts to measure the credibility of climate scientists by counting how many papers they have published and how often their work has been cited by others.

This led to the creation of a blacklist that will be used to injure the careers of those who have signed letters or petitions that do not agree with the Al Gore/James Hansen position on climate change, and to intimidate future scientists, effectively silencing dissent.

The paper is poorly done, as I’ve explained elsewhere. They used Google Scholar instead of an academic database. They searched only in English, despite the global nature of climate science. They got names wrong. They got job titles wrong. They got incorrect numbers of publications and citations.

As I’ve mentioned, the highly respected Spencer Weart dismissed the paper as rubbish, saying it should not have been published.

But the worst part of this is the violation of the rights of those they studied. Because Prall keeps lists of skeptical scientists on his weblog, obsessively trawling through online petitions and published lists of letters, and because those lists were used as part of the research, anyone now or in the future can have at their fingertips the names of those who now or in the past dared to disagree.

[edit of likely libelous statement]

It doesn’t matter that the nature of the letters and petitions they signed varied widely, from outright skepticism to really innocuous questioning of the state of the science.

The paper is tagged ‘Climate Deniers.’ Now, so are they.

This is an outright violation of every ethical code of conduct for research that has ever been published.

They violate several sections of the American Sociological Association Ethical Guidelines:

“Sociologists conduct research, teach, practice, and provide service only within the boundaries of their competence, based on their education, training, supervised experience, or appropriate professional experience.”

The members of the research team were operating outside their areas of professional competence.

“Sociologists refrain from undertaking an activity when their personal circumstances may interfere with their professional work or lead to harm for a student, supervisee, human subject, client, colleague, or other person to whom they have a scientific, teaching, consulting, or other professional obligation.” The subjects of their research–the scientists on the list–risk grave harm as a result of this paper.

“11. Confidentiality
Sociologists have an obligation to ensure that confidential information is protected. They do so to ensure the integrity of research and the open communication with research participants and to protect sensitive information obtained in research, teaching, practice, and service. When gathering confidential information, sociologists should take into account the long-term uses of the information, including its potential placement in public archives or the examination of the information by other researchers or practitioners.

11.01 Maintaining Confidentiality

(a) Sociologists take reasonable precautions to protect the confidentiality rights of research participants, students, employees, clients, or others.

(b) Confidential information provided by research participants, students, employees, clients, or others is treated as such by sociologists even if there is no legal protection or privilege to do so. Sociologists have an obligation to protect confidential information and not allow information gained in confidence from being used in ways that would unfairly compromise research participants, students, employees, clients, or others.

(c) Information provided under an understanding of confidentiality is treated as such even after the death of those providing that information.

(d) Sociologists maintain the integrity of confidential deliberations, activities, or roles, including, where applicable, that of professional committees, review panels, or advisory groups (e.g., the ASA Committee on Professional Ethics).

(e) Sociologists, to the extent possible, protect the confidentiality of student records, performance data, and personal information, whether verbal or written, given in the context of academic consultation, supervision, or advising.

(f) The obligation to maintain confidentiality extends to members of research or training teams and collaborating organizations who have access to the information. To ensure that access to confidential information is restricted, it is the responsibility of researchers, administrators, and principal investigators to instruct staff to take the steps necessary to protect confidentiality.

(g) When using private information about individuals collected by other persons or institutions, sociologists protect the confidentiality of individually identifiable information. Information is private when an individual can reasonably expect that the information will not be made public with personal identifiers (e.g., medical or employment records).”

I think it is clear that the paper, wrong on the facts, is unethical in its intent and outcome. I call for the paper to be withdrawn and for Prall’s website to take down the blacklist.

[Response: hmm… A long cut and paste about the need for sociologists to maintain confidentiality. But perhaps you could point out what it is about an open letter in the New York Times or full page ad in the WSJ and google scholar that is at all confidential? – gavin]

RomanM: I don’t think you understand the purpose of the sub-sampling. It is not to put both groups on an equal footing. Nothing will do that! The groups just aren’t equal.

You must have read the paragraph before your quote, where the mean publication counts are compared: “Mean expertise of the UE group was around half (60 publications) that of the CE group (119 publications)”. So much, then, for your concept of “random selection from a larger population”. The two groups have different characteristics as a whole.

The sub-sampling merely removes any watering-down caused by a long tail. You just don’t like it because it widens the distance between the CE experts and the UE experts.

If you found the wording in the paper misleading or debatable, then you could have just said that, instead of throwing accusations about. I have to say, the wording isn’t as clear as I would like. The “normalising” comment can be read to imply that a different process is being performed.

You haven’t explained how you propose to fix this unfixable problem. How can you make the groups “equal”? If you start adding restrictions to cut down the size of the CE group, the UE group will vanish to nothing. Any method of compiling a list of prominent CE and UE scientists will necessarily arrive at near-identical results. And the gap will remain.

Oh – and what makes you think the UE group has lower author counts? But even that concern is addressed in the paper.

It is interesting to see who turns up at the goring of a symbolic ox and feels compelled to rise to the defense of the sacred cow. Apparently “lack of consensus” is a very important canon of contrarianism.

Didactylos, what do you think the sentence “Such subsampling facilitates comparison of relative expertise between groups (normalizing differences between absolute numbers)” means? I didn’t suggest doing that – the authors did.

There is nothing to “fix” because this approach is wrong right from the start. You clearly still do not understand my initial explanation.

To make it more obvious, if I have two populations of numbers with the same distribution and I select 500 from one group and 50 from the second, I should get about the same median in both. Now, if I take the largest 50 from the first group and the largest 50 from the second (i.e. all 50) and take the median from the sub-samples, what will happen? The first will have all values (probably considerably) above the old median so the new median will be larger. The second still has the same median as before. The difference I am seeing is due purely to selecting in this fashion from unequal size groups. It is NOT any sort of comparison between the values in the two identical groups in question. The same will hold true for the average of the sub-samples. The point is that as done by the authors, any differences are distorted and no longer meaningful.
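RomanM’s thought experiment is easy to check with a short simulation. The following is a sketch only: the exponential distribution and the sample sizes are illustrative assumptions standing in for “publication counts,” not the paper’s actual data.

```python
import random
import statistics

random.seed(0)

# Two groups drawn from the SAME distribution of hypothetical
# publication counts, differing only in size (500 vs. 50).
big = [random.expovariate(1 / 50) for _ in range(500)]
small = [random.expovariate(1 / 50) for _ in range(50)]

# Full-group medians are roughly equal, as expected for
# identically distributed samples.
m_big = statistics.median(big)
m_small = statistics.median(small)

# Now take the 50 "most published" from each group.
top_big = sorted(big, reverse=True)[:50]
top_small = sorted(small, reverse=True)[:50]  # i.e. all 50 of them

# The larger group's top-50 median is inflated purely by selecting
# the extreme tail of a bigger pool; the smaller group's median is
# unchanged, since its "top 50" is the whole group.
t_big = statistics.median(top_big)
t_small = statistics.median(top_small)
print(m_big, m_small, t_big, t_small)
```

Even though the two simulated groups are statistically identical, the top-50 median of the larger group lands well above its own full-group median while the smaller group’s is untouched, which is exactly the distortion being described.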

The sub-sampling merely removes any watering-down caused by a long tail. You just don’t like it because it widens the distance between the CE experts and the UE experts.

In fact, it is just the opposite. This procedure extracts “the long tails”, which are naturally longer in the larger group. It widens the difference, but I “don’t like” it because it is wrong. If you don’t believe what I say, ask a real statistician.

RomanM: the two groups don’t have the same distribution. They are not drawn randomly from some larger sample.

Wrong premise, wrong conclusion, and that just makes you wrong. It doesn’t matter how right you are in theory (and your theoretical example is of course correct) – your argument just doesn’t apply.

I think your problem is that you wanted the authors to have done what you thought they meant, instead of what they actually meant – and actually did. But what they actually did had a meaning, since it speaks to the relative expertise among the most prolific authors in both groups. Taking two completely random subgroups would not be useful, since they have already analysed the full dataset, nor would it be meaningful, since the two groups are unequal in size because there is simply more of one than the other, not due to any obscure sampling issues.

[edit]
I know this comment is probably going to be rejected out of hand. So be it. But for your own edification please consider seriously the issues raised by these guidelines, and ask the study’s authors to do the same next time they undertake research of this kind.

[Response: No, actually it wouldn’t have been had you not accused the authors, and/or RealClimate, of “blanket denials and cover-ups”.–Jim]

Gavin says, “But perhaps you could point out what it is about an open letter in the New York Times or full page ad in the WSJ and google scholar that is at all confidential?”

For your own edification, again, I just want to highlight this quotation (cf. 11.07 Minimizing Intrusions on Privacy: “(a) To minimize intrusions on privacy, sociologists include in written and oral reports, consultations, and public communications only information germane to the purpose for which the communication is made.”)

This goes for data gathered from the New York Times, observations in a public park, info from arrest records, or any open source. It doesn’t matter that the information was publicly available when you found it; you have an obligation to include only information necessary to the purposes of your research, and to protect or otherwise anonymize all other information (you can share non-germane data with other professionals, of course; that doesn’t mean you’re allowed to just throw it up there for all comers). The identities of the subjects of this study were in no way necessary for its purposes.

This doesn’t mean the study’s authors are unethical cretins. It does mean, however, that they slipped up on a fine but important ethical point. It happens. It’s not the end of the world or proof positive that they’re gussying up a blacklist as social science research. But it’s a mistake that should be owned up to. At a minimum, it’s a mistake that should never be repeated.

[Response: Sorry, but this is completely bogus. People do not sign open letters in order to be anonymous. It is not the same as being overheard in the park. Indeed the whole point about signing such letters (whether they be a set of Nobel Prize winners, or scientists, or doctors or whomever) is to imbue them with authority based on the authority of the signers. It is therefore completely legitimate to assess that implied authority – just as people have done less systematically for any number of petitions. – gavin]

Being public is not the same as being labeled as a climate denier by an IT administrator and a grad student doing what they call ‘research.’ Those… people… had a duty of care to their research subjects which they totally failed to observe.

[Response: Rubbish. If Lindzen or Michaels or whoever sign a public letter, I owe them no duty of care whatsoever in mentioning it wherever I like. The same would be true of them in mentioning the fact I signed the Bali letter. The same is true for the authors of this paper. Had a comment of support been made in a private email or an interview in which confidentiality had been assured, it would be a completely different story. (PS. I can see that you are trying not be overtly insulting, but please try a little harder – such language is not conducive to discussion). – gavin]

Signing a letter or a petition should not earn you what somebody else uses as an insult or a weapon to deny them future employment, tenure or a grant. And that is surely what will happen.

Roberts, what part of “I have to admit that this paper should not have been published in the present form. I haven’t read any other posts on this; the defects are obvious on a quick reading of the paper itself.” is difficult to understand?

[Response: Well, it’s not hard to understand, but it’s not the same as “this study is rubbish” as you implied at first. That I assume is his point. Given that you are pretending to be a champion of ethical scholarship here, I suggest that you be more careful in your quotes. – gavin]

Perhaps climate scientists should compare notes with vaccine experts, as they face a similar dilemma. They know what they’re talking about, and they have the expertise and evidence to back it up, but face an uphill battle against folks who willfully choose to ignore the evidence.

In the case of vaccines, Jenny McCarthy (ex-Playboy model – how’s that for expertise!) and others campaign against vaccines, based on a supposed link between vaccines and autism. The fact that there’s no evidence for that link doesn’t matter. There’s enough fear that vaccination rates and ‘herd immunity’ are now dropping in parts of the US, leading to the possibility of outbreaks of diseases that for decades had been virtually eliminated.

Widespread false beliefs can have serious consequences… but alas expert credibility is no match for public credulity.

Climate scientists need to get someone of Jenny McCarthy’s stature (and figure?) on their side! ;-)

Instead of throwing stones and taking pot shots, might I suggest that you follow the advice of the authors above, namely:

“Furthermore, the vast majority of comments pertain to how the study could have been done differently. To the authors of such comments, we offer two words – do so! That’s the hallmark of science. We look forward to your scientific contributions – if and when they are peer-reviewed and published – and will be open to any such studies. “

The question is: are the “skeptics” up to the challenge? If history is anything to go by, the answer is a resounding no.

Also, it is worth reiterating what Didactylos said above:

“But, if you disagree, then by all means go hunting for more UE candidates that have published highly cited papers. Good luck….”

and

“You haven’t explained how you propose to fix this unfixable problem. How can you make the groups “equal”?”

You are going to have to do some creative statistical manipulation to refute the inconvenient fact of how meagre the skeptics’ publication list is when compared to that of those who actually comprehend the problems at hand, not to mention the relatively low number of people citing the few papers (in a relative sense) the “skeptics” have managed to get into print.

Now the denialists (on a blog I visited) are claiming one has to be an expert — a scientist with a Ph.D. — to accept AGW and talk about it. I say that would just take too long (for everyone to get a Ph.D. in climate science), and surely some would fall asleep in class.

As one who accepted AGW years before the 1st studies reached scientific confidence in 1995, and as one who’d like to avoid the FALSE NEGATIVE (failing to mitigate ACC when it is actually happening) and is not in the least afraid of the FALSE POSITIVE (that AGW is untrue, but we mitigate it anyway…saving $$ in the process and mitigating a plethora of other problems, saving even more $$ and lives — the BEST OF ALL WORLDS scenario), I have this to say:

It doesn’t take a climate scientist with a Ph.D. to understand the basics of AGW and realize the threats, and it doesn’t take a rocket scientist to turn off lights not in use, or the 1000s of other small and big things we need to do. We don’t need .05 significance on this one to have started mitigating.

And, “there is no such thing as a rational, economic man.” That’s what I’ve learned from my years of trying to explain AGW and get people to mitigate.

Thanks very much Gavin and the whole group at RC for posting this response. Thanks also for constructive comments and feedback.

First, I originally built these lists to be an online resource, including a way to find widely published and highly cited authors on climate-related topics, and to link to their academic homepage and photo. (I’ve left photos out of most pages since they were making page loads too slow.) Readers can then visit an author’s site and see what they teach and where; their areas of research; their CV; and who their departmental colleagues are. Having taken a number of courses on climatology and climate change myself, I’ve appreciated getting better acquainted with the many authors whose papers were assigned or cited in our coursework assignments, and putting faces to the names.

I also include an extremely brief, telegraphic few words on what I noted as each author’s top research area or paper topic – another detail I found helpful and thought others might appreciate; but those notes were not used in any way in the PNAS paper. Of course a few words of notes in a box cannot do justice to the breadth of interests or expertise of the many great minds covered in the listings. I’ve seen some online comments suggesting that if those very short notes are oversimplifications (which of course they are) then the citation statistics can be dismissed as well. I think that’s grasping at straws.

If there is an occasional mistaken link such as the one to a different Michael Mann, sorry, but that too has no bearing on the PNAS paper as it did not make use of the homepage links (nor the photos). However, I welcome any corrections to these to enhance the value of my listings as a directory, and I’ve updated that link to the correct Penn State homepage.

To the question of which figures were included in the analysis in the PNAS paper: the paper looked at both the number of papers matching “climate” for each author, and the number of citations to each author’s top four papers, as returned by Google Scholar.

Because Google Scholar tends to keep separate entries for variant citations to the same work, it tends to return higher paper and citation counts than do private, subscription-only databases such as Scopus or Thomson ISI, or the exact count you can find in an author’s CV. But as both M in #57 and Didactylos in #63 correctly point out, any such effect cannot introduce any bias between two groups under comparison – what’s sauce for the goose is sauce for the gander. Google Scholar’s higher numbers do not in any way invalidate Google Scholar results as a sound *comparative* index.
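The “sauce for the goose” point can be made concrete with a toy calculation. The per-author counts and the uniform inflation factor below are invented purely for illustration (real Google Scholar inflation is only approximately uniform across authors):

```python
import statistics

# Hypothetical per-author paper counts for the two groups.
ce_counts = [119, 240, 95, 60, 310]
ue_counts = [60, 2, 15, 5, 8]

# Suppose Google Scholar inflates every author's count by the same
# factor relative to a subscription database such as Scopus.
inflation = 1.5
ce_gs = [c * inflation for c in ce_counts]
ue_gs = [c * inflation for c in ue_counts]

# The comparative statistic -- here, the ratio of group medians --
# is unchanged by a common multiplicative inflation, because the
# factor cancels out of the ratio.
ratio_db = statistics.median(ce_counts) / statistics.median(ue_counts)
ratio_gs = statistics.median(ce_gs) / statistics.median(ue_gs)
print(ratio_db, ratio_gs)  # the two ratios are identical
```

So an absolute count taken from Google Scholar may run high, but as long as the inflation acts roughly evenly on both groups, the between-group comparison survives intact.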

As I mention on my website, I chose Google Scholar over the other paywalled services so that I could make my listings self-documenting: only Google Scholar has free, open access. In each listing I include links to all the searches so readers can repeat them for themselves. As M notes in #57, the count for “author:M-Collins climate” (which he checked using the link I provide) does indeed appear to be inflated by some false positives of other authors with the same first initial and last name (Matthew Collins either does not have or does not use a middle initial in published papers) but I assert that is a rare occurrence: most authors were searched with both first and middle initial, and it’s rare for an author to have a namesake match who also publishes on climate as in this one case.

In gathering the citation counts to each author’s four most cited papers, I did indeed apply “due diligence” to ensure I was finding only works by the specific author in question – if another author of the same initial and last name showed highly-cited papers on a topic unrelated to climate, such as the other widely cited James E Hansen who writes on medicine, I excluded such false positives.

The complaint from Tom Fuller that my website violates anyone’s privacy is unconvincing. All the statements I compiled, both those affirming and those dissenting from mainstream climate science, were public declarations, often widely promoted by their organizers. For instance, one led by the Cato Institute was run as a full-page ad in major newspapers. All were posted on the web, and our PNAS paper points to only one page on my website, the one that links to the original source documents on the web. I haven’t “outed” anyone or exposed anyone’s private communications. The “blacklist” complaint was hyped by Marc Morano, yet he himself published and widely promoted a list of most of the same names when he worked for Senator Inhofe. Our response in the original posting above also speaks to this point, as do Steve Schneider’s interviews on this – do check them out.

One other point: the difference in publication and citation rates that we documented in the PNAS paper is so large that it is pointless to look for a higher degree of precision than was possible using Google Scholar. If there were a narrow margin, I could see asking for a recount using ISI or Scopus. But consider this comparison:
median number of papers on climate for 619 IPCC AR4 WG1 authors: 93
median number of papers on climate for 472 UE signers in PNAS study: 2
I don’t think that specific distinction was mentioned in the paper, but the numbers are there to see on my site.

Mr. Prall, I’m really shocked at your lackadaisical attitude towards the implications faced by those named on your list as climate deniers. In this charged atmosphere it will serve as an impediment to their careers. As some of the letters and petitions you use as reference are fairly innocuous, there are many scientists who do not consider themselves climate deniers who have now been named as such by your paper.

And you confuse your duty to them as a researcher. It is not to protect their privacy; it is to disassociate their identities from your labels. You can say climate deniers publish 2 papers while climate consensus holders publish 93 papers all you want. I have no problem with that.

As Spencer Weart said, “The statistics are certainly interesting, but must be interpreted as “2-3% of people who have published 20 climate papers are willing to publicly attack the IPCC’s conclusions.” That is, to me, a surprisingly high fraction, although I think it can largely be attributed not to the scientific process but to the unfortunate extreme political polarization, which can induce blindness… on both sides.” (Gavin and Hank Roberts, note the use of quotation marks to indicate direct quotes.)

But you harm these people when you make it possible for them to be identified as climate deniers. Even if some of them are.

I actually think what you’ve done borders on being actionable. It unquestionably violates the UK’s and EU’s Data Privacy Acts. It violates every research code of conduct or code of ethics I have ever seen.

I’ve been doing this type of research for 15 years and I have never–never–seen the privacy of research subjects treated in such a cavalier fashion.

[Response: But no private information was sought or used. Therefore there is no privacy to protect. And I would be astonished if any research code anywhere forbade commentary or labeling of public figures and their public positions. No data privacy act outlaws commentary on public actions. Certainly in the US, it is common practice to comb public archives and cross tabulate different fields. And that isn’t even dealing with the fact that the paper does not say that the UE group are ‘deniers’ in the first place. The only quote in the text refers to how “[t]his group, often termed climate change skeptics, contrarians, or deniers, has received large amounts of media attention” – which is certainly true. They have often been called these things, and they have received disproportionate media attention. Frankly, your level of outrage over this paper is similarly disproportionate. – gavin]

Signing a letter or a petition should not earn you what somebody else uses as an insult or a weapon to deny them future employment, tenure or a grant. And that is surely what will happen.

So Fuller is on record as stating that an academic geologist who openly writes the WSJ that she believes the world is only 6,000 years old, should not have this held against her when she applies for an academic position (let’s just say, for the hell of it, researching fossil fuels formation and deposits, the entire profession of which is based on old-earth reality).

Sorry, Fuller, incompetence is an entirely reasonable “weapon” to be used to deny the incompetent future employment, blah blah blah.

I love the fact that Fuller believes that competence should have no bearing on future employment, that apparently he believes that the incompetent should be favored (as long as they hold his libertarian political beliefs).

There is one issue that is tangentially related to this that I was curious about. In some of Judith’s comments, she mentioned how ‘citizen science’ could be extremely beneficial to climate science, much the same way amateur astronomy is very important in pushing the boundaries of that field. I know there is a lot of bad blood between many of the players in this debate; however, I look at something like the Clear Climate Code project and the string of bloggers who are comparing and contrasting temperature reconstructions, and I see a lot of good work that could help progress in this area. Even the work that Watts et al. have done (re: surfacestations) has been impressive in terms of mobilizing volunteers to evaluate the quality of the temperature stations (notwithstanding any other issues you may or may not have with him/his blog). Does anyone from either side care to suggest where efforts like this could be most beneficial? I know many consider the differences between both sides too big for efforts at cooperation to succeed, but I’m really more interested in what could be, not what is feasible in this political climate (no pun intended).

Also, I’m not trying to set up ‘citizen scientists’ as the experts. Even McIntyre says that were he in government it would only be sensible to listen to what the experts are saying, which is typically advice given in the form of the IPCC or other scientific bodies.

[Response: Sure. Look at Ron Broberg’s efforts with the GSOD data. In scanning large amounts of data where eyeballs are more important than computers, amateurs can help tremendously – the US phenology project is one thing along those lines. Finding errors in databases is also helped by having as many users looking for them as possible. Mining the climate model databases is also something people might want to do more of – there is much in there that has not been made clear because of software constraints rather than a lack of ideas. Even paleo-climate reconstructions could benefit if people took on the data with a constructive attitude. But that last point is key. People have to want to help understand the issue. It is not enough simply to want to score points – compare and contrast Joe D’Aleo with Zeke Hausfather for instance. The former spends his time trying to find misleading soundbites and making up misleading graphs, while the latter actually built from scratch the tools that enabled him to show that D’Aleo was very wrong. – gavin]

Gavin, I dearly want to read the opinions of the quiet IPCC guys and gals; we should tap this resource. I also really think we need to rebut every contrarian article and every viral, publicity-driven, grossly erroneous opinion in every newspaper or blog with a significant readership. It’s the least we can do; being antibodies against the cancer that propagandizes false climate notions is a duty the experts should perform. Remaining quiet and simply teaching correct science is not an option. I am afraid more experts are needed to speak up in forums such as RC, but more systematically. I read many articles in the mainstream press which get through without a serious response; eventually they become stubborn falsehoods difficult to redress.
If every negative contribution aimed at a large audience were corrected nearly immediately, propaganda efforts would be more discouraged. Many reporters publish contrarian views alone; a good quick response redresses the flawed opinion and replaces it with the correct one. In other words, a propaganda piece would then serve the spreading of correct science…

The careers of reputable scientists who were grouped (following their own public declarations) in the UE group have not suffered, nor will they. I challenge Fuller to demonstrate and quantify how the careers of prominent AGW/ACC “skeptics” such as Lindzen, Christy, Spencer and Pielke Snr have been impeded. When they signed those petitions they did so knowing full well that it was a public disclosure of their position on AGW/ACC. As Prall noted, the CATO petition appeared in a full page ad in a national newspaper, for goodness’ sake.

If I were Tom, I would save my indignation and rage for the real blacklist created by Inhofe et al. (aka Inhofe’s 17). Now that IS over the line, inappropriate and potentially actionable. It seems that his rants are motivated not by conscience or ethics, but rather by a desire to feed the ‘skeptics’ and add to their paranoia.

Tom’s comments also do not change the painful fact that ‘skeptics’ have published far, far fewer papers and have been cited much, much less than their colleagues who are convinced by the data and physics that AGW/ACC presents a legitimate concern. That is why the “skeptics” are upset about this paper: their inexperience and lack of credibility and intellectual weight in the field of climate science have been laid bare.

All this bluster and ridiculous paranoia about “blacklists” is just a smoke screen, and Fuller is a willful participant in it. Why am I not surprised….

> note the use of quotation marks to indicate direct quotes.
Noted.
See how this works?
You can’t write “rubbish” when you cite and quote the source.
You could still write rubbish, of course.
But it’d be your word, not Weart’s.