
Most scientists agree that current climate change is mainly caused by human activity. That has been repeatedly demonstrated on the basis of surveys of scientific opinion as well as surveys of the scientific literature. In an article published today in the journal Environmental Research Letters (ERL) we provide a review of these different studies, which all arrive at a very similar conclusion using different methods. This underscores the robustness of the scientific consensus on climate change.

This meta-study also shows that the level of agreement that the current warming is caused by human activity is greatest among researchers with the most expertise and/or the most publications in climate science. That explains why literature surveys generally find higher levels of consensus than opinion surveys. After all, experienced scientists who have published a lot about climate change have, generally speaking, a good understanding of the anthropogenic causes of global warming, and they often have more peer-reviewed publications than their contrarian colleagues.

Figure: Level of consensus on human-induced climate change versus expertise in climate science. Black circles are data based on studies of the past 10 years. Green line is a fit through the data.

The video below provides a great overview of the context and conclusions of this study:

Surveys show that among the broad group of scientists who work on the topic of climate change the level of consensus is roughly between 83 and 97% (e.g. Doran, Anderegg, Verheggen, Rosenberg, Carlton, Bray, Stenhouse, Pew, Lichter, Vision Prize). If you zoom in on the subset of most actively publishing climate scientists you find a consensus of 97% (Doran, Anderegg). Analyses of the literature also indicate a level of consensus of 97% (Cook) or even 100% (Oreskes). The strength of literature surveys lies in the fact that they sample the prime locus of scientific evidence and thus they provide the most direct measure of the consilience of evidence. On the other hand, opinion surveys can achieve much more specificity about what exactly is agreed upon. The latter aspect (what exactly is agreed upon, and how does that compare to the IPCC report?) is something we investigated in detail in our ES&T article based on the PBL survey.

As evidenced by the many (unfounded) criticisms of consensus studies, this is still a hot topic in the public debate, despite the fact that study after study has confirmed that there is broad agreement among scientists about the big picture: our planet is getting warmer and that is (largely) due to human activity, primarily the burning of fossil fuels. A substantial fraction of the general public, however, is still confused even about the big picture. In politics, schools, and the media, climate change is often not communicated in accordance with the current scientific understanding, even though the situation here in the Netherlands is not as extreme as e.g. in the US.

Although science can never provide absolute certainty, it is the best method we have to understand complex systems and risks, such as climate change. If you value science, it is wise not to brush aside broadly accepted scientific insights too easily, unless you have very good arguments for doing so (“extraordinary claims require extraordinary evidence”). I think it is important for proper democratic decision making that the public is well informed about what is scientifically known about important issues such as climate change.

John Cook warned me: if you attempt to quantify the level of scientific consensus on climate change, you will be fiercely criticized. And so it happened. Most of the counterarguments, however, don’t stand up to scrutiny.

Richard Tol comes to very different conclusions regarding the level of scientific consensus than the authors of the respective articles themselves (Oreskes, 2004; Anderegg et al., 2010; Doran and Kendall Zimmerman, 2009; Stenhouse et al., 2013; Verheggen et al., 2014). On the one hand, he is using what he calls “complete sample” results, which in many cases are close to meaningless as an estimate of the actual level of agreement in the relevant scientific community (this applies most strongly to Oreskes and Anderegg et al). On the other hand, he is using “subsample” results, which in some cases are even more meaningless (the most egregious example of which is the subsample of outspoken contrarians in Verheggen et al).

The type of reanalysis Tol has done, if applied to e.g. evolution, would look somewhat like this:

Of all evolutionary biology papers in the sample, 75% explicitly or implicitly accept the consensus view on evolution; 25% take no position on evolution; none reject it. Tol would conclude from this that the consensus on evolution is 75%. This number could easily be brought down to 0.5% by sampling all biology papers and counting those that take an affirmative position on evolution as a fraction of the whole. This is analogous to how Tol misrepresented Oreskes (2004).
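The dilution effect in this analogy is easy to make concrete. The counts below are hypothetical, chosen only to reproduce the 75% and 0.5% figures used in the analogy above:

```python
# Hypothetical sample: 1000 evolutionary-biology papers, of which 750
# explicitly or implicitly accept the consensus view, 250 take no
# position, and 0 reject it.
accept, no_position, reject = 750, 250, 0
papers = accept + no_position + reject

# Consensus among papers that actually take a position: 100%
position_takers = accept + reject
consensus_among_positions = accept / position_takers      # 1.0

# Counting no-position papers in the denominator drags this down to 75%
consensus_incl_no_position = accept / papers              # 0.75

# Widening the denominator to a (hypothetical) 150,000 biology papers of
# all kinds dilutes the ratio to half a percent
all_biology_papers = 150_000
diluted = accept / all_biology_papers                     # 0.005
```

The consensus itself never changes in this exercise; only the denominator does, which is exactly the objection raised against Tol's reanalysis.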

Let’s ask biologists what they think of evolution, but to get an idea of dissenting views let’s also ask some prominent creationists, e.g. from the Discovery Institute. Never mind that half of them aren’t actually biologists. Surprise, surprise, the level of agreement with evolution in this latter group is very low (the real surprise is that it’s not zero). Now let’s pretend that this is somehow representative of the scientific consensus on evolution, alongside subsamples of actual evolutionary biologists. That would be analogous to how Tol misrepresented the “unconvinced” subsample of Verheggen et al (2014).

Tol selectively quotes results from our survey. We provided results for different subsamples, based on different questions, and based on different ways of calculating the level of agreement, in the Supporting Information with our article in ES&T. Because we cast a very wide net with our survey, we argued in our paper that subgroups based on a proxy for expertise (the number of climate related peer reviewed publications) provide the best estimate of the level of scientific consensus. Tol on the other hand presents all subsamples as representative of the scientific consensus, including those respondents who were tagged as “unconvinced”. This group consists to a large extent of signatories of public statements disapproving of mainstream climate science, many of whom are not publishing scientists. For example, some Heartland Institute staffers were also included. It is actually surprising that the level of consensus in this group is larger than 0%. To claim, as Richard Tol does, that the outcome for this subsample is somehow representative of the scientific consensus is entirely nonsensical.

Another issue is that Richard Tol bases the numbers he uses on just one of the two survey questions about the causes of recent climate change, i.e. a form of cherry picking. Moreover, we quantified the consensus as a fraction of those who actually answered the question by providing an estimate of the human greenhouse gas contribution. Tol on the other hand quantifies the consensus as a fraction of all those who were asked the question, including those who didn’t provide such an estimate. We provided a detailed argument for our interpretation in both the ES&T paper and in a recent blogpost.

Tol’s line of reasoning here is similar to his misrepresentation of Oreskes’ results, by taking the number of acceptance papers not just as a fraction of papers that take position, but rather as a fraction of all papers, including those that take no position on current anthropogenic climate change. Obviously, the latter should be excluded from the ratio, unless one is interested in producing an artificially low, but meaningless number.

Some quotes from the other scientists:

Oreskes:

Obviously he is taking the 75% number below and misusing it. The point, which the original article made clear, is that we found no scientific dissent in the published literature.

Anderegg:

This is by no means a correct or valid interpretation of our results.

Neil Stenhouse:

Tol’s description omits information in a way that seems designed to suggest—inaccurately—that the consensus among relevant experts is low.

Doran:

To pull out a few of the less expert groups and give them the same weight as our most expert group is a completely irresponsible use of our data.

Update (5 Sep 2015): US Presidential candidate Rick Santorum used an erroneous interpretation of our survey results on the Bill Maher show. My detailed response to Santorum’s claim is in a newer blogpost. Politifact and Factcheck also chimed in and found Santorum’s claims to be false. The blogpost below goes into detail about how different interpretations can lead to different conclusions and how some interpretations are better supported than others.

To quantify the level of agreement with a certain position, it makes most sense to look at the number of people as a fraction of those who answered the question. We asked respondents two questions about attribution of global warming (Q1 asking for a quantitative estimate and Q3 asking for a qualitative estimate; the complete set of survey questions is available here). However, as we wrote in the ES&T paper:

Undetermined responses (unknown, I do not know, other) were much more prevalent for Q1 (22%) than for Q3 (4%); presumably because the quantitative question (Q1) was considered more difficult to answer. This explanation was confirmed by the open comments under Q1 given by those with an undetermined answer: 100 out of 129 comments (78%) mentioned that this was a difficult question.

There are two ways of expressing the level of consensus, based on these data: as a fraction of the total number of respondents (including undetermined responses), or as a fraction of the number of respondents who gave a quantitative or qualitative judgment (excluding undetermined answers). The former estimate cannot exceed 78% based on Q1, since 22% of respondents gave an undetermined answer. A ratio expressed this way gives the appearance of a lower level of agreement. However, this is a consequence of the question being difficult to answer, due to the level of precision in the answer options, rather than it being a sign of less agreement.
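As a minimal numerical sketch of these two ways of expressing the consensus (the counts are hypothetical, chosen only to match the 22% undetermined share on Q1 mentioned above):

```python
# Hypothetical Q1 responses, consistent with a 22% undetermined share.
n_total = 1000          # respondents who were asked the question
n_undetermined = 220    # "unknown", "I do not know", "other" (22%)
n_agree = 660           # hypothetical count agreeing on GHG dominance

# As a fraction of all respondents: capped at 78% by construction,
# because 22% gave an undetermined answer.
consensus_incl = n_agree / n_total                     # 0.66

# As a fraction of those who expressed a judgment: the measure we argue
# best reflects the actual level of agreement.
consensus_excl = n_agree / (n_total - n_undetermined)  # ~0.846
```

The gap between the two numbers reflects the difficulty of the question, not a lower level of agreement, which is the point made in the paragraph above.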

Moreover, the results in terms of level of agreement based on Q1 and Q3 are mutually consistent with each other if the undetermined responses are omitted in calculating the ratio; they differ markedly when undetermined responses are included. In the supporting information we provided a table (reproduced below) with results for the level of agreement calculated either as a fraction of the total (i.e., including the undetermined answers) or as a fraction of those who expressed an opinion (i.e., excluding the undetermined answers), specified for different subgroups.

For the reasons outlined above we consider the results excluding the undetermined responses the most meaningful estimate of the actual level of agreement among our respondents. Indeed, in our abstract we wrote:

90% of respondents with more than 10 climate-related peer-reviewed publications (about half of all respondents), explicitly agreed with anthropogenic greenhouse gases (GHGs) being the dominant driver of recent global warming.

Fabius Maximus goes further down still, claiming that the level of agreement with IPCC AR5 based on our survey results is only 43-47%. This result is based on the number of respondents who answered Q1b, asking for the confidence level associated with warming being predominantly greenhouse gas-driven, as a fraction of the total number of respondents who filled out Q1a (whether with a quantitative or an undetermined answer). As Tom Curtis notes, Fab Max erroneously compared our statement to the “extremely likely” statement in AR5, whereas in terms of greenhouse gases AR5 in Chapter 10 considered it “very likely” that they are responsible for more than half the warming. Moreover, our survey was undertaken in 2012, long before AR5 was available, so if respondents had IPCC in mind as a reference, it would have been AR4. If anything, the survey respondents were by and large more confident than IPCC that warming had been predominantly greenhouse gas driven, with over half assigning a higher likelihood than IPCC did in both AR4 and AR5.

Let me expand on the point of including or excluding the undetermined answers with a thought experiment. Imagine that we had asked whether respondents agreed with the AR4 statement on attribution, yes or no. I am confident that the resulting fraction of yes-responses would (far) exceed 66%. We chose instead to ask a more detailed question, and add other answer options for those who felt unwilling or unable to provide a quantitative answer. On the other hand, imagine if we had respondents choose whether the greenhouse gas contribution was -200, -199, …-2, -1, 0, 1, 2, … 99, 100, 101, …200% of the observed warming. The question would have been very difficult to answer to that level of precision. Perhaps only a handful would have ventured a guess and the vast majority would have picked one of the undetermined answer options (“I don’t know”, “unknown”, “other”). Should we in that case have concluded that the level of consensus is only a meagre few percentage points? I think not, since the result would be a direct consequence of the answer options being perceived as too difficult to meaningfully choose from.

Calculating the level of agreement in the way we suggest, i.e. excluding undetermined responses, provides a more robust measure as it’s relatively independent of the perceived difficulty of having to choose between specific answer options. And, as is omitted by the various critics, it is consistent with the responses to the qualitative attribution question, which also provides a clear indication of a strong consensus. If you were to insist on including undetermined responses in calculating the level of agreement, then it’s best to only use results from Q3. Tom Fuller’s 66% becomes 83% in that case (the level of consensus for all respondents), showing the lack of robustness in this approach when applied to Q1.
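A back-of-the-envelope check of this consistency argument, using the percentages quoted in this post (66% and 83% agreement including undetermined answers, with undetermined shares of 22% for Q1 and 4% for Q3):

```python
# Agreement including undetermined answers, and undetermined shares,
# as quoted in the post for Q1 (quantitative) and Q3 (qualitative).
q1_incl, q1_undetermined = 0.66, 0.22
q3_incl, q3_undetermined = 0.83, 0.04

# Re-express each as a fraction of those who expressed a judgment.
q1_excl = q1_incl / (1 - q1_undetermined)  # ~0.85
q3_excl = q3_incl / (1 - q3_undetermined)  # ~0.86
```

Once undetermined answers are excluded, the two questions give nearly the same level of agreement (roughly 85–86%), whereas including them produces a 17-point spread between the two, illustrating the lack of robustness of that approach.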

Some other issues that came up in recent discussions:

We cast a very wide net of respondents, including scientists who study various parts of climate change including impacts and mitigation.

We made special efforts to include people with skeptical points of view, not all of whom are publishing climate scientists. As such, we probably slightly underestimated the strength of the scientific consensus.

Our results are in good agreement with other opinion surveys, including e.g. Doran and Kendall-Zimmermann. Literature surveys such as that by Cook et al generally find higher levels of consensus, since (as we also found; see the figure just above) more published scientists are generally more convinced of human causation of global warming.

A survey among more than 1800 climate scientists confirms that there is widespread agreement that global warming is predominantly caused by human greenhouse gases.

This consensus strengthens with increased expertise, as defined by the number of self-reported articles in the peer-reviewed literature.

The main attribution statement in IPCC AR4 may lead to an underestimate of the greenhouse gas contribution to warming, because it implicitly includes the lesser known masking effect of cooling aerosols.

Self-reported media exposure is higher for those who are skeptical of a significant human influence on climate.

In 2012, while temporarily based at the Netherlands Environmental Assessment Agency (PBL), my colleagues and I conducted a detailed survey about climate science. More than 1800 international scientists studying various aspects of climate change, including e.g. climate physics, climate impacts and mitigation, responded to the questionnaire. The main results of the survey have now been published in Environmental Science and Technology (doi: 10.1021/es501998e).

Level of consensus regarding attribution

The answers to the survey showed a wide variety of opinions, but it was clear that a large majority of climate scientists agree that anthropogenic greenhouse gases are the dominant cause of global warming. Consistent with other research, we found that the consensus is strongest for scientists with more relevant expertise and for scientists with more peer-reviewed publications. 90% of respondents with more than 10 climate-related peer-reviewed publications (about half of all respondents) agreed that anthropogenic greenhouse gases (GHG) are the dominant driver of recent global warming. This is based on two different questions, one of which was phrased in terms similar to the quintessential attribution statement in IPCC AR4 (stating that more than half of the observed warming since the 1950s is very likely caused by GHG).

Figure 1. The more publications the respondents report to have written, the more important they consider the contribution of greenhouse gases to global warming. Responses are shown as a percentage of the number of respondents (N) in each subgroup, segregated according to self-reported number of peer-reviewed publications.

Literature analyses (e.g. Cook et al., 2013; Oreskes, 2004) generally find a stronger consensus than opinion surveys such as ours. This is related to the stronger consensus among highly published – and arguably the most expert – climate scientists. The strength of literature surveys lies in the fact that they sample the prime locus of scientific evidence and thus they provide the most direct measure of the consilience of evidence. On the other hand, opinion surveys such as ours can achieve much more specificity about what exactly is agreed upon and where the disagreement lies. As such, these two methods for quantifying scientific consensus are complementary. Our questions possibly set a higher bar for what’s considered the consensus position than some other studies. Furthermore, contrarian viewpoints were likely overrepresented in our study compared with others.

No matter how you slice it, scientists overwhelmingly agree that recent global warming is to a great extent human caused.

IPCC stands for Intergovernmental Panel on Climate Change. It is a scientific body set up by the World Meteorological Organization (WMO) and by the United Nations Environment Programme (UNEP) in 1988. The IPCC gives a summary of the latest state of scientific knowledge regarding climate change (working group 1), its impacts and adaptation (working group 2) and its mitigation (prevention; working group 3) typically every five to six years. These assessment reports are written by hundreds and reviewed by thousands of scientists. I am most familiar with the science of climate change, so this discussion pertains mainly to working group 1.

IPCC Assessment Reports

The reports basically give an overview of the recent scientific literature (i.e. published in peer-reviewed journals, not articles in your local newspaper). The IPCC does not do research itself, although the authors are all practicing scientists. All scientific literature covering the topic is assessed: so-called “skeptical” journal articles just as well as those agreeing with the “mainstream”. It just so happens that there are far more journal articles agreeing with the mainstream than disagreeing. And disagreement is possible in many different shades of grey and in different directions: some claim that climate change is less problematic than the mainstream scientific opinion holds, while others claim that it is more problematic.

IPCC Summary for Policymakers

The Summary for Policymakers (SPM) is the document that will be most widely read (mainly because of its more manageable volume) and is regarded as the most influential publicly and politically (scientifically much less so). While the main assessment reports are written solely by scientists, the Summary for Policymakers requires word by word approval from all government delegations. It must at the same time be consistent with the rest of the report. The “skeptical” governments come to the plenary approval meeting determined to ensure that none of the statements in the SPM are overly confident or alarmist.

This procedure, plus the fact that scientists are professionally focused on uncertainty and often wary of overstatements, causes the reports, and especially the SPM, to be on the conservative side in their assessment of the risks of climate change. At the same time, because the SPM is meticulously checked against the underlying report, it remains scientifically sound.

Criticism on the IPCC

The IPCC has been criticized (mainly by “skeptics”) for being a political (rather than a scientific) body, and people have tried to discredit its reports as being biased. However, the IPCC office consists of a few handfuls of people doing mainly secretarial work, while the IPCC process and the actual assessments are undertaken by hundreds of scientists from all over the world. These are scientists by profession, voluntarily participating in the IPCC process. Thus, although the IPCC is a UN body, the actual work of assessing the scientific evidence is done by scientists active in the field. Therefore I do not agree with this accusation of bias.

Furthermore, since its mandate is to assess the scientific literature, it weighs the evidence according to this literature. Minority theories which are considered highly uncertain and/or highly disputed are mentioned, but do not get the same weight as mainstream theories which have strong and for the most part undisputed evidence behind them. That is the whole point of such an assessment in my opinion: putting things in perspective, namely the perspective of the recent scientific literature.

Example

If you favor a certain theory (e.g. that the sun has caused the majority of the recent warming) and you find that it is underrepresented in the IPCC assessment report, you may be tempted to automatically conclude that the report is biased. Such “minority theories” are mentioned, but also put in the context of the heavy criticism that they have endured in the scientific arena (e.g. the fact that solar output has not increased since the 1950s, so it was not likely responsible for recent warming). A different conclusion would be that your favorite theory is not supported by the scientific evidence, and, as a result, not by the majority of scientists either. (Not just by the scientists who wrote the particular chapter, but by the majority of authors who have published on the topic in recent years.) You can then conclude that all those scientists are wrong (and discard the evidence to the contrary), or you can conclude that they are likely right and change your own opinion about the subject. Needless to say, most people tend to react in the former rather than the latter manner. It probably feels great thinking that you’re smarter than all those scientists. If you genuinely believe that you are right, I suggest you write up your argumentation, submit it to Science or Nature, and wait for a Nobel Prize.

Not all scientific theories have equal merit, and it is one of the tasks of these assessment reports to clarify the relative merit of different theories. According to the evidence available (as published in the scientific literature), what is most likely the case? And what is less likely, but not impossible? Both need mentioning, but they should not be treated on an equal footing. As an aside, this is a critical mistake that the popular media often make: they give the appearance of equal merit to different theories, whereas in the scientific discussion they often have vastly different strengths of evidence behind them. This so-called “balanced reporting” gives a false picture of the scientific thinking about the subject.