Habitual skeptic


The “97% consensus” claim has been made frequently by proponents of anthropogenic global warming (AGW), most relying on the 2013 study by Cook et al., “Quantifying the consensus on anthropogenic global warming in the scientific literature”. This is an unscientific study that relied on online volunteers to rate a tiny sample of abstracts. It is essentially useless for understanding the thinking of scientists on this issue. Below I detail the glaring problems with this study.

To start, the study’s authors admit that its purpose is to change public perception of the global warming issue in a pro-AGW direction. It is not a disinterested search for the truth, as a scientific study should be. It is blatantly a political document, whose authors set out to reach the conclusion they reached.

Only 33% of abstracts endorsed anthropogenic global warming (AGW). The 97% figure is derived only from the abstracts that volunteers interpreted as taking an explicit position on the issue, which are just a fraction of the total. But scientific papers will not necessarily take an explicit position on what is broadly perceived to be a politically sensitive topic. There is also no objective measure of what constitutes an explicit position, since this determination was made subjectively by each individual volunteer. Such a high “No Position” percentage, 66%, indicates that there is a high degree of uncertainty on global warming. This is even hinted at in section 3.2 regarding self-ratings: “Among self-rated papers not expressing a position on AGW in the abstract, 53.8% were self-rated as endorsing the consensus.”
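To make the arithmetic concrete, here is a minimal sketch of how the headline number is produced. The percentages below are approximations taken as assumptions for illustration (roughly 32.6% endorsing, 0.7% rejecting, 0.3% uncertain, the remainder taking no position); they are not exact counts from the study.

```python
# Approximate abstract-level percentages (assumed for illustration).
total = 11944                       # abstracts rated
endorse = round(0.326 * total)      # ~32.6% endorse AGW
reject = round(0.007 * total)       # ~0.7% reject AGW
uncertain = round(0.003 * total)    # ~0.3% uncertain about the cause
no_position = total - endorse - reject - uncertain  # ~66% take no position

# The 97% figure divides only within the minority that takes a position;
# the two-thirds of abstracts with no position are simply dropped.
consensus = endorse / (endorse + reject + uncertain)

print(f"Endorsing, as a share of ALL abstracts: {endorse / total:.1%}")
print(f"Headline figure among position-takers:  {consensus:.1%}")
```

Run as written, the first share lands near 33% and the second near 97%, which is exactly the gap this section is objecting to.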

The data were compiled using online volunteers on a website, not university researchers in a lab setting. There is no verification of the credentials of these volunteers, or what methods were used to control for bias or incorrect data.

The volunteers rating the papers all believe in AGW, so their ratings are completely biased. The study authors themselves admit in section 4.1 on uncertainties, that “given that the raters themselves endorsed the scientific consensus on AGW, they may have been more likely to classify papers as sharing that endorsement.”

The volunteers had a 33% disagreement rate on whether or not an abstract endorses AGW. This means there was no objective measure for the process, and that it was subject to individuals’ biases and preferences. The study states in section 4.1 that “In some cases, ambiguous language made it difficult to ascertain the intended meaning of the authors”. In other words, volunteers had to interpret the meaning subjectively.

Only articles were considered, “excluding books, discussions, proceedings papers and other document types”. This is not a comprehensive survey of the corpus on climate change.

Fewer than 10% of articles on climate were considered. Only articles containing the keywords “global warming” or “global climate change” were included. This excludes papers that use other phrasings instead, for example “anthropogenic climate change”, “man-made climate change”, or “increasing global temperatures”. It also excludes authors who deliberately avoid these terms because they are perceived as politically sensitive. The study itself, in section 4.1 on uncertainties, states: “Nevertheless, 11 944 papers is only a fraction of the climate literature. A Web of Science search for ‘climate change’ over the same period yields 43 548 papers, while a search for ‘climate’ yields 128 440 papers.”
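The search counts quoted above can be put side by side directly; a small sketch using only those quoted numbers:

```python
# Web of Science result counts quoted in section 4.1 of the study.
sampled = 11944          # papers matching the study's two search keywords
climate_change = 43548   # papers matching 'climate change' over the same period
climate = 128440         # papers matching 'climate' over the same period

print(f"Sampled share of 'climate change' literature: {sampled / climate_change:.1%}")
print(f"Sampled share of 'climate' literature:        {sampled / climate:.1%}")
```

The second ratio, a little over 9%, is where the “fewer than 10%” figure at the start of this section comes from.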

The number of authors was not counted, only the number of abstracts. There can be many abstracts per author, as is implied in the study itself.

Only abstracts were looked at, not the full papers themselves.

No attempt is made to control for level of expertise, and no data are presented regarding it. It is logical to assume that there are more authors with low expertise and fewer with high expertise. A higher number of abstracts or authors does not mean a higher level of expertise; in fact, it could mean the opposite: a dilution of expertise.

The self-rating portion of the study, where authors rated their own papers in response to an email survey, had only a 14% response rate. This is a self-selecting group, not a scientific random sample. Authors most motivated on this issue would be the most likely to respond and hence skew the results.

In conclusion, the study fudges the numbers to reach a predetermined conclusion, driven by the admitted biases of the authors. It looks at a tiny percentage of the available literature on the subject and claims to draw broad conclusions about the scientific community at large. It is a self-described tool of political propaganda, not at all a scientific study.