Monday, May 27, 2013

Richard Tol is quite busy trying out for his new career as a web based comedian over at twitter. If there is one thing that Cook et al. are to be thanked for it is the meltdowns that they have set off at Lucia's and Tol's. Eli has been having a bit of fun here and there, but he needs to share the glory with Tom Curtis at Brisbane's Waters and our friend Willard in the comments at Rabett Run, Dana and many others.

To be honest, and Eli is an honest Rabett, Tol has dug this one so deep that he has called out the Chewbacca team.

I have one final thing I want you to consider. Ladies and gentlemen, this is Richard Tol. Richard Tol is a famous scientist economist who was the scholar most-cited by the Stern Review of the Economics of Climate Change. Richard Tol comes from the Netherlands. But Richard Tol lives on the planet England. Now think about it; that does not make sense! Why would a Tol, an 8-foot-tall Tol with very bad hair living under a bridge somewhere, want to publish 122 qualified papers while the bunch of 2-foot-tall Ewoks led by John Cook can only find 10 of them? Even though that is what was found in Web of Science using the search strings written about in the paper that the 2-foot-tall Ewoks led by John Cook published. That does not make sense! But more important, you have to ask yourself: What does this have to do with this case? Nothing. Ladies and gentlemen, it has nothing to do with this case! It does not make sense!

And Richard Tol thinks some of the abstracts from those miserly 10 papers out of 122 from the scholar most-cited by the Stern Review of the Economics of Climate Change were rated (only slightly) differently from what Richard Tol, the scholar most-cited by the Stern Review of the Economics of Climate Change, would have given had he replied to the invitation from those 2-foot-tall Ewoks led by John Cook to provide his own ratings.

Especially when they excluded all those scientifical journals from the underbelly of scientific publishing like the Journal of Scientific Exploration and the Journal of American Physicians and Surgeons, the ones that come in plain brown wrappers and are stored under the shelf at the Sky Dragon Megastore. Journals that are open access because you have to give them away, and even then no one takes them and certainly no one except Willard Tony reads them. Look at me. I'm the scholar most-cited by the Stern Review of the Economics of Climate Change, and I'm talkin' about Richard Tol! Does that make sense? Ladies and gentlemen, I am not making any sense! None of this makes sense! And so you have to remember, when you're in that jury room deliberating and conjugating the future of poptech, does it make sense? No! Ladies and gentlemen of this supposed jury, it does not make sense! If Richard Tol lives on England, you must acquit! The defense rests.

Anyhoo, this one has morphed from Tol's original cry of

Why did you only rate 10 of my 122 qualified papers?

Short Answer: Because the Web of Science Topic search using the terms "global warming" and "global climate change" for articles (not books, not proceedings) published between 1991 and May 2012 only found 10.

It would appear that to be comprehensive, an index of the scholarly
journal literature might be expected to cover all journals published. It
has been demonstrated, however, that a relatively small number of
journals publish the majority of significant scholarly results. This
principle is often referred to as Bradford's Law.

In the mid-1930's, British mathematician and librarian S.C. Bradford
realized that the core literature for any given scientific discipline
was composed of fewer than 1,000 journals. . . .

Each year, the Thomson Reuters editorial staff reviews over 2,000 journal titles for inclusion in Web of Science.
Around 10-12% of the journals evaluated are accepted for coverage.
Moreover, existing journal coverage in Thomson Reuters products is
constantly under review. Journals now covered are monitored to ensure
that they are maintaining high standards and a clear relevance to the
products in which they are covered.

If you go through the SkS database for the ABSTRACTS that were rated you find that, for example, Judith Curry only has five and Mike Mann two. This, however, raises an interesting point as to whether the ratings should be normalized so that no one author has many more ratings than another.
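For illustration only, here is one hypothetical way such a normalization could work. This is not anything Cook et al. actually did; the function, names and toy data are all made up:

```python
from collections import Counter

# Hypothetical per-author weighting (not part of Cook et al.'s method):
# each rated abstract counts 1/(number of rated abstracts by its author),
# so a prolific author cannot dominate the endorsement tally.
def weighted_endorsement_share(ratings):
    """ratings: list of (author, endorses) pairs for rated abstracts."""
    counts = Counter(author for author, _ in ratings)
    total = sum(1 / counts[author] for author, _ in ratings)
    endorsing = sum(1 / counts[author] for author, endorses in ratings if endorses)
    return endorsing / total

# Toy data: one author with three rated abstracts, two authors with one each.
ratings = [("Curry", True), ("Curry", True), ("Curry", True),
           ("Mann", True), ("Doe", False)]
# Raw share is 4/5 = 0.80; the weighted share is (1 + 1) / 3, about 0.67.
```

Whether such reweighting is appropriate is exactly the open question raised above; the sketch only shows that it is mechanically easy.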

Then there is Richard's complaint that the ABSTRACT ratings were wrong, to which there are two answers: a) respond to the survey that Cook et al sent to the authors and b) read your own damn abstracts. Willard not Tony and Tom have more to say about that.

164 comments:

Do you mean to say that only two of Michael Mann's approx 150 relevant articles were rated, and both those rated as neutral? Unconscionable bias /sarc

Of course I am approaching this entirely the wrong way, I'm sure. As Poptech and Tol clearly demonstrate, if you want to show bias in the study, you should only sample deniers and their fellow travelers. A fair sample might not yield the conclusion you desire.

While commenting, Tol claims:

> "Climate change is a problem where complexity meets poor data meets ethical choices. You can't be clear and honest at same time."

That is not at all the same as Schneider's claim that:

> "Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both."

It is not the same because Schneider holds out the option of, and the hope that we will be both. Not only that but he went into considerable detail as to how to be both, and practiced the principles he enunciated in his interviews, talks and articles.

It is also not the same because being clear is not the same as being effective. In fact, clarity is required for both effectiveness and honesty. If you are unclear, that means only that your communication is ambiguous, i.e., that those to whom you communicate are either left in doubt as to what you intended or, worse, pick among several possible meanings, only one of which can be correct. Being deliberately unclear is just a form of lying, while lying to yourself about what you are doing.

So I, like Schneider, think we should be both effective and honest in our communications. Sacrifice of the latter is never justified to enhance the former. But what is more, I believe that we cannot choose between being honest and clear, for only by being clear can we be honest.

To me, Tol lost all his credibility by joining the advisory board of, and stalling some students at, the Dutch organization De Groene Rekenkamer, a bunch of angry libertarians denying every ecological issue they can think of.

Yes, I'm fully aware it's guilt by association, but sorry, that organization is not about science. Not even close.

As an example, here's their list of credible climate science blogs: http://www.groenerekenkamer.nl/blogosfeer/

I found that whole thing very odd. Why would such a high-profile researcher care that much? Made himself look like a bit of a twit.

Anyway, I did a quick WoK search for papers on the topics "global warming" and "global climate change" authored by Tol R* and indeed you find 10 between 1991 and 2011. His fault, then, for not making the topic clearer. Admittedly, if you search under any topic, you get 132 papers published by Tol R*. Was he expecting them to rate his recent paper on "The impact of tax reform on new car purchases in Ireland"?

"No. Web of Science excludes many peer-reviewed journals. I gather you did not study your data source."

Um. Did Tol really think that Cook et al. thought they had managed to snag EVERY SINGLE CLIMATE CHANGE PAPER in the studied period??? As far as I can tell, he has presented no argument that the subset of papers meeting the stated criteria would be biased, and he certainly hasn't shown that it would be humanly possible to complete this task with the hundreds of thousands of abstracts that would meet expanded criteria, so...

(mind you, I think calling him a "denier" wasn't really a useful response either... ignoring him, or politely showing how his statements were without merit would be the better approaches in my opinion)

If there's any bias in Web of Science it is positive, toward selecting for representation of high quality science.

I wonder if Tol would sample the opinions of taxi drivers if conducting an equivalent survey in an economic context? That's not far from what he was advocating for his version of Cook et al. Perhaps indulgence of dog astrology is a valid option in Tol's line of work (2008 anyone?), but in the hard sciences there are standards...

Sadly for Richard he's opened his twittering mouth too much, and all doubt has been removed...

And please, can we award Eli, Willard and Tom Nobel pins made from their internet gold for their efforts? I'm not sure that Richard realises how much they've shaved the fleece from his hide.

So, Richard Tol is credited in this post as discovering that a search of SCOPUS using the same terms (or the "correct search" according to the post) as that used by Cook et al. delivers 19,147 results, 7,473 more than found by Cook et al. using ISI WoK. Well, if I go to SCOPUS and do a search using "global warming" or "global climate change", restrict my search to the physical sciences only, and refine it to include only articles, I get 13,127 results, very similar to Cook et al.'s 12,464. So, as far as I can tell, Richard Tol isn't even doing the correct search when he criticises the Cook et al. result.

Please note Robert's contribution at the genial Mecca can be considered as a fair solution of this erasion (and a bit more) without taxing anyone too much:

> Nevertheless, words are tricky, and they often get away with people, especially when used in anger. Maybe Dana wishes he had said “I expect silly attacks like this from deniers, not you.” Then again, maybe Tol wishes he had said “I have a couple questions about how you chose your sample and how you categorized my papers. When I categorize my papers, I get a different answer.” Unfortunately Twitter is probably the worst possible vehicle for an exchange of views like that, since the 140 character limit enforces pithiness, at the expense of thoughtful qualification.

> Questioning a sampling strategy is the new anti-science http://www.guardian.co.uk/environment/climate-consensus-97-per-cent/2013/may/28/global-warming-consensus-climate-denialism-characteristics … @dana1981

https://twitter.com/RichardTol/status/339467588419588096

To which I asked:

> Which part of "I think your sampling strategy is a load of nonsense" is a question to @dana1981, @RichardTol ?

John Mashey: after an extended email exchange in which Tol was reluctant to give a straightforward answer, he finally told me that he had been "... on the GWPF Academic Advisory Board from the start." I take that to mean he was on the Academic Advisory Council of the GWPF from the foundation of that organization.

> Contrary to claims however, this makes their literature search incomplete. It is neither ‘comprehensive’ nor produces the “largest” possible data set. The finding of incomplete search has further implications as it affects all conclusions drawn in the paper.

Of course, the statement " abuse is THE ONLY response of environmentalists to questions about the representativeness of a sample" can be falsified by a single example of a non-abusive response by an environmentalist - no sampling required. I am sure that there are many out there already, but just to make sure.

- Dear Dr. Tol, I respect your work (and love your hair). I have reviewed your comments on representativeness of samples with great interest, but, with all due respect, am struggling to find a coherent position therein. May I respectfully suggest that you publish a response to Cook et al. in a peer-reviewed journal, which would allow you to make your points more clearly?

First, the essential difference between Shub Niggurath's search and that conducted by Cook et al appears to be that he uses single quotation marks rather than double quotation marks around the search phrases. On google scholar, the result appears to be that you search for papers including the words "global" or "warming" or "climate" or "change". Is that also the case for searches on WoS and Scopus, or do you search for ("global" AND "warming") OR ("global" AND "climate" AND "change")?

Second, out of interest I searched Google Scholar, excluding patents and citations, for the period 1991-2012, first for the terms "global warming" or "global climate change" and then for "global warming" or "climate change". The former search returned 17,800 hits while the latter returned 17,600 hits.

Google Scholar includes books, government reports and UN reports in its database, so these figures are commensurate with the research results from Cook et al. What is interesting is the mere 200 (1.1%) difference from excluding "global" from the second search phrase. That meager difference makes nonsense of Tol's claims that restricting the search with the term "global" biased the results. (I presume even Tol does not think one of the disjuncts in the search terms is the solitary word "warming".)

Do searches in WoS and Scopus also show this minimal difference?

If so, it appears Tol's and Shub's complaints about search terms are that Cook et al searched by phrases rather than single words. While the latter is certainly likely to include far more irrelevant (not climate related) papers, it is difficult to imagine it would otherwise bias the results.
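The difference Tom is asking about can be mocked up in a few lines. A toy matcher, assuming nothing about how WoS, Scopus or Google Scholar actually tokenize queries; it only shows why phrase search and word-level OR search return different hit counts:

```python
# Toy matcher: phrase search vs OR-ing the individual words.
# (An assumption-laden sketch, not the real behavior of any database.)
def phrase_match(abstract, phrases):
    text = abstract.lower()
    return any(phrase in text for phrase in phrases)

def any_word_match(abstract, phrases):
    words = {word for phrase in phrases for word in phrase.split()}
    return any(word in abstract.lower().split() for word in words)

phrases = ["global warming", "global climate change"]
abstract = "Regional climate change and its impacts on alpine warming trends"

# The phrase search misses this abstract; the word-level OR search
# catches it on "warming" alone, which is how looser queries inflate
# hit counts without necessarily adding relevant papers.
```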

From @RichardTol: "Yes, but I'm bigger on ISI" and "I rate implication papers as neutral because presuming AGW does not necessarily mean I endorse it", and "Yes, saying that @dana1981's sampling criteria is a #LoadOfNonsense does not mean there exists one".

Paraphrasing, of course.

Meanwhile, @RogTallbloke tried to sell us that assuming does not necessarily imply endorsement, because, well, he does not have time to play WeaselWords games, and that he's more interested in climate science (spiced up with lots of #warmists), which he considers constructed around a logic of necessary and sufficient conditions, leading to this:

As a personal note, I never thought that 140 characters could be so much fun. Also, please bear in mind that only calmness can make me do the work Richard, Shub, and TallOne are doing for Dana, right now. They now have to come up with criteria for representativity by themselves, or else keep spitting in the air.

Please think about that the next time you want to flame a critic. This applies also to Dana, who had another suboptimal day, with his #generous and his #projection.

These are worse than worthless rhetorical tricks: every 140 characters you type can be used against you.

> The ISI search generated 12 465 papers. Eliminating papers that were not peer-reviewed (186), not climate-related (288) or without an abstract (47) reduced the analysis to 11 944 papers written by 29 083 authors and published in 1980 journals.

since #scopus may not offer that.

If someone knows #scopus, please try to reproduce my research. Beware that I am not a scientist. Only a ninja.

Cook and co-authors did not search with single-quotes. They describe it as such in their paper. Strike one.

If you search different databases with the proper search terms "global warming" OR "global climate change", with the exact search conditions as described in the paper, you get close to 10k more articles than they analysed. Strike two.

anonymous (probably Shub, but demonstrating the cowardice of anonymity) shows his lack of basic reading comprehension.

"[M]ost comprehensive of its kind" means more comprehensive than others of its kind. As such it does not state, and does not purport to state, that it is completely comprehensive. Comparing this survey of 12,000 abstracts to the largest prior surveys of their kind, 928 abstracts (Oreskes 2004) or 539 (Schulte 2008), it is difficult to dispute that Cook et al's survey was more comprehensive than any other of its kind, unless you are prepared to bend the language to make a rhetorical point.

Indeed, as "comprehensive" means "So large in scope or content as to include much" or "of broad scope or content; including all or much", even ignoring the qualifier used by Cook et al does not sustain anonymous's (or Shub's) point. There is, again, no doubt that Cook et al surveyed much of the literature, and ergo it is a comprehensive survey. It is not a completely comprehensive survey, but neither Cook nor his coauthors have claimed that it is. (Note, as an aside, the need for the qualifier "completely" for "comprehensive" to have the meaning anonymous wants to give it.)

anonymous pretends to believe that Cook et al did not indicate that the search was by phrases, not by simple conjunctions of words. That is, the search terms were "global warming" OR "global climate change", not ("global" AND "warming") OR ("global" AND "climate" AND "change"). Searching for the former on Scopus, restricted to the physical sciences, yields 13,127 hits - well less than the "close to 10K articles" more claimed by anonymous. On Google Scholar you get close to 6k more hits, but that figure includes books, reports, and non-peer-reviewed content. The actual number of articles will be well below that figure, and likely close to 12K.

Because I can, would say Mason. Seriously. I just can. I'm testing #scopus, which I never used, and there is a remote possibility that the two search engines behave differently.

Besides, why not? MORE papers might not change the type of result if the number is not near #ALLTHEPAPERS. The possibility that MORE papers could lead to different results has not been established. Unless Shub rates the extra 10k or so papers, this amounts to pure #FUD for now.

The point of contention is about the #representativity of the sampling. Either we see an argument against that, or we get other rounds of 140 characters amounting to saying "yeah, other keywords and other databases can lead to MORE papers for others to rate."

#GoodLuckWithThat.

***

> The authors claim their study is the "most comprehensive of its kind".

The authors also review the lichurchur, where the number of papers sampled was even lower.

Do you know of a more comprehensive study of this kind?

I don't think the authors meant that it would be impossible to do a more comprehensive review than that.

Interestingly, if you restrict the ratings of Cook et al to "impacts" papers, you get the following results:

Endorse: 21.37%; Neutral: 78.34%; Reject: 0.29%

and, with neutral papers excluded:

Endorse: 98.64%; Reject: 1.36%

In other words, when rating "impacts" papers, the rating team for Cook et al found a far higher proportion of neutral papers, and continued to find rejection papers. That is unsurprising. Impacts papers do not deal directly with attribution, and may be conditional on results that are not accepted by the authors. That last fact, however, does not mean that authors cannot express their opinion about attribution, or about matters with implications for attribution, in those papers. The suggestions by Tallbloke and Tol (among others) that impact papers should all be neutral are a non sequitur. The best that can be said for them is that they confuse a survey of the opinions of scientists as expressed in the scientific literature with a survey of the evidence in support of AGW.

Bernard J: "Which studies of this kind are more comprehensive than Cook et al? Which study of this kind is the most comprehensive?"

You missed the point. The study is claimed to be comprehensive because (a) the search fetches every possibly relevant paper addressing the "global warming/global climate change" issue and (b) it spans the period in which the consensus is supposed to have developed. We know now that it does not analyse every possible paper it claims to. In fact, there are as many un-examined ones as there are included in the paper.

You can say the study sample is large. But the paper fails to do what it says it set out to do.

Compare Oreskes 2004 to the Cook group paper. The former examined 928 abstracts and found that none disagreed with the 'consensus' (which she defined, by the way). The latter found 999 abstracts that agreed with the consensus. You decide if the Cook group paper is any different from Oreskes'.

Importantly, the paper is qualitatively no different from a 'paper' like Oreskes 2004. It is funny that examination of 20 years' worth of literature turned up numbers no different.

"anonymous (probably Shub, but demonstrating the cowardice of anonymity)"

Yeah, says the guy who hides behind the moderators' skirts at Skepticalscience.

Your posts simply display utter ignorance in the use of academic databases like Scopus, WoK etc. My advice to you would be to catch hold of someone who knows how to do it and learn the ropes, instead of fumbling around and searching "just restricted to physical sciences".

"comprehensive" does not mean "large". And there are no such things as "completely comprehensive" or "incompletely comprehensive".

"More comprehensive" (8,580,000 googles) (wiki) means larger than a study that was already large. And have you never seen the descriptions "totally comprehensive" or even "somewhat comprehensive"?

Oh, and BTW, the search was not in Web of Knowledge but in Web of Science, which is a subset focused on the scientific literature and journal articles. WoS itself comes in a couple of flavors, one based on the Science Citation Index, the other on the Science Citation Index Expanded.

> The process of determining the level of consensus in the peer-reviewed literature contains several sources of uncertainty, including the representativeness of the sample, lack of clarity in the abstracts and subjectivity in rating the abstracts.

We address the issue of representativeness by selecting the largest sample to date for this type of literature analysis. Nevertheless, 11 944 papers is only a fraction of the climate literature. A Web of Science search for 'climate change' over the same period yields 43 548 papers, while a search for 'climate' yields 128 440 papers. The crowd-sourcing techniques employed in this analysis could be expanded to include more papers. This could facilitate an approach approximating the methods of Doran and Zimmerman (2009), which measured the level of scientific consensus for varying degrees of expertise in climate science. A similar approach could analyze the level of consensus among climate papers depending on their relevance to the attribution of GW.

http://iopscience.iop.org/1748-9326/8/2/024024/article

From this quote, we can see that:

1. The limitation has already been acknowledged in the text.

2. The idea of comprehensiveness peddled by Simple (#2) above is not in the text.

Simple ought to fiddle a theory around the single fact brandished so far.

"It’s certainly true that there is a great deal more published climate research than the 12,464 abstracts matching the search terms “global climate change” or “global warming”." - The Authors

In reality, there is a great deal more published climate research matching the exact search terms "global climate change" or "global warming".

The authors write: "An analysis of abstracts published from 1993–2003 matching the search 'global climate change' found that none of 928 papers disagreed with the consensus position on AGW"

This is further evidence of the authors' failure to understand academic search. Oreskes performed a Web of Science keyword search for "climate change". The authors do not appear to recognize this. Secondly, they say she searched for "global climate change". Where did they get the word 'global' from?

You've been using WoS and Scopus now. You know the alteration of search words and other params is going to alter your search results, sometimes radically.

How do the authors lay claim to "comprehensive"?

"Through analysis of climate-related papers published from 1991 to 2011, this study provides the most comprehensive analysis of its kind to date in order to quantify and evaluate the level and evolution of consensus over the last two decades."

"We address the issue of representativeness by selecting the largest sample to date for this type of literature analysis."

"The narrative presented by some dissenters is that the scientific consensus is '...on the point of collapse' (Oddie 2012) while '...the number of scientific "heretics" is growing with each passing year' (Allègre et al 2012). A systematic, comprehensive review of the literature provides quantitative evidence countering this assertion."

The paper is not a systematic comprehensive review. It is not systematic: its criteria are not cast wide into the relevant pool, but rather into a smaller subset. It is not comprehensive as a result.

"The final sentence of the fifth paragraph should read “That hypothesis was tested by analyzing 928 abstracts, published in refereed scientific journals between 1993 and 2003, and listed in the ISI database with the keywords ‘global climate change’ (9).” The keywords used were “global climate change,” not “climate change.”

anonymous#2 charges me with not knowing how to conduct search on WoS or Scopus. As I have access to neither, I am guilty as charged. That would be why I have asked questions about the results of searches on WoS or Scopus rather than simply reporting the results. I gather from anonymous#2's comments, therefore, that he cannot distinguish between assertions and questions.

He goes on to say there is no such thing as "incompletely comprehensive". As it turns out, a google search shows the phrase occurring 353,000 times on the web. "Totally comprehensive" (63,600 hits) and "somewhat comprehensive" (22,500 hits) are also well attested. Although he sets his anonymous expertise above dictionary definitions, google search shows he hasn't a clue what he is talking about.

I join in with Willard, Tom and others in not being able to figure out exactly how Shub would find a rather large sample of the liturchur not representative. Really, you need to show how something like this is NOT representative. Give it a try Shub, you big dummy.

Tom Curtis: You don't have access to Web of Knowledge or Scopus. Yet you managed to write so much about it.

willard: Do large numbers mean nothing at all then? Of course not. ~12k is a huge sample. But the sampling is neither systematic ('cause it doesn't accomplish what it says it does) nor random. It is a bit of a mess. Sloppy work.

Rattus: How can a large sample end up non-representative?

Cook et al are not trying to represent all of climate literature. They are attempting to gather the portion, in the exact manner as Oreskes 2004, which directly deals with the domain of "global climate change". This is a fairly large, but specific and finite, subset of the whole. Cook et al do not attempt to obtain a representative cross-section of all of climate literature, but rather attempt to capture entirely a portion which fulfills certain fixed, pre-defined criteria. These limits and the reasoning behind them are explained in the paper. 1991 represents one of the thresholds. The exact matching with Oreskes' search terms is another.

There may be lots of papers, in climate science and allied areas, that do not fulfill their search criteria. There can be a few that meet the criteria but failed to be identified. There should not be lots of papers that meet the criteria and yet failed to be identified.

In statistical sampling, there are various methods of ensuring representative-ness. There are post-sampling tests to detect the adequacy and coverage of sampling. In the Cook group case, the authors attempted a target coverage of 100% papers that meet fixed criteria.

Up to what percent coverage would you still consider OK in a targeted, search-based sampling method? 80%? 90%?

The Anonymous who can't even understand that a conversation makes more sense when it's predicated on identified participants said:

"You can say the study sample is large. But the paper fails to do what it says it set out to do. "

Why does the paper fail? Where's your documented analysis based on statistics?

Do you even understand why statistics would be used in this case, as they are in Cook et al and in any other scientific study?! Do you understand what "representative" means? Do you understand what "sampling" means?

Do you understand the difference between a survey and a census? Do you understand why this difference is in so many ways important?

Shub said:

"Sheer numbers means [sic] nothing by themselves. An equal sheer number has been left behind."

So, Shub, you should know from the principles of power analysis that unless there is some extraordinary dichotomy between the many thousands of papers captured by Cook et al and those papers and sundry publications excluded by them, even doubling (as you imply) the sample size will have negligible effect. And if the excluded papers are significantly different from the captured papers, why does such a difference occur? What is it about papers not captured by the Cook et al search parameters that camouflages them from discovery? Is it a conspiracy? Is it a fraud? Is it that those search terms aren't sufficiently specific? And if the latter, what does this imply about the excluded material?

What exactly is it, Shub? What?! What is it that's distinguishing the captured literature from the excluded literature? Please enlighten me - I am intrigued to know your answer. You see, I've calculated today that between my professional use of WoS and of Scopus I've conducted more than several thousand (yes, truly) searches in the last 18 months alone, and I have more than a passing understanding of how these databases function. I'm curious to hear why they don't reflect the consensus of professional, expert opinion, thought and analysis.

Once more - please explain the putative mechanisms that would result in a broader survey of the professional science giving a result substantively different to Cook et al.
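The back-of-the-envelope behind that power argument can be sketched with the normal-approximation margin of error for a proportion; the numbers below are illustrative, not Cook et al.'s exact counts:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a ~95% normal-approximation CI for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# With roughly 12k abstracts and an observed share near 0.97, the margin
# of error is already only about 0.3 of a percentage point; doubling the
# sample (Shub's implied "other 50%") shaves it only slightly.
m_current = margin_of_error(0.97, 12000)
m_doubled = margin_of_error(0.97, 24000)
```

So unless the excluded papers differ dramatically from the captured ones, adding more of them barely moves the result, which is the point being made above.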

"In statistical sampling, there are various methods of ensuring representative-ness. There are post-sampling tests to detect the adequacy and coverage of sampling. In the Cook group case, the authors attempted a target coverage of 100% papers that meet fixed criteria.

Cook et al have hit 50%, when trying for 100%"

So, tell us about the other 50%: of what it's likely to be composed, how reliable the material is, and how this material, appropriately weighted, is likely to alter the outcome of Cook et al. Please.

And representatively-sampled and appropriately-documented examples would be nice...

Bernard: You are an ecologist, right? You are the last person to be talking about sampling and representative-ness. You guys are the masters at drawing inferences on small populations and extrapolating to the whole, for ideologic, environmental reasons.

> [Tom Curtis] managed to write so much about [Web of Knowledge or Scopus].

I don't recall Tom writing much about that. I recall him saying:

> The suggestions by Tallbloke and Tol (among others) that impact papers should all be neutral is a non sequitur. The best that can be said for them is that they confuse a survey of the opinions of scientists as expressed in the scientific literature with a survey of the evidence in support of AGW.

If the contention is that the papers considered are a subset of all possible papers that could have been considered, then it is a straightforward exercise in hypergeometric statistics to determine confidence intervals on the proportion that endorse or reject AGW.
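A sketch of that exercise, under the simplifying assumption that the rated papers are a random half of a notional population twice as large (the scenario Shub implies); it uses the normal approximation with the finite-population correction that a hypergeometric model implies, and the counts are approximately Cook et al.'s published endorse/reject tallies among position-taking abstracts:

```python
import math

def proportion_ci(k, n, N, z=1.96):
    """~95% CI for a proportion when n items are sampled without
    replacement from a finite population of N (hypergeometric setting:
    normal approximation with finite-population correction)."""
    p = k / n
    fpc = math.sqrt((N - n) / (N - 1))
    se = math.sqrt(p * (1 - p) / n) * fpc
    return p - z * se, p + z * se

# Approximately Cook et al.'s tallies: 3896 endorsing out of 4014
# position-taking abstracts (about 97%). Assume, per the criticism,
# a notional population twice that size.
lo, hi = proportion_ci(3896, 4014, 2 * 4014)
# Even then the interval stays within a fraction of a percent of 97%.
```

The confidence interval only answers sampling noise, of course; it cannot answer a claim that the excluded half is systematically different, which is why people keep asking Shub for a mechanism.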

Interested readers can refer to this conversation at Bart's, to prepare for the scientific #machismo that is looming:

> I would suggest the Robert Laughlin approach, at this juncture. Ecosystems don’t need us, nor do they need our understanding of them. System-wide perspectives are necessarily of a different species in human thinking, whereas molecular biology remains more rooted in experimental science.

"BernardYou are an ecologist, right? You are the last person to be talking about sampling and representative-ness. You guys are the masters at drawing inferences on [sic] small populations and extrapolating to the whole, for ideologic, [sic] environmental reasons."

Ah, so you're as expert in the ways of ecological science as you are in the conduct and output of climatological science...

Riiight...

Again I would draw your attention to the fundamentals of the statistics - basic statistics - with which scientists of a variety of disciplines analyse their data. In ecology, as in any scientific field, ecologists very much tend to reach conclusions based on the power of their work. Science is a cut-throat endeavour and anyone who cannot substantiate their conclusions leaves themselves open to rebuttal by other workers. It is quite a reliably self-regulating process.

As in any discipline ecological research is expanded upon and refined, but I will call you out and confront you on your claim that my colleagues in ecology and I "are the masters at drawing inferences on [sic] small populations and extrapolating to the whole, for ideologic, [sic] environmental reasons". I challenge you, as others have, to support your claims by detailing specific examples of inappropriate "extrapolati[on]... for ideologic[al and] environmental [whatever that means] reasons". Sample from the professional ecological literature and detail this professional malfeasance that you allege is perpetrated by my nefarious discipline.

Please sample representatively from some of my favourite journals:

Annual Review of Ecology, Aquatic Conservation, Biodiversity and Conservation, Biological Conservation, Biological Reviews, Biological Sciences, Conservation Biology, Conservation Genetics, Ecological Monographs, Ecology, Ecology Letters, Environmental Conservation, Evolution and Systematics, Frontiers in Ecology and the Environment, Global Change Biology, Global Ecology and Biogeography, Marine and Freshwater Ecosystems, Nature, Oikos, Oryx, Pacific Conservation Biology, Philosophical Transactions of the Royal Society of London, PLoS Biology, Proceedings of the National Academy of Science of the USA, Science, and Trends in Ecology and Evolution

and defend your claim of professional scientific impropriety. Show us just how pervasive is this "extrapolati[on] to the whole, for ideologic, [sic] environmental reasons".

""It is very likely that before the end of the century the stock market, as we know it, will disappear as a factor in the lives of individuals".

-Paul Ehrlich, ecologist, 1974."

This should serve as a warning for those who venture outside their fields of expertise. An ecologist commenting on economics looks silly. Likewise, many economists blabbering about ecology, climate science, etc. look silly.

In Shub's case, he should probably avoid wandering at all, since I doubt there's any "field of expertise" at work here at all ...

1) Having had his self ratings included in the self ratings reported in the paper, his self ratings can be taken in isolation and rebut the statistics of all those self ratings from all those other authors; and
2) His declaring a sample to be biased without evidence is sufficient to establish that the sample is biased.

Willard, Tol's latest tweets are hard to stomach. Tol presents 13 self ratings by four authors as a superior index of the quality of abstract ratings in Cook et al to the 2,142 self rated papers by 1,189 authors reported in Cook et al; and he offers to teach Dana statistics?

His arrogance is sickening.

I found one comment of his interesting:

"Consensus is irrelevant in science. Consensus is relevant for policy."

Exactly!

And that is why, as a member of the Academic Advisory Council of an organization that attempts to influence policy and stands well outside the consensus (as Tol must know) he will certainly conduct no research into the actual state of the consensus on climate change. It is also why, I suspect, he is so eager to spread FUD, and such transparent FUD about a paper that does map the consensus.

I believe Tol's problem with Cook et al (2013) is not to be found in any minor flaws it may contain. It is that Cook et al does make a comprehensive survey, and does map a firm consensus that Tol finds inconvenient.

Not normally interested in this, but by coincidence reCaptcha has thrown up the latin form of my surname (Curtius)

I believe it's important to keep the head up, the stick on the ice and the eyes on the puck. If you take interest in the #FUD in Richard's latest tweets of consciousness, you have to stomach it. If you can't, stick to the #ConstructiveCriticism to which he committed by saying:

>@michstaff I'm an academic. I stand with appropriate methods, founded conclusions, reasoned & informed debate with public and policy makers.

Nevermind the patronizing dismissiveness, and look at the commitment: since Richard stands by proper methods, he has to use proper methods to substantiate that Cook & al 2013 is a silly idea poorly implemented.

This might explain why Richard does not back down from his claim. He's committed to substantiate it. Of course, there is #FUD, but if that motivates him to present substantial criticism, so much the worse for him.

A debate always contains #FUD, since it has an eristic part anyway:

http://en.wikipedia.org/wiki/Eristic

ClimateBall is not the deliberative process we'd wish for. So be it.

***

Look at it from Richard's perspective. I don't think Richard wished to invest that much time on a paper he considers #silly. But he has not much choice now. If you focus on the #FUD, you might forget about his commitments. And then Richard may "forget" about his commitments too.

Richard's committed to appropriate methods and founded conclusions. He's committed to provide #ConstructiveCriticisms. The more slap shots he takes at the paper, the more he needs to back them up afterwards. All one needs to do, after he took his shot, is to remind him of his commitments.

***

If some of Richard's criticisms stand, so much the better for the paper. Yeah, people will shout out and do touchdown dances. But they're already doing that. Let them dance.

It's great that a paper gets read and criticized. Most papers don't. #ConstructiveCriticisms of Cook & al. can only improve the next endeavours.

The beauty of all this is that the improvements will be mostly based on academics that brag about standing for appropriate methods and founded conclusions, in spite of their personal impressions.

> You're right, @RichardTol, which is why HELPING @dana1981 or @AGrinsted improve their work (#eg your #chartmanship) matters more than #FUD.

https://twitter.com/nevaudit/status/340105428711120899

Also, bear in mind that my use of #FUD is conditional on a lack of #ConstructiveCriticisms, a lack that a simple mugshot does not suffice to compensate for.

***

Now, let me remind you of the questions I've asked you so far on this thread:

First, you have not justified why you interpreted Cook & al's "comprehensive" as meaning #ALLTHEPAPERS.

Second, you have yet to justify why you think #scopus would meet your #completeness criteria: #StatingTheObvious did not work.

Third, considering that

> An equal sheer number has been left behind.

does not mean much by itself, what should we conclude about that fact?

Fourth, you have not provided a quote and a citation for

> In the Cook group case, the authors attempted a target coverage of 100% papers that meet fixed criteria.

Fifth, you have failed to acknowledge that Tom Curtis' argument:

> The suggestions by Tallbloke and Tol (among others) that impact papers should all be neutral is a non sequitur. The best that can be said for them is that they confuse a survey of the opinions of scientists as expressed in the scientific literature with a survey of the evidence in support of AGW.

has not been countered yet.

Sixth, you have not said if your Ehrlich quote is an example of #systematic sampling.

***

To these six points, and I hope I have not missed any, I could add three:

Seventh, were you the commenter I called Simple, who signed with #2?

Eighth, do you realize that providing an eventual justification would not be incompatible with #FUD, insofar as this justification might not cover everything that has been raised as #concerns? [1]

Ninth, do you realize how many times I said or implied that Richard should improve Cook & al with #ConstructiveCriticisms, and therefore would welcome them?

***

Please mind these points before pontificating again about #understanding.

And also note that these questions do not cover what others asked you.

***

[1] I have in mind his:

> @geschichtenpost if you want to know the causes of climate change, you should restrict the survey to the papers that study that @dana1981

I'm sure we're all eagerly awaiting Shub's, or #2's, or whoever's, devastating indictment of the science of ecology. All I've seen so far is one irrelevant - and ancient - reference to Paul Ehrlich, and no-one could be daft enough to believe that constituted a proof of anything, other than the hazards of making pronouncements outside one's native field.

You made the remarkable sweeping claim entirely voluntarily, and even implied you were going to back it up. So, go ahead. Take up Bernard's challenge.

I wrote two long comments and lost both of them because I forgot to copy them before hitting 'submit'. Blogger takes me to a Google account page, and once that happens your comment (which you happened to write in this stupid little text box) is lost.

Shorter version: willard, any search is for literature that is out there. WoK/WoS/Scopus are just portals to access it. If the literature that meets Cook's criteria is x, and he fetched x/2, he's gotten 50%. 'x', for the moment, is from Web of Knowledge - it throws up the highest number of entries for the Cook group criteria. This answers several of your questions.

bill, here is a quote from EO Wilson: "We’re destroying the rest of life in one century. We’ll be down to half the species of plants and animals by the end of the century if we keep at this rate."

I don't 'indict' ecology. There are members in that discipline who however do what I said they do. (They are called ecologists)

"Any search is for literature that is out there. WoK/WoS/Scopus are just portals to access it."

Actually, no.

Web of Knowledge/Science and Scopus are bibliographic databases that collect from the overall body of literature. They are not portals to the "literature that is out there" unless that literature falls within the prescribed high standards of quality that these databases hold.

There is much literature that is junk, borderline junk, or otherwise not of the minimum standard required of acceptable science. Cook et al eschewed contamination of their survey of credible science by eschewing consideration of the sub-standard work that is rejected by the aforementioned databases.

"If the literature that meets Cook's criteria is x, and he fetched x/2, he's gotten 50%. 'x', for the moment, is from Web of Knowledge - it throws up the highest number of entries for the Cook group criteria. This answers several of your questions."

Actually, no.

If A is the body (set) of top-class, reputable science and B is the body of literature cited by Web of Knowledge, Web of Science and Scopus, then you are effectively denying that the relative complements of these sets are defined such that B\A << A, and that A\B << B.

Tell us:

1) do you formally claim these conjectures?
2) what proof have you sought to support these conjectures?
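For readers who want the set notation unpacked, here is a toy sketch. The sets are invented solely to illustrate the relative complements at issue, with A standing in for top-class science and B for database-indexed literature:

```python
# Invented toy sets, purely to illustrate the notation.
A = {"paper1", "paper2", "paper3", "paper4"}   # top-class, reputable science
B = {"paper2", "paper3", "paper4", "junk1"}    # indexed by the databases

A_minus_B = A - B  # A\B: quality work the databases miss
B_minus_A = B - A  # B\A: indexed work that is not top-class

# The conjecture under discussion is that both complements are small
# relative to their parent sets, i.e. the databases track the quality
# literature closely.
```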

"billHere is a quote from EO Wilson: "We’re destroying the rest of life in one century. We’ll be down to half the species of plants and animals by the end of the century if we keep at this rate.""

Given humanity's current rate of extirpation of species and habitats EO Wilson's statement is on the money, especially in the context of the 'higher' plant and animal taxa.

As usual, you are invited to find the scientific evidence that contradicts him.

"I don't 'indict' ecology. There are members in that discipline who however do what I said they do. (They are called ecologists)"

bill, Cook et al missed 33 of the top 50 most cited papers on climate change. This shows the search strategy they employed misses several relevant papers. Which is the issue Rattus raised.

In the long comment I wrote, I didn't have much to say to you either. I can give you more examples of habitual doom-mongers who insist the world should undertake drastic solutions to what they think are 'problems' stemming from their own limited understanding of the world. Many of them are connected in some way to ecology.

Bernard: "Cook et al eschewed contamination of their survey of credible science by eschewing consideration of the sub-standard work that is rejected by the aforementioned databases."

Try to follow the argument. It was said that Cook et al did not search WoK or Scopus, but WoS, and that's what matters. Contrary to this, however, WoS/Scopus/WoK are bibliographic databases that reflect literature that exists. Searching WoS-SCI alone therefore missed literature that exists.

If you take the EO Wilson quote and follow its antecedents you'll eventually end up with a few observational studies and a few computer-modeled ones. The loudness, and the universal applicability of the claims of "massive extinction" (for instance) do not match the quality and strength of the evidence underlying it.

The most interesting thing is, IMHO, that - apart from some dirt being thrown between those who always throw dirt at each other - we have a discussion here about some minor technical points pertaining to the methodology of the paper, not to its message. The paper is discussed for all sorts of reasons, save the one it was put out there for.

Does anyone notice that Kahan said more or less exactly that this would happen? Long after its apparently extremely short half-life has pushed public perception of the paper to more or less nil, the only discussion is between people frantically insisting that it is a great achievement and people frantically insisting that it is bs. It doesn't even reach beyond a narrow circle of twitterers and specific comment sections to people who do not already have an opinion on the topic, which, on the other hand, won't change even a bit (as these same comment sections show).

This blog entry is tantamount to the uselessness of the paper, if measured against the expressed purpose for which it has been written in the first place. That the question overwhelming the whole discussion is if the authors did everything exactly right or not shows, in and of itself, its failure.

#scopus gives me 19,415. And that's notwithstanding skimming articles with no abstracts and other steps Cook & al took, and your admission that sheer numbers are meaningless. In any case, I'm glad you admit that #WoS is a high-quality curated database.

***

It's nice to see Richard working so hard on an endeavour that he considers silly. Have you noticed the tweets where Richard is telling us to restrict to "cause", etc? By the way, this mug shot should be #understood according to this:

"This blog entry is tantamount to the uselessness of the paper, if measured against the expressed purpose for which it has been written in the first place. That the question overwhelming the whole discussion is if the authors did everything exactly right or not shows, in and of itself, its failure."

First, the question should not be whether the query produces the largest sample. Rather, the questions are relevance and whether the sample is representative. I would contend that since anthropogenic CO2 does indeed result in a global effect that the search "global climate change" is more likely to be relevant to the question at hand. I don't think Richard or anyone else has presented evidence that would suggest the subsample of papers examined by Cook et al. was not representative. Indeed, it produced results in line with several other methods--e.g. that >97% of climate experts agree that we are warming the planet.

The proper thing for Tol et al. to do would be to carry out an independent study to see if the changes they advocate yield results that are significantly different. This is what they would do if they were in fact interested in the science.

I also disagree that consensus is irrelevant to science. It is quite relevant, but difficult to measure.

Sorry to have forgotten you. I think your conclusion has two limitations:

> the only discussion is between people frantically insisting that it is a great achievement and people frantically insisting that it is bs

First, I'm not sure that what appears in Climate blogland is **the only discussion**.

Second, your predicate does not cover my own case. What I want is for Richard to turn his #concerns into #ConstructiveCriticisms.

Third, #parsomatics and #concerns may very well be what happens day in, day out, whatever comes out to be discussed. If you don't trust me, try:

http://judithcurry.com

***

Quite frankly, I could not care less if he finally shows that Cook & al 2013 is full of crap. What I care about is that we finally agree about how to proceed to make that kind of #crowdsourcing effort that would follow the most #lukewarm standards.

This requires effort over minutiae, of course. And this requires we quench #foodfights. To that effect, both Dana and Richard were quite #suboptimal, to say the least. Nobody's perfect, as Richard said in his tweet about Bart Verheggen.

Which is another thing I care about: compiling ways to start a foodfight. Twitter was nice for that, since the #hashtag facility helps identify loaded words, loaded either with theoretical concepts or with rhetorical affects. Here are some, besides those already included in this comment:

I am not shub, and I do not know who shub is. My sockpuppet is "Mark", but I try to avoid it.

@ willard

I do not know what your own case is, or why you should NOT have forgotten about me - I am a guy with an email address, and not even that last bit is sure, here. So, yes, one can always discuss everything. For example, one could write about the average life of flies in the building I work in as compared to the building next to it, and then draw some conclusion with regard to cleaning frequency. And if someone says that please, nobody cares, one can point to how the paper did, in fact, advance some method for this specific purpose that has not been used before, or how there are, indeed, some persons interested in the discussion. And then start an endless discussion about the validity of the conclusion.

I'd still call it useless, but perhaps that's just me.

As concerns Tol: he is an economist who specialised in econometrics and environmental economics. I don't care what he has to say about this paper, as I don't care what Eli thinks about the Stern Review, or Romm about Nordhaus. In all these cases the respective individuals do not have the necessary knowledge to advance understanding of any of the topics at hand. Depending on their mood, it's either dirt throwing or the expression of a political stance, nothing anybody should be interested in.

"Try to follow the argument. It was said that that Cook et al did not search WOK or Scopus, but WoS and that's what matters."

Shub, try to keep up.

First, Web of Knowledge is not as pertinent to this issue as are Web of Science and Scopus. In fact it's basically irrelevant. So that's a red herring on your part and on the parts of others who care to dangle that line.

Second, I've already told you that I use Web of Science and Scopus frequently. And I mean frequently. Daily. Multiple times daily. And simultaneously. It's routine for me, as both databases are referred to in our institution's documentation for the 2015 Excellence in Research for Australia (ERA). Most academics responsible for research output monitoring for their disciplines across Australia would be in the same boat.

I can tell you confidently that for 'mainstream' fields of research Scopus and Web of Science are extremely similar in the range of publications that they return. Scopus seems to update with respect to most recent publications more quickly than does WoS, and WoS seems to have a slightly better reach into some of the medical sciences through the way it accesses PubMed and similar, and it's better for reaching further back into pre-internet publications, but they are largely a match for anyone searching the top-tier papers in any of the classic sciences.

This is the simple fact of my professional experience.

Could there be a bias in my experience with these databases? Certainly. It's entirely possible that my institution's quality of research is especially high, resulting in the various disciplines with which I am involved being over-represented in the top journals and thus being selectively captured by both bibliometric databases.

So ignore my experience. Go to the literature, identify the relative complement of Cook et al with respect to the set of total quality climatological publications, and show that this relative complement differs substantively from the composition of the Cook et al set.

If you have some knowledge that such a discrepancy exists (and you and Tol surely must, given the bleating with which you two - and others - have engaged), then this task must surely be a cinch.

So why is it that nothing's been produced...?

"If you take the EO Wilson quote and follow its antecedents you'll eventually end up with a few observational studies and a few computer-modeled ones. The loudness, and the universal applicability of the claims of "massive extinction" (for instance) do not match the quality and strength of the evidence underlying it. "

So, you've not actually read the body of literature on contemporary extinction? Is that what you're saying?

Richard Tol in a private email indicates that Cook et al "underestimated endorsement". He only cites part of the evidence in support of this claim, leaving it open that he thinks other evidence suggests an oversampling of endorsement. Nevertheless, the comment suggests that Tol thinks a survey done to his (as yet unspecified) standards would yield a stronger result than Cook et al.

That being the case, two points should be made:

1) There is very little room to strengthen the Cook et al result, and it follows that if better methods would have strengthened the result, the 97% figure reported by Cook et al is robust.

2) If Tol thinks the correct method would strengthen the Cook et al result, why is he not saying so publicly?

Sorry again for the delay. It might be a profitable one, since I think I can connect what I was going to say at the time. It has to do with George Marshall's video, which I retitled How to Talk to Anyone about Anything:

This might be the only interesting element to be found in Kahan's op-ed. At least to me: I dislike any talk about communication models and share few affinities with Kahan's views, mostly legal realism and consequentialism. But that George Marshall video is enough for me to seek a common ground with him.

It was this video that led me to this exchange with Richard. Connecting to his worldview matters to me, now that I'm convinced of this How To. I want Richard's worldview to manifest itself in a more positive way than the usual gloating about another researcher's silliness. He's a bright lad who should show his brightness in other ways than asinine tweets.

I even want to connect with this gloating, a bad habit from which Dana is not exempt, btw. With this encounter, I think I could prove, contra perhaps Kahan's view, that the source of disagreement, i.e. the values, should be taken for granted and abstracted from the communication. There's no need to speak about our values. All there is to do, if I understand Marshall correctly, is to speak for my values and make sure that my interlocutor honours his.

This is where Marshall's idea of offering rewards comes in, I think. My hypothesis is that blog exchanges create this aura of meanness that almost conveys the intent to break channels of communication. Breaking up channels of conversation has its advantages. It saves energy. It offers clarity. It creates exchanges of strokes (cf. Eric Berne's **Games People Play**) which are mainly negative. And as a bonus, we can blame the other for the breakup.

But strokes need not remain only negative. Offering positive strokes, here and elsewhere, might be the key: these are the rewards. It matters more to reinforce what we'd like to see than to pay attention to what we'd like to see disappear. And if we take this behaviorist approach to heart (with online personas, how could I not?) I think that Kahan's concerns are not incompatible with research such as Cook & al's.

***

I think I can understand your own concerns, which I believe are different from Kahan's: all this is not only self-defeating, but irrelevant. This conclusion is the same as Dr. Doom's: this is what made him start planet3.org.

This is a fair point. But I don't think it suffices to dismiss what is being done here. You may be right that we are only throwing dirt or expressing political stances, but I believe that if we're to have something more positive, it has to start from our destructive and static patterns. If not, I'm afraid we'll return with our old ways, even if we were to speak about the stuff that matters. So before we talk about anything, we need to be able to talk.

Of course, this won't lead to a discussion about sensitivity, fat tails, or what not. At least, not now. But why not?

I do hope this demonstration will not equate #scopus with #completeness, or use something like this to derive his disproof of #representativeness. After all, #representativeness is used because we don't have #completeness. To that effect, let's recall that the paper's population should be something like the published research assuming or rejecting the consensus on AGW.

You persist in missing the basic point. The issue is not predicated on the absolute number of papers sampled; rather, it turns on the reliability of the results of Cook et al.

I will ask again as I asked above - do you understand the difference between a survey and a census? Do you understand why this difference is important in so many ways?

And again - do you understand what "representative" means? Do you understand what "sampling" means? Have you identified a difference between the relative complement of Cook et al with respect to the set of total quality climatological publications, and have you shown that this relative complement differs substantively from the composition of the Cook et al set? Do you even have a hypothesis that explains why there should be a difference, and why this difference should be such that Cook et al over-estimates the professional consensus support for the human cause of contemporary global warming?

Perhaps your confusion results from an inability to comprehend the nature of the results reported in Cook et al. For your benefit I'll reproduce the abstract here:

"We analyze the evolution of the scientific consensus on anthropogenic global warming (AGW) in the peer-reviewed scientific literature, examining 11 944 climate abstracts from 1991–2011 matching the topics 'global climate change' or 'global warming'. We find that 66.4% of abstracts expressed no position on AGW, 32.6% endorsed AGW, 0.7% rejected AGW and 0.3% were uncertain about the cause of global warming. Among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming. In a second phase of this study, we invited authors to rate their own papers. Compared to abstract ratings, a smaller percentage of self-rated papers expressed no position on AGW (35.5%). Among self-rated papers expressing a position on AGW, 97.2% endorsed the consensus. For both abstract ratings and authors' self-ratings, the percentage of endorsements among papers expressing a position on AGW marginally increased over time. Our analysis indicates that the number of papers rejecting the consensus on AGW is a vanishingly small proportion of the published research."

Notice something about the form of the results? I'll give you a clue - the % sign has something to do with it...
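As a quick arithmetic check, the headline figure can be recovered from the rounded category shares in the quoted abstract. This is a back-of-the-envelope sketch; the small discrepancy with 97.1% comes from the rounding of the inputs:

```python
# Category shares as rounded in the quoted abstract (percent).
endorse, reject, uncertain = 32.6, 0.7, 0.3

# Endorsement among abstracts that take a position on AGW.
share = endorse / (endorse + reject + uncertain)

# share is roughly 0.970, consistent up to rounding with the 97.1%
# the paper computes from the unrounded abstract counts.
```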

Yes, Tol should definitely contribute in a more constructive way than he does on twitter. However, I am more positive about that than you are: he contributed 200+ papers that made him the leading researcher in his field. I ignore the rest and call it a deal. Note that this is a rather glaring difference between Tol and basically 100 percent of his critics, including the author of this blog post, whose contribution to anything related to climate sciences is little more than nothing at all, or Nuccitelli, for that matter. Of course, climate bloggers who are not by coincidence also researchers in the field can still play a very important role in the public discussion in terms of communication. But they usually tend to estimate their insights much higher than what can actually be claimed based on their expertise as it emerges from their publications. Pertaining to Tol, his contributions are to be found in his papers and what he has to say in his fields of expertise. Of course that's the last place where people will eventually look.

In short: the way the Cook paper is discussed here does not reflect well on its purpose. It perhaps establishes that Tol is a dork when his eagerness to badmouth environmentalists becomes strong. But then, this is irrelevant - or again: that this is apparently relevant in this context is a bad sign for the Cook paper.

If Eli wanted to discuss Tol in a serious way, he would do so. Instead he wastes his public capital on things like this here, guess why. That's why I said that this blog post is tantamount to the uselessness of the Cook paper.

Bernard, when you write lots of stuff but don't say anything specific with it, and want the other person to say something, there is something wrong.

I gave you the query. If you had carried out the search, in Scopus, which you use several times a day, you'd have found about 19,415 entries. The authors found ~12k in their search effort. You may argue that 12k is enough, representative etc. It doesn't matter. It is up to the authors to demonstrate that their search retrieves relevant literature, relevant to the consensus-measurement exercise that is. The authors do this by attempting to retrieve every possible entry using the broadest possible search terms, including terms used by a precedent study. Unfortunately, they failed. All I need to show is they failed to accomplish what *they said they set out to accomplish*. If there are problems with the method, you need to take it up with them.

Stop asking questions and contribute something positive to the discussion. We'll see if it takes us further.

The impact of the Cook search algorithm is now well-demonstrated by Tol. The link to one of the graphs showing the impact of differing search terms is posted above.

Shub, you have given no reason to suspect that
1) the query you give would lead to different results than the one Cook et al. used
2) the query you use is more appropriate to the purpose of demonstrating consensus than that used by Cook et al.

Indeed, since anthropogenic warming is by its nature global, it would seem to me that Cook et al. have the more appropriate query. What is more, the results of Cook et al. are consistent with those arrived at by other methods--by authors and with methods as different as Bray and von Storch and Anderegg 2011. Cook et al. stands on its own. If you have questions on the results, it is up to you to vary the methodology to see if the results produced by Cook et al. are robust. That is how science is supposed to work.

Shub, Tol's graph shows twenty five out of one hundred and ten. He did not show that those twenty five are representative. By your logic, therefore, the graph should be ignored.

Further, Tol has indicated that the graph shows Cook et al under-reported endorsement. Just how much under-reported can a reported endorsement level of ninety-seven percent be?

I might add that Tol has not shown the different deviations to be statistically significant. Nor has he shown the different disciplines to have different levels of endorsement. Nor that, in the event that they do differ in endorsement levels, they do so in a systematic way such that the different levels of endorsement will bias the results of Cook et al.

But you were convinced by Tol even before he presented evidence. We certainly cannot expect you to wait for Tol to close his argument before you swallow it.

> The authors do this by attempting to retrieve every possible entry using the broadest possible search terms, including terms used by a precedent study.

This is untruth:

> We address the issue of representativeness by selecting the largest sample to date for this type of literature analysis. Nevertheless, 11 944 papers is only a fraction of the climate literature. A Web of Science search for 'climate change' over the same period yields 43 548 papers, while a search for 'climate' yields 128 440 papers. The crowd-sourcing techniques employed in this analysis could be expanded to include more papers.

Not every possible entry, Shub.

Not the broadest possible, Shub.

Only the largest sample to date for this type of literature analysis.

That's it.

No need to keep telling these untruths about #ALLTHEPAPERS to raise concerns about representativeness, you know. You just misread that article and your op-ed has no merit. No big deal.

I ask questions in an attempt to extract from you an indication of your level of understanding about the questioned subjects. It is quite revealing that you dance around supplying answers that would lead you to acknowledging the substantive points in the issue of scientific consensus on human-caused global warming.

Let's summarise:

1) You have not demonstrated that a WoS search is substantively different to a Scopus search in terms of the proportions returned for each of the categories in the study.

2) You have not addressed the advantage that WoS has better coverage of older literature, thus potentially providing more power in the earlier range of the study - and don't forget that an investigation of change over time was one of the stated aims.

3) You have not demonstrated that the extra coverage of Scopus over more recent decades provides significantly more power than does WoS. 12k vs 19k - how much extra power does that difference represent...?

And one more point. You have not yet answered my question about whether you've actually read the body of literature on contemporary extinction, nor have you pointed to any of the professional malfeasance of ecologists who - according to you - "are the masters at drawing inferences on [sic] small populations and extrapolating to the whole, for ideologic, [sic] environmental reasons."

Is this, when all is said and done, just another of your baseless drive-by smears against science that seems to rankle with your ideology?

Witness the success: resources are distracted from topics relevant to humanity to a discussion about bibliometrics or something, everybody involved picks exactly those arguments or pseudo-arguments that allow them to not change their opinions even a little bit, and nobody cares for the consensus found. But for sure, the convictions are so entrenched by now that next time a consensus paper emerges, "skeptics" and "deniers" will simply shrug it off as one more "load of nonsense" (I predict a variation of: 'remember how that Cook et al paper got crushed, hahahahaha!'). Who, I ask you, could have seen that coming?

Yes, @ Bernard J., my insinuation was simplistic and unfair. Pertaining to the methods the authors used, this paper is probably a success story. So it is good that this is now talked about, too. Clearly, this is also exactly what the authors wanted: make a contribution to bibliometrics and have that discussed publicly.

Whatever SkS does attracts what you deplore, and that includes everything you can think of that matters. What you deplore ain't all that there is.

If you truly believe what Kahan is saying, what you're doing right now can't be felicitous. You're basically blaming Cook & al and all the bunnies for our sorry predicament. I agree that they deserve a pox in their house. But what good does it do? In the end, all that remains is more pox spread.

Since the move played is Cook & al, why not follow its bibliometric consequences? Had this been done during Oreskes' time, we might not have to revisit this all over again.

As concerns Kahan and Cook: whether the Cook paper has any influence on the broader public has to be assessed by methods I do not know and cannot imagine even on a conceptual basis. At least nominally, I am a chemist and economist by training, so I simply lack the necessary understanding of these things. My point was with regard to the blogosphere (and by extension, twitter) where everything turns out as Kahan predicted, with the exception that is you. We are not even talking about what the consensus means, but about bibliometrics.

As concerns Tol I could make some kitchen-sink psychological insinuation as to why he is almost exclusively targeted with meta and things unrelated to his research. I would say, then, that this provides a convenient excuse to ignore his research without appearing as the ideologically motivated hack one actually is, in this case. Wait a second, I just made that insinuation...

I guess what I'm saying is that you're not alone feeling that way. Perhaps I'm projecting. Sorry if I do.

***

My own take on the Consensus hurly burly is that it's for exoteric consumption. That is, it's for the public in a more general way than climate blogland. Exoteric literature is to be opposed to esoteric, see for instance:

http://en.wikipedia.org/wiki/Aristotle

Plato's dialogues were pamphlets to advertise his teachings. His esoteric work did not survive the passage of time. For Aristotle, it was the other way around: the treatises that are left were meant for his pupils.

I believe that most anticlimatic analyses of journalistic briefs or op-eds mainly confuse the target audience.

***

To understand why Eli wrote what he wrote, we need to understand the role he plays in Climateball. If you're interested in such analysis, I could try to make a play-by-play of what happened.

For starters, you should understand that Chewbacca has a life of its own:

http://neverendingaudit.tumblr.com/tagged/chewbacca

My way to nurture compassion toward us all, the sorry characters that we are, was by way of nicknames. That Eli is already a bunny helped a lot. You can take inspiration over there:

It will take me some time to read through all this and understand what it means.

For what it's worth, I am still not able to follow the framing of the debate. There is a debate about the impacts of AGW that should, in principle, guide our policy response. Call it the slope of the damage function, or the perception of catastrophic risk. It is crucial that we elaborate on these points. However, a response to this requires many branches of research - and as preferences are involved (risk aversion, time preference, inequality aversion, etc), there is a genuinely political side to the problem that has nothing, at all, to do with physical climate science. No consensus has been reached with regard to adequate policy.

The fact-based community vs skeptics vs deniers frames the entire problem as if it consisted only of the subset that is the physics of AGW, where consensus has been established. More correctly, it's even a subset of that, as it is a consensus that entails little more than the answer to a yes-no question. This false framing allows people who accept the reality of AGW to accuse other people who also accept it of "skepticism" and "denialism", even if the discussion refers to issues where there is no consensus to deny, and skepticism is exactly the right mode. Dana crying "denier" when Tol started his unspecific babble illustrates the point - it's completely bonkers, of course, but he cannot help it; he has incorporated the concept and shouts it out as soon as a perceived "skeptic" or whatever category is talking, as if he were one of Pavlov's dogs.

Environmentalists consistently ignore huge chunks of the literature that analyses policy. There is nothing wrong with that, in principle: environmentalists have an ideological POV, and they'll advocate it. Thereby, they'll exhibit a systematic bias towards literature that suggests strong policy responses to AGW - they'll endorse Weitzman (ignoring that there is no policy in Weitzman, which allows them to advocate whatever they want), and ignore or badmouth Nordhaus, for example. But this has nothing to do with being "fact-based", or with following peer-reviewed literature; on the contrary. But this is how it is framed, on the basis of a consensus that has no bearing at all on these discussions. It's advanced-level self-delusion.

Naturally, that pushes me more closely to Kahan than to, say, Mooney, at least in the superficial way I understand what this is about.

Bernard: "You have not demonstrated that a WoS search is subtantively (sic) different to a Scopus search in terms of the proportions returned for each of the categories in the study."

Really? I have to demonstrate this? Do the authors demonstrate that their search brings up papers that 'proportionally represent papers from their various chosen categories' in their study, in order to say their search is valid? Or do the authors argue that their sampling is adequate *because* their results show the magical number 97%?

The burden of proof for rejection is the same as that for acceptance. This is especially so in cases where a method fails by its own standard. Any exploration of differences in proportions arising from flaws in sampling of this nature can only quantify the impact of the flaw. It is not needed to designate the flaw as one.

If the authors themselves do not do what you are asking me to do, how do they actually justify their sampling? (a) The search terms - widely used terms that are broad in meaning, capable of capturing relevant papers; (b) use of a standard academic database; (c) the time period encompassed; (d) finally, the number of retrieved entries being a fairly substantial one. The raw search results need only be validated against these standards.

With Scopus and Web of Knowledge, one can show that there is a problem with (a), because of an issue with (b).

With the use of "climate change", Tol has actually carried out the type of data-peeking test that you ask for. What is the result? Substantial numbers of key, highly cited papers are missed by the search algorithm.

There are reasons for this. The Cook group wanted to emulate Oreskes and repeat the exercise. So the "global climate change" term she reported using was chosen. They wanted to expand on her work. But if you use just "climate change", the search throws up >60,000 records in WoK. If you use the less technical, more popular term "global warming", WoK throws up >16k results. If you shift over to Web of Science, exclude SSCI and AHI and use "global warming", you get ~9000 results. Can you now guess the other search term they chose?
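The mechanism being argued over - that broader phrases retrieve strict supersets of narrower ones - can be sketched with a toy corpus. The abstract snippets and counts below are invented for illustration; only the filtering logic is the point:

```python
# Toy corpus of invented abstract snippets; a broader phrase like
# "climate change" matches every abstract that a narrower phrase like
# "global climate change" matches, plus more.
abstracts = [
    "global warming and sea level rise",
    "regional climate change in the Sahel",
    "global climate change impacts on agriculture",
    "paleoclimate proxies for Holocene climate change",
    "carbon capture and storage economics",
]

def hits(phrase):
    """Return the abstracts containing the exact search phrase."""
    return [a for a in abstracts if phrase in a]

print(len(hits("climate change")))         # 3
print(len(hits("global climate change")))  # 1
print(len(hits("global warming")))         # 1

# The narrower query's hits are a subset of the broader query's.
assert set(hits("global climate change")) <= set(hits("climate change"))
```

Note the narrower phrase also misses papers (here, the Sahel and Holocene ones) that are nonetheless about climate change - which is the disagreement in a nutshell: whether what the broader net adds is relevant to measuring the consensus.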

"In fact, the paper by Cook et al. may strengthen the belief that all is not well in climate research. For starters, their headline conclusion is wrong. According to their data and their definition, 98%, rather than 97%, of papers endorse anthropogenic climate change. While the difference between 97% and 98% may be dismissed as insubstantial, it is indicative of the quality of manuscript preparation and review."

Cook et al conducted a subsidiary survey to determine the percentage of papers rated as 4 which discussed the issue, with an indeterminate result. These are not correctly characterized as "no position" papers and so are properly included among the "papers which take a position" on AGW. Including those papers makes the percentage of endorsement papers among those which take a position 97.06%. Only by excluding them can Tol find a percentage of 98%. This is clearly discussed in the paper. So while I agree with Tol that, though a small difference, it is indicative of the quality of manuscript review, it is his manuscript review that is shown to be inadequate. Put simply, he has plainly not properly read the paper on which he is commenting. (Nor is this the only case showing inattentive reading.)
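The arithmetic behind the two figures can be checked directly from the abstract counts reported in Cook et al (3896 endorsing, 78 rejecting, 40 uncertain on AGW):

```python
# Abstract counts reported in Cook et al (2013).
endorse, reject, uncertain = 3896, 78, 40

# Cook et al: "uncertain on AGW" papers do take a position,
# so they belong in the denominator.
cook_pct = 100 * endorse / (endorse + reject + uncertain)

# Tol's 98% figure arises only if the uncertain papers are dropped.
tol_pct = 100 * endorse / (endorse + reject)

print(round(cook_pct, 2))  # 97.06
print(round(tol_pct, 2))   # 98.04
```

So the 97%/98% discrepancy is not an arithmetic slip by Cook et al; it is a difference over which papers count as taking a position.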

Wrote a long comment yesterday. I'd have to edit it a bit. But since Richard published a draft, this gets priority:

> Draft of comment on 97% consensus paper for open review http://t.co/9BVOt8w7qX

https://twitter.com/RichardTol/status/341144213162962945

I've posted 11 comments on the Introduction:

https://twitter.com/nevaudit

That's just a start.

***

> With the use of "climate change", [...]

Searching for an endorsement on AGW #presumably requires searching for authors who mentioned "climate change" in their abstracts. Not only as a restrictive keyword, i.e. to ensure topicality, but **mentioned**.

That ought to get relevant results.

Also note that Shub already admitted that most of the hits in Cook & al come from "global warming", and "global climate change" was #presumably added for historical reasons.

Tol (and Shub here) are providing an exemplary demonstration of the Burden of Proof Fallacy - the "claim that what has not been proved false must be true". In this case, the assertions that a different search would, somehow, dramatically change the Cook et al 2013 results.

Cook et al searched terms that might reasonably be used in papers on global climate change (as opposed to regional, for example), searching in Web of Science (a database filtered for high-quality journals), and examined all results they obtained.

I would suspect that broadening the search to less-reputable journals, to those less relevant in the field, would in fact produce a less representative sampling of the relevant literature. However, those are only suspicions - the burden of proof is on Tol to demonstrate his hypothesis, not on Cook et al.

The extension of that particular fallacious argument is that Cook et al would have to preemptively answer an open set of hypothetical questions regarding sampling, data, motive, relevance, time of day and phase of moon - when the reality of the matter is that if someone has a potential issue they must demonstrate that issue is relevant. Not Cook et al.

Opponents of a paper can spin cobwebs of questions all day, but unless they demonstrate that those questions actually matter, their insisting those questions must invalidate the paper is nothing more than a logical fallacy - rhetoric without substance.

An even better summary of my previous comment regarding Burden of Proof fallacies:

"That which can be asserted without evidence, can be dismissed without evidence." - Christopher Hitchens

At this point I would classify the claims of non-representative sampling as assertions without evidence - evidence that a different sampling would be more representative of the field of climate science, evidence that a different sampling would provide a substantially different (not just by a percentage point, mind you) set of conclusions.

"Cook et al. searched for papers on “global climate change” or “global warming”. For the last 20 years or so, however, climate change has meant global climate change, unless otherwise specified."

Really? "Unless otherwise specified"? Some climatologists may be surprised to hear this.

I wonder if Tol has searched the "climate change" set of WoS returns to ascertain how many that were published in the last two decades are about historic climate change, or paleo climate change - and that have no other specifier in the abstract. I wonder also how Tol would propose to screen these from a study such as Cook et al in order to derive only publications relevant to the study.

When all is said and done, Cook et al explained, explicitly, that they surveyed a particular high-quality database using particular search terms, and "these were the results". As KR has pointed out above, most recently in a long train of similar pointings out, neither Tol nor his supporters have explained how broadening the search to include lesser journals increases the quality of representation of the expert literature. Similarly, Tol has not explained how broadening the search terms to include a very general search string improves the quality of representation of the expert literature.

All I can see is that Tol and his supporters are confabulating more search hits with greater precision in the study. Perhaps it's just me, but these look to me to be two different bunnies, and indeed to be bunnies and hares. Now, if this is the case, why would one want to so confabulate?

You are saying that Web of Science is "high-quality" whereas Scopus and Web of Knowledge are not? On the basis of what?

Do you have (any) prior experience in the use of these databases?

Cook framed intuitively reasonable search phrases. Which is why they appeal to low-information commenters who are ready to accept the results without investigation. However, Cook did not provide any profiling (other than raw counts) to demonstrate that his terms fetched an appropriate cross-section of articles and that his choice of database did not exclude a significant amount of literature. When these attractive search phrases are used with WoK, or Scopus, surprisingly enough, it turns out that a significantly large number of results are excluded. It turns out that key papers are excluded by the use of the chosen search phrases. It also turns out that the disciplinary distribution of results becomes lop-sided depending on the search phrases used.

Now, since Cook himself performed no such analysis of the kind that was done to show the above, those thinking like you may believe that it is up to Tol, or me, or Shollenberger, or anyone else to perform them. Unfortunately, it is up to Cook to perform the analysis and show, in the face of what's already out there, that his choice of search terms and search database does not invalidate or adversely affect his results.

What's already out there:

Shollenberger: raised several issues. Key among them, in my view, are questions about the classification scheme itself.

Tol: search terms and database skew results toward the non-representative. Ratings show clear evidence of bias.

Me: the authors left out thousands of papers fetched by their exact same search phrases.

You can bounce this hot potato amongst yourselves, but the fact remains that you don't have answers to these questions. Just nonsense about who should be answering questions (you are not in middle school). It is your paper. At least have some defense ready, because it doesn't look like these Qs are going away.

For those interested, you can scrape the consensus database with the following code:
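A minimal sketch of the tallying step one would run after scraping, assuming Cook et al's seven-level endorsement scale (levels 1-3 endorse AGW, 4 takes no position, 5-7 reject); the function name and the sample ratings below are illustrative, not the actual database:

```python
from collections import Counter

def consensus_share(ratings):
    """Percentage of position-taking abstracts that endorse AGW,
    on Cook et al's scale: 1-3 endorse, 4 no position, 5-7 reject."""
    counts = Counter(ratings)
    endorse = counts[1] + counts[2] + counts[3]
    reject = counts[5] + counts[6] + counts[7]
    return 100 * endorse / (endorse + reject)

# Illustrative ratings only -- not the real data.
sample = [1, 2, 3, 4, 4, 4, 2, 7, 3, 2]
print(round(consensus_share(sample), 1))  # 85.7
```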

It is noteworthy that the number of results returned in Cook et al is a balance between coverage (sample size) and degree of effort - with Cook et al being one of the largest sample sizes so far presented on this issue (cf Oreskes, Doran, Anderegg, etc). In fact, I would consider the only analysis that comes close in size to be James Powell's, who found a mere 0.17% explicitly rejecting AGW. It is interesting, and extremely supportive of the results, that the Cook et al percentages match those of several much smaller samples.

Scopus might present a more or less representative sampling. Web of Knowledge might provide a more or less representative sampling. Different search terms will include/exclude both papers endorsing and rejecting the AGW consensus.

But you cannot tell what the results would be until you actually do the work. In the meantime, raising multiple claims without evidence (as you and Tol have done) is nothing more than empty rhetoric.

Statistical sampling is never unbiased, plus or minus, on any issue - I'll certainly agree to that. But that's not an issue if the bias error is too small to change the conclusions. You and Tol have not given any evidence whatsoever (despite Tol's noted differences in coverage with different searches) in that regard. None.

You have raised issues without any evidence whatsoever that the differences you trumpet actually change the conclusions. The burden of proof is on you - and you have not met it.

I'm having a hard time following Richard Tol's thinking. (It's much easier following Anthony Watts' inability to think.)

Still, I've had a go at deciphering some of his criticisms and found them wanting. I didn't get into the sample size issue because that's just too, too silly for words (even words on a snark blog).

Apart from his bad arithmetic (not able to work out percentages), Richard argues that 77% of climate science papers should be rated "neutral" automatically. Even though there is no "neutral" category in Cook13. He's saying to toss out all the climate impacts papers and all the mitigation papers and only look at science methods and paleo papers.

1) As noted above, he does not take note of the mini-survey distinguishing "uncertain on AGW" from "No AGW position" papers.

2) He does not take note of the resolution procedure for inconsistent ratings, nor the reporting of the number of inconsistent ratings in the paper.

3) He does not note that the abstracts in the TCP database are filed in order of year of publication so that the "drift towards negative skew" (ie endorsement) is a result reported in the paper, and others, not evidence of "tiredness" resulting in inconsistent ratings.

4) The data on auto-correlation may also be a result of filing in date order. It could be, for example, the result of a new "skeptical" hypothesis being floated giving rise to a number of new papers in support in short order; or due to the tenure of a "skeptic" friendly editor at a journal with the same result.

5) As the paper states:

"Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden." (My emphasis)

This is another case of Tol not reading the paper carefully. As the abstracts were randomly distributed, their order of filing does not reflect the order of rating. Ergo Tol's analysis of skewness and autocorrelation cannot uncover the issues he purports it to uncover.
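The point can be illustrated numerically: a date-ordered sequence with a trend shows strong lag-1 autocorrelation, but randomly distributing the same items - as the rating system did - destroys the serial structure that an order-based diagnostic looks for. A sketch with invented data and a fixed seed:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a numeric sequence."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

# A date-ordered sequence with a trend, standing in for, say,
# rising endorsement levels over time in the database.
ordered = [i / 100 for i in range(200)]
print(lag1_autocorr(ordered) > 0.9)  # True: strong serial structure

# Randomly distributing the same items, as the rating queue did.
random.seed(0)
shuffled = random.sample(ordered, len(ordered))
print(abs(lag1_autocorr(shuffled)) < 0.3)  # True: structure destroyed
```

Any skew or autocorrelation computed over the filing order therefore tells you about the database's date ordering, not about rater fatigue.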

6) Tol still does not show that the different coverage of different search terms results in different levels of endorsement. He merely assumes it. He certainly does not show they are likely to have a significant impact on the final result.

7) Worse, while Tol reports his intuitive beliefs about potential bias when the perceived bias is towards endorsement of AGW, he does not report those beliefs when the perceived bias is opposite in sign. That is, his discussion is itself biased in a way which is intended to make Cook et al look worse, but in a way which does not reflect his true opinions.

And Willard, if Tol should happen to read this - fine. If not, let him submit a defective comment that only proves his superficial reading of the paper. He is in the business here of generating talking points, not analysis. I see no reason to aid him in that endeavour.

"The neutrality of the abstract raters can also be tested in a different way. The majority of the selected papers are not on climate change itself, but rather on its impacts or on climate policy. The causes for climate change are irrelevant for its impact. Therefore, impact papers should be rated as neutral. Emission reduction policy would be pointless if climate change were not human-made, so policy papers can be rated as an implicit endorsement of the hypothesis of anthropogenic climate change. However, a paper discussing, say, carbon capture and storage cannot be taken as evidence for global warming. These papers should therefore also be rated as neutral." (My emphasis)

The notion that Cook et al represents a survey of the evidence of AGW is fundamentally misconceived. It again shows a shallow reading of the paper at best. What the paper actually says is:

"Through analysis of climate-related papers published from 1991 to 2011, this study provides the most comprehensive analysis of its kind to date in order to quantify and evaluate the level and evolution of consensus over the last two decades."

The consensus being evaluated is the consensus of scientific opinion as expressed in the literature. In Cook et al, the degree of endorsement of AGW in abstracts is used to test the degree of penetration of endorsement of AGW into the scientific literature. If we allow that expertise correlates with publication rates, it also serves as an expertise-weighted proxy of the consensus of scientific opinion. It most certainly is not, and is not intended to be, a poor man's IPCC - ie, an assessment of the state of scientific evidence by simple paper count (which would be silly beyond belief).

This is so fundamental a point that Tol's failure to grasp it renders all his various comments on the paper irrelevant. How much so can be seen from the comment quoted above. Testing his claim, I searched the abstract database at SkS with the terms "carbon", "capture" and "storage". By doing so I turned up the paper, "A model of the CO2 capture potential", the opening lines of whose abstract read:

"Global warming is a result of increasing anthropogenic CO2 emissions, and the consequences will be dramatic climate changes if no action is taken. One of the main global challenges in the years to come is therefore to reduce the CO2 emissions."

It is possible to quibble with the Cook et al rating of (1) for this abstract, given that there is no explicit quantification. I would not, and do not see how it is possible to, dispute that this is an endorsement of AGW. Yet according to Tol it is not possible that this paper endorses AGW.

The comparison of his statement and that abstract gives a fair measure of the value of his critique.

Tom: ["Global warming is a result of increasing anthropogenic CO2 emissions, and the consequences will be dramatic climate changes if no action is taken. One of the main global challenges in the years to come is therefore to reduce the CO2 emissions."]

"I would not, and do not see how it is possible to, dispute that this is an endorsement of AGW"

It is not that difficult to understand Tol's point. The acceptance of the orthodox position on climate change can be assumed in papers dealing with carbon capture. Why else would anyone practice carbon capture otherwise? The endorsement in this instance is merely a fallout. If you pad your sample with such papers, and classify them as 'endorse', your total proportion of endorse will automatically be high. These papers, however, are all about carbon capture and not about 'global warming', but happened to get roped in because of inclusion of the phrase.

On the other hand, consider papers that contain "global warming", or "global climate change", and are actually about climate change. A given paper on climate change can go one of several ways - the paper could accept the orthodox position, be neutral toward it, reject it, question some aspect of it, question some aspect which supports it, etc.

Papers about 'carbon capture' do not have that freedom, as a given. Not that there's anything wrong with that, but that's how they are.

Tol is claiming that the Cook group's search traps a whole bunch of such papers. The data is presented in his graphs. You can look at Chris Maddigan's comments at Bart V's blog - he essentially raised the same questions. Lucia Liljegren asked the same question w.r.t. biofuels - there are 177 papers relating to biofuels.

Leave aside the Cook group paper for a moment. If there are indeed lots of papers on biofuels and carbon capture in the literature, what does it tell us? These papers are a result of a perceived consensus, i.e., the authors think there is a consensus formed already that they can invest in, or research measures *that would otherwise make little to no sense*.

How can such papers be counted as contributing to the consensus in your study? These guys nodded their heads assuming the very thing you are trying to study. In other words, this is a cause-effect confusion.

> Why else would anyone practice carbon capture otherwise? The endorsement in this instance is merely a fallout.

I'd rather say commitment. Or working hypothesis. Or background assumption.

Yes, Virginia, in general, when one uses a concept, usually it is to commit to it, to work with it, or at the very least assume its reality. And yes, Virginia, these background assumptions can be measured against the papers that use the concept to deny its relevance, its existence, or our capacity to deal with it.

This should not be conceived as a voting scheme, but as a measure of a consensus over a research program, however imperfect this measure might be. And this is a natural way to conceive what "to endorse" means, however much anyone wants to indulge in pragmatic considerations. See how Chris Maddigan does not even get out of his line of 20:

1) Tol's misunderstanding of the meaning of "endorses" in Cook et al creates more errors than just those explicitly discussed above.

2) Tol's argument is logically distinct from that made by Lucia. By Lucia's argument, papers in the mitigation and impacts categories are not incorrectly rated, they are merely irrelevant. Because they are not incorrectly rated, Tol's "correction" of the methods and paleoclimate ratings is without justification.

3) Lucia's argument is misguided in any event. Scientists writing "impacts" or "mitigation" cannot be assumed to have "accepted the consensus" merely based on what they read in the local rag. They have, presumably, read sufficient of the primary literature to understand the theory, and are reasonably qualified to assess it. Therefore their endorsement represents a scientifically informed opinion and is relevant to a paper that seeks to map scientifically informed opinion.

It is only if the endorsing papers are taken as a proxy for scientific evidence that Lucia's restriction makes sense, and such an interpretation would be a gross misinterpretation of Cook et al.

The amusing thing here is that publishing papers that say "if there is global warming then this happens", or "if there is global warming here is what we have to do", or "if there is global warming here is what we have to do to correct it", is a flat endorsement by the editors that there is global warming. Otherwise why publish the stuff?

Tom, you are correct that Lucia's argument is distinct. It is, however, related to Tol's.

If a carbon capture advocate/scientist writes the sentence you quote in the abstract, it cannot be taken to represent 'endorse' for the reason I state above. A carbon capture person cannot have any other opinion.

The second possibility is that a carbon capture guy can simply be agnostic. He need not agree with the orthodox opinion to carry out his research, as his expertise is not related to it.

The first paragraph is my assessment. Such papers should be rated as 'neutral'. In this, I concur with Tol.

Please note, the above is independent of the content of the abstract itself.

The assessment exercise is subjective. It should carry minimal interposition of the volunteer rater's interpretive content. The same abstract you quoted above represents "scientific opinion" to you. It represents a 'background assumption' to willard. The classification exercise is therefore shown to consist of reading the abstract and *imposing an external label*.

If you do that, the end statistics will merely reflect the imposed labels. That is Tol's point.

Are there any carbon capture papers that reject orthodox position in the paper list?

Shub's position is illogical and makes absolutely no sense. He has nothing worthwhile to add. We already know that denialists will hold on to the belief that there is no scientific consensus regardless of the volume of evidence which makes it clear that such a consensus exists. Shub's using the same argument structure used by creationists to "prove" that there's no consensus in biology regarding evolution.

DNFTT. Really. Let him live in his through-the-looking-glass world in peace. He is a nobody.

Shub, there *are* papers on carbon capture that are categorised as 'endorsement' and others that are categorised as 'no position'. If you or Tol had looked at the abstracts themselves, instead of jumping into the fray and arguing from a position of ignorance - both of you, then you would both know this.

There are no papers of any kind rated as 'neutral' because that is not a category used by the Cook study. Again, if you'd read the study you should have known this.

This idiocy has firmed up any opinion I had of Tol, with whose work I was not all that familiar. I was vaguely aware that although he accepts AGW as real and dangerous, he wants to dump it on future generations to cope with and clean up our mess. So I figured he had a very different set of values and moral code to most people. However I wasn't aware that he was so ridiculously sloppy in his thinking.

(It hasn't changed my opinion of Lucia or Shub or any of the other denier rabble/obsessives. I wouldn't have expected any better from them.)

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

As the level of active research and specialization in climate science increases, so does agreement with those two primary questions - 97% for climatologists who are active publishers on climate, ~90% for active publishers on climate, decreasing to ~57% for the general public. The two areas with least agreement were economic geology (47%) and meteorology (64%).

To quote their conclusions:

"It seems that the debate on the authenticity of global warming and the role played by human activity is largely nonexistent among those who understand the nuances and scientific basis of long-term climate processes."

"If a carbon capture advocate/scientist writes the sentence you quote in the abstract, it cannot be taken to represent 'endorse' for the reason I state above. A carbon capture person cannot have any other opinion.

The second possibility is that a carbon capture guy can simply be agnostic. He need not agree with orthodox opinion to carry out his research as his expertise is not related to it."

The conclusion of your first paragraph is incorrect.

The third possibility is that someone commenting on carbon capture and storage is critiquing such work because they do not believe that human-caused global warming exists in the first place.

The fourth possibility is that someone commenting on carbon capture and storage is critiquing such work because they do not believe that it will function effectively, even though such authors may accept the human cause of global warming.

> Figure 4 shows the 50 most cited papers in the larger sample. Only 17 of those are included in the smaller sample, or 34%. This suggests that the narrower query oversampled the most influential papers.

"Are there any carbon capture papers that reject orthodox position in the paper list?"

And if there aren't?

One has to be careful that one does not draw qualitatively incorrect conclusions. A negative result is still a result.

This is one of the facts of scientific life that sometimes skews results of whole disciplines. It's bad enough that under-reporting of negative results can skew quantitative interpretation, let alone that it can be used for invalid a priori (and/or post hoc) qualitative re-interpretation.

The research paper is very clear and straightforward. It's not esoteric or full of jargon. Anyone who has completed primary school should be able to understand the work and the analysis. To anyone who's completed high school it should be a breeze.

I'm finding it hard to fathom what sort of a mind it takes to *not* understand the work, like shub and Tol and Watts and his band (most of whom, going by their comments, haven't even read the paper).

It's also telling that no denier is prepared to do a review of the literature for themselves. The tools are all there. They can either do a lit search themselves or go through the list of papers from Cook13 that's on the ERL website and do their own classification. A team of them could knock it off fairly quickly if they put their minds to it. The hard work has already been done, e.g. the categories, the list of papers, documentation of the abstracts, etc.

John Cook has given them a lot of tools on SkepticalScience to make it easy to do an analysis, showing the abstract for each paper plus links to the electronic copy.

Deniers like to pretend they are too dumb/stupid (and some of them may be). IMO it's mainly because they are too chicken (if they are ordinary follower/deniers) and would have to change their thinking. Or because they'd have to change their messaging (if they are leader/disinformers).

A search for carbon capture shows one paper for "explicitly endorses and ...>50%". It is the paper Tom Curtis posted above.

Another paper about methane hydrates which says that methane hydrates could be used with carbon capture to go to a zero-carbon economy is classified as No position.

It doesn't make any sense.

Here we are, with Curtis stating that scientists writing mitigation papers can be presumed to have read 'sufficient of the primary literature to understand the theory'. They are good at carbon capture theory. How the heck can they 'endorse and quantify' AGW?

And then we have an endorsement from gas hydrate guys - an 'implicit' one. But they are neutral. (!)

If one assumes raters are blindly reading abstracts and rating them solely on a textual basis, there are abstracts with 'carbon capture' which make no mention of AGW/GW/CC/GCC whatsoever, but are yet rated '3'.

These ratings are just ad-hoc nonsense. It is virtually impossible for carbon capture related papers to be 1, or 2, or 5, 6 or 7. They cannot be 4b, or 4a, because the very reason for carbon capture is an assumed harm due to carbon dioxide. It is a mess.

Eli Rabett, I agree with both your posts above. Any carbon capture related paper immediately means a host of assumptions can be automatically made about the researchers and their work. Which is exactly why they are completely non-informative about any consensus-related position.

How can a paper about carbon capture 'explicitly endorse and quantify AGW >50% of second half of 20th century'?

Bernard, you are right that papers could appear criticizing carbon capture for the points you mention. However, they don't seem to have, in a search of WoS of the past 20 years. I just checked WoK. I don't seem to see any. I would modify my statement above. It is virtually impossible for a carbon-capture related paper to be 1, or 2, i.e., explicit endorsement. It is very unlikely for any of them to be 5, 6, or 7.

Take the last statement. Its assumption can be violated only by paper(s) by authors who have sufficient expertise to contribute an original article in the field of carbon capture, and incidentally happen to mention their skepticism in some way or form, in the abstract.

"If a carbon capture advocate/scientist writes the sentence you quote in the abstract, it cannot be taken to represent 'endorse' for the reason I state above. A carbon capture person cannot have any other opinion."

Actually, it is perfectly possible for a scientist to research an issue on a hypothetical basis. They may, for example, reject AGW but still research CCS in order to demonstrate that it is ruinously expensive and not a viable means for the mitigation they believe to be unnecessary in any event. Further, with regard to CCS, they may reject AGW but accept Ocean Acidification and pursue CCS as a means to eliminate the latter without regard to its effects on the former.

That is why Willard's comment is incorrect. A stated endorsement in the abstract represents more than a (potentially hypothetical) "working assumption".

It is also why rating was done on the actual contents of the abstract rather than potentially mistaken beliefs about the background assumptions of the research (and why 43% of mitigation papers were rated as "no position").

Further, both my and Willard's comments were about the basis for endorsement in the paper, not about the basis for rating the paper as "endorsing AGW". Therefore your comments on subjectivity are a non sequitur. In fact, on the basis of that non sequitur you are insisting that abstracts be rated as "no position" regardless of the contents of the abstract.

If a researcher mentions a concept C to refute it, whatever the grounds, we must assume that he'd mention the refutation in the abstract. If the relevance of his work relies on C being the case, he does not need to endorse it explicitly: this is what I (via Neal at Bart's) mean by a working assumption. Even if he were absolutely agnostic, his work would not. This is one reason to add a layer of validation with self ratings.

You are right to say that C could be more than a working assumption. My point was to provide a basic understanding of what we call implicatures. Scientists don't write papers where the point is discovered in the last sentence.

I'll distill your points in my tweets. I think the future success of this kind of crowd sourcing depends upon getting #ALLTHECONCERNS and the stats on the table as soon as possible.

"You are right that papers could appear criticizing carbon capture for the points you mention. However, they don't seem to have, in a search of WoS of the past 20 years. I just checked WoK. I don't seem to see any. I would modify my statement above. It is virtually impossible for a carbon-capture related paper to be 1, or 2, i.e., explicit endorsement. It is very unlikely for any of them to be 5, 6, or 7."

Returning to Tol's opus, and looking specifically at his analysis of disciplinary biases relative to a WoS search, the first thing I notice when analyzing his figures is that:

1) Approximately half of all papers come from disciplines which are under represented according to Tol's analysis, and (therefore) approximately half come from disciplines which are over represented according to Tol's analysis (5,883 vs 5,985 when weighted to match original Cook et al sample: sum less than Cook et al total due to rounding errors); and

2) The total under representation in disciplines which are under represented approximately equals the total over representation in disciplines which are over represented equals approximately 1700 abstracts (1711 vs 1714 when weighted to match original Cook et al sample).

Even if the sampling of disciplines by the search term "global climate change" were random with respect to sampling by the search term "climate change", we would expect some disciplines to be over or under represented just by chance. In fact, we would expect approximately equal but non-zero over and under representation, just as has been found. Curiously, despite Tol's much vaunted statistical skill, he performs no test to see if the over and under sampling is random. That he needs to is shown by the very equal numbers I found.

More importantly, because of the very equal sample sizes from under represented and over represented disciplines, there is very little scope for significant bias in the Cook et al results from disciplinary bias. Specifically, because the weighted mean percentage of endorsements relative to (endorsements plus rejections) over all disciplines in Cook et al is 98%, there is very little scope for a significant difference in that percentage between under represented and over represented disciplines. If the percentage falls too low in under represented disciplines, that forces the percentage above 100% in over represented disciplines (and vice versa). That restricts the potential difference in percentages between the two categories to 4 percentage points at most; and the percentage for the total sample, adjusted for variation in the percentage between disciplines, to a range of 97.4-98.6% at most.

I do not think that range calls Cook et al's result into question.
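For bunnies who want to check the arithmetic, the bound above can be reproduced in a few lines. This is only a sketch: it uses the rounded figures quoted in the comment (5,883 vs 5,985 abstracts, an overall weighted endorsement percentage of 98%) and treats the under- and over-representation totals as a single symmetric shift of 1,711 abstracts.

```python
# Sketch of the disciplinary-bias bound, using the numbers quoted above.
# All figures are from the comment; this is not Cook et al's own calculation.
n_under, n_over = 5883, 5985   # abstracts from under-/over-represented disciplines
shift = 1711                   # total under-representation ~= total over-representation
overall = 98.0                 # weighted endorsement % across the whole sample
total = n_under + n_over

def forced_p_over(p_under):
    # The 98% overall weighted mean pins down the over-represented half's
    # percentage once the under-represented half's percentage is chosen.
    return (overall * total - p_under * n_under) / n_over

def adjusted_mean(p_under):
    # Re-weight as if `shift` abstracts moved back to the under-represented half.
    return ((n_under + shift) * p_under
            + (n_over - shift) * forced_p_over(p_under)) / total

# Neither half's percentage can exceed 100%, which caps how far apart they can be.
p_under_min = (overall * total - 100.0 * n_over) / n_under  # ~96.0
lo = adjusted_mean(p_under_min)   # ~97.4 (under-represented half at its minimum)
hi = adjusted_mean(100.0)         # ~98.6 (under-represented half at 100%)
```

The extremes land on roughly 97.4% and 98.6%, and the gap between the two halves tops out near 4 percentage points, matching the figures in the comment.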

Given that Tol's disciplinary analysis in WoS is so underwhelming when analysed, absent detailed analysis of his other examples I see no reason to take them seriously. He is shown to be generating talking points, not analyses.

Bernard, point taken. But it doesn't profoundly alter my conclusion. Carbon capture papers, i.e., papers dealing with the technical, engineering and scientific aspects, would implicitly or otherwise accept the orthodox position. There are abstracts where the phrase appears incidentally, which can go a handful of ways, but those are few. I hope you are not referring to them.

This highlights what I said: in any abstract where you infer 'implicit acceptance' or 'implicit rejection', there is interposition of an interpretive layer. These categories cannot otherwise be populated, and reflect merely what the abstract was inferred to implicitly support. The implicit 'endorse' category forms the bulk of the 'endorse' numbers.

- any discussion about "this subject implies such and such" that is void of specific examples is empty pontification, which may strengthen the belief that its author is trying to settle an empirical matter by means of his armchair;

- Richard should beware what he's wishing for, for his a priori classification of papers according to keywords might play against his concerns; i.e., if you think about it, he's eliminating rating altogether and replacing it with armchair thoughts about research subjects.

If we were to take Richard's remarks seriously, following keyword trends should suffice.

1) Increasing CO2 in the atmosphere cools the atmosphere;
2) Increased CO2 in the atmosphere increases ocean acidity;
3) Expected atmospheric CO2 levels in 2050 will be sufficient to significantly harm juvenile fish, resulting in a collapse of Earth's fisheries and widespread hunger; and
4) CCS is the most economic way to avoid that possibility.

I am not saying such a strange animal exists; but it is logically possible. Therefore it does not follow that because a person is researching CCS that they endorse AGW.

Further, even if it did that would not be sufficient to rate an abstract on CCS as a 3 (or 2, or 1). Abstracts were rated on the contents of the abstracts, not on the presumed motives of researchers; and hence not in the implications of the presumed motives of researchers.

Shub @5:23 AM, like it or not, there is an "interpretive layer" even in explicit statements except in formal logics. That is the consequence of the compromise between brevity and precision in communication (among other things). It does not follow that the "interpretive layer" is so opaque as to allow vastly different ratings of papers. We can reasonably be in genuine doubt as to whether a paper should be rated "3" or "2", or "3" or "4"; and minor differences in rating by different people are inevitable as a result. We are unlikely to be in reasonable doubt as to whether a paper should be rated "2" or "4", and rating differences of 2 or more on the scale will normally come down to clear faults in ratings (as, for example, when Tol interprets "endorse" as meaning "is evidence of").

Cook et al included a test of the impact of subjectivity on ratings by comparing the percentage of endorsements relative to endorsements and rejections for both abstracts on which both raters initially agreed, and abstracts on which they initially disagreed. In the supplementary material they show the former is 98.4% whereas the latter is 97.8% (combined: 98%). The impact on the results is, therefore, minimal.

Furthermore, I'll add that this discussion rests on how specific ABSTRACTS got written by the researchers, how they got read by the raters, how the authors rated their own PAPERS, and how all these well-intentioned people understood specific GUIDELINES.

To that effect, rating fatigue may be noteworthy, but it should be discussed. One does not simply throw dust in the air when writing a formal comment. When discussing this, Richard should bear in mind conservativeness of the ratings compared to the self-ratings.

Some, but not me, may wonder why Richard is elusive on that relationship in his comment.

Tom Curtis, your answer is rambling, but fails to answer my contention. Moreover, showing ridiculously high concordance rates between raters only makes the classification process look suspect. It seems Cook is incapable of coming up with any statistic less than 97% for anything.

The point is, the interpretation of the rater becomes important only in the implicit category. In others, the abstract text carries the information directly required for classification, because, as you said, raters were instructed to just read the literal text and classify them. I think they did a good job - they identified a large number of neutral abstracts. They also appear to have liberally applied the implicit label.

Tom Curtis' last two comments convey a simple point that it is not impossible to imagine research that goes against our plausibility assumptions. This refutes your contention. Cf. Bernard J's point about empty boxes.

The point also relies on interpreting a classification task as a voting mechanism, where all the options should be available, which is yet to be justified.

***

> It seems Cook is incapable of coming up with any statistic less than 97% for anything.

Until we come up with a parser to replace the raters, the word "directly" might be a bit farfetched. Besides, what would be the concordance rates of such parsers? By parser, I do not wish to exclude machine learning stuff.

***

> They also appear to have liberally applied the implicit label.

The numbers we have justify that we should rather speak of a conservative application.

Sooner or later, Shrub might tell us what he thinks about such conservativeness.

Environmental impacts from incineration, decentralised composting and centralised anaerobic digestion of solid organic household waste are compared using the EASEWASTE LCA-tool. The comparison is based on a full scale case study in southern Sweden and used input-data related to aspects such as source-separation behaviour, transport distances, etc. are site-specific. Results show that biological treatment methods - both anaerobic and aerobic, result in net avoidance of GHG-emissions, but give a larger contribution both to nutrient enrichment and acidification when compared to incineration. Results are to a high degree dependent on energy substitution and emissions during biological processes. It was seen that if it is assumed that produced biogas substitute electricity based on Danish coal power, this is preferable before use of biogas as car fuel. Use of biogas for Danish electricity substitution was also determined to be more beneficial compared to incineration of organic household waste. This is a result mainly of the use of plastic bags in the incineration alternative (compared to paper bags in the anaerobic) and the use of biofertiliser (digestate) from anaerobic treatment as substitution of chemical fertilisers used in an incineration alternative. Net impact related to GWP from the management chain varies from a contribution of 2.6 kg CO(2)-eq/household and year if incineration is utilised, to an avoidance of 5.6 kg CO(2)-eq/household and year if choosing anaerobic digestion and using produced biogas as car fuel. Impacts are often dependent on processes allocated far from the control of local decision-makers, indicating the importance of a holistic approach and extended collaboration between agents in the waste management chain.

> A direct comparison of abstract rating versus self-rating endorsement levels for the 2142 papers that received a self-rating is shown in table. More than half of the abstracts that we rated as 'No Position' or 'Undecided' were rated 'Endorse AGW' by the paper's authors.

Apart from all the underlying politics and ideologies confusing the debate about this paper, it seems to be that much of the criticism boils down to questions about 1) Survey design and implementation; and 2) Applied statistics. I don't necessarily agree with Tol and in particular the 'tone' of his response, but it does seem to me that the economist (Tol) has a far better grasp of statistics than the climate scientists and other supporters of the Cook et al paper. If nothing else, I think the science community needs to start requiring more statistics classes for its students!

"it seems to be that much of the criticism boils down to questions about 1) Survey design and implementation; and 2) Applied statistics."

No, the criticism boils down to "no consensus" being the political foundation for the "no action" policy touted by conservative politicians here in the US and elsewhere.

Republican politicians reference the supposed "debate" and lack of consensus among scientists continuously as justification for inaction, and the more extreme claim that mainstream climate science is a fraud (a fraud encompassing 97% of scientists who take a position is a much harder sell than a fraud involving a non-consensus minority position).

It is not only this paper that's been attacked, Oreskes's original work was equally attacked and for the same reason:

Those who oppose action can't afford to let the fact that there's a broad consensus behind modern climate science become established in the public mind. Today, the public believes that scientific controversy around basic climate science still exists, and this gives politicians cover.

Am I implicitly accusing Tol of being less than honest in his criticism of this paper?

Rabett Run

Contributors

Eli Rabett

Eli Rabett, a not quite failed professorial techno-bunny who finally handed in the keys and retired from his wanna be research university. The students continue to be naive but great people and the administrators continue to vary day-to-day between homicidal and delusional without Eli's help. Eli notices from recent political developments that this behavior is not limited to administrators. His colleagues retain their curious inability to see the holes that they dig for themselves. Prof. Rabett is thankful that they, or at least some of them occasionally heeded his pointing out the implications of the various enthusiasms that rattle around the department and school. Ms. Rabett is thankful that Prof. Rabett occasionally heeds her pointing out that he is nuts.