A new survey of scientists

Dennis Bray and Hans von Storch have been surveying climate scientists for a number of years with the reasonable aim of seeing what the community thinks (about the IPCC, climate change, attribution etc.). They have unfortunately not always been as successful as one might like. Problems have ranged from deciding who is qualified to respond, to questions that were not specific enough or that could be interpreted in very different ways, to losing control of who answered the questionnaire (one time the password and website were broadcast on a mailing list of climate ‘sceptics’). These problems have meant that the results were less useful than they could have been and have occasionally been used to spread disinformation. How these surveys are used obviously affects how willing scientists are to participate: if your answers are misinterpreted once, you will be less keen next time. Others have attempted similar surveys, with similar problems.

As people should know, designing truly objective surveys is very tricky. However, if you are after a specific response, it’s easy to craft questions that will favour your initial bias. We discussed an egregious example of that from Steven Milloy a while ago. A bigger problem is not overt bias, but more subtle kinds – such as assuming that respondents have exactly the same background as the questioners and know exactly what you are talking about, or simply using questions that don’t actually tell you what you really want to know. There are guides available to help in crafting such surveys which outline many of the inadvertent pitfalls.

Well, Bray and von Storch have sent out a new survey.

The questions can be seen here (pdf) (but no answers, so you can’t cheat!), and according to Wikipedia, the survey respondents are controlled so that each anonymised invite can only generate one response. Hopefully therefore, the sampling will not be corrupted as in past years (response rates might still be a problem though). However, the reason why we are writing this post is to comment on the usefulness of the questions. Unfortunately, our opinion won’t change anything (since the survey has already gone out), but maybe it will help improve the interpretations, and any subsequent survey.
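For readers curious what “each anonymised invite can only generate one response” amounts to in practice, here is a minimal sketch of a single-use token scheme of the sort such a survey might use. The class and method names are hypothetical illustrations, not Bray and von Storch’s actual system:

```python
import secrets

class SurveyInvites:
    """Sketch of one-response-per-invite control: each respondent gets
    a unique anonymised token, and a token can be redeemed only once.
    (Hypothetical illustration, not the survey's real implementation.)"""

    def __init__(self):
        self.issued = set()    # tokens that have been sent out
        self.redeemed = set()  # tokens already used to submit a response

    def issue(self):
        # Generate an unguessable, anonymous token for one invitee.
        token = secrets.token_urlsafe(16)
        self.issued.add(token)
        return token

    def submit(self, token, answers):
        # Reject unknown tokens (e.g. a URL leaked to a mailing list)
        # and tokens that have already produced a response.
        if token not in self.issued or token in self.redeemed:
            return False
        self.redeemed.add(token)
        # ... store `answers` without linking them to the respondent ...
        return True
```

This prevents the kind of sampling corruption described above (a broadcast password letting anyone respond, or one person responding many times), though it does nothing about low response rates.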

There are too many questions in this survey to go over each one in detail, and so we’ll just discuss a few specific examples (perhaps the comments can address some of the others). The series of questions Q15 through Q17 typifies a key issue – precision. Q15 asks whether the “current state of scientific knowledge is developed well enough to allow for a reasonable assessment of the effects of turbulence, surface albedo, etc..”. But the subtext “well enough for what?” is not specified. Global energy balance? Regional weather forecasting? Climate sensitivity? Ocean circulation? Thus any respondent needs to form their own judgment about what the question is referring to. For instance, turbulence is clearly a huge scientific challenge, but how important is it in determining climate sensitivity, or radiative transfer? Not very. But for ocean heat transports, it might very well be key. By aggregating multiple issues into one question and not providing enough other questions to determine what the respondent means exactly, the answers to these questions will be worth little.

The notion of ‘temperature observations’ used in Q16 and Q17 is similarly undefined. Do they mean the global average temperature change over the 20th Century, or the climatology of temperature at a regional or local scale? Or its variability? You might think the first is most relevant, but the question is also asked about ‘precipitation observations’ for which a century-scale global trend simply doesn’t exist. Therefore it must be one of the other options. But which one? Asking about the ability of models to model the next 10 years is similarly undefined, and in fact unanswerable (since we don’t know how well they will do). Implicit is an assumption that models are producing predictions (which they aren’t – though at least that is vaguely addressed in questions 45 and 46). What ‘extreme events’ are being referred to in the last part? Tornadoes? (skill level zero), heat waves (higher), drought (lower), Atlantic hurricanes (uncertain). Because of this imprecision, the likely conclusion (that respondents feel that global climate models lack the ability to model extreme events) is again meaningless.

Q52 is a classic example of a leading question. “Some scientists present extreme accounts of catastrophic impacts related to climate change in a popular format with the claim that it is their task to alert the public. How much do you agree with this practice?” There is obviously only one sensible answer (not at all). However, the question neither defines what the questioners mean by ‘extreme’ or ‘catastrophic’, nor who those ‘scientists’ might be or where they have justified such practices. The conclusion will be that the survey shows that most scientists do not approve of presenting extreme accounts of catastrophic impacts in popular formats with the aim of alerting the public. Surprise! A much more nuanced question could have been asked if actual examples were used. That would have likely found that what is considered ‘extreme’ varies widely and that there is plenty of support for public discussions of potential catastrophes (rapid sea level rise for instance) and the associated uncertainties. The implication of this question will be that no popular summaries can do justice to the uncertainties inherent in the science of abrupt change. Yet this is not likely to have been the answer had that question been directly addressed. Instead, a much more nuanced (and interesting) picture would have emerged.

Two questions of some relevance to us are Q61 and Q62, which ask whether making discussions of climate science open to potentially everyone through the use of “blogs on the w.w.w.” is a good or bad idea, and whether the level of discussion on these blogs is any good. These questions are unfortunately very poorly posed. Who thinks that anyone has any control over what gets discussed on blogs in general? The issue is not whether that discussion should take place (it surely will), it is whether scientists should participate or not. If all blogs are considered, then obviously the quality on average is abysmal (sorry blogosphere!). If the goal of the question was to be able to say that the level of discussion on specific blogs is good or not, then specific questions should have been asked (for instance a list of prominent blogs could have been rated). As it is, the conclusion will be that discussion of climate science on blogs on the w.w.w. is a good idea but the discussion is thought to be poor. But that is hardly news.

One set of questions (Q68+Q69) obviously comes from a social rather than a climate scientist: Q68 asks whether science has as its main activity to falsify or verify existing hypotheses or something else; and Q69 whether the role of science tends towards the delegitimization or the legitimization of existing ‘facts’ or something else. What is one to make of them? There are shades of Karl Popper and social constructivism in there, but we’d be very surprised if any working scientist answered anything other than ‘other’. Science and scientists generally want to find out things that people didn’t know before – which mostly means choosing between hypotheses and both examining old ‘facts’ as well as creating new ones. Even the idea that one fact is more legitimate than another is odd. If a ‘fact’ isn’t legitimate, then why is it a fact at all? Presumably this is all made clear in some science studies textbook (though nothing comes up in Google), but our guess is that most working scientists will have no idea what is really behind this. You would probably want to have a whole survey just devoted to how scientists think about what they do to get anything useful from this.

To summarise, we aren’t in principle opposed to asking scientists what they think, but given the track record of problems with these kinds of surveys (and their remaining flaws), we do suggest that they be done better in future. In particular, we strongly recommend that in setting up future surveys, the questions should be openly and widely discussed – on a wiki or a blog – before the surveys are sent out. There are a huge number of sensible people out there whose expertise could help in crafting the questions to improve both their precision and usefulness.

112 Responses to “A new survey of scientists”

A question that gets a 50% response of “other” is VERY poorly put. Either the authors have not a clue as to what the range of responses will be, or opinion about the issue is so widely spread that there is no point in asking the question.

Ah – Eli Rabett – like your friend over at Deltoid, you are a little late into this fray this time. I could say much more but I have neither the time nor the inclination.

My foray into blogworld has been what … amusing I guess. But I must bid farewell to the Hatfields and the McCoys. Alas, I must return to more responsible duties – I am milk monitor of the week!

Before I go I would like to offer a little prayer, penned by Robertson Davies long before Digital Daze. It goes ‘God give me oblivion from the small small voices of small small people’

Amen to that

[Response: Direct interaction on blogs is not for everyone, but there are legitimate issues that have been raised here, and good suggestions for making improvements in the future. I’d still like to have had a discussion of exactly what you were trying to find in the more imprecise questions, but I hope that you are able to take those constructive criticisms on board. – gavin]

How did it turn out? We spent the main part of the survey exercise determining whether the random respondents (a) knew what we were polling about and (b) cared what we were polling about; concluded a negative to both and pulled the plug.

Re: 86
The indignation at not being notified of the Prometheus post is misplaced, if Bray and von Storch were not contacted before Gavin’s post on RC.

But for the rest, Gavin’s criticisms of the surveys and the misquotations are right on.

BTW, #96 looks like a typo:
“But given that ‘other’ was around 50% in both cases indicates that the question was well framed.” I assume Gavin meant “not well framed”. (Or does this refer to Gavin’s “question” about the questions?).

[Response: Yes. my bad. I’ve edited it above for clarity. – gavin]

While I agree that the von Storch/Bray survey has a lot of problems, here’s one that’s even worse. It was a public survey in Canada sponsored by the Frontier Centre for Public Policy.

The report’s title says it all:
“Immense Public Frustration with Politicians Over the Global Warming and Climate Change Debate”

Sample report heading:
“One-Sided Media Reporting the Main Apparent Driver of Public Opinion on Global Warming and Climate Change”

Believe it or not, FCPP actually has charitable status in Canada (as does the Fraser Institute).

I just took a look at the Bray/von Storch survey and it’s even worse than I thought. Here’s a really poor question:

36. The best approach to the mitigation of anthropogenic climate change would be based on:
voluntary actions (1) … enforced regulations (7)

In Canada, we have just been through an election campaign in which a proposed carbon tax was a major plank of the Liberals, one of the major parties (losers to the Conservatives as it happens). That proposal had the support of most economists and climate scientists in Canada.

But it’s not clear where such a proposal fits on the continuum in the question. After all, a tax shift onto carbon is not really regulation as such, and it would presumably work through market forces (i.e. the sum of “voluntary” actions). A much better question might be ask for degree of agreement with the general proposal for “putting a price on carbon” (for example, through cap-and-trade system, carbon tax or some combination).

Revisit my 103 and forget about the survey if these two objective pre-tests fail.

Global warming has become so far detached from science that public opinion surveys have a high probability of reporting “conditioning”, “belief” and “propaganda”. None of this is of much use in understanding science or in enjoying the benefits of nice warm weather.

“In Canada, we have just been through an election campaign in which a proposed carbon tax was a major plank of the Liberals, one the major parties (losers to the Conservatives as it happens). That proposal had the support of most economists and climate scientists in Canada.”

Most? Please… It had the support of Liberal climate scientists and economists.
Don’t forget the Prime Minister himself is an economist and he had no love for that wealth redistribution plan. You can’t take money off people all year long, then hope they make it past your cut-off date to get some of it back. Anyone whose primary cost is fuel ends up lending the government money for 12 months; for farmers and small transportation companies, that’s a lot of money to be lending for free. Also, it would never be revenue neutral – bureaucrats do not work for free.

Giving the benefit of the doubt, we will assume that you are a well qualified milk monitor. As a designer of surveys, however, it is quite clear that you are not very well qualified.

Eli’s point remains, one which was not made previous to this very small (but cute) Rabett posting. You did not even try to provide an answer but attempted to blow us off. If you had tried your little trick in a bar, there would have been very cross words and more, but, as you point out this is a very polite blog.

Thus, a reasonable person must conclude that you agree that a question that gets a 50% response of “other” is VERY poorly put, and that the authors have not a clue as to what the range of responses will be or opinion about the issue is so widely spread that there is no point in asking the question. But Eli repeats himself.

No doubt about it, the IPCC needs to be clearer in its diction and description of certain aspects of its report. It would be a shame if people continued to misconstrue, distort and minimize global warming due to flawed articulation of data, lack of presentation or minor errors overlooked through vague design of the reports themselves.