Weekend update: Science of Science Communication 2.0: on CRED "how to" manual & more on 97% messaging
Dan Kahan | 2015-03-01
http://www.culturalcognition.net/blog/2015/3/1/weekend-update-science-of-science-communication-20-on-cred-h.html

Here are some contributions to class discussion. The first is from Tamar Wilner, who offers reactions to the session 7 readings. I've posted just the beginning of her essay and linked to her page for the continuation. The second is from Kevin Tobia (by the way, I also recommend his great study on the quality of "lay" and "expert" moral intuitions), who addresses "97% messaging," a topic initially addressed in Session 6 but now brought into sharper focus by a recently published study that I myself commented on in my last post.

Health, ingenuity and ‘the American way of life': how should we talk about climate?

Tamar Wilner

Here was our assignment for week 7:

Imagine you were

President Obama about to make a speech to the Nation in support of your proposal for a carbon tax;

a zoning board member in Ft. Lauderdale, Florida, preparing to give a presentation at an open meeting (at which members of the public would be briefed and then allowed to give comments) defending a proposed set of guidelines on climate-impact “vulnerability reduction measures for all new construction, redevelopment and infrastructure such as additional hardening, higher floor elevations or incorporation of natural infrastructure for increased resilience”;

a climate scientist invited to give a lecture on climate change to the local chapter of the Kiwanis in Springfield, Tennessee; or

a “communications consultant” hired by a billionaire, to create a television advertisement, to be run during the Superbowl, that will promote constructive public engagement with the science on and issues posed by climate change.

Would the CRED manual be useful to you? Would the studies conducted by Feygina, et al., Meyers et al., or Kahan et al. be? How would you advise any one of these actors to proceed?

When I first read the CRED manual, it chimed well with my sensibilities. My initial reaction was that this was a valuable, well-prepared document. But on closer inspection, I have misgivings. I think a lot of that “chiming” comes from the manual’s references to well-known psychological phenomena that science communicators and the media have tossed around as potential culprits for climate change denialism. But for a lot of these psychological processes, there isn’t much empirical basis showing their relevance to climate change communication.

Of course, the CRED staff undoubtedly know the literature better than I do, so they could well know of empirical support that I’m not aware of. But the manual’s authors often don’t support their contentions with research citations. That’s a shame, because much of the advice given is too surface-level for communications practitioners to apply directly to their work, and the missing citations would have helped practitioners look more deeply into, and understand, particular tactics.

Kevin Tobia

A new paper reports the effect of “consensus messaging” on beliefs about climate change. The media have begun covering both this new study and some recent criticism of it. While I agree with much of the critique, I want to focus on a different aspect of the paper: the idea of a “gateway belief.” Thinking closely about this suggests an interpretation of the paper that raises important questions for future research – and may offer a small vindication of the value of 97% scientific-consensus reporting. You be the judge.

First, what is a “gateway” belief? In this context, what is it for dis/belief in scientific consensus on climate change to be a “gateway” belief? You might think this means that some perceived level of scientific consensus is necessary in order to hold certain other attitudes: belief that climate change is happening, belief that it is human-caused, worry about the issue, and support for public action. You can’t have these beliefs unless you go – or have gone – “through the gate” of perceived scientific consensus.

But here’s what the researchers actually have to say about the “gateway” model prediction:

We posit that belief or disbelief in the scientific consensus on human-caused climate change plays an important role in the formation of public opinion on the issue. This is consistent with prior research, which has found that highlighting scientific consensus increases belief in human-caused climate change. More specifically, we posit perceived scientific agreement as a “gateway belief” that either supports or undermines other key beliefs about climate change, which in turn, influence support for public action. ... Specifically, we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue (H1). In turn, a change in these key beliefs is subsequently expected to lead to a change in respondents’ support for societal action on climate change (H2). Thus, while the model predicts that the perceived level of scientific agreement acts as a key psychological motivator, its effect on support for action is assumed to be fully mediated by key beliefs about climate change (H3).

There are a couple things worth noting here. First, if the ultimate aim is increasing support for public action, you don’t have to go “through the gate” of the gateway belief model at all. That is, what is really doing the work is the set of (i) beliefs that climate change is happening, (ii) beliefs it is human-caused, and (iii) how much people worry about it. If we could find some other way to increase these, we could affect public support without needing to change the perceived level of science consensus (though, whether that change might itself affect the perceived level of scientific consensus is another interesting and open question).
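The posited chain (message to perceived consensus, to key beliefs, to support for action) can be sketched with a toy simulation; the path coefficients, noise levels, and variable scales below are all illustrative assumptions, not the study's data. In data generated this way, the message's effect on support runs entirely through the mediating beliefs, which is what "full mediation" (the authors' H3) means:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1104  # sample size reported for the study

# Hypothetical path coefficients for the posited chain:
#   message -> perceived consensus -> key beliefs -> support for action
message = rng.integers(0, 2, n)                       # 0 = control, 1 = "97%" message
perceived = 67 + 13 * message + rng.normal(0, 15, n)  # perceived % consensus
beliefs = 0.2 * perceived + rng.normal(0, 10, n)      # composite "key beliefs"
support = 0.5 * beliefs + rng.normal(0, 10, n)        # support for public action

# Full mediation: controlling for beliefs, the message should have
# no remaining direct effect on support in data generated this way.
X = np.column_stack([np.ones(n), message, beliefs])
coef, *_ = np.linalg.lstsq(X, support, rcond=None)
print(coef[1], coef[2])  # message coefficient near 0; beliefs coefficient near 0.5
```

Whether the real data fit this structure is exactly what the paper's path model is meant to test; the simulation just makes the causal claim concrete.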

Second – and more importantly – it is not the case that there is some “gateway belief” in perceived scientific agreement that is required in order to believe in climate change or human causation, to worry, or to support action. The model merely predicts that all of these are affected by changes to the perceived level of scientific agreement. Thus, it is incorrect to conclude from this research that the “gateway belief” of perceived scientific consensus is (as the analogy might suggest) some necessary belief that must be held in order to believe in human-caused climate change, worry about it, or support action. Instead, the gateway belief is something like a belief that tends to have, or normally has, some relation to these other climate beliefs.

When we put it this way, the “gateway belief” prediction might start to seem less interesting: there’s one belief about climate change (perceived consensus among scientists) that is a good indicator of other beliefs about climate change. But, what is more intriguing is the further claim the researchers make: this gateway belief isn’t just a good indicator of others, but increasing the gateway belief will cause greater belief in the others. Perceiving scientific non-consensus is the gateway drug to climate change denial!

This conception of the gateway belief illuminates a subtle feature of the researchers’ prediction. Recall:

we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue ....

The hypothesis here is that changing the level of perceived consensus causes changes in these other climate beliefs.

Some might worry about the relatively small effects found in the study. But if we recognize the full extent of the researchers’ prediction (that increased perceived consensus will raise the other beliefs AND that decreased perceived consensus will lower them), one possibility is that some participants with very high pre-test consensus estimates (e.g. 99% or 100%) actually reduced their estimates in light of the consensus messaging – and that this affected their other beliefs. It might seem unlikely that many participants held an initial consensus estimate upwards of 97% (particularly given the pre-test mean estimate of 66.98), but data on this would be useful.

There is a more plausible consideration that also weakens the worry about small effect size: would we really want participants (or people in the world) to change their beliefs about science in exact proportion to the most recent information they receive about expert consensus? To put it another way, the small effect size might not be evidence of the weakness of consensus messaging, but might rather be evidence of the measured fashion in which people weigh new evidence and update their beliefs.
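That "measured weighing" can be made concrete with a toy Bayesian update (the prior and the likelihood ratio here are illustrative assumptions): an agent who treats a single message as only weakly diagnostic should shift belief by only a few points.

```python
def update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence, computed in odds form."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# An agent 60% sure of human-caused climate change reads one consensus
# message that (by assumption) is 1.2x as likely if the proposition is true.
posterior = update(0.60, 1.2)
print(round(posterior, 3))  # 0.643 -- a small, measured shift from one message
```

On this picture, a modest post-message shift is what rational updating looks like, not evidence that the message failed.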

What would be helpful is data on the number (or percent) of participants whose beliefs increased from pre to post test. This would help distinguish between two quite different possibilities:

(1) very few participants greatly increased their beliefs in climate change after reading the consensus message

(2) many participants moderately increased their beliefs in climate change after reading the consensus message

There is no good way to distinguish between these from the data provided so far, but which of these is true is quite important. If (1) is true, consensus messaging can only be offered as a method to appeal to the idiosyncratic few. And we might worry that the few people responding in this extreme way are, in some sense, overreacting to the single piece of evidence they just received. If (2) is true, this might provide a small redemption of the “consensus messaging campaign.” A little consensus messaging increases people’s beliefs a little bit (and, since this is from just one message, the small belief change is quite reasonable).

Of course, even if (2) is true, much more research will be required before deciding to launch an expensive messaging campaign. For instance, suppose we discover that the climate beliefs of people with very high initial beliefs in consensus (“97+% initial believers”) are actually weakened when they are exposed to 97% messaging. Even if people who are initially “sub-97% initial believers” change their beliefs in light of 97% messaging, we should ask whether this trade-off is beneficial. There are a number of relevant considerations here, but one thought is that perhaps reducing someone’s climate change belief from 100% to 99% is not worth an equivalent gain elsewhere from, say, 3% to 4%. For one, behaviors and dispositions may increase/decrease non-proportionally. 100 to 99 percent belief change might result in the loss of an ardent climate action supporter, while change from 3 to 4 percent might result in little practical consequence. These are all open questions!

]]>"the strongest evidence to date" on effect of "97% consensus" messaginghttp://www.culturalcognition.net/blog/2015/2/25/the-strongest-evidence-to-date-on-effect-of-97-consensus-mes.htmlDan Kahan2015-02-26T02:31:34Z2015-02-26T02:31:34ZThere's a new study out on effect of "97% consensus" messaging.

The earlier paper reported that after being told that 97% of scientists accept human-caused climate change, study subjects increased their estimate of the percentage of scientists who accept human-caused climate change.

The new paper reports results, not included in the earlier paper, on the effect of the study's "97% consensus msg" on subjects' acceptance of climate change, their climate change risk perceptions, and their support for responsive policy measures.

The design of the study was admirably simple:

Ask subjects to characterize on a 0-100 scale their "belief certainty" that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it;

tell the subjects that “97% of climate scientists have concluded that human-caused climate change is happening”; and

ask the subjects to characterize again their "belief certainty" that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it.

Administered to a group of 1,104 members of the US population, the experiment produced these results on the indicated attitudes:

So what does this signify?

According to the authors,

Using pre and post measures from a national message test experiment, we found that all stated hypotheses were confirmed; increasing public perceptions of the scientific consensus causes a significant increase in the belief that climate change is (a) happening, (b) human-caused and (c) a worrisome problem. In turn, changes in these key beliefs lead to increased support for public action.

I gotta say, I just don't see any evidence in these results that the "97% consensus msg" meaningfully affected any of the outcome variables that the authors' new writeup focuses on (belief in climate change, perceived risk, support for policy).

It's hard to know exactly what to make of the 0-100 "belief certainty" measures. They obviously aren't as easy to interpret as items that ask whether the respondent believes in human-caused climate change, supports a carbon tax etc.

(In fact, a reader could understandably mistake the "belief certainty" levels in the table for %'s of subjects who agreed with one or another concrete proposition. To find an explanation of what the "0-100" values actually measure, one has to read the Climatic Change paper -- or rather, the on-line supplementary information for it. If the authors have data on the %s who believed in climate change before & after etc., I'm sure readers would be more interested in those.)

But based on the "belief certainty" values in the table, it looks to me like the members of this particular sample were, on average, somewhere between ambivalent and moderately certain about these propositions before they got the "97% consensus msg."

After they got the message, I'd say they were, on average, ... somewhere between ambivalent and moderately certain about these propositions.

The authors repeatedly stress that the results are "statistically significant."

But that's definitely not a thing significant enough to warrant stressing.

Knowing that the difference between something and zero is "statistically significant" doesn't tell you whether what's being measured is of any practical consequence.

Indeed, w/ N = 1,104, even quantities that differ from zero by only a very small amount will be "statistically significant."
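A back-of-the-envelope calculation shows just how little it takes (the SD below is an assumed figure, not one from the study): in a pre/post design with N = 1,104, even a 1-point average shift on the 0-100 scale clears conventional significance thresholds.

```python
import math

n = 1104           # sample size in the study
sd_change = 10.0   # assumed SD of pre-to-post change scores (0-100 scale)
mean_change = 1.0  # a mere 1-point average shift

se = sd_change / math.sqrt(n)  # standard error of the mean change
t = mean_change / se
print(round(t, 2))  # 3.32 -- "statistically significant" at p < .001
```

Statistical significance here certifies only that the average change isn't exactly zero; it says nothing about whether a 1-point shift matters in the world.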

The question is, What can we infer from the results, practically speaking?

A collection of regression coefficients in a path diagram can't help anyone figure that out.

Maybe there's more to say about the practical magnitude of the effects, but unfortunately the researchers don't say it.

For sure they don't say anything that would enable a reader to assess whether the "97% message" had a meaningful impact on political polarization.

They say this:

While the model “controls” for the effect of political party, we also explicitly tested an alternative model specification that included an interaction-effect between the consensus-treatments and political party identification. Because the interaction term did not significantly improve model fit (nor change the significance of the coefficients), it was not represented in the final model (to preserve parsimony). Yet, it is important to note that the interaction itself was positive and significant (β = 3.25, SE = 0.88, t = 3.68, p < 0.001); suggesting that compared to Democrats, Republican subjects responded particularly well to the scientific consensus message.

This is perplexing....

If adding an interaction term didn't "significantly improve model fit," that implies the incremental explanatory power of treating the "97% msg" as different for Rs and Ds was not significantly different from zero. So one should view the effect as the same.

Yet the authors then say that the "interaction itself was positive and significant" and that therefore Rs should be seen as "respond[ing] particularly well" relative to Ds. By the time they get to the conclusion of the paper, the authors state that "the consensus message had a larger influence on Republican respondents," although on what --their support for policy action? belief in climate change? their perception of % of scientists who believe in climate change? -- is not specified....

Again, though, the question isn't whether the authors found a correlation the size of which was "significantly different" from zero.

It's whether the results of the experiment generated a practically meaningful result.

Once more the answer is, "Impossible to say but almost surely not."

I'll assume the Rs and Ds in the study were highly polarized "before" they got the "97% consensus msg" (if not, then the sample was definitely not a valid one for trying to model science communication dynamics in the general population).

But because the authors don't report what the before-and-after-msg "belief certainty" means were for Rs and Ds, there's simply no way to know whether the "97% consensus msg's" "larger" impact on Rs meaningfully reduced polarization.

All we can say is that whatever it was on, the "larger" impact the msg had on Rs must still have been pretty darn small, given how remarkably unimpressive the changes were in the climate-change beliefs, risk perceptions, and policy attitudes for the sample as a whole.

Sigh....

The authors state that their "findings provide the strongest evidence to date that public understanding of the scientific consensus is consequential."

If this is the strongest case that can be made for "97% consensus messaging," there should no longer be any doubt in the minds of practical people--ones making decisions about how to actually do constructive things in the real world-- that it's time to try something else.

To be against "97% consensus messaging" is not to be against promoting public engagement with scientific consensus on climate change.

It's to be against wasting time & money & hope on failed social marketing campaigns that are wholly disconnected from the best evidence we have on the sources of public conflict on this issue.

Some other places to find discussion
Dan Kahan | 2015-02-23
http://www.culturalcognition.net/blog/2015/2/23/some-other-places-to-to-find-discussion.html

Couple of posts elsewhere worth checking out today.

Also not thrilled with the headline -- I don't study science communication to teach people how to "change skeptics' minds"; I do studies to show how to communicate science in a manner that enables people to decide for themselves what to make of it.

Oh well...

Weekend update: Hard questions, incomplete answers, on the "disentanglement principle"
Dan Kahan | 2015-02-22
http://www.culturalcognition.net/blog/2015/2/22/weekend-update-hard-questions-incomplete-answers-on-the-dise.html

I have written a few times now about the “disentanglement principle”—that science communicators & educators must refrain from “making free, reasoning people choose between knowing what’s known by science and being who they are.” The Measurement Problem paper uses empirical evidence to show how science educators and communicators have “disentangled” identity and knowledge on issues like evolution & climate change, and proposes a research program aimed at perfecting such techniques.

In a comment on a recent post, Asheley Landrum posed a set of penetrating and difficult questions about “disentanglement.” I thought they warranted a separate blog, one that I hoped might, by highlighting the importance of the questions and the incompleteness of my own answers, motivate others to lend their efforts to expanding our understanding of, and ability to manage, the problem of identity-knowledge entanglement.

Asheley's comment:

I'm really interested in the idea of disentangling identity from knowledge. However, I wonder to what extent that really can be done. Take, for instance, the conflation of belief in evolution versus knowledge of evolution that you've described. Does it matter if multiple cultural identities recognize that the theory of evolution states humans evolved from earlier species of mammal if they do not accept (or believe) it to be extremely likely to be true? Is our goal as scientists (and science communicators) to make sure that people simply know what a theory is comprised of but not worry about whether the public buys it?

Also, once a topic becomes politicized, is it possible to truly disentangle that topic from people's cultural identities? I feel like new work is showing how we can potentially stop topics from becoming politicized in the first place, but once a topic becomes entangled with cultural identity, the mere mention of it may trigger motivated cognition. Is it something that will pass with time? For instance, we've seen public perception shift on a myriad of social issues (e.g. interracial marriage, now gay marriage). Is this a result of time or a change in the narrative surrounding the topics? Does changing the narrative surrounding certain science topics eventually change how entangled that topic is with regard to cultural identity?

My response:

@Asheley:

Good questions. I certainly don't have complete answers.

But I'd start by sorting out 3 things.

1st, is "non-entanglement" possible?

2d, can entanglement be undone?

3d, is the goal of the science communicator/educator “belief” or “knowledge”?

1. Is non-entanglement possible?

I take this to mean, is it possible to create conditions where people don't have to choose between knowing what's known and being who they are?

Answer is, of course.

For one thing, the problem never arises for most science topics -- ones for which it very well could have.

Another point: even when positions on risks and other facts become entangled in antagonistic cultural meanings--turning them into symbols of cultural identity--it still is possible to create conditions of science communication that free people from having to choose between knowing and being who they are!

You advert to this in raising evolution. We know from empirical evidence that it is possible to teach evolution in a manner that doesn't make religious students choose between knowing and being who they are, and that when the right mode of teaching is used (one focusing simply on valid inference from observation), they can learn the modern synthesis just as readily as students who say they "do believe" in evolution (and who invariably don't know anything about natural selection, random mutation, and genetic variance).

As I mentioned, in most cases the entanglement problem never arises -- as in the case of the HBV vaccine or GMO foods.

2. Can entanglement be undone?

But if entanglement occurs -- if antagonistic meanings become attached to issues, turning positions on them into symbols of identity -- can that condition itself be neutralized, vanquished?

This is different, I think, from asking whether, in a polluted science communication environment, it is possible to "disentangle" in communicating or teaching climate science or evolution, etc.

The communication practices that make that possible are in the nature of "adaptation" strategies for getting by in a polluted science communication environment.

The question here is whether it is possible to decontaminate a polluted science communication environment.

I think this is possible, certainly. I suppose, too, I could give you examples where this seems to have happened (e.g., on cigarette smoking in US).

But the truth is, we know a lot less about how to clear the science communication environment of that sort of pollution once it has become contaminated than we do about how risks and like facts become entangled in antagonistic meanings, and about how to “adapt” when that happens.

We need more information, more evidence.

But the practical lesson should be obvious: we must use all the knowledge at our disposal, and summon all the common will and attention we can, to prevent pollution of the science communication environment in the first place (a critical issue right now for childhood vaccines).

3. Is the goal of the science communicator/educator “belief” or “knowledge”?

Finally, you raise the issue of what the “goal” of science communication and education is—“knowledge” vs. “belief”?

My own sense is that the “knowledge”/“belief” dichotomy here reflects at least two forms of confusion.

One is semantic. It’s the incoherent idea that there is some meaningful distinction between the objects of “belief” and the objects of “knowledge” and that “science” deals with the latter.

The other confusion is more complicated. It's certainly not a cause for embarrassment, but not grasping it is certainly a cause for concern.

The nature of the mistake (I'm still struggling, but am pretty sure at this point that this is the nub of the problem) is to believe that, as a psychological matter, it makes sense to individuate people's "beliefs" (or items of "knowledge") independently of what those people are doing.

Consider the Pakistani Dr described by Everhart & Hameed: a physician who says he “disbelieves” in evolution yet makes routine use of it in his medical practice. If we say, “but isn’t that inconsistent -- to say you ‘disbelieve’ in evolution but then make use of it in those ways as a Dr?,” he thinks we are being obtuse.

And he is right.

As Everhart & Hameed help us see, there are two different “evolutions”: the one the Dr rejects in order to be a member of a religious community; and the one he accepts in order to be a doctor and a member of a scientific-knowledge profession.

The idea that there is a contradiction rests on a silly model that thinks individuals’ “beliefs” (or what is “known” by them) can be defined solely with reference to states of affairs or bodies of evidence in the world.

In the mind, “beliefs” are intentional states--often compound ones, consisting of assent to various factual propositions but also pro- or con- affective stances, and related propensities to action--that are yoked to role-specific actions.

Being a member of a religious community and being a member of the medical profession are integrated elements of the Pakistani Dr's identity.

As a result, there’s no contradiction between the Pakistani Dr. saying he “disbelieves” in evolution when he is “at home” (or at the Mosque), where the set of intentional states that assertion signifies allows him to be a member of his religious community, and his saying he “believes in” it when he is “at work,” where the set of intentional states that assertion signifies allows him to be a member of a science-informed profession.

He knows that the evolution he accepts and the one he rejects both refer to the same account of the natural history of human beings that originates in the work of Darwin; but the one he accepts and the one he rejects are "completely different things" b/c they are connected to completely different things that he does.

It's confusing, I agree, but he's not the one who is confused -- we are, if we can't grasp the point that knowing that can't be disconnected, psychologically, from knowing how!

Still, the Pakistani Dr is lucky to live a life in which the two identities that harbor these competing beliefs have no reason to quarrel.

For members of certain religious communities in the US -- and, as Hameed notes, for more and more Islamic scientists and scientists in training in Europe -- that's not so.

The goal, rather, should be to make it possible for people to recognize and give effect to scientific knowledge in order to do the things the doing of which requires such knowledge: like being scientists; like being successful members of other professions, including, say, agriculture, that depend on knowing what science knows; like being a good parent; like being a member of a self-governing community, the well-being of which turns on its making science-informed policy choices; and like being curious, reflective people who simply enjoy the awe and pleasure of being able to participate in comprehending the astonishing insights into the mysteries of nature that our species has gained by using the signature methods of scientific inquiry.

If a science teacher or communicator thinks that it is his or her job to “get people to say they ‘believe in’” evolution or climate change independently of enabling them to do things that depend on making use of the best available evidence, then he or she is making a mistake.

]]>"Measurement Problem" published but still unsolvedhttp://www.culturalcognition.net/blog/2015/2/20/measurement-problem-published-but-still-unsolved.htmlDan Kahan2015-02-20T16:45:30Z2015-02-20T16:45:30ZPublished version of this paper is now out....

From the comments, a reader writes:

I am an engineer who, in addition to engineering-related courses, also studied geology, geophysics and even a little astronomy at the undergraduate and graduate level. In short, a big fan of science and rational thought, especially applied science like engineering.

I firmly believe in climate change but find that the claim that it is "human-made" is total rubbish.

Rather, I think the human-made claim is driven by "tribalism", just like Professor Dan Kahan of Yale Law School ascribes to the "barber in a rural town in South Carolina". (And seriously, could he be any more elitist and patronizing? North vs. South, City Mouse vs. Country Mouse, Perfesser of Law vs. Barber, etc.)

In fact, the tribal forces at work on researchers and politicians are much more pronounced.

It's not just about losing customers, it's about fame, glory, popularity and - most important - money.

Think Oscars, think Nobel Prizes, think tapping into the hundreds of millions of dollars (billions?) out there for the taking.

All you have to do is run the same flawed computer programs, fiddle with the data when necessary, and confirm, affirm, re-affirm The Consensus.

And, if that isn't enough enticement, you also get to adopt a holier-than-thou attitude when talking about "Skeptics". I mean, it's no accident that your article states "How to convert the skeptics?"

Maybe you should take a look in the mirror.

When you liken the more than half of Americans who don't believe the Earth is warming because humans are burning fossil fuels to "loopy... flat-Earthers" you are reinforcing tribalism.

My response:

That sort of misinterpretation of our findings is part of exactly the phenomenon we are studying: the forces that drive people to misconstrue empirical evidence in patterns congenial to their cultural outlooks.

There is one more thing I want to be sure I express my agreement with: I don't doubt for a second that I myself, in the course of trying to address these matters, will blunder, either as a result of being subject to the same dynamics I'm studying or to simple failings in judgment or powers of expression. And as a result, I'll end up conveying, contrary to my own intentions and ambitions, the very sort of partisan meanings that I believe must be purged from the science communication environment.

I don't resent being told when that happens; I am chastened, but grateful.