The paper I posted “yesterday”—“The Politically Motivated Reasoning Paradigm”—is mainly about what “politically motivated reasoning” is and how to design studies to test whether it is affecting citizens’ assessment of evidence and by how much.

The paper is concerned, in particular, with two confounds—alternative explanations, essentially—that typically constrain the inferences that can be drawn from such studies. The problems are heterogeneous priors and pretreatment effects (Druckman, Fein & Leeper 2012; Druckman 2012; Bullock 2009; Gerber & Green 1999).

Rather than describe these constraints abstractly, let me try to illustrate the problem they present.

Imagine a researcher is doing an experiment on “politically motivated reasoning”—the asserted tendency of individuals to conform evidence on disputed risks or other policy-relevant facts to the positions that are associated with their political outlooks.

She collects information on the subjects' “beliefs” in, say, “human caused global warming” and the strength of those beliefs (reflected in their reported probability that humans are the principal cause of it). She then presents the subjects with evidence—in the form of a study that suggests human activity is the principal cause of global warming—and measures their beliefs and their confidence in those beliefs again.

This is what she observes:

Obviously, the subjects have become even more sharply divided. The difference in the proportion of Democrats and Republicans who accept AGW widened, as did the difference in their respective estimates of the probability of AGW.

Does the result support an inference that the subjects selectively credited or discredited the evidence consistent with their political predispositions?

Not really, no.

The claim that individuals are engaged in “politically motivated reasoning” implies they aren’t assessing the information in an unbiased manner—that is, uninfluenced by the relationship between that information and outcomes congenial to their political views.

We can represent this kind of “unbiased” information processing in a barebones Bayesian model, in which individuals revise their existing belief in the probability of a hypothesis, expressed in odds, by a factor equivalent to how much more consistent the new information is with that hypothesis than with a rival one. That factor is known as the “likelihood ratio,” and conceptually speaking reflects the “weight” of the new information with respect to the competing hypotheses.
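The mechanics of this barebones model are simple: posterior odds equal prior odds multiplied by the likelihood ratio. A minimal sketch in Python (the function names are mine, not the paper's):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Unbiased Bayesian updating: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds in favor of a hypothesis into a probability."""
    return odds / (1 + odds)

# A subject whose prior odds on the hypothesis are 1:1 (probability 0.5),
# and who gives new evidence a likelihood ratio of 3, ends up at 3:1.
posterior = update_odds(1.0, 3.0)
print(posterior)                                  # 3.0
print(odds_to_probability(posterior))             # 0.75
```

Note that the likelihood ratio is the only place the evidence enters the calculation—which is exactly why motivated reasoning is usefully modeled as a distortion of that term.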

But in the hypothetical study I described, we really don’t know if that’s happening. Certainly, we would expect to see a result like the one reported—partisans becoming even more “polarized” as they examine the “same” evidence—if they were engaged in politically motivated reasoning.

But we could in fact see exactly this dynamic consistent with the unbiased, Bayesian information-processing model.

As a simplification, imagine the members of a group of deliberating citizens, Rita, Ron, and Rose—all of whom are Republicans—and Donny, Dave, and Daphne—all Democrats. Each has a “belief” about whether human beings are causing climate change, and each has a sense of how confident they are about that belief—a sensibility we can represent in terms of how probable they think it is (expressed in odds) that human beings are the principal cause of climate change.

The table to the left represents this information.

Now imagine that they are shown a study. The study presents evidence supporting the conclusion that humans are the principal cause of climate change.

Critically, all of the individuals in this group agree about the weight properly afforded the evidence in the study!

They all agree, let’s posit, that the study has modest weight—a likelihood ratio of 3, let’s say—which means that it is three times more consistent with the hypothesis that human beings are responsible for climate change than with the contrary hypothesis. (Don’t confuse likelihood ratios with “p-values,” please; the latter have nothing to do with the inferential weight evidence bears.)

In other words, none of them adjusts the likelihood ratio or weight afforded to the evidence to fit their predispositions.

Nevertheless, the results of the hypothetical study I described could still display the polarization the researcher found!

This table shows how:

First, the individuals in this "sample" started with different priors. Daphne, e.g., put the odds that human beings were causing climate change at 2:1 against (0.5:1 in favor) before she got the information. Rita’s prior odds were 1000:1 against (0.001:1 in favor).

When they both afforded the new information a likelihood ratio of 3, Daphne flipped from the view that human beings “probably” weren’t responsible for climate change to the view that they probably were (1.5:1, or 3:2, in favor). But because Rita was more strongly convinced that human beings weren’t causing climate change, she persisted in that belief even after appropriately adjusting her odds against downward, from 1000:1 to about 333:1 (Bullock 2009).
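Daphne's and Rita's revisions follow from the same multiplication, applied to different priors (a sketch; odds are expressed in favor of the human-causation hypothesis):

```python
def update_odds(prior_odds, likelihood_ratio):
    # Posterior odds in favor of the hypothesis = prior odds x likelihood ratio.
    return prior_odds * likelihood_ratio

# Daphne: prior odds 2:1 against, i.e., 0.5:1 in favor.
daphne = update_odds(0.5, 3)     # 1.5 -> 3:2 in favor; she flips to "probably yes"
# Rita: prior odds 1000:1 against, i.e., 0.001:1 in favor.
rita = update_odds(0.001, 3)     # ~0.003 -> about 333:1 still against
print(daphne)                    # 1.5
print(round(1 / rita))           # 333
```

Same evidence, same weight, opposite conclusions—purely because of where each started.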

Second, the individuals in our sample started with differing amounts of knowledge about the existing evidence on climate change.

In particular, Ron and Rose, it turns out, already knew about the evidence that the researcher showed them in the experiment! That's hardly implausible: members of the public are constantly being bombarded with information on climate change and similarly contentious topics. Their priors—10:1 against human-caused climate change, and 2:1 in favor, respectively—already reflected their unbiased (I’m positing) assessment of that information (or its practical equivalent).

They thus assigned the evidence a likelihood ratio of “1” in reporting their "after evidence" beliefs in the study not because they were conforming the likelihood ratio to their predispositions—indeed, they agree that the evidence is 3x more consistent with the hypothesis that humans are causing climate change than that they are not—but because their priors already reflected having given the information that weight when they previously encountered it in the real world.

If the “outcome variable” of the study is “what percentage of Republicans and Democrats think human activity is a principal cause of climate change,” then we will see polarization even with Bayesian information processing—i.e., without the sort of selective crediting of information that is the signature of politically motivated reasoning—because of the heterogeneity of the group members' priors.

Likewise, if we examine the “mean” probabilities assigned to AGW by the Democrats and Republicans, we find the differential grew in the information-exposed condition. The reason, however, wasn't differences in how much weight they gave the information, but pre-treatment (pre-study) differences in their exposure to information equivalent to that conveyed to them in the experiment (Druckman, Fein & Leeper 2012).

In sum, given the study design, we can’t draw confident inferences that the subjects engaged in politically motivated reasoning. They could have. But because of the confounds of heterogeneous priors and pretreatment exposure to information, we could have ended up with exactly these results even if they were engaged in unbiased, Bayesian information processing.

To draw confident inferences, then, we need a better study design for politically motivated reasoning—one that avoids these confounds.

A one-table version of the "mock data" illustrating how "heterogeneous priors" & "pretreatment effects" can defeat an inference of politically motivated reasoning in a study that treats "change in belief" as the outcome measure. Just one of the improvements in “The Politically Motivated Reasoning Paradigm” originating in helpful suggestions from generous commentators who've downloaded & read the draft.


Research of the Cultural Cognition Project is or has been supported by the National Science Foundation; by the Annenberg Public Policy Center at the University of Pennsylvania; by the Skoll Global Threats Fund; by the Putnam Foundation; by the Woodrow Wilson International Center for Scholars; by the Arcus Foundation; by the Ruebhausen Fund at Yale Law School; by the Edmond J. Safra Center for Ethics at Harvard University; and by GWU, Temple, and NYU Law Schools.