Elections are a time for smearing, and the Mail’s desperate story about Nick Clegg and the Nazis is my favourite so far. Generally the truth comes out, in time. But how much damage can smears do?

A new experiment published this month in the journal “Political Behavior” set out to examine the impact of corrections, and what the researchers found was far more disturbing than they expected: far from changing people’s minds, if you are deeply entrenched in your views, a correction will only reinforce them.

The first experiment used articles claiming that Iraq had weapons of mass destruction immediately before the US invasion. 130 participants were asked to read a mock news article, attributed to Associated Press, reporting on a Bush campaign stop in Pennsylvania during October 2004. The article describes Bush’s appearance as “a rousing, no-retreat defense of the Iraq war” and quotes a line from a genuine Bush speech from that year, suggesting that Saddam Hussein really did have WMD, which he could have passed to terrorists, and so on. “There was a risk, a real risk, that Saddam Hussein would pass weapons or materials or information to terrorist networks, and in the world after September the 11th,” said Bush, “that was a risk we could not afford to take.”

The 130 participants were then randomly assigned to one of two conditions. For half of them, the article stopped there. For the other half, the article continued with a correction: it discussed the release of the Duelfer Report, which documented the lack of Iraqi WMD stockpiles or an active production program immediately prior to the US invasion.

After reading the article, subjects were asked to state whether they agreed with the following statement: “Immediately before the U.S. invasion, Iraq had an active weapons of mass destruction program, the ability to produce these weapons, and large stockpiles of WMD, but Saddam Hussein was able to hide or destroy these weapons right before U.S. forces arrived.” Their responses were measured on a five-point scale ranging from “strongly disagree” to “strongly agree”.

As you would expect, those who self-identified as conservatives were more likely to agree with the statement. Separately, more knowledgeable participants (independently of political persuasion) were less likely to agree. But then the researchers looked at the effect of also being given the correct information at the end of the article, and this is where things get interesting. They had expected the correction to become less effective in more conservative participants, and this was true, up to a point: for very liberal participants, the correction worked as expected, making them more likely to disagree with the statement that Iraq had WMD, compared with very liberal participants who received no correction. For those who described themselves as left of center, or centrist, the correction had no effect either way.

But for people who placed themselves ideologically to the right of center, the correction wasn’t just ineffective, it actively backfired: conservatives who received a correction telling them that Iraq did not have WMD were more likely to believe that Iraq had WMD than people who were given no correction at all. Where you might have expected people simply to dismiss a correction that was incongruous with their pre-existing view, or regard it as having no credibility, it seems that in fact, such information actively reinforced their false beliefs.

Maybe the cognitive effort of mounting a defense against the incongruous new facts entrenches you even further. Maybe you feel marginalised and motivated to dig in your heels. Who knows. But these experiments were then repeated, in various permutations, on the issues of tax cuts (or rather, the idea that tax cuts had increased national productivity so much that tax revenue increased overall) and stem cell research. All the studies found exactly the same thing: if the original dodgy fact fits with your prejudices, a correction only entrenches them further. If your goal is to move opinion, then this depressing finding suggests that smears work, and what’s more, corrections don’t challenge them much: among people who already believe the smear, a correction only makes them believe it even more.

++++++++++++++++++++++++++++++++++++++++++
If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!
++++++++++++++++++++++++++++++++++++++++++

41 Responses

Quick2kill said,

Hmmm… that’s an interesting finding, but let’s not overpromote it. It is a single study with 130 participants, so what, 70 conservatives, giving about 35 in each of the two groups? Too late to read through the paper, but I’d guess the statistical significance of this finding isn’t that big?

Then there is the interpretation. Although most of us think it’s well established that there were no WMD in Iraq, there is some residual controversy, and the subjects might have read things they believe confound the issue, or heard of the Duelfer report before and already been prejudiced. I don’t see how you can use that to assume that a clear misinforming slur, which is later indisputably debunked and corrected by all sources including those that initially reported it, would show this effect.

TP said,

This is just another example of cognitive dissonance – a phenomenon that has been studied in depth since the 50s.

When people are led to challenge a belief it causes a sense of unease: for example, asking people to write an essay against their beliefs (in one study, asking a group of students to write an essay listing the benefits of woodchipping old-growth forests). People try to resolve the internal conflict not by shifting their opinion but by reinforcing it.

Leon Festinger’s “When Prophecy Fails” is a superb description of what happened to a doomsday cult when the world did not end as predicted. Members of the cult reduced dissonance by proselytising even more fervently than before.

TP said,

.. and another more recent study looked at people’s response to the front-page story (guess where) that the Argentinian soldiers who invaded the Falklands dined on household pets – and their maintenance of that belief even when the paper printed a correction.

ellieban said,

This may be a description of one piece of research but there are others. It is becoming well known in psychological circles that people in general are very difficult to persuade out of their original view point once they have stated it (telling another person seems to be the important moment after which there is no return).

The biggest problem is, the more conservative the person, the bigger that effect is. Look at the current US political system where, regardless of the facts, the Republicans vote en bloc on every vote. Meanwhile, the Democrats are a disorderly bunch who vote with their consciences and, usually, based on the evidence. As an impartial observer, it seems to me the Dems get it right most often and yet, because of their homogeneity, the Republicans are a constant and powerful threat. It is frightening.

Diversity of opinion results in the best outcome but it is always vulnerable to a uniform opposition, however misguided they may be.

I have no idea how to solve that problem, but meeting the tactic like for like seems to work. A blunt statement presented as fact, with no attempt to back it up with evidence, delivered with volume, passion and repetition equal to the opposing view, does seem to sway such people. The problem is that this comes with an enormous moral and ethical caveat: it goes against the entire foundation of the liberal viewpoint.

It would be nice to have a study that looked at liberal views as well, particularly in view of ellieban’s comment. I think the question of whether this is just a phenomenon of the right is open and should be looked at [1]. I would also welcome more studies of this, and I am sure it will happen: the implications for advertisers [2] are too great for it not to. Whether we will hear about the other studies is an open question too, advertisers not being the most forthcoming with research into the effects they have.

[1] At this point I should point out that my own political views are on the left; no-one who knows me has ever accused me of being a conservative!!

[2] Yes, I know this study concerned politics, but its implications are wider than that, and would concern brand loyalty and the reaction to advertising messages as well.

Paul said,

John Bullock at Yale has clearly shown that this is not purely a phenomenon of the right. Democrats are just as prejudiced as Republicans, and everybody is pretty poor at dispassionately updating views.
And this is certainly not an isolated observation – there’s a load of similar observations (in a number of domains) out there: from basic sensory processing right up to the sociopolitical, what you already believe fundamentally changes how you perceive and process new information.

mauve said,

Whether Iraq had WMDs is such a weird thing to rate on a 5-point scale. Either they did or they didn’t, and it’s a matter of fact, not agreement. I’d have to choose “Strongly Disagree”, but what I really mean is, “I don’t know of any evidence to support that”.

That’s another thing I find chilling about this experiment – the media frequently offer the same sort of false continuum, and suggest that people’s agreement or disagreement with some matter of fact is required.

Quick2kill said,

@ellieban: I was commenting on the counter-intuitive claim that if a claim matches someone’s pre-existing prejudices, providing refuting facts only makes them more likely to believe it. From Ben’s piece I gathered that was a new finding from this new study.

The existence of other studies showing that people’s views are difficult to change, if anything, makes me more suspicious, because I wonder if there are any constraints on the new finding from those previous studies.

Mark P said,

It is becoming well known in psychological circles that people in general are very difficult to persuade out of their original view point once they have stated it.

In the short term. Have there been any follow-up studies over periods of years?

People who hold a view deeply don’t like to change it. We all know that. When a position becomes impossible what they tend to do, in my experience, is move subtly over a long period of time, without overtly doing so. Many will even swear blind that they have always held a position, when they clearly have changed.

Fads work like that. Something will be flavour of the month, then shortly after you will find no-one willing to admit to ever liking it.

So I take these studies with a grain of salt. Experience with political decisions being fought deeply and then accepted 10 years later is pretty common. Vietnam protesting was a minority activity at the time; 10 years later everyone thought they had protested; now it is swinging back.

One thing I’d be interested in seeing is how much an understanding of this phenomenon is able to counteract it. That is, does knowing the vagaries and oddities of our thinking help us in some way to correct them?

I’m aware that these biases must operate at a some level for me. I’ve often joked for example that the Daily Mail has “anti-authority” and that if someone finishes a statement with “I read it in the Daily Mail” I will be less likely to believe it than if they provide no source. Is this a pragmatic assessment of the reliability of the Mail as a source of information, or an instinctive reinforcing of my existing biases?

Trinoc said,

No disrespect to Quick2kill, but the first two comments on this page unwittingly illustrate exactly the point of the article. In the first message the validity of the conclusion is doubted on the basis of it being a single survey, then in the second message, despite admitting that in fact there were several surveys, it says “Without more details on the other studies I still wonder about the interpretation though”.

In other words, seeing new evidence that invalidates the evidence used to come to an earlier conclusion reinforces, or at least does not noticeably diminish, the strength of belief in that conclusion.

SteveGJ said,

It’s fascinating stuff, not particularly because of the politics, but because there’s a lot of interesting research that seems to show that “conservative” and “liberal” mindsets are in some part innate personality types.

For instance, there is some very interesting research showing that self-identifying conservatives are more easily disgusted and squeamish.

It is also demonstrably true that conservatives have more children – which is a bit worrying if you are a liberal rationalist.

I think if this were not the case, then many religions would have collapsed under the sheer weight of evidence against some of their earlier proclamations. It still seems to me that many of the world’s religions are dominated by conservative thinking. We see this with the Catholic Church, Mormons, Islamic fundamentalism, the evangelical protestant churches in the US, orthodox Judaism and so on. It’s almost as if the more evidence is produced contradicting the basic tenets of the religion, the more the conservatives take hold. Witness the evolution debate in the US. Also, the almost complete lack of objective evidence for the stories in the Old Testament seems not to trouble Orthodox Judaism or conservative Christianity. Almost the opposite – it strengthens their resolve.

In contrast, it seems to me that the more liberal religions, which do change behaviour with evidence, are rather dying out – Quakerism and the Anglican Church (or at least much of it), which appear to foster a more liberal mindset, are foundering.

Of course, if true, this is all horribly bad news for those of us who prefer (or maybe I should say are innately attracted to) the evidence based route. It looks very much like there is a large (and, possibly growing) group of the world population that will not be reached by such arguments.

I wondered if it might be at least partly people voicing an unchanged view more strongly when they think it’s being challenged.

If this is right, though, requiring newspapers to publish retractions of false news stories is totally counter-productive. Instead, what if we require them to publish the same story again, but about whoever is responsible for the error? That would lead to some fun articles.

Q.T.Getomov said,

If you don’t believe in God, have you ever tried to have a reasoned conversation with a Christian about the possibility of the non-existence of an all-powerful deity?
The more rational, logical and thorough you are in your argument, the more they will see you as an organ of Satan. Indeed, Satan is speaking through you, attempting to use reason to destroy their beliefs. The more sensible you are, the more tempting it would be for them to turn their back on God, so the harder they cling to their irrational, illogical and contradictory beliefs.
I guess this just shows that it’s not only the religious who cling harder to their ideas the more evidence they receive to the contrary, but anybody who doesn’t include adaptability among their personal traits.

Quick2kill said,

@Trinoc: I imagine these studies must have inspired bad jokes in seminars of that nature, but you seem serious? I made a couple of criticisms of Ben’s article (not just that it was a single study), then withdrew one when I realised it was in error. How does that fit? I do deserve to be laughed at for the poor reading comprehension, though.

Quick2kill said,

@Andrew I just finished reading the paper, and the plots only seem to show a shift in strength of opinion, which I think includes disagree to agree and agree to strongly agree. However, they also state that for conservatives overall there was a statistically significant difference between the groups in the numbers who believed. It seems to go from 11/33 to 21/33 believing that Iraq had WMD. So though it is a small sample, they do get significant results because of the huge shift.
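For what it’s worth, that 11/33 vs 21/33 shift can be sanity-checked with Fisher’s exact test. A rough back-of-the-envelope sketch in Python (stdlib only) – the cell counts are the figures quoted above, and this is not the paper’s own analysis:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Hypergeometric probability of x row-1 "successes", margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum every table at least as extreme (probability <= observed);
    # the tiny tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# 11/33 believers without the correction vs 21/33 with it
p = fisher_exact_two_sided(11, 22, 21, 12)
print(round(p, 3))
```

On those counts the test does come out below the conventional .05 level, which fits the comment: a small sample, but a big enough shift to reach significance.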

tim1234 said,

@Quick2kill, you don’t really understand statistical significance. These were results at the p<.01 level. The fact that there were 130 participants is neither here nor there. p<.01 is p<.01, whether there are 13 participants or 1300 participants.

How many participants would you like? Just saying there are not enough participants is a shallow comment–there are many strong studies with just a handful of participants.

thom said,

I don’t see that what Quick2kill wrote was wrong – though I’m not sure what point he or she is trying to make.

A p value doesn’t index strength of evidence. In two studies with a difference only in sample size, if their respective p values are identical, the study with smaller n provides stronger evidence against the null hypothesis. This is because it must have a larger effect. See Royall (1986).

Having said that, larger samples are useful for different reasons. For instance, if you are interested in knowing how big an effect is (not merely deciding between two hypotheses), a larger sample allows you to measure it more precisely. In many cases I’d rather have a precise interval estimate than simply be able to detect an effect.
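The precision point can be made concrete: the half-width of a normal-approximation confidence interval for a proportion shrinks as 1/√n, so quadrupling the sample halves the interval. A minimal sketch (the proportion of 0.5 and the sample sizes are purely illustrative):

```python
import math

def ci_halfwidth(p_hat, n, z=1.96):
    """Half-width of a normal-approximation 95% CI for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# The interval narrows as 1/sqrt(n): quadrupling n halves the half-width
for n in (33, 130, 520, 2080):
    print(n, round(ci_halfwidth(0.5, n), 3))
```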

irishaxeman said,

Pretty well all research in this area shows this type of result. It illustrates the so-called ‘liberal dilemma’, which should be renamed the ‘rationalist error’. We liberals assume that others will alter their beliefs on receipt of strong counter-evidence, yet even a little real-world experience will show that this is largely not so. The preservation of a liberal society ultimately comes down to the need to display an implacable alternative view, and has historically often led to armed conflict.

tim1234 said,

You are right that p value does not indicate strength of evidence. But neither does N–if you think that, that’s an error. In any case, you claimed that lower N means stronger evidence–just the opposite of Quick2kill’s point.

Quick2kill argued that because N was “only” 130, then the statistical significance must be low. That would also be an error.

thom said,

As I said, I’m not sure what Quick2kill’s point was (it wasn’t clear enough).

I do think I was reasonably clear that N doesn’t provide strength of evidence when I said “larger samples are useful for different reasons”. One is to help provide information about the precision of an effect (as n increases the interval estimate narrows). A second useful feature is that as n increase the probability of misleading evidence decreases, so in large samples it is unlikely to obtain strong evidence of an effect in the opposite direction to that supported by the evidence. This all assumes ‘all other things equal’.

Whether Quick2kill made an error or misinterpreted statistical significance depends on precisely what he meant (which I couldn’t work out from the post). For instance, small samples tend to over-estimate effect sizes (because they can only detect big effects) and some statisticians would argue that surprisingly large effects are in themselves a reason to doubt an effect (e.g., Ioannidis). In this case I don’t think the result is that surprising (but I have some familiarity with the social psychology literature on dissonance etc.)

Quick2kill said,

@tim1234: I’m not in a position to laugh, but go reread my comment. I wasn’t questioning the claims in the paper. Ftr, I was just saying that, assuming it’s only this study we have to go by right now (which it wasn’t, my error), and since the statistical significance they could have attained is limited because the sample isn’t huge, we probably shouldn’t overpromote the study. I was *not* saying that because they had only 130 people their claimed significance was wrong. I hadn’t checked what it was, but guesstimated it couldn’t be huge (and on that I was correct).

You do understand how sample size limits how statistically significant any potential findings could be? If not, try to find a set of results, where the control and study groups have only 5 people each, in which the groups’ results differ with 5-standard-deviation significance.
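The five-person challenge can be checked directly: with 5 per group, even the most lopsided 2x2 outcome possible has a Fisher’s exact probability far above 5-sigma territory. A quick sketch (stdlib Python, illustrative only):

```python
from math import comb

# With 5 per group, the most extreme 2x2 outcome is 5/5 "successes" in
# one group and 0/5 in the other. Its hypergeometric probability, given
# the margins, is the smallest one-sided p-value the design can produce.
n_per_group = 5
p_min = (comb(n_per_group, n_per_group) * comb(n_per_group, 0)
         / comb(2 * n_per_group, n_per_group))
print(p_min)  # 1/252 ≈ 0.004, orders of magnitude short of 5-sigma (~5.7e-7)
```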

Schwarz et al wrote about how challenging erroneous beliefs can increase acceptance of the false information:

Any attempt to explicitly discredit false information necessarily involves a repetition of the false information, which may contribute to its later familiarity and acceptance.

The authors later used the example of a flyer published by the Centers for Disease Control (CDC) on vaccine myths and facts:

Right after reading the flyer, participants had good memory for the presented information and made only a few random errors, identifying 4% of the myths as true and 3% of the facts as false. Thirty minutes later, however, their judgments showed a systematic error pattern: They now misidentified 15% of the myths as true, whereas their misidentification of facts as false remained at 2%.

tim1234 said,

Sorry, you don’t get it. It’s perfectly possible to get statistically significant results when N=1 (in a repeated measures design). N=130 is larger than average for a psychology experiment. There are hundreds of thousands of published studies where N<130 and the results are statistically significant.

Quick2kill said,

@tim1234: Are you really claiming that only 1 person would work in this experiment? I assume you do understand the central limit theorem and the real basics, so there must be some confusion over what we are talking about. What do you mean by N here?

It sounds like you’re trying to fix N as the number of participants and then suggest experiments where the statistics come from other sources. But I don’t think that is the case here.

I don’t know much about repeated-measures designs, but it appears that in those, the increased statistical power comes from repeated observations of the same subject. Afaik (and it seems clear in Ben’s article) this was not such an experiment. Admittedly there were three other similar experiments, which I initially missed, which is why I immediately withdrew that criticism and why this is a daft conversation.

And once again, I did not say you can’t get statistically significant results with 130 people (and remember the statement is about a subgroup of them). I said the level of statistical significance which is possible is limited. Do you understand the distinction? Because it’s important.

Quick2kill said,

@Tim: then why mention N=1 at all? If you just had a quantitative problem, why not say that from the start instead of throwing out insults? Tbh one could do a psych study on why I feel compelled to reply, but here is the back-of-the-envelope reasoning I did…

I guesstimated 35 conservatives in each group (looks like it was 33 in the paper), allowing up to maybe 6 sigma (?), which is very good but limits the scope for such high significances. Then factor in the findings that conservatives were more likely than liberals to believe the WMD were there, and that liberals given the correction were less likely to believe than those who weren’t (remember these statements also have to be statistically significant). That’s going to remove 2-3 sigma from the highest attainable significance of the finding, because your control group has a lower bound to satisfy those statements.

I hope that helped, because otherwise I’m just wasting my time. If not, sorry, but I can’t keep replying as it’s getting silly.

JonathanE said,

This study is, as others have said, another piece of evidence that seems to fit a general pattern in Social Psychology.

On statistical significance there is a general formula worth knowing about when trying to interpret what it means.

Statistical Significance = Size of Effect times Degrees of Freedom.

Degrees of Freedom is based on sample size. So larger sample sizes can detect smaller effects; sometimes so small that they have no real-world meaning.
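A rough numerical illustration of that trade-off, using the standard 2x2 identity chi-square = phi²N (phi being the effect size; the sample sizes below are illustrative, not from the paper):

```python
import math

# For a 2x2 table, chi-square = phi^2 * N, so at a fixed p = .05
# threshold the smallest detectable effect phi shrinks as 1/sqrt(N)
CHI2_CRIT_05 = 3.841  # chi-square critical value, 1 df, p = .05

def min_detectable_phi(n):
    return math.sqrt(CHI2_CRIT_05 / n)

for n in (130, 1000, 10000):
    print(n, round(min_detectable_phi(n), 3))
```

With N = 10,000 an effect around phi = 0.02 crosses the threshold – statistically significant, but plausibly too small to matter in the real world.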

In this study, an estimate of effect size can be gathered from the reported R-squared of the multiple regressions, which gives an indication of the amount of variance explained by the regression models (with 1.00 meaning all the variance in the data is explained).

The finding for WMD has a decent effect size; the other two findings have tiny effect sizes to my eyes.

adamb said,

Could it be that the very fact that we are reading this website means that we are more likely than not liberal and well educated, therefore “prejudiced” against ignorant right-wing religious conservatives, so a study that shows that they are in fact ignorant and with entrenched views, even more so after their facts have been corrected, makes us liberals even more convinced that they are stupid twits, and then if a larger study proves this experiment wrong, we are still likely to increase our disdain towards them even further? Or am I reading too much into this??

Quick2kill said,

@tim: yeah, sorry, I was being a bit liberal with ‘insult’, but I found the broad “you don’t really understand statistical significance” insulting, because it was a comment on me rather than an academic criticism of my point, along with the false certainty with which you made it. I am a research scientist, and a major misunderstanding of statistical significance would be very bad, so I take more offense from that than if you had just called me a wanker.

What said,

Thinking about the specifics of the study, I think my initial response would have been, “How do you define WMD?”

I remember reading a BBC news article about the discovery and destruction of several long-range, but conventional, missiles in Iraq shortly before the invasion. Possession of these missiles was explicitly banned by the relevant UN resolution (and, when they were discovered, the Iraqis invited foreign journalists to witness the destruction of, if memory serves, about six of them).

But are these “WMDs”? Or are they some sort of non-WMD banned weapon that could have killed hundreds of people in Tel Aviv? Without a clear definition, I don’t think that I could answer the question that was posed in the study.

Dr Spouse said,

@What, it’s not that relevant what WE think of as WMD; to the participants in the study they are bad things that Iraq shouldn’t have had and that, if they exist, make war on Iraq justified.

If *you* participated in the study you’d have been one of those irritating participants who tell the researcher they are doing it all wrong. But you’re also the kind of newspaper reader who tells the journalist they are doing it all wrong (and the kind of blog reader who tells the blogger they are doing it all wrong). Most newspaper readers aren’t like that – making the study of newspaper readers highly relevant to newspaper readers in general.

Perhaps they should also study the effect of bloggers presenting evidence that people are hard to persuade on irritating readers’ opinions of the evidence in the blog.

Strangely, many of the readers of this blog have just demonstrated *exactly* the same effect as the participants in the study. Some readers of this blog already believe that studies of perception of events are flawed (“psychology is all guff, I’m cleverer than the participants in this experiment”).

Evidence was presented to show that a really interesting study of perception of events was carried out, with really interesting and applicable results. Those readers of this blog who already thought “psychology is all guff, I personally couldn’t POSSIBLY be influenced by this sort of thing” became even more entrenched in their view that psychology is all guff and they couldn’t possibly be influenced by this sort of thing!