This one is so dumb it makes me want to barf

A new study finds that atheists are among society’s most distrusted groups, comparable even to rapists in certain circumstances.

Psychologists at the University of British Columbia and the University of Oregon say that their study demonstrates that anti-atheist prejudice stems from moral distrust, not dislike, of nonbelievers.

“It’s pretty remarkable,” said Azim Shariff, an assistant professor of psychology at the University of Oregon and a co-author of the study, which appears in the current issue of the Journal of Personality and Social Psychology.

The study, conducted among 350 American adults and 420 Canadian college students, presented participants with a description of a fictional driver who damaged a parked car and left the scene, then found a wallet and took the money. It asked: was the driver more likely to be a teacher, an atheist teacher, or a rapist teacher?

The participants, who were from religious and nonreligious backgrounds, most often chose the atheist teacher.

Ummmm . . . given that there are lots of atheists out there and not many rapists, I think it’s pretty clear that the bad guy described in the vignette was indeed more likely to be an atheist than a rapist. What’s really disturbing about the study is that many people thought it was “more probable” that the dude is a rapist than that he is a Christian! Talk about the base-rate fallacy. If some dude scratched my car, I wouldn’t be so quick to jump to the conclusion that he’s a rapist.

As Kahan says, it’s hard to say who is more confused—the study subjects or the researcher.

I conclude from the published results that the participants in this study do not have a sound understanding of probability. There’s gotta be a way to study this in a more reasonable way. I’m no experimenter, though. I’d be interested in what the experts think on this. My intuition is that it would be better to study representativeness directly rather than through this bank-shot analysis of a statistical fallacy.

P.S. The author of the news article is Kimberly Winston from the “Religion News Service” which sounds like it might be some kind of joke, but it appeared in the Washington Post. Hey, wait a minute . . .

38 Comments

Forget about whether atheist teachers are more or less common than rapist teachers! As phrased above, the first category, “teacher”, includes the second and third categories, and hence is QUITE OBVIOUSLY more probable than the others. So everyone should have selected that option.

However, in normal communication situations, when someone says something that is obviously true or false, or has a trivial answer, we don’t take it literally, but instead reinterpret it as something else that isn’t obviously true or false, since otherwise it makes no sense for it to have been said. So the subjects presumably did that here. But how they reinterpreted the question is anyone’s guess. They may, for instance, have interpreted it as asking them to compare the probability of the bad behaviour described given “teacher”, “atheist teacher”, or “rapist teacher”.

Yes, indeed there are more teachers than atheist teachers. Not recognizing this is called the “conjunction fallacy” and it appears to be a real problem that people have. The basic experiments on the conjunction fallacy (done by Kahneman, Tversky, etc) find it to be robust. In particular, the conjunction fallacy (believing that Linda is more likely to be “a bank teller who is active in the feminist movement” than “a bank teller”) also appears in between-subject designs. The researchers in the above-linked paper are well aware of this literature, in fact they use it as an inspiration for their work. But I’m highly doubtful about what can be learned in this indirect way.
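The set-inclusion point can be made concrete with a toy calculation (the shares below are invented purely for illustration; nothing here comes from the study):

```python
# Why "atheist teacher" can never be more probable than "teacher":
# every atheist teacher is also a teacher, so the conjunction is a
# subset of the larger category. Shares below are made up for illustration.

p_teacher = 0.02                  # hypothetical share of people who are teachers
p_atheist_given_teacher = 0.10    # hypothetical share of teachers who are atheists

p_atheist_teacher = p_teacher * p_atheist_given_teacher

# P(A and B) <= P(A) holds no matter what numbers you plug in.
assert p_atheist_teacher <= p_teacher
```

Judging the conjunction more probable than the larger category, whatever the numbers, is exactly the conjunction fallacy.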

The conjunction fallacy is itself a fallacy in examples like these: that is, the subjects are asked which is more LIKELY, and in the technical sense “likely” means which hypothesis would make the data more probable. Never mind which combination would, in this case, make the data more probable.

Of course this study couldn’t be getting at the least trusted group when groups outside these are not even considered.

I have likely = probably in my dictionary, so I think the correct interpretation of the question is which is more probable, not which would make the data more probable. Fisher really oughtn’t to have called it the likelihood function because that’s just bad English.

MAYO: If you actually read the journal article (not the press report), the researchers use the word “probable,” not “likely.” They also asked about a number of other groups besides atheists and rapists, including homosexuals, feminists, Jews, Muslims, and Christians. The distrust vignette produced a distinct conjunction effect for atheists relative to the other groups.

Andrew et al.: The study used between-subject comparisons as in the original K&T conjunction studies. One set of subjects compared teacher versus atheist teacher; a different set of subjects compared teacher versus rapist teacher; etc. So nobody made a direct comparison of the probability of someone being an atheist teacher versus a rapist teacher, and for any given subject the answer “XXXX teacher” was always incorrect. The researchers used the error rates in different conditions to infer how well a description of an untrustworthy person is considered representative of the various categories (atheist, rapist, etc.). A control condition used a description of an unpleasant but not untrustworthy person to test the explanation that this is being driven by distrust and not more general dislike or unpleasantness.

Andrew’s critique seems to assume that the base rate of atheists is higher than the base rate of rapists, that people know about that base rate difference, and that they use base-rate information rationally in making their judgments. The first point has been challenged elsewhere in the comments. As for the third point, I know of no evidence that higher base-rates increase the frequency of conjunction errors. Plenty of studies on base-rate insensitivity (by K&T and others) would challenge that idea. Besides, choosing anything other than “teacher” would still be a conjunction error. So it’s hard to argue that the effect is driven by people being rational.

The standard interpretation of the conjunction effect is that it is driven by the representativeness heuristic. In the classic example, when people are given the setup description of Linda (Linda majored in philosophy and is concerned with discrimination and social issues, etc.), many think “Linda is a feminist bank teller” is more probable than “Linda is a bank teller” because the description closely matches people’s idea (stereotype) of a feminist. You don’t get the effect if the smaller set does not match the vignette. And that was indeed the case here: in the control condition when Richard was an unpleasant but not untrustworthy man, very few people thought “atheist teacher” was more probable than “teacher.”

Why do it in this roundabout way? Social psychologists do not assume that people are able or willing to consciously report the full extent of their prejudices. So they look for indirect evidence — look for prejudice’s footprints, so to speak. The paper had 6 studies, including a direct self-report of attitudes toward various groups, an IAT, and a job candidate selection task (don’t worry Andrew, the subjects were compensated), as well as a few versions of the conjunction method. Given the standard interpretation of the conjunction effect being driven by representativeness, as well as it being one piece of a multi-method approach, it’s not all that bizarre of a way to go.

Yes, I looked at the article and I realize they used a between-subject design. Apparently I was wrong about the base rates of atheists and rapists—it’s truly a scary thought that America has more rapists than atheists!—but my real point was not (of course) to assume that people “use base-rate information rationally.” I know about Kahneman and Tversky too and have often blogged on their work; I also read the linked research article, so I know they were basing their work on this. Doing an experiment to demonstrate a fallacy is one thing, but I’m highly skeptical of the idea of using this fallacy as some sort of measuring instrument. Indirect evidence is fine, but this is a bit too indirect for my taste.

Andrew, I guess I’m struggling to figure out what is your substantive critique or alternative explanation. In the parent post you wrote “it’s hard to say who is more confused—the study subjects or the researcher” and talked a lot about base rates, but from your followup comments I can’t figure out what you think the researchers are confused about. I can understand having intuitions or taste in experimental methods, but that alone doesn’t seem to be enough to call a study “so dumb it makes me want to barf.”

The researchers describe some dishonest behaviour by ‘Richard’, then ask subjects whether they think it more probable that Richard was either (a) a teacher or (b) an atheist teacher. Why not just ask whether they think Richard’s behaviour is more representative of (a) a teacher or (b) an atheist teacher? Or, given two teachers, one atheist & one not, which, other things being equal, is more likely to behave as Richard? The first version seems as obviously about attitudes to atheism as the others. Do psychologists really think its subtlety leads people into betraying prejudice’s footprint?

But why resort to such roundabout tricks? Why not ask the question directly?

Who is more likely to steal a wallet when nobody is looking?
a. an atheist
b. a rapist
c. neither; they are equally larcenous

Or:

On a seven-point scale, rank each of the following on how likely they would be to steal a wallet when nobody is looking:
an atheist: 1 2 3 4 5 6 7
a Christian: 1 2 3 4 5 6 7
a rapist: 1 2 3 4 5 6 7
etc.

Instead, they asked questions that they knew would confuse nearly anyone not fluent in the language of statistics and probability.

I can well imagine there is some reason they didn’t want to try the direct approach, but I really don’t buy their implied conjunction that (a) results from direct question would be uninterpretable, but (b) results from their indirect method can be trusted.

One would think that even a completely untrained person of average intelligence would be able to recognize that this is some outrageous BS. Yet, it gets published in a high ranked, peer refereed journal. Truly amazing.

A general rule in surveys is that if you ask a question poorly, you can’t really interpret the results. That would seem to apply here.

There’s also an issue of experimenter bias. What sort of mindset does an experimenter have who sets out to compare atheists and rapists — knowing that it’s likely only publishable if you find this particular set of results?

Undoubtedly I should read the original study before insulting it in this way, but maybe I’m just one of those immoral atheists.

The number of atheists depends on the exact definition, but, if we think about Richard Dawkins type atheists and not just people who do not belong to any specific congregation, we’re talking about 2% of adults, or 5 million people.

As to the number of rapists, the FBI reports that, in 1990 through 2010, there were an average of 95,000 reported forcible rapes per year. Multiplying by mean life expectancy, and assuming one rape per assailant, I have to conclude that there are 8 million people in America who either have committed a rape or will commit one sometime in the future. And that does not include unreported rapes (which outnumber reported incidents at least 2:1 and possibly even 6:1).

It seems to me that there are far more rapists in this country than atheists.
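For what it’s worth, the back-of-envelope arithmetic behind that 8 million figure can be reproduced like this (the 95,000 figure and the one-rape-per-assailant assumption come from the comment; the life-expectancy value is my own stand-in):

```python
# Reproducing the comment's back-of-envelope estimate of lifetime offenders.
# Inputs: ~95,000 reported forcible rapes/year (FBI average, 1990-2010, per
# the comment), one rape per assailant, and a rough US life expectancy
# (my stand-in assumption, not a figure from the comment).

reported_per_year = 95_000
life_expectancy_years = 78  # stand-in assumption

lifetime_offenders = reported_per_year * life_expectancy_years
print(f"{lifetime_offenders / 1e6:.1f} million")  # prints "7.4 million"
```

which lands in the ballpark of the comment’s 8 million, before any adjustment for unreported rapes.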

I think that your estimate is way too high. The number of registered sex offenders is currently only ~740,000 (source: National Center for Missing and Exploited Children, http://www.missingkids.com/en_US/documents/sex-offender-map.pdf). Note that this includes all rapists, not just child molesters, and for serious crimes like rape, once someone is in the registry they’re not allowed off of it. I understand that not all rapes are reported, but even at a 6:1 ratio that is still far fewer than 8 million rapists. Also, the percentage of adults who self-identify as atheists is 4% of the population according to a 2006 poll conducted in multiple countries for the Financial Times (http://www.harrisinteractive.com/news/allnewsbydate.asp?NewsID=1131), where the working definition of atheist that participants could self-identify as was: Atheist (one who denies the existence of God). This seems like it would fit under the Richard Dawkins type of atheism.

You’re missing two significant factors. Less than one in three reported rapes results in an arrest. (In 2010, there were 84,767 reported forcible rapes nationwide, resulting in only 20,088 arrests.) Only about half of these arrests result in a conviction.
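Chaining those correction factors together shows roughly how far the conviction count (and hence a conviction-based registry) sits below the report count; a quick sketch using the figures from this comment:

```python
# How many reported rapes end in a conviction, using this comment's figures.
reported = 84_767            # reported forcible rapes, 2010
arrests = 20_088             # resulting arrests
convictions = arrests * 0.5  # "about half" of arrests convict, per the comment

print(f"arrests per report:     {arrests / reported:.0%}")      # ~24%
print(f"convictions per report: {convictions / reported:.0%}")  # ~12%
```

So on these numbers only about one report in eight produces a conviction, which is why a registry count would understate the number of offenders.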

Repeating your back-of-fag-packet calculation for England & Wales gives very similar results, suggesting 2.5% of the population are past or future rapists. The shakiest assumption is one man per rape, but even if it were one per five it would imply one in a hundred men was a rapist – still alarmingly high (let alone that it implies it’s common to get away with it more than once).

Like Andrew, I’m surprised by these figures. My initial guess for the number of murders (600) was almost spot on but for rapes (1500) I was an order of magnitude out.

While we’re talking about the UK – yes, there are still a lot of rapists, but here I think the atheists are more numerous. The most recent British Social Attitudes Survey has roughly 50% of the UK population reporting that they have “no religion”; if we take the very conservative assumption that 10% of those are atheists, then atheists still outnumber rapists.
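Putting the two UK back-of-envelope shares side by side (the 50% and 10% figures are the conservative assumptions stated in this comment; the 2.5% rapist share is the earlier comment’s England & Wales estimate):

```python
# Comparing the two rough UK population shares discussed in the thread.
no_religion_share = 0.50   # British Social Attitudes Survey, per the comment
atheist_fraction = 0.10    # deliberately conservative assumption
rapist_share = 0.025       # earlier comment's England & Wales estimate

atheist_share = no_religion_share * atheist_fraction  # 0.05, i.e. 5%

# Even on the conservative assumption, atheists outnumber rapists 2:1.
assert atheist_share > rapist_share
```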

Also, the Pew Forum on Religion & Public Life / U.S. Religious Landscape Survey is the most detailed religious survey of the United States that I know, with a pool that’s almost twenty times larger than the Financial Times study. It’s so detailed that it will tell you that two thirds of Jehovah’s Witnesses are women, that Buddhists strongly lean Democratic, and that residents of Mississippi are almost four times more likely to believe in absolute inerrancy of the scripture than residents of Massachusetts (64% ± 6% vs 18% ± 4%). And that survey reports that 1.6% ± 0.6% of adult Americans self-identify as atheist. http://religions.pewforum.org/pdf/affiliations-all-traditions.pdf

I’m going to assume Nameless is just overhyping the rape threat through dodgy stats and assumptions but I do feel it is necessary to post something REALLY important and unintuitive, Andrew Gelman (and receive some condemnation here). Forgive me for my own dodgy stats.

“The number of atheists depends on the exact definition, but, if we think about Richard Dawkins type atheists and not just people who do not belong to any specific congregation, we’re talking about 2% of adults, or 5 million people.”
I think it’s an overestimate. First off, the Pew Survey on Religion claims that 1.6% of Americans claim to be atheists. Now, you may think… that’s fine, just assume a “margin of error” between Nameless’ stat and the Pew Survey’s stat.

21% of those atheists ALSO claim to believe in God or a universal spirit, though! (Source: http://religions.pewforum.org/img/general/conception.gif) Now, there could be many reasons for that, one of which is that these people don’t know what atheism means. I don’t like that idea, because I find it insulting, but it’s possible that these people may be reaching for a word to express their beliefs and might have described themselves differently if they were better educated. I basically tried to figure out the reason behind the existence of these God-fearing atheists, to little avail, but the truth is, these sorts of people ARE out there. (If you are interested, go over to http://www.bay12forums.com/smf/index.php?topic=69828.0)

I’m going to assume that the people here are going to assume that an atheist who believes in God or a universal spirit is a contradiction and is therefore not a “true” atheist[1], so we’re pretty much going to need to figure out the population of atheists who either don’t believe in God or don’t know (I’m assuming atheists can adopt agnosticism, even if it is the agnosticism of fairies)… which is 79% of 1.6%, or 1.264%.

Also, according to Nameless, 2%=5 million, meaning 1%=2.5 million, meaning that the population of the United States is 250 million. That’s…er, wrong. The United States’ population is currently above 300 million (source: http://www.census.gov/main/www/popclock.html for exact figure).

If I use Nameless’ assumption of 250 million people and my own figure of 1.264% “true atheists”, there would be 3.16 million atheists. If I use my assumption of 300 million people and 1.264%, there would be 3.79 million atheists. I will actually lean towards using Nameless’ assumption for determining how many atheists there are in the US, because the Pew Survey was done in 2007 (before the US population went above 300 million), and it is possible that the proportion of atheists in the general population may have changed since then.
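Redoing that arithmetic explicitly (all inputs are the commenter’s; note that 79% of 1.6% works out to 1.264%, and the two population assumptions give roughly 3.2 and 3.8 million):

```python
# The commenter's chain of assumptions, computed step by step.
self_identified = 0.016      # Pew: 1.6% of adult Americans self-identify as atheist
nonbeliever_fraction = 0.79  # the 79% who don't also affirm belief in God

true_atheist_share = self_identified * nonbeliever_fraction
print(f"share: {true_atheist_share:.3%}")  # prints "share: 1.264%"

for adults in (250e6, 300e6):  # the two population figures under discussion
    millions = true_atheist_share * adults / 1e6
    print(f"{adults / 1e6:.0f}M adults -> {millions:.2f}M atheists")
```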

Not that I have any real use for knowing how many atheists there are in the US, but I do think that if we want to compare atheists to rapists, we need to know how many rapists AND how many atheists there are. Assumptions of 2% are not useful at all when a sizable number of these atheists say they also believe in God.

[1] Andrew Gelman, please answer this question. Assume the study is correct and Americans do distrust atheists. Would an average, run-of-the-mill American draw a distinction between a self-proclaimed atheist who believes in God and a self-proclaimed atheist who disbelieves in God? Or would the American distrust both individuals equally, because they both claim to be atheists and atheists should be distrusted (regardless of their actual religious beliefs)?

“meaning that the population of the United States is 250 million. That’s…er, wrong.”

You’ve missed the part where I used the word “adults”. Now, that must slightly bias the rapist to atheist ratio my way (because there are nontrivial numbers of 8-year-old atheists, but virtually no 8-year-old rapists), but it’s all icing on the cake when we’re talking about a ratio that is no lower than 3:1 even using the most conservative estimate of the number of rapists.

Your number of atheists varies wildly between surveys, with ARIS putting the number at 15%. ARIS is done by a non-religious university and has a good sample size (n) [1].

But from a policy perspective I’m more interested in how many people self-identifying as Christians are true believers. Religion is a very complex and personal thing, but if people actually believe in the Christian god, it makes sense that they will attend church. A range of organisations put service attendance around 41% ± 2% [2].

So, er, how religious is the US? In terms of people who believe enough to make it a part of their lifestyle?
I wouldn’t like to say, especially as most people believe lack of exercise makes you fat… It’s just a messy picture is all.

But you can’t trust newspaper stories to give accurate accounts of research methods: according to the copy of the article on Gervais’ site (http://www2.psych.ubc.ca/~will/Gervais%20et%20al-%20Atheist%20Distrust.pdf), subjects were asked if it was more probable that this bad egg was a teacher or a teacher & a _____, where the blank was either ‘atheist’, ‘rapist’, ‘Christian’ or ‘Muslim’ for different subjects. The researchers were measuring the proportion of subjects that committed the conjunction fallacy for different descriptions. So the population frequencies of atheist teachers vs rapist teachers don’t come into it.

The reason they give for this odd approach is that in their first study (apparently a straightforward opinion survey) “instead of being representative of personal feelings, participants’ explicit responses may have instead reflected cultural norms determining which groups are fair game for criticism”. So in their second study they “adapted a classic conjunction fallacy paradigm […] to create an indirect measure of distrust for various groups of people”. I agree with you that it only seems to confuse the issue (perhaps as many people distrust Christians but understand probability) – I can’t imagine why anyone would be more likely to reveal his true feelings when asked in this way.

Isn’t it, as usual, the challenge of how to draw inferences from observed/reported responses?

When I was a stats grad student, I was given a study to work on for an already really famous clinical researcher, under the direction of a very-soon-to-be-famous statistician.

It involved the assessment of patient utilities in three dimensions (social, psychological and physical) using a series of vignettes. The statistical advisor thought this would be a great opportunity for me to learn about linear modelling under constraints, but after reading through a recently published book on statistical modelling of utilities I was not convinced that was the best route.

As fascinating as that modelling was, and even though it would have been in my interest to get good at it, I first wanted to “see” the patients’ responses directly rather than through some abstract model. But the stats advisor was really anxious for me to do the linear modelling, so I came in on the weekend (when they weren’t around) and did some plots, and discovered that many patients had responses that were incoherent. For instance, when going from a vignette that was bad on social to one that was just as bad on social but now also bad on physical, their reported utility went up! And this was not a small minority of patients but almost a third.

When I brought this to the attention of the clinical researcher, they grinned and said thanks: obviously something did not make sense, and we should hold off or even abandon trying to interpret the study (and I got out of having to do the linear modelling). Another clinical researcher later joked about how to translate the research findings into practice: “If you have a patient that’s lonely, break their legs – they will really appreciate it!”

I now think it was the newspaper reporter — and as a result me — who was most confused. As Sanjay points out, the experimenters didn’t ask about the relative likelihood of “rapist teacher” & “atheist teacher”; only about the relative likelihood of “teacher” & *either* one of those two. I actually can see the value of trying to figure out if moral bias — differential levels of moral animus against a group — can pave the way for cognitive bias (or inferential error) that reflects or reinforces the moral bias. Andrew reasonably continues to question the design after reading the study — but my reaction was based on the newspaper story alone & I should know better. :(

I think the validity criticism is more trenchant than the base-rate criticism, unless you think the base rate of the conjoined category affects the likelihood of making the conjunction fallacy. (Which it might, I don’t know, but once you’ve stipulated that the subject’s committed a fallacy it’s a little hard to go around trying to rationalize the subject’s behavior modulo that fallacy.)

I guess the defense of the CF analysis is that people are more likely to commit the CF if they seem strongly typical of the conjoined category; at any rate, I assume that’s the analysis of the original results on the CF (woman described in feminist terms is judged more likely to be a feminist AND bank teller than bank teller tout court). I think it’s reasonable to criticize that inference, but then it seems like you’re going after the CF in general, not this study in particular.

“I guess the defense of the CF analysis is that people are more likely to commit the CF if THE TARGET OF THE JUDGMENT SEEMS strongly typical of the conjoined category (I.E. THE CATEGORY UNIQUE TO THE CONJUNCTIVE DESCRIPTION); at any rate, I assume that’s the INTERPRETATION of the original results on the CF (woman described in feminist terms is judged more likely to be a feminist AND bank teller than bank teller tout court). I think it’s reasonable to criticize that inference, but then it seems like you’re going after the CF in general, not this study in particular.”

One can agree with all that without thinking it’s a good choice for *measuring* attitudes & then making comparisons. The subjects who distrust atheists won’t all be the same people who distrust rapists, Christians &c., & there’s no reason to suppose they all have the same susceptibility to the conjunction fallacy.

A large part of this discussion thread seems to illustrate exactly how cognitive biases operate. Andrew’s initial post was a riff based heavily on the use of the word “likely” in the “driver” scenario, which I think illustrates yet another bias (the so-called expertise bias). Even taking into account the poorly written WaPo article, a cursory glance at the link to the article’s abstract reveals that the research is focused on how perceptions of probability are affected by bias, and in no way claims to be testing people’s ability to correctly make inferences by calculating formal probabilities in their heads based on the information given.

JSB: I’m sure I have my own cognitive biases; still, I’m highly skeptical of the idea of using this fallacy as some sort of measuring instrument. Indirect evidence is fine but this is a bit too indirect for my taste.

The other thing that took me a minute was this bit from Professor Gelman:

“Ummmm . . . given that there are lots of atheists out there and not many rapists, I think it’s pretty clear that the bad guy described in the vignette was indeed more likely to be an atheist than a rapist.”

Sounds like Availability in action…which is good in a way…
I’m glad the Professor knows more atheists than rapists!