When science clashes with beliefs? Make science impotent

A psychologist has identified a handy mental shortcut that people use when …

It's hardly a secret that large segments of the population choose not to accept scientific data because it conflicts with their predefined beliefs: economic, political, religious, or otherwise. But many studies have indicated that these same people aren't happy with viewing themselves as anti-science, which can create a state of cognitive dissonance. That has left psychologists pondering the methods that these people use to rationalize the conflict.

A study published in the Journal of Applied Social Psychology takes a look at one of these methods, which the authors term "scientific impotence"—the decision that science can't actually address the issue at hand properly. It finds evidence that not only supports the scientific impotence model, but suggests that it could be contagious. Once a subject has decided that a given topic is off limits to science, they tend to start applying the same logic to other issues.

The paper is worth reading for the introduction alone, which sets up the problem of science acceptance within the context of persuasive arguments and belief systems. There's a significant amount of literature that considers how people resist persuasion, and at least seven different strategies have been identified. But the author, Towson University's Geoffrey Munro, attempts to carve out an exceptional place for scientific information. "Belief-contradicting scientific information may elicit different resistance processes than belief-contradicting information of a nonscientific nature," he argues. "Source derogation, for example, might be less effective in response to scientific than nonscientific information."

It might be less effective, but many of the arguments against mainstream science make it clear that source derogation is alive and well. Evolution doubters present science as an atheistic conspiracy; antivaccination advocates consider the biomedical research community to be hopelessly corrupted by the pharmaceutical industry; and climatologists have been accused of being in it for everything from their own funding to global governance. Clearly, source derogation is very much on the table.

If that method of handling things is dismissed a bit abruptly, Munro makes a better case for not addressing an alternative way of dismissing scientific data: identifying perceived methodological flaws. This definitely occurs, as indicated by references cited in the paper, but it's not an option for everyone. Many people reject scientific information without having access to the methodology that produced it or the ability to understand it if they did. So, although selective attacks on methodology take place, they're not necessarily available to everyone who chooses to dismiss scientific findings.

What Munro examines here is an alternative approach: the decision that, regardless of the methodological details, a topic is just not accessible to scientific analysis. This approach also has a prominent place among those who disregard scientific information, ranging from the very narrow—people who argue that the climate is simply too complicated to understand—to the extremely broad, such as those among the creationist movement who argue that the only valid science takes place in the controlled environs of a lab, and thereby dismiss not only evolution, but geology, astronomy, etc.

To get at this issue, Munro polled a set of college students about their feelings about homosexuality, and then exposed them to a series of generic scientific abstracts that presented evidence that it was or wasn't a mental illness (a control group read the same abstracts with nonsense terms in place of sexual identities). Depending on the condition a student was randomly assigned to, the abstracts either challenged or confirmed that student's preconceptions. The subjects were then given the chance to state whether they accepted the information in the abstracts and, if not, why not.

Regardless of whether the information presented confirmed or contradicted the students' existing beliefs, all of them came away from the reading with their beliefs strengthened. As expected, a number of the subjects who had their beliefs challenged chose to indicate that the topic was beyond the ability of science to properly examine. This group then showed a weak tendency to extend that same logic to other areas, like scientific data on astrology and herbal remedies.

A second group went through the same initial abstract-reading process, but was then given an issue to research (the effectiveness of the death penalty as a deterrent to violent crime) and offered various sources of information on it. The subjects who had chosen to discount scientific information on the human-behavior question were more likely than their peers to turn to nonscientific material when making a decision about the death penalty.

There are a number of issues with the study: the sample size was small, college students are probably atypical in that they're constantly being exposed to challenging information, and there was no attempt to determine the students' scientific literacy on the topic going in. That last point seems rather significant, since the students were recruited from a psychology course, and majors in that field might be expected to already know the state of the research on the topic. So, this study would seem to fall in the large category of those that are intriguing, but in need of a more rigorous replication.

It's probably worth making the effort, however, because it might explain why doubts about mainstream science seem to travel in packs. For example, the Discovery Institute, famed for hosting a petition that questions our understanding of evolution, has recently taken up climate change as an additional issue (they don't believe the scientific community on that topic, either). The Oregon Institute of Science and Medicine is best known for hosting a petition that questions the scientific consensus on climate change, but the people who run it also promote creationism and question the link between HIV and AIDS.

Within the scientific community, there has been substantial debate over how best to deal with the public's refusal to accept basic scientific findings, with different camps arguing for increasing scientific literacy, challenging beliefs, or emphasizing the compatibility between belief and science. Confirming that the scientific impotence phenomenon is real might induce the scientific community to consider whether any of the public engagement models they're currently arguing over would actually be effective at addressing this issue.