I think comparative politics has a bigger problem with conflicts of interest than scholars who work in this field generally acknowledge. I don’t think the problem can be eliminated, but I imagine that talking about it more can help, so that’s what I’m going to do.

When you hear the term “conflict of interest,” you probably think of corporations paying for studies that advance their commercial interests. I know I do. It’s easy to see why studies on the effectiveness of new drug therapies or the link between pollution and cancer, for example, warrant closer scrutiny when they’re funded by firms with profits riding on the results. You don’t have to be a misanthrope to believe that the profit motive might have shaped the analysis, and there are enough examples of outright fraud to make skepticism the prudent default setting.

That’s not the only conflict that can arise, though. What I think many scholars working in comparative politics don’t appreciate as much as we should is that it’s also possible for political values and advocacy to play a similar role, and to similar effect. When a researcher’s work deals with issues on which he or she has strong moral beliefs, that confluence can hinder his or her ability to identify and fairly weigh relevant evidence. Confirmation bias is hard to overcome, especially in studies that rely entirely on an author’s interpretation, as many qualitative studies do. The problem is even more intense if the researcher’s personal life is interwoven with his or her work. Certain conclusions may be more palatable or appealing to people with certain values, and it can be professionally and personally damaging for researchers to report findings that suggest the work their friends and colleagues are doing may not be all that useful, or may even be counterproductive.

The example I know best comes from one of my primary research interests, comparative democratization. Some of the best-known and most respected researchers and organizations in this sub-field routinely engage in advocacy through op-eds, policy briefs, and meetings and speaking engagements with advocates and development professionals. One of the leading journals on this topic, the Journal of Democracy (JoD), is published for the National Endowment for Democracy, a U.S. government-funded organization that supports the U.S. government’s efforts to promote democracy around the world. In contrast to conventional academic practice, most submissions to JoD are commissioned by the editors, and they aren’t formally peer-reviewed.

Perhaps it’s just a coincidence, but for the past 20 years or so, the main themes to emerge from research on this topic are that democratization has all kinds of ancillary benefits—peace, wealth, and freedom from terrorism, to name a few—and that the kinds of things the U.S. government and the advocates it supports generally do to advance democratization are helpful. In other words, scholars’ studies often reach conclusions that affirm the value of U.S. policy and their own advocacy, which is intimately connected to their personal beliefs and relationships.

That happy alignment doesn’t automatically invalidate those studies, of course, but I think it does warrant closer scrutiny than it now gets. I have great respect for many of the people working in democratization studies, and I happen to share their moral convictions that democracy is the best form of government and that every human being deserves citizenship. Still, let’s be honest: we feel better when we believe our research is helping people we admire change the world for the better, and we’re more likely to get that positive feedback when our findings validate the work those people are already doing. The effects of this feedback loop on the questions we ask, the designs we adopt to answer them, and the conclusions we reach may not be trivial. I think we should talk more about it, both in a general way and whenever evaluating specific pieces of research.

It would be unfair and probably unethical of me to conclude without pointing out that similar issues arise when scholars do consulting work, as I have for the past 15 or so years. Even if a client asks for as fair and objective a study as possible, interpersonal and financial concerns can shape the design of the analysis and interpretation of the results. For example, if you’re paid handsomely to develop a system to forecast event X, you have a financial interest in saying that you can indeed forecast event X and that you can do it well. We can ameliorate this problem by being as transparent as possible about our funding, data, and methods, but we can’t eliminate it, and we’re usually not the best judges of our own motives. Contract research like this occupies a pretty small space in comparative politics right now, so I don’t think this is having much effect on the field at the moment, but I think it’s important for me to note it, given the career path I’ve taken.