
If you’re at all familiar with opinion polls, you know a few things: how the questions are asked matters; the results are always within a certain margin of error thanks to probability sampling; and there are always some people who profess to have no opinion on the question being asked. When we interpret the results of polls, it’s easy to keep the margin of error in mind, and it’s not even that difficult to assess the impact of question wording by comparing the results of differently worded polls that ask about the same thing. No, the real challenge is figuring out what those undecideds are actually thinking. A little example shows why this can be so important.

Let’s say a poll asks respondents about their support of health reform. Now imagine two sets of results, both with a margin of error of +/- 3%.

In this case, we clearly want to know what the undecided respondents are thinking, because there are so many of them. The results simply aren’t very informative when we only know what half of people are thinking. Now, consider a case where we might be inclined to ignore this lack of opinion:

In this case, we would consider the public evenly divided on the issue, because the difference of 3% is well within our combined margin of error. We probably wouldn’t think much of the 5% with no opinion, but in reality, how that group would break if people were forced to choose one position or the other could make a difference. If all 5% went to the support camp, we’d still have a statistical tie. If they broke for the opposition, the two groups’ confidence intervals would no longer overlap. The undecideds matter.
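The arithmetic here can be sketched in a few lines. The specific percentages below are assumptions for illustration (the post describes a 3-point gap with 5% undecided and a ±3% margin of error, so I’ve used 46% support vs. 49% opposition); the `overlap` helper is just a hypothetical name for the tie check:

```python
MOE = 3.0  # assumed margin of error, in percentage points

def overlap(a, b, moe=MOE):
    # Two results are a "statistical tie" when the gap between them
    # is within the combined margin of error (moe on each side).
    return abs(a - b) <= 2 * moe

support, oppose, undecided = 46.0, 49.0, 5.0  # assumed split

print(overlap(support, oppose))              # 3-point gap: a tie
print(overlap(support + undecided, oppose))  # all 5% to support: still a tie
print(overlap(support, oppose + undecided))  # all 5% to opposition: no longer a tie
```

With these assumed numbers, shifting the whole undecided bloc one way keeps the race tied, while shifting it the other way opens a gap larger than the combined margin of error, which is exactly why the undecideds matter.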

Fortunately, there’s something that researchers can do to get around this. As Adam Berinsky and Michele Margolis from MIT write in a recent issue of the Journal of Health Politics, Policy, and Law, it is possible to impute the opinions of those who profess not to have an opinion. Here’s how it works: Most people don’t answer “no opinion” to every question. So, you use their responses to the questions they did answer to match them up with other respondents who answered those questions the same way they did. Then you look at the way those people answered the question that the others expressed no opinion on. That gives you a sense of how they would have answered the question if they had done so. Make sense?

Here’s a silly example: Let’s say you asked people three questions about which foods they liked to eat. First, you ask if they like lettuce. 80% say yes, and 20% say no. Then you ask if they like cheese. 50% say yes, and 50% say no. Then you ask if they like hamburger. 40% say yes, 20% say no, and 40% have no opinion. When you look at the data, you see that the 40% with no opinion all like lettuce and dislike cheese. Others in the sample who like lettuce and dislike cheese report that they dislike hamburger (it turns out they are strict vegetarians). Therefore, it’s not a certainty, but there’s a good chance that the folks with no opinion actually dislike hamburgers. They may also be vegetarians who answered no opinion because they felt the question wasn’t relevant to them, for example. Make better sense? Good.
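The matching procedure from the food example can be sketched as a tiny hot-deck-style imputation. Everything here is a toy: the ten respondents below are scaled to match the percentages in the example (8/2 on lettuce, 5/5 on cheese, 4 yes / 2 no / 4 no-opinion on hamburger), and `impute` is a hypothetical helper, not the authors’ actual method:

```python
from collections import Counter

# Toy data: True = likes, False = dislikes, None = no opinion.
respondents = [
    {"lettuce": True,  "cheese": False, "hamburger": None},   # the four
    {"lettuce": True,  "cheese": False, "hamburger": None},   # "no opinion"
    {"lettuce": True,  "cheese": False, "hamburger": None},   # respondents
    {"lettuce": True,  "cheese": False, "hamburger": None},
    {"lettuce": True,  "cheese": False, "hamburger": False},  # matches their profile
    {"lettuce": True,  "cheese": True,  "hamburger": True},
    {"lettuce": True,  "cheese": True,  "hamburger": True},
    {"lettuce": True,  "cheese": True,  "hamburger": True},
    {"lettuce": False, "cheese": True,  "hamburger": True},
    {"lettuce": False, "cheese": True,  "hamburger": False},
]

def impute(respondents, target):
    """For each respondent missing an answer on `target`, find others who
    answered the remaining questions the same way, and fill in the majority
    answer among those matches."""
    filled = []
    for r in respondents:
        if r[target] is not None:
            filled.append(dict(r))
            continue
        profile = {k: v for k, v in r.items() if k != target}
        matches = [d[target] for d in respondents
                   if d[target] is not None
                   and all(d[k] == v for k, v in profile.items())]
        guess = Counter(matches).most_common(1)[0][0] if matches else None
        filled.append({**r, target: guess})
    return filled

filled = impute(respondents, "hamburger")
print(Counter(r["hamburger"] for r in filled))
```

All four no-opinion respondents share the lettuce-yes/cheese-no profile, their only answering match dislikes hamburger, so all four are imputed as disliking hamburger. As in the example, it’s an educated guess rather than a certainty: the imputation is only as good as the match between the undecideds and the people standing in for them.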