How to be a smart reader of political polls, illustrated


Every few days, someone releases a new poll that claims an advantage for one candidate or the other in one of the many (many!) big races in Minnesota’s November elections.

It’s tempting to read a lot into these poll results: Your candidate’s ahead! Your candidate’s behind! They’re neck and neck! After all, polls are the best information we have about who’s winning and losing the horse race until all the votes are counted.

But polls often don't make claims as definitive as people make them out to be. That's why it's important to be a savvy consumer of polls, and that, reader, is where we're going to help you out with this handy guide.

How do I know if I should trust a poll?

Just because something looks like a poll and acts like a poll doesn’t mean it’s a poll you should trust.

So, first steps: Take a good look at who conducted the poll and think about whether or not they have an agenda they might be trying to promote by publishing it.

Be more skeptical of parties, campaigns and political committees, which might release poll results to push a certain point of view, said Rob Daves, the principal researcher at Minneapolis-based Daves and Associates Research and the former director of the Minnesota Poll.

Polls by media organizations might raise less suspicion at the outset, but they sometimes cut corners in efforts to save money, Daves said. That’s why it’s important to know what to look for when you interpret a poll.

A good pollster should lay out who he or she polled (for example: how many were men, how many were women, were they likely voters and how do they know that?), when, how — whether by cell phone, landline, or internet — and what questions were asked. A person reading a poll should theoretically have enough information to replicate the poll and its results.

Read the questions a poll asked, said Jim Cottrill, a political science professor at St. Cloud State University who works on the SCSU Survey. And look for this: a good poll will ask questions worded in a neutral way in an order designed not to nudge responses.

One of the early questions in the SCSU survey asks what respondents think is the most important issue facing Minnesota.

“We ask it out of the gate, first, so whatever is on their mind, they mention,” Cottrill said. If the poll asked about the most important issue facing Minnesotans, say, after it had asked several questions about immigration, more people might have responded with that top-of-mind topic. That’s called priming responses, and good pollsters try not to do it.

Likewise, most pollsters weight answers, giving some responses more heft than others in the final result, to correct for ways in which the people they end up reaching don't represent the population as a whole.

There are all kinds of reasons sample populations might not accurately represent the underlying group you’re trying to survey. When you call landlines, for example, “You’re going to be particularly overrepresentative of older people, people who are retired have a little more time to sit and chat with you about what they think,” Cottrill said. Likewise, men are overrepresented when you call cell numbers because women are less likely to pick up a call from an unknown number.

A good poll surveys a random sample, meaning any one person in the population is as likely as any other person to be chosen to participate. A good pollster will weight responses to account for these non-random factors, and explain how he or she did it.

For example, in its explanation of a September Minnesota CD8 poll, the New York Times describes how it weighted results by age, estimated party, gender, voting likelihood, race, education and region. It also shows the results of its poll under different weighting schemes, which gives you a sense of how much weighting (or the lack of it) matters.

The point of weighting is that factors like gender and age affect the way people vote, so the sample should be adjusted to match up as much as possible with the population of voters overall.
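As a rough sketch of how weighting by a single factor works (all of the numbers below are invented for illustration, not taken from any real poll): each respondent gets a weight equal to their group's share of the population divided by their group's share of the sample, and a weighted average replaces the raw one.

```python
# Minimal post-stratification weighting by one factor (gender).
# All figures are hypothetical, purely to show the mechanics.

# Raw responses: (gender, 1 = supports the candidate, 0 = doesn't)
responses = [
    ("M", 1), ("M", 0), ("M", 0), ("M", 0), ("M", 0), ("M", 1),
    ("F", 1), ("F", 1), ("F", 0), ("F", 1),
]

# Group shares in the sample vs. the target population
sample_share = {"M": 6 / 10, "F": 4 / 10}     # men overrepresented in the sample
population_share = {"M": 0.5, "F": 0.5}       # what the sample should look like

def weight(gender):
    # A respondent's weight: population share / sample share for their group
    return population_share[gender] / sample_share[gender]

unweighted = sum(s for _, s in responses) / len(responses)
weighted = (sum(weight(g) * s for g, s in responses)
            / sum(weight(g) for g, _ in responses))

print(f"unweighted support: {unweighted:.1%}")  # men's low support counts too much
print(f"weighted support:   {weighted:.1%}")    # corrected toward the population mix
```

Because men were oversampled and less supportive in this made-up data, the weighted figure comes out higher than the raw one. Real pollsters weight on several factors at once, but the arithmetic is the same idea.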

Poll results often include a “margin of error.” Should I really trust something that admits it has errors?

You actually should, as long as you take into account what margin of error really means.

Because they attempt to represent a big population using a smaller sample, there are lots of ways polls have errors. The kind that often appears next to poll results is called a margin of sampling error.

If pollsters talked to every voter in a district, these margins of error wouldn’t exist. (That’s why the ten-year U.S. Census is reported without margins of error — it’s a complete count.) But that’s not the way the world works.

In real life, pollsters talk to a sample of people and extrapolate the results to the general population. To account for that mathematical leap of faith, good pollsters report margins of error (if they’re not reported, don’t trust the poll), which are based on mathematical equations that try to estimate how far off a sample is likely to be from the underlying population.

“They’re giving you an estimate, and you have to know how much error there is in that estimate,” Cottrill said.

Usually, margins of error use something called a 95 percent confidence interval, which means if you did the poll 100 times, you could expect to get results within the margin of error 95 times.

MinnPost illustration by Greta Kaul

(While the result is assumed to fall within the margin of error 95 percent of the time, there’s a 2.5 percent chance it falls above that range and a 2.5 percent chance it falls below it.)
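If the sampling really is random, you can check that 95 percent figure by simulation. This sketch (hypothetical true support of 45 percent, 600 respondents per poll, standard library only) draws many simulated polls and counts how often the sample lands within a roughly 4-point margin of error, the figure the textbook formula 1.96 × √(p(1−p)/n) gives for n = 600; published margins can differ a bit depending on methodology.

```python
import random

random.seed(0)         # fixed seed so the run is reproducible
TRUE_SUPPORT = 0.45    # hypothetical true share in the population
N = 600                # respondents per simulated poll
MOE = 0.04             # ~95% margin of error for n=600 from 1.96 * sqrt(.25/600)

polls = 1000
within = 0
for _ in range(polls):
    # Each respondent supports the candidate with probability TRUE_SUPPORT
    sample = sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N
    if abs(sample - TRUE_SUPPORT) <= MOE:
        within += 1

print(f"{within / polls:.0%} of simulated polls fell within the margin of error")
```

The share should come out close to 95 percent, which is exactly what the confidence interval promises: most polls land near the truth, but a predictable few miss.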

In Minnesota, talking to 600 people is enough to get a pretty good feel for the state — usually about a 3.5 percentage point margin of error, Cottrill said.

Pay attention to margins of error when making claims about subgroups of people based on polls, Cottrill said. When you take a slice of the population in the overall poll — say women, or Asian American voters, or dog owners (OK, nobody polls dog owners about voting) — the sample size gets smaller, so the margin of error gets bigger.
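The reason the margin balloons for subgroups falls straight out of the standard formula, where the sample size sits under a square root. A quick sketch (the subgroup size here is illustrative, assuming a quarter of respondents fall in the slice):

```python
import math

def moe(n, z=1.96, p=0.5):
    """Worst-case 95 percent margin of sampling error for a simple
    random sample of size n (p = 0.5 maximizes the margin)."""
    return z * math.sqrt(p * (1 - p) / n)

full_sample = 600   # the whole poll
subgroup = 150      # say, a quarter of respondents belong to the subgroup

print(f"full sample (n={full_sample}): +/- {moe(full_sample):.1%}")
print(f"subgroup    (n={subgroup}):  +/- {moe(subgroup):.1%}")
```

Cutting the sample to a quarter of its size doubles the margin of error, since the margin scales with 1/√n. That is why a subgroup result with a ±8-point margin can look dramatic while telling you very little.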

So how should I think about margin of error when reading a poll result?

The Star Tribune/MPR News Minnesota Poll put DFL candidate for governor Tim Walz at 45 percent, with Republican Jeff Johnson at 36 percent and 16 percent undecided (3 percent of voters favored another candidate) with a 3.5 percentage point margin of sampling error. So what does that mean?

A margin of error of 3.5 percentage points tells you that the true share of Minnesotans who support Walz likely falls within 3.5 percentage points of the share reported in the sample.

That tells us Walz seems to be ahead. Accounting for the +/-3.5 percent margin of error:

45 percent of Minnesotans sampled favored Walz, but the true result could be as low as 41.5 percent and as high as 48.5 percent.

36 percent of Minnesotans sampled favored Johnson, but the true result could be as low as 32.5 percent and as high as 39.5 percent.

That seems like good news for Walz: worst-case scenario, he’s at 41.5 percent and Johnson’s at 39.5 percent. But then there is the matter of undecided voters. There are weeks left to go, and 16 percent (+/- 3.5 percentage points) haven’t made up their minds.

So yeah, Walz appears to be ahead, but it’s early and a lot of people still haven’t given a thought to the governor’s race. Maybe they’ll all vote for Johnson. Maybe they won’t.

MinnPost illustration by Greta Kaul

Another example: A New York Times/Siena College Minnesota CD8 poll found DFLer Joe Radinovich had the support of 44 percent of respondents compared to Republican Pete Stauber’s 43 percent. A 4.6-point margin of error means Radinovich’s support could be as low as 39.4 percent or as high as 48.6 percent. Stauber’s support could be as low as 38.4 percent or as high as 47.6 percent.

MinnPost illustration by Greta Kaul

This is a really close race. A responsible read on the result? This poll suggests Radinovich is ahead, but his lead is well within the margin of error, meaning he and Stauber could very well be tied.
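The check in both examples is simple arithmetic: compare the leader's worst case with the trailer's best case. Here is that comparison for the two polls cited above (note this overlap check mirrors the article's reasoning; statisticians often apply a combined margin to the gap between two candidates drawn from the same sample, which is stricter still):

```python
# Toplines and margins of error are the figures cited in this article:
# (leader %, trailer %, margin of error in percentage points)
races = {
    "Walz vs. Johnson (Star Tribune/MPR News)": (45.0, 36.0, 3.5),
    "Radinovich vs. Stauber (NYT/Siena)": (44.0, 43.0, 4.6),
}

verdicts = {}
for race, (leader, trailer, moe) in races.items():
    low = leader - moe    # leader's worst case
    high = trailer + moe  # trailer's best case
    verdicts[race] = low <= high  # True means the lead is within the margin
    print(f"{race}: {low:.1f}% vs {high:.1f}%, "
          f"lead within margin of error: {verdicts[race]}")
```

Walz's 9-point lead survives the worst case; Radinovich's 1-point lead does not, which is why the responsible read on the CD8 poll is "effectively tied."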

Great. So if I keep all that stuff in mind, I can read a poll and know who’s going to win the election?

Not quite. Remember that polls measure how people answered questions when the pollster called — they aren’t meant to predict the outcome of a race.

To get a better sense of trends, it sometimes helps to look at the direction polls are going. If fewer are undecided and more have decided on Johnson than last time, things seem to be trending in a good direction for him.

Granted, that can be hard to do in state races, where fewer polls are done, than in national ones. And there’s always the potential for an October surprise to cause an upset.

One more thing. When you’re looking at polls, think about them in the context of the way American politics are structured, Cottrill said. Yes, pollsters had been tracking an enthusiasm edge for Democrats (that’s diminished a bit, according to a new poll), but representation in Congress isn’t proportional to enthusiasm.

Politicians need to win seats that represent districts of varying political stripes. That means what’s happening in individual districts matters more, in the context of Congress, than the enthusiasm of parties’ bases on the aggregate.

An example of this: in 2016, polls projected Hillary Clinton would win more votes than Donald Trump. She did, by nearly 3 million. But the U.S. elects presidents based on the electoral college, not the popular vote, so her lead didn’t translate into victory.

“The polls were totally right, but that’s not how we elect the president,” Cottrill said.


Comments (5)

One recommendation I would add: compare the poll you are looking at to the aggregate of polling on the same question where possible. That way you will have a better idea of whether the poll is an outlier.

This is the time when so-called polls pop up that are just blatantly obvious attempts at boosting one candidate’s or another’s profile. I got one the other day which was so clearly biased towards Feehan that it was laughable. My wife got one, obviously from the Hagedorn camp, that when it became obvious she favored Feehan they just hung up on her.
Ever notice how Drudge constantly touts the Rasmussen poll results? This is a poll that consistently ranks conservative candidates, and Trump especially, higher than any other poll out there. There has got to be a reason for that and it isn’t that they are the only ones getting it right.

But the final polls before the election showed Hillary 2-3 points ahead. Which is exactly where the vote ended up. The national polls were exactly right. We just happen to have a system where some votes count more than others and you can get elected president even though millions more people preferred your opponent.

Polling about voting is a campaign art clothed as a survey. Even campaigns earnestly trying to assess the likely outcome are also looking for reasons. I’d always sneak in a knowledge test:
“In the upcoming US Senate race for Sen. Franken’s seat do you intend to vote for Tina Smith, Karin Housley or Al Franken?”

Neutral or media pre-election polls are usually overly simple, like “If the election were today, would you vote for Candidate A or Candidate B?”
A canvasser at the door asks that, and reads a pause before the answer as much as the answer itself.

In a voice poll, responders might offer “Neither, I don’t intend to vote,” “I voted early,” or even “Who is B?” In a written or online poll, only A, B and Undecided get marked; the responses don’t necessarily come from a random sample, and a survey that isn’t returned isn’t the same as undecided.