13 Tips For Reading General Election Polls Like A Pro

Back in December, we partnered with WNYC’s “On the Media” to put together a “Breaking News Consumer’s Handbook” for presidential primary polls. With the campaign’s homestretch now in view, it’s time to do the same for general election polls. I talked with Bob Garfield from “On the Media” about how to make sense of the onslaught of polls that awaits us on the other side of Labor Day (subscribe to the “On the Media” podcast here).

Here are the 13 rules/guidelines worth keeping in mind.

Beware of polls tagged “bombshells” or “stunners.” Any poll described thusly is likely to be an outlier, and outlier polls are usually wrong. Remember those American Research Group polls that had Republican John Kasich climbing rapidly in primary after primary? They were pretty much all wrong; stunners usually are. That said, sometimes they’re right, such as the Des Moines Register poll that projected a large Joni Ernst victory in the 2014 Iowa Senate race, when other polls showed a tighter race. So don’t dismiss outliers, either.

Instead, take an average. I don’t just say this because it’s what we do at FiveThirtyEight. I say it because aggregating polls, especially in general elections, is the method that leads to the most accurate projection of the eventual result most often. Put simply, it’s the best measure of the state of the race.
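The averaging idea can be sketched in a few lines of Python. The poll numbers below are made up, and this is only the simplest version: FiveThirtyEight's actual averages also weight polls by recency, sample size, and pollster quality.

```python
# Hypothetical polls of a single race (numbers are made up).
polls = [
    {"pollster": "Pollster A", "clinton": 46, "trump": 42},
    {"pollster": "Pollster B", "clinton": 44, "trump": 45},
    {"pollster": "Pollster C", "clinton": 48, "trump": 41},
]

def average_margin(polls):
    """Unweighted average of the Clinton-minus-Trump margin, in points."""
    margins = [p["clinton"] - p["trump"] for p in polls]
    return sum(margins) / len(margins)

print(round(average_margin(polls), 1))  # -> 3.3 (Clinton +3.3 on average)
```

No single poll above tells the story by itself; the average smooths out each firm's individual noise.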

Look for polls that use live interviewers; they have a better track record. Polls that use real, live humans to call people up and ask them what they think have a better record than interactive voice response surveys (“robopolls”) or online polls. Pollsters that use live interviewers can more easily reach a representative sample, while online convenience surveys and especially interactive voice response polls have trouble contacting some demographic groups — young people and nonwhite respondents, for example.

Know the polling firm — some are waaay better than others. There’s a ton of pollsters. Many are staffed with really smart people who put a lot of thought and resources into producing top-quality surveys. Some are not. Some produce fake polls. If you’ve never heard of the polling firm, be suspicious. When in doubt, check our pollster ratings to be sure.

Beware the unskewers. It’s easy to find fault with a poll, even from the best pollsters. Anyone passionately arguing that a poll is wrong because its sample has “too many [xx]” or “too many of this group are voting for [xx]” is probably wrong. As you dig into a survey’s crosstabs — looking at college-educated white men, for example, or Hispanics 65 years or older — you’re sacrificing sample size for specificity. The margins of error of subsamples can get huge. Further, most pollsters weight their results by demographics (such as age and race) and not attitudes (like party identification). They do so because historically this has produced the most accurate result. Picking apart individual polls is usually a bad use of time, and the people doing it tend to have a motive.

Check what the pollster said previously. Some pollsters’ results lean more toward one party or the other. Sometimes the race will seem as if it’s shifting simply because a group of pollsters with a Clinton-leaning “house effect” — they tend to produce better numbers for Clinton — release their results one week and a group of Trump-leaning pollsters the next. So, comparing polls between different pollsters can get tricky. Instead, when Acme Polling releases a survey showing Trump ahead by 4 points in Ohio, check what the previous Acme poll of Ohio said — that’ll give you a better sense of whether there has truly been movement in the race. (FiveThirtyEight’s forecast models do just this.)
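That apples-to-apples comparison can be sketched as follows. The "Acme Polling" firm comes from the example above; its previous Ohio margin here (Trump +1) is hypothetical.

```python
# Compare a pollster's new result to its OWN previous poll of the same
# race, rather than to other firms' polls, to filter out house effects.
# Margins are Trump-minus-Clinton, in points; the numbers are hypothetical.
previous = {"pollster": "Acme Polling", "state": "OH", "trump_margin": 1}
current = {"pollster": "Acme Polling", "state": "OH", "trump_margin": 4}

def movement(prev, curr):
    """Shift toward Trump since the same pollster's last poll of the race."""
    assert prev["pollster"] == curr["pollster"] and prev["state"] == curr["state"]
    return curr["trump_margin"] - prev["trump_margin"]

print(movement(previous, current))  # -> 3 (a 3-point shift toward Trump)
```

Because both polls share the same house effect, the lean cancels out of the difference, leaving a cleaner read on real movement.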

Consider the motives of the media reporting on the polls. Conservative and liberal media outlets are more likely to report on polls more favorable to their candidates or portray outlier polls as the true state of the race. And even nonpartisan media outlets know that “New Poll Shows Race Hasn’t Changed” isn’t a great headline. Additionally, a media company that sponsors a poll is probably going to want to hype up their own findings.

Check to see if the poll includes third-party candidates. Normally, whether a poll includes independent candidates doesn’t have too much effect on the margin separating the two major-party candidates. This year, however, surveys that include the Libertarian Party’s nominee, Gary Johnson, and the Green Party’s, Jill Stein, have shown a closer race than two-way matchup polls. Pollsters, at the least, should be including Johnson — he’s probably going to be on the ballot in all 50 states, and he’s polling in the high single digits. Moreover, Johnson does not seem to be fading in the polls as many past third-party candidates have.

Margin of error and sample size matter less than who’s in the sample, though be wary if the sample size is smaller than 400. One key to accurate election polling is correctly projecting who will vote. Calling harder-to-reach people (like young voters) costs a lot of money, so pollsters who want to properly poll these groups often have smaller sample sizes. Meanwhile, having a larger sample size often doesn’t shrink the margin of error that much. A national poll with a sample size of 400 has a margin of error of +/- 4.9 percentage points, while one with a sample size of 800 has a margin of error of +/- 3.5 percentage points. That said, the margin of error rapidly increases as you drop below 400 respondents.
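Those figures come from the textbook margin-of-error formula for a simple random sample, evaluated at a 50/50 split (the worst case). Here is the arithmetic; keep in mind that real polls carry other sources of error on top of this.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(400), 1))  # -> 4.9
print(round(margin_of_error(800), 1))  # -> 3.5
```

Because the error shrinks with the square root of n, doubling the sample from 400 to 800 only cuts the margin from 4.9 to 3.5 points; below 400, the same square-root relationship makes the error balloon quickly.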

Don’t get crazy about the Electoral College. If either Clinton or Trump is ahead in the popular vote, then they will most likely win the Electoral College, especially if they’re winning by more than a few percentage points. That’s been almost universally true throughout American history, with only a couple of exceptions in super close elections. (That’s one reason it’s silly to dismiss national polls.)

Still, aggregating the state polls usually provides a better idea of who is going to win than the national polls. If the election is close, we need to know who is winning in the swing states. But even if the election isn’t close, history shows that aggregating the state polls (weighted by population) has more often than not produced a more accurate projection of the national popular vote than the national polls.
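A minimal sketch of that population-weighted aggregation, using made-up state populations and poll margins:

```python
# Estimate the national popular-vote margin from state polls,
# weighting each state by population. All numbers are hypothetical.
states = [
    {"name": "State A", "population": 10_000_000, "clinton_margin": 5.0},
    {"name": "State B", "population": 20_000_000, "clinton_margin": -2.0},
    {"name": "State C", "population": 5_000_000, "clinton_margin": 10.0},
]

def weighted_national_margin(states):
    """Population-weighted average of state-poll margins, in points."""
    total_pop = sum(s["population"] for s in states)
    weighted = sum(s["clinton_margin"] * s["population"] for s in states)
    return weighted / total_pop

print(round(weighted_national_margin(states), 2))  # -> 1.71
```

Big states dominate the estimate, as they should: State B's Trump lean pulls the weighted national margin well below a naive unweighted average of the three state margins.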

If the polls shift after the debates … wait. Short-term shifts in polls often reverse themselves. Debates are among the final events on every modern election calendar that can have a big effect on the polls. There’s an argument to be made that Ronald Reagan’s strong debate performance in 1980 shifted the polls in his direction in the final week of the campaign. But in 2012, Mitt Romney rose in the polls after the first debate only to lose most of those gains before Election Day. Better to wait and see if any change lasts.

Even at the end of the campaign, the polls probably won’t perfectly predict the results. Polls get more accurate as Election Day approaches, but even on the eve of voting they’re not perfect. Everyone talks about the margin of sampling error — the error introduced by not surveying every voter. But polls are also subject to all kinds of other errors, none of which disappear as we get closer to the election. The average presidential poll within the final 21 days of the election has been off by 3.6 percentage points since 2000.

Harry Enten is a senior political writer and analyst for FiveThirtyEight. @forecasterenten