How can election polls swing so much given the increasingly polarized nature of American politics, where switching one’s support between candidates is a significant move? We investigate this question by conducting a novel panel survey of 83,283 people repeatedly polled over the last 45 days of the 2012 U.S. presidential election campaign. We find that reported swings in public opinion polls are generally not due to actual shifts in vote intention, but rather are the result of temporary periods of relatively low response rates by supporters of the reportedly slumping candidate. After correcting for this bias, we show there were nearly constant levels of support for the candidates during what appeared, based on traditional polling, to be the most volatile stretches of the campaign. Our results raise the possibility that decades of large, reported swings in public opinion — including the perennial “convention bounce” — are largely artifacts of sampling bias.

Here’s the key fig:

The short story is that much of the apparent change in public opinion is actually change in patterns of nonresponse: When it looked like Romney jumped in popularity, what was really happening was that disaffected Democrats were declining to respond to the survey while resurgent Republicans were more likely to respond.

From a “methods” point of view, the key step is to poststratify by party ID, an idea that I’d explored before (with Cavan Reilly) but without realizing the full political implications.
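To make the poststratification idea concrete, here is a minimal sketch in Python. All the numbers are made up for illustration (they are not from the study): the point is just that when one party's supporters are underrepresented among a day's respondents, reweighting each party-ID cell to an assumed-stable population mix can undo the apparent swing.

```python
# Hypothetical sketch of poststratification by party ID.
# All shares below are illustrative, not from the paper.

def poststratify(support_by_party, cell_share):
    """Average candidate support across party-ID cells,
    weighting each cell by the given share."""
    return sum(support_by_party[g] * cell_share[g] for g in cell_share)

# A day when Democrats are underrepresented among respondents:
support = {"dem": 0.90, "rep": 0.05, "ind": 0.50}   # Obama support within each cell
sample  = {"dem": 0.28, "rep": 0.40, "ind": 0.32}   # today's respondent mix
target  = {"dem": 0.35, "rep": 0.33, "ind": 0.32}   # assumed stable population mix

raw      = poststratify(support, sample)   # what the unadjusted poll reports
adjusted = poststratify(support, target)   # reweighted to the population mix
print(round(raw, 3), round(adjusted, 3))   # the raw number understates Obama support
```

The within-cell support numbers barely move day to day; it is the sample mix that moves, and the adjustment holds it fixed.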

Here’s another way of looking at it: We have a panel survey so we can see how often people were changing their opinion during that critical period of the campaign. Check it out:

(Sorry about that graph where the axis goes below zero. I don’t know how I let that one through.)

This is a big deal and it represents a major change in my thinking compared to my 1993 paper with Gary King, “Why are American Presidential election campaign polls so variable when votes are so predictable?” At that time, we gave an explanation for changes in opinion, but in retrospect, now I’m thinking that many of these apparent swings were really just differential nonresponse. Funny that we never thought of that.

David, Sharad, Doug, and I came to our conclusion after a fairly elaborate analysis of a new dataset. But the idea was out there. Here was Mark Palko, writing on Nov. 6, 2012, just before the election returns were coming in:

Assume that there’s an alternate world called Earth 49-49. This world is identical to ours in all but one respect: for almost all of the presidential campaign, 49% of the voters support Obama and 49% support Romney. There has been virtually no shift in who plans to vote for whom.

Despite this, all of the people on 49-49 believe that they’re on our world, where large segments of the voters are shifting their support from Romney to Obama then from Obama to Romney. . . .

In 49-49, the Romney campaign hit a stretch of embarrassing news coverage while Obama was having, in general, a very good run. With a couple of exceptions, the stories were trivial, certainly not the sort of thing that would cause someone to jump the substantial ideological divide between the two candidates, so none of Romney’s supporters shifted to Obama or to undecided. Many did, however, feel less and less like talking to pollsters. So Romney’s numbers started to go down, which only made his supporters more depressed and reluctant to talk about their choice. . . .

This reluctance was already just starting to fade when the first debate came along. . . . after weeks of bad news and declining polls, the effect on the Republican base of getting what looked very much like the debate they’d hoped for was cathartic. Romney supporters who had been avoiding pollsters suddenly couldn’t wait to take the calls. . . . The polls shifted in Romney’s favor even though, had the election been held the week after the debate, the result would have been the same as it would have been had the election been held two weeks before . . .

I think Palko was basically right (although I’d change his 49-49 to something more like 51-49), and he gets extra credit for figuring this out without having the panel data to show it. If all the major pollsters had been poststratifying by party ID, though, maybe it would’ve been clearer.
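Palko's thought experiment is easy to simulate. Here is a toy version (my own sketch, with invented response rates): true support is pinned at 51-49 for the whole campaign, only the parties' willingness to answer the phone changes, and the raw poll still swings by double digits.

```python
# Toy simulation of the Earth 51-49 story: true support never moves,
# but party response rates do, so the raw poll swings.
# All rates are invented for illustration.
import random

random.seed(0)
TRUE_DEM_SHARE = 0.51  # constant for the entire campaign

def daily_poll(n_calls, dem_response_rate, rep_response_rate):
    """Call n_calls voters; each answers with a rate depending on party.
    Return the Democratic share among those who actually respond."""
    dem = rep = 0
    for _ in range(n_calls):
        if random.random() < TRUE_DEM_SHARE:
            if random.random() < dem_response_rate:
                dem += 1
        else:
            if random.random() < rep_response_rate:
                rep += 1
    return dem / (dem + rep)

# Before the first debate: dispirited Republicans screen their calls.
before = daily_poll(20000, 0.50, 0.35)
# After the debate: energized Republicans can't wait to take the calls.
after = daily_poll(20000, 0.50, 0.60)
print(round(before, 3), round(after, 3))  # a large apparent swing, zero actual change
```

Nobody in the simulation ever changes their vote; the entire "swing" is who picks up the phone.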

Let me conclude with a statistical point. Sometimes researchers want to play it safe by using traditional methods — most notoriously, in that recent note by Michael Link, president of the American Association of Public Opinion Research, arguing against non-probability sampling on the (unsupported) grounds that such methods have “little grounding in theory.” But in the real world of statistics, there’s no such thing as a completely safe method. Adjusting for party ID might seem like a bold and risky move, but, based on the above research, it could well be riskier not to adjust.

Andrew Gelman is a professor of statistics and political science at Columbia University. His books include Bayesian Data Analysis; Teaching Statistics: A Bag of Tricks; and Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do.
