Cassidy’s Count: A Victory for the Pollsters and the Forecasters

Now that the Florida authorities have finally confirmed that President Obama defeated Mitt Romney in the Sunshine State by a margin of 50 per cent to 49.1 per cent, we have all the results and data we need to talk about what happened in the 2012 election, and who got it right. Obviously, Nate Silver did—more about that below. But so did most of the other forecasters, and, more importantly, many of the pollsters on whose work all the prognosticators, Silver and myself included, relied. To remind you, here are the results: President Obama won the popular vote by 50.5 per cent to 47.9 per cent, a margin of 2.6 per cent. In the electoral college, he got 332 votes and Mitt Romney got 206 votes. Obama carried almost all the battlegrounds, which, for these purposes, I will take to be eleven states: Colorado, Florida, Iowa, Nevada, North Carolina, New Hampshire, Michigan, Ohio, Pennsylvania, Virginia, and Wisconsin.

Let’s start with the pollsters and Obama’s win in the popular vote, which was a bit bigger than expected. For much of the final month, following the President’s poor performance in the first debate, Romney led in the national polls—in some, such as the Gallup tracking poll, by large margins. But in the final ten days or so, the poll of polls, which combines many different surveys, correctly identified a swing back toward Obama. On the eve of the election, the Real Clear Politics poll of polls and the T.P.M. Polltracker both showed the President ahead by 0.7 per cent.
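Mechanically, a poll of polls is nothing fancier than an average of recent surveys. Here is a minimal sketch in Python; the numbers are invented for illustration, standing in for the actual survey inputs that R.C.P. and T.P.M. averaged:

```python
# Minimal poll-of-polls average. The three "surveys" below are
# hypothetical, not the actual inputs to any published average.
polls = [
    {"obama": 48.0, "romney": 47.0},
    {"obama": 49.0, "romney": 48.5},
    {"obama": 50.0, "romney": 48.0},
]

avg_obama = sum(p["obama"] for p in polls) / len(polls)
avg_romney = sum(p["romney"] for p in polls) / len(polls)
print(f"Obama {avg_obama:.1f}, Romney {avg_romney:.1f}, "
      f"margin {avg_obama - avg_romney:+.1f}")
```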

Before I move on to the individual polls, a word of caution. In almost all of them, the margin of error was three per cent or more, which militates against putting much emphasis on small differences. Moreover, polls are snapshots, not forecasts. Conceivably, some of them were accurate when they were carried out, but things changed between then and Tuesday. Even allowing for these factors, though, it’s fair to make some comparisons. All pollsters love to be vindicated on Election Day, and the sensible ones compare their numbers with the final outcome to see if they need to make any adjustments in subsequent elections.
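For reference, the margin of error that pollsters quote is typically the ninety-five-per-cent confidence interval for a simple random sample. A back-of-the-envelope version, with an illustrative sample size, shows why a thousand-person poll cannot reliably separate a one-point lead from a tie:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a share p
    measured in a simple random sample of n respondents."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 50% in a 1,000-person survey:
print(f"{margin_of_error(0.50, 1000):.1f} points")  # about 3.1 points
```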

In the weeks leading up to the election, dozens of national polls were carried out. Interestingly, of those conducted in the final days before Tuesday, the three that produced findings most closely resembling the final result were all internet-based surveys. A Google Consumer Survey, which was published on Monday, showed the Obama-Biden ticket leading the Romney-Ryan ticket by 2.3 per cent. On the same day, the Reuters/Ipsos daily tracker had Obama leading by two points. And a so-called megapoll from the Economist/YouGov, which involved the pollster questioning 36,472 likely voters online, also had Obama leading by two points.

Another survey worth mentioning is the panel survey from Rand, which, rather than sampling new voters every time it took a new poll, followed the same individuals—thirty-five hundred of them—and tracked their preferences over months. The Rand survey showed Obama consistently ahead, and its final update showed him leading by more than three points.

The surveys that showed Obama losing and Romney ahead going into Tuesday included the trackers from Gallup and Rasmussen. Gallup, in particular, has come in for much criticism, which isn’t surprising. It’s the oldest and best-known poll in the business, and people expect it to do better. (In political circles, most folks expect Rasmussen’s results to lean toward the G.O.P.) Interestingly, Gallup’s tracking poll of registered voters, rather than likely voters, showed Obama with a three-point lead on the day before the election. It successfully captured the final-week swing to Obama in the wake of Hurricane Sandy. Evidently, Gallup’s main problem was in deciding who was likely to vote. In making this judgment, which was based on a number of factors, it appears to have excluded too many Democrats.
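Gallup’s likely-voter screen is proprietary, but the mechanics are easy to illustrate: a cutoff on a turnout score drops marginal voters from the sample, and if those dropped voters skew Democratic, the likely-voter topline moves toward the G.O.P. Everything in the sketch below, respondents and scores alike, is invented for illustration:

```python
# Toy likely-voter screen: each respondent is (candidate, turnout score).
# All names and numbers here are hypothetical.
respondents = [("Obama", 7), ("Obama", 4), ("Obama", 3),
               ("Romney", 7), ("Romney", 6)]

def obama_share(sample):
    """Obama's share of the two-way vote in a sample, in per cent."""
    obama = sum(1 for candidate, _ in sample if candidate == "Obama")
    return 100 * obama / len(sample)

# Registered voters: everyone counts.
print(f"Registered voters: Obama {obama_share(respondents):.0f}%")  # 60%

# Likely voters: keep only respondents scoring 6 or higher.
likely = [r for r in respondents if r[1] >= 6]
print(f"Likely voters:     Obama {obama_share(likely):.0f}%")       # 33%
```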

Of course, it was the outcome in the battleground states that ultimately determined the result. For months, most of the swing-state polls showed Obama with a steady lead. On the night before the election, the polls of polls from Real Clear Politics and T.P.M. both had Obama leading in nine of the eleven battlegrounds, the exceptions being Florida and North Carolina. On average, then, the pollsters called every swing state correctly except Florida, which many of them got wrong. On the day before the election, the Real Clear Politics poll of polls in Florida had Romney leading by 1.5 per cent; the T.P.M. poll tracker had him up by 1.2 per cent.

Since Florida was close all along, it’s not particularly notable that many pollsters had Romney ahead. A bigger surprise was that, in many of the swing states, Obama’s margin of victory was bigger than the polls had indicated—often considerably bigger. Only in Florida, Ohio, and North Carolina were Obama and Romney’s final vote totals within two percentage points of each other. Obama carried Colorado by 4.7 points, Iowa by 5.6 points, New Hampshire by 5.7 points, Nevada by 6.6 points, and Wisconsin by 6.7 points. In these battlegrounds, the race didn’t end up being particularly close.

Still, if the pollsters’ exact numbers didn’t match up with the actual vote tallies, they generally called the winner correctly. Of course, not all of the pollsters did equally well. The big, reputable polls, which used real interviewers rather than robocalls, and which didn’t weight their results in favor of the G.O.P.—that’s you again, Scott Rasmussen—were generally pretty reliable. But at the local level there was also a lot of junk polling, which added quite a bit of statistical noise to the picture.

Even in Florida, where most of the polls in the last week showed Romney leading, several polls showed Obama with a lead of a point or two: NBC/WSJ/Marist College, CBS/New York Times/Quinnipiac University, and Public Policy Polling. During the campaign, these pollsters, particularly NBC/WSJ/Marist, received criticism from the right for allegedly stacking their results in favor of the Democrats. But they had the last laugh. A couple of tracking polls that monitored the state race separately from the national contest also deserve an honorable mention. Both the Reuters/Ipsos poll—the same one that got the national race right—and the Newsmax/Zogby poll, which also called an Obama victory at the national level, showed the race in Florida virtually tied a day or two before the election. And also worth mentioning again: the final YouGov megapoll, which, when broken down to the state level, showed Obama leading by one point in Florida. (His actual margin of victory was 0.9 per cent.)

In Ohio, which was the subject of exhaustive surveying and analysis, the pollsters were also vindicated. Of twenty-nine polls carried out in the final three weeks of the campaign, just one—a Rasmussen survey—showed Romney ahead. Ultimately, unlike in most of the other battleground states, Obama’s margin of victory was a bit smaller than the polls had indicated: 1.9 points compared to a 2.9-point margin in the final R.C.P. poll of polls. Two local surveys—the Ohio Poll and a poll for the Columbus Dispatch—produced numbers that were within one per cent of the final result.

The partisan dispute about how the pollsters were counting the numbers was particularly bitter in Ohio: many conservative analysts claimed that the mainstream pollsters were mistakenly assuming that many more Democrats would turn out than Republicans. This assumption turned out to be perfectly justified. According to the National Exit Poll, thirty-nine per cent of the voters in Ohio identified themselves as Democrats, compared to the thirty per cent who identified themselves as Republicans. The conservatives’ conspiracy theory was debunked.

Now on to the forecasters, starting with myself. On Monday morning, in making my final update to The New Yorker’s electoral map, I predicted that Obama would get 303 votes in the electoral college and that Romney would get 235. I made that projection primarily on the basis of the state polls, and, as I pointed out in a subsequent post, it was in line with the consensus opinion.

I got forty-nine of the fifty states right, which is pretty good. But Florida went for Obama. Why didn’t I foresee that happening? In retrospect, I didn’t attach enough weight to the tracking polls, which showed a nationwide swing to Obama over the final days, probably because of his handling of Hurricane Sandy. I wasn’t oblivious to what was happening. I wrote a post about it, and I called Colorado and Virginia for Obama, having previously had them as toss-ups. But I left Florida in the Romney column, citing the fact that Romney still had a narrow lead in most of the local polls. That turned out to be a mistake.

Meanwhile, the more mathematical forecasters were also anguishing over Florida. As the national polls indicated movement in Obama’s favor, their statistical models, which combine national and state polling, alerted them to the fact that the President’s chances of taking the Sunshine State were approaching fifty per cent. That is the advantage of having a model as opposed to just staring at polls and scratching your head. Still, it was a very close call. In tabulating his final state-by-state projections at FiveThirtyEight, Silver listed Florida as a “toss-up,” and in his last pre-election post, published early on the morning of November 6th, he wrote, “Florida remains too close to call.” He put Obama’s chances of victory at fifty per cent exactly and projected that the final percentages for the two candidates would be 49.9 and 49.9. (As of this writing, these figures are still on the site in the table showing the projections for Florida.)
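Silver’s model is far more elaborate than anything that fits in a blog post, but its core move, turning a projected margin and an error estimate into a win probability, can be sketched in a few lines. The margin and error figures below are placeholders, not FiveThirtyEight’s actual parameters:

```python
import random

def win_probability(margin: float, sigma: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of a candidate's win probability, given a
    projected margin (in points) and a normally distributed forecast
    error with standard deviation sigma."""
    wins = sum(1 for _ in range(trials) if random.gauss(margin, sigma) > 0)
    return wins / trials

# A Florida-like race: a projected margin of two-hundredths of a point,
# with a three-point error band, lands almost exactly at fifty per cent.
print(f"{win_probability(0.02, 3.0):.1%}")
```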

But in the final twenty-four hours before the vote, FiveThirtyEight also showed Florida light blue on its electoral map, indicating that the probability of an Obama victory, to one decimal place, was 50.3 per cent. Later on Tuesday, as Silver live-blogged the vote returns, he wrote, “In the final pre-election forecast at FiveThirtyEight, the state of Florida was exceptionally close. Officially, Mr. Obama was projected to win 49.797 percent of the vote there, and Mr. Romney 49.775 percent, a difference of two-hundredths of a percentage point.”

It might be said that calling the race too close to call and also coloring Florida light blue amounted to having it both ways. But Silver’s model did point (ever so slightly) to an Obama win, and over-all he deserves a lot of credit. In a previous post, I queried whether mathematical models of the type that he uses add anything to the polls they rely on, and to simple polls of polls, such as those of R.C.P. and T.P.M. In this case, they did. Silver’s final forecast in Florida clearly beat the polls of polls, which had Romney ahead. He also nailed the popular vote, again beating the polls of polls, and he correctly identified the last-minute swing to Obama. For the second election in a row, he left many of the pundits in the dust. As promised, a bottle of champagne is on its way to the offices of FiveThirtyEight—not that its creator needs any more prizes now that his new book is number two on the Amazon best-seller list.

Still, as Silver readily conceded when I spoke to him on Saturday, there was an element of good fortune involved, especially when it came to coloring Florida light blue. “That was just a case of dumb luck basically,” he said modestly. At least one other mathematical forecaster wasn’t so fortunate. On the eve of the voting, Sam Wang, the man behind the Princeton Election Consortium, looked at his model and saw it indicating that the race in Florida was basically tied. Still, Wang felt obliged to make a prediction. “We are all tossing coins,” he wrote in calling the race for Romney. “I am prepared to lose the coin toss.”

Wang lost his coin toss, and Silver won, but Silver wasn’t the only winner. Simon Jackman, a Stanford University political scientist who created the Huffington Post’s pollster model, and Drew Linzer, an Emory University political scientist who runs the Votamatic Web site, both called all fifty states correctly, although they, too, hesitated over Florida. On Tuesday morning, Jackman published his final prediction, noting, “We are not particularly confident about the forecast for Florida.” About the same time, Linzer, in making his final prediction, described Florida as “a true toss-up” and said that he “would not be surprised” if it went for Romney. However, both Linzer and Jackman did ultimately predict an Obama victory, which, according to their models, was just about the most likely outcome.

In this somewhat equivocal manner, Silver, Linzer, and Jackman correctly predicted the 332-206 outcome in the electoral college. Who, then, was the ultimate winner of the forecasting gold medal? For the sake of argument, I’ll use the popular vote as a tiebreaker. As far as I could see, Linzer didn’t issue a forecast for the popular vote, but Silver and Jackman did. Silver’s final prediction was Obama 50.8 per cent, Romney 48.3 per cent. Jackman’s prediction was Obama 50.1 per cent, Romney 48.4 per cent. Neither got the final voting figures—50.5 per cent to 47.9 per cent—exactly right, but Silver came closer. Jackman’s prediction of a 1.7-per-cent winning margin for Obama turned out to be a bit low. Silver’s prediction of a 2.5-per-cent winning margin proved to be almost spot on, and the gold medal goes to him.
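For anyone who wants to check the tiebreaker, the arithmetic fits in a few lines; all the figures come straight from the predictions above:

```python
actual = 50.5 - 47.9                  # Obama's real margin: 2.6 points
forecasts = {"Silver": 50.8 - 48.3,   # predicted margin: 2.5 points
             "Jackman": 50.1 - 48.4}  # predicted margin: 1.7 points

for name, margin in forecasts.items():
    print(f"{name}: off by {abs(margin - actual):.1f} points")
```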

In the bigger picture, though, the lessons of the campaign are about more than FiveThirtyEight. First and foremost, reliable polling remains the bedrock of any serious electoral analysis. It isn’t easy, and it takes a lot of grunt work, but without it we would all be lost. In this respect, the 2012 election was a hopeful one. Solid, unbiased surveying, of the sort that organizations like Pew and the pollsters associated with the major newspapers and television networks engage in, was rewarded. Blatantly skewing the figures was punished. And there was also evidence that online polling, which facilitates the creation of very large samples at relatively low cost, can be informative and reliable.

Second, it’s time for election analysts (myself included) to take the mathematical forecasters in general more seriously, and to incorporate their findings into their analysis. “It’s not ‘a Nate thing,’” Jackman noted after the election. “It’s a data-meets-good-model, scientific-method thing.” Silver readily agreed with that sentiment. He pointed out that the model he uses is very similar to Jackman’s, saying, “the DNA is ninety-five per cent the same.” And he also reminded me that Linzer, of Votamatic, predicted as far back as June that Obama would win Florida and all the other swing states, except North Carolina. “It was a big year for data-driven analysis” in general, he said.

Nobody could argue with that. I still suspect that one of these years there’s going to be a “black swan” election that confounds the modellers. But looking ahead, the burden of proof is going to be on the skeptics. If the probability models say candidate X has an eighty-per-cent chance of winning, and you think X is going to lose, you will have to explain what it is the models are missing. That is the Nate legacy.

Stan Midler, a volunteer for the Sarasota Democratic Party, waited to approach voters with sample ballots as others lined up for the 7 A.M. start of voting outside the Municipal Auditorium in Sarasota, Florida, on Election Day, 2012. Photograph by Chip Litherland.
