In Obsessing Over Polls and Models, We All Lose

Nate Silver provides a powerful new resource for understanding the cascade of polls that have come to dominate discussion of elections, and it's hard to argue with his accuracy. Yet it's the news media's very obsession with polls and models that hinders our ability to talk about substantive issues from climate change to inequality.

November 06, 2012 | Matthew Nisbet


If you are a Democrat, you were likely feeling good on Election Day about President Obama’s chances. Many pollsters and forecasters predicted an Obama victory, with The New York Times’ Nate Silver pegging his chances at 90.9%. Not surprisingly, conservatives voiced skepticism about Silver’s prediction, making the science and art of modeling the subject of considerable attention on cable news and blogs.

Silver provides a useful and powerful new resource for understanding the cascade of polls that have come to dominate discussion of elections. Yet it’s the news media’s very obsession with polls and models that hinders our ability to talk about substantive issues. When advocates, for example, deservedly complained that we had gone months without talking about climate change only to be woken from our slumber by a tragic storm, we can thank in part the media’s infatuation with polls and modeling.

“The problem with the current fashion for polls and statistics is that it changes what it purports to study,” wrote Washington Post columnist Michael Gerson on election day. “Instead of making political analysis more ‘objective,’ it has driven the entire political class … toward an obsessive emphasis on data and technique.”

Our obsession with polling and modeling results in miniaturization, argued Gerson. “At the election’s close, we talk of Silver’s statistical model and the likely turnout … and relatively little about poverty, social mobility, or unsustainable debt.”

The Signal

Silver is the author of the Times’ FiveThirtyEight blog, named for the number of votes in the Electoral College. As an amateur statistician, Silver gained notice for his Moneyball-style analysis of professional baseball players. In the run-up to the 2008 presidential election, Silver launched his own blog, accurately predicting the electoral outcome in 49 of 50 states and in 35 Senate races.

His accuracy turned Silver into a cultural celebrity, the political equivalent of Warren Buffett, Bill Gates, or Steve Jobs, a self-made oracle who offered unique insight into a complex future. This celebrity – and Silver’s popularity online – led the Times in 2010 to add him as a blogger.

A few months ago, he published his first book, The Signal and the Noise: Why Most Predictions Fail – But Some Don't. In a review of the book at The Breakthrough, Roger Pielke Jr. wrote that on the subjects Silver knows well, namely sports, politics, and poker, he contributes extremely valuable chapters, while the chapters on subjects like the stock market and climate change have flaws but raise important questions.

In politics, though, you can’t argue with Silver’s accuracy. In yesterday’s election, he accurately predicted the presidential vote in all 50 states, an outcome that only added to his celebrity and oracle status. CNN reports that sales of his book shot up 800% on Amazon, pushing it to number two on the best-seller list. Silver is a vitally important resource that adds statistical context to the profusion of weekly and daily tracking polls. Absent his analysis – and similar efforts at modeling – the profusion of polls would be difficult to synthesize.

What Silver and others do is correct for the intrinsic randomness to any single national or state poll. Based on a sample of likely voters, each poll is just an estimate, an attempt to forecast at a single moment in time how the true population of likely voters might vote on Election Day.

Because each poll is an estimate, it carries a margin of error, usually in the range of plus or minus two to four points. At the standard 95% confidence level, this means that if the same firm repeated the poll 100 times at that moment, using the same question wording, about 95 of the resulting estimates – 19 out of 20 – would fall within the survey’s margin of error of the true value for the population of likely voters.
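The arithmetic behind that margin is the standard sampling-error formula for a simple random sample. The sketch below is purely illustrative – the function name and sample sizes are invented for the example, and real pollsters layer additional adjustments on top of this textbook calculation:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of n respondents, at the
    confidence level implied by z (z = 1.96 gives 95%)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A typical national poll of ~1,000 likely voters, split 50/50:
print(round(margin_of_error(1000), 1))   # about +/- 3.1 points
```

Note the square root: quadrupling the sample size only halves the margin of error, which is why most polls settle in the two-to-four-point range rather than paying for ever-larger samples.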

But other underreported sources of error also factor into a poll’s accuracy, including the greater reliance on cell phones, the difficulty of reaching Latinos and young people for interviews, and shifting voter enthusiasm, which shapes the estimate of who counts as a likely voter. Moreover, if a polling firm leans Republican or Democratic in its outlook, the decisions it makes in handling these “unknowns” can skew poll results in marginal ways. Rather than random error, you get small levels of systematic error.

Both random and systematic error are why you sometimes see outlier polls, results that might show Romney ahead or closing the gap in Ohio, for example, whereas six other polls might show Obama maintaining a lead.

By averaging poll results over time – and weighting some pollsters more heavily than others based on past performance – Silver and similar modelers start to correct for random and systematic error. But ultimately, as Silver freely admits, his modeling and predictions are only as good as the polls on which they are based. If, for some reason, pollsters as a group are wrong in their assumptions about estimating and reaching likely voters, then Silver’s forecast will also be off.
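The averaging-and-weighting idea can be sketched as a simple weighted mean. The weights and numbers below are invented for illustration – Silver’s actual model also adjusts for house effects, recency, and sample size – but the sketch shows how down-weighting a less reliable pollster tames an outlier:

```python
def weighted_poll_average(polls):
    """Weighted average of poll margins (candidate A minus candidate B,
    in percentage points). Each poll is a (margin, weight) pair, where
    a weight might reflect a pollster's past accuracy and the poll's
    recency."""
    total_weight = sum(w for _, w in polls)
    return sum(m * w for m, w in polls) / total_weight

# Six polls show Obama ahead in a state; one outlier shows Romney up.
# Down-weighting the outlier (say, a firm with a weaker track record)
# keeps the average close to the consensus.
polls = [(2.0, 1.0), (3.0, 1.0), (1.5, 1.0), (2.5, 1.0),
         (2.0, 1.0), (3.5, 1.0), (-1.0, 0.4)]
print(round(weighted_poll_average(polls), 1))   # about +2.2 points
```

With equal weights the outlier would drag the average down further; the weighting is what lets an aggregator absorb a contrarian result without overreacting to it.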

The Noise

The 2012 election was ultimately a triumph for the science and art of a new type of political prediction. But the profusion of polls and the rise of modelers like Silver have also amplified long-standing problems with election reporting and commentary more generally, trends that scholars have warned about for years.

According to the Pew Project for Excellence in Journalism, close to 40 percent of coverage over the last two months of the 2012 campaign focused primarily on the strategy and tactics of the campaign and the question of who was winning. As Harvard University’s Thomas Patterson explains, rather than foregrounding issues like climate change or income inequality, in this “horse race reporting,” journalists instead focus on who’s ahead and who’s behind in the polls, the ground game and social media strategies employed, and the monthly tally of fundraising by each campaign.

“Horse race” is an apt metaphor, as much of contemporary political reporting translates easily into the conventions of ESPN, with a focus on competing political gladiators who survive to ride another day, battling to cross the finish line first. Polling and modeling are a central feature of this political spectacle. In fact, they supply the “objective” data reporters and commentators use to define who is winning, while providing a news peg for attributing political success or failure.

This pattern in coverage has been fueled over the past three decades by industry trends and organizational imperatives, constituting a “quiet revolution” in political reporting, notes Patterson. In a hyper-competitive news environment with a 24-hour news cycle and tight budgets, reporting the complexity of elections in terms of the strategic game is simply easier, more efficient, and considered better business practice.

At the New York Times, Fox News, the Huffington Post and other media organizations, polls are a competitive advantage in the news marketplace; and a central part of branding and marketing, writes Pew’s Rosenstiel. As CNN notes, the day before the election, 20% of visitors to the New York Times site went to Silver's blog and "538" was the eighth most searched term driving traffic to nytimes.com.

Polls also help fill the demand for “anything new” while also fitting with trends towards “second hand” rather than primary reporting. The increased use of daily tracking polls and efforts at predictive modeling like Silver’s has only amplified the tendency towards horse race coverage.

The profusion of polls and predictive models also makes it easier for news organizations to adhere to the informal, but often skewed, rules of false objectivity, writes Harvard’s Patterson. Journalists will pay attention to scandals, gaffes, and perceived momentum, but they typically shy away from actively assessing whether one side has the better set of candidates, ideas, or proposed solutions. With a preference for partisan neutrality, it is much easier for journalists to simply default to obsessive discussion of polls.

As Rosenstiel adds, instead of accountability and explanatory journalism that carefully evaluates the positions and claims of candidates, polls and predictive models enable today’s era of “synthetic journalism.” In a hyper-competitive news cycle and online environment, there is increasing demand for journalists to try to synthesize into their own coverage what has already been reported by other news organizations.

Polls and predictive models provide the “objective” organizing device by which to comment and analyze news that is being reported by other outlets. For example, if a new survey or model result indicates that a candidate is slipping in public popularity, the reporting of the results provides the subsequent opening for journalists to then attribute the opinion shift to a recent negative ad, allegation, or political slip up.

Despite their accuracy and usefulness, as news organizations rely more and more on daily tracking polls and predictive models like Silver’s as branding devices, horse race coverage and synthetic journalism are only likely to be magnified. The debate over Silver’s prediction in the week leading up to the election is a leading example. Not only was Silver’s estimate used to argue why Romney might be losing and where he went wrong, but Silver’s model became a news event unto itself, with commentators like MSNBC’s Joe Scarborough arguing that election outcomes were too complex to boil down to a formula and that it was crazy to claim the race was anything but in doubt.

In this case, the important signal provided by Silver and other modelers was turned into distracting and spiraling levels of media noise. When election coverage becomes SportsCenter on steroids, we all lose. “The nearer this campaign has come to its end, the more devoid of substance it has become,” wrote Michael Gerson in his column at the Washington Post. “This is not the advance of scientific rigor. It is a sad and sterile emptiness at the heart of a noble enterprise.”

Further Reading

Patterson, T.E. (1993). Out of Order. New York: Knopf.

Patterson, T.E. (2005). Of Polls, Mountains: U.S. Journalists and Their Use of Election Surveys. Public Opinion Quarterly 69, 5, 716-724.

Rosenstiel, T. (2005). Political Polling and the New Media Culture: A Case of More Being Less. Public Opinion Quarterly 69, 698-715.

Photo credit: CNN

Comments

But my problem is that Nisbet/Patterson/Gerson act as though the poll discussion is itself the problem…

...the problem, rather, is that the MSM not only fixates on polls, it then (probably intentionally) misinterprets the meaning of the polls and lies to us, screaming that the race is “neck and neck” and we are an equally divided country.

Nate Silver and Sam Wang show us that the MSM is lying to us.

Nate and Sam are not pollsters. They have never commissioned a poll, or even suggested one… they simply show you how to properly evaluate the data.

And the polls Nate and Sam are evaluating simply ask which candidate you prefer. Presumably the ANSWERS of the polled population aggregate all of the issues that Patterson/Gerson lament are being lost.

How relevant will Nate be during the NEXT election? Will people still be willing to accept the BS that the MSM puts out as “punditry?”— in other words, will Silver and Wang actually force a more substantive discussion, having pulled back the curtain and revealed that The Great Oz is a phony?

To me, criticizing Nate is like criticizing Jon Stewart for not getting into the issues deeply enough. Stewart’s original role was to highlight how ludicrous the MSM is. But his star rose to the point where he became so relevant, he became part of the discussion: now a significant subset of the populace go to Jon Stewart for their news! Will Nate similarly change the discussion, rather than just commenting on it as he does now?

Hi Josh,
I guess the very relevant point you bring up is whether in the next election cycle we will trade discussing the latest tracking poll for the latest model calculation, since inevitably, just as there will be more polls the next time around, there are likely to be a lot more modelers in the Silver mold. Either way—if we are talking about polls or models—we are not talking about issues or contextualizing them for voters.
