Polls Networks Dare Not Name

Just possibly you’ve heard of a surge Rick Santorum is enjoying in Michigan. A surge that has now put him ahead of Mitt Romney in Romney’s childhood home state. And quite possibly you’ve seen the chart above from TPM, or one like it at HuffPost/Pollster or RealClearPolitics.

But if you tuned in to The Daily Rundown with Chuck Todd this morning (Feb 15), you didn’t see this chart. You instead heard Todd report on an OHIO poll from Quinnipiac showing Santorum leading Romney 36-29. And then you heard Todd say “Ohio, Michigan, there’s a lot of similarity there.” OK. But what of the data represented in the above chart about Michigan, the critical current topic? Todd continued, saying “We haven’t seen some great polling out of Michigan yet that we are willing to quote, that meets NBC News standards, but it’s clear Santorum is on the move. We are seeing it nationally. We are seeing it there.”

And so there you have it. NBC News standards force Todd to ignore the evidence of multiple polls from Michigan, and instead rely on one poll from a neighboring state. All to avoid saying the dread words: PPP, or ARG, or MRG or Rasmussen or Mitchell Research. Those polls all show Santorum leading Romney by 3 to 15 points in Michigan. (Median: 9 points.)
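The median is the natural summary here because it is robust to a single outlying poll. The post reports only the range (3 to 15 points) and the median (9), not each pollster's individual margin, so the values below are hypothetical placeholders consistent with those summary figures:

```python
# Santorum's lead over Romney across five Michigan polls (PPP, ARG, MRG,
# Rasmussen, Mitchell Research). Individual margins are illustrative
# placeholders; the post states only a 3-15 point range and a median of 9.
from statistics import median

margins = [3, 6, 9, 11, 15]  # hypothetical, within the stated range
print(median(margins))  # -> 9
```

Unlike the mean, the median would be unchanged even if the 15-point outlier were 25, which is why it is the better single-number summary of a small set of polls.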

And yet NBC News standards won’t allow these polls and this critically important result to be reported on the air. Why? Three are IVR (“robo-polls”), one isn’t entirely clear about how interviews were conducted, and one has been criticized for substantial “house effects” in the 2008 primaries. At least, that’s my guess as to why these are not seen as meeting NBC standards; Todd gave no explicit reason.

This is an example of a painful divide between the way many polls are now conducted and the standards networks and newspapers continue to use in evaluating whether a poll is “reportable.” The explosive growth of IVR polling means a tremendous number of state polls are not reportable because they lack live interviewers and randomized in-household selection, and because they omit cell phones (though the latter alone does not seem to be disqualifying). NBC is not alone in this. ABC and the Washington Post have similar standards, and I’m sure CBS and the New York Times do as well.

Standards are not bad things. No reputable news outlet should be reporting results from volunteer polls of the kind papers and local stations often have on their websites (“for entertainment”). And skepticism about the quality of the science behind many public polls is certainly warranted. Charges of outright fraud have been lodged in the last few years as well. So skepticism and vetting are entirely justified, and indeed the vetting might be more vigorously pursued.

But when a pollster such as PPP has an established track record over many elections at the state, national and indeed sub-state level, it seems extremely odd to conclude that their work is not “reportable” simply because it is based on IVR methods. We have a lot of information about how PPP performs in practice, and it is a good record, in most ways an enviable record. The other Michigan pollsters also have more than long enough track records to support an informed judgment of their validity.

This is also a question of when failing to report the elephant in the room is itself a failure of standards. Todd seemed a bit awkward this morning, maneuvering to imply what was obviously happening in the Michigan polling while not being allowed to say it on the air. Any political junkie like me (and you!) who watches Daily Rundown every day certainly knows what the polls are showing in Michigan. So for Todd to be unable to report on them and honestly discuss what the data show is fundamentally misleading, especially for the segment of the audience that doesn’t check TPM Polltracker, RCP and HuffPost/Pollster every morning.

We had a preview of this situation in 2009, when the Virginia governor’s race was dominated by IVR polls that drove all the conversation about the race, and yet the Washington Post couldn’t report on those polls for a race in its own backyard. How do you cover a race, how does a reporter write a story, while censoring all mention of virtually every poll done in the state?

New polling methodologies are a fact of life. IVR has exploded because it is cheap and some pollsters have shown they can do it well. (Not all, but some.) Internet-based polls haven’t yet established their general validity, but some are doing very serious statistical work on developing an intellectually rigorous approach to these non-random selection methods. And some pollsters using conventional live-interviewer methods have had questionable track records. This year the American Association for Public Opinion Research (AAPOR) has made evaluating the new methodologies in public opinion research the theme of its annual meeting. Many in AAPOR make their living by doing high-quality, expensive, live-interview polling with cell phones included. The new methods are a challenge to that quality and that business model. But the methods must be evaluated, and AAPOR should be commended for making this the theme of the conference.

When Mark Blumenthal and I started Pollster.com we debated for quite a while whether to exclude some types of pollsters from our trend estimates. And partisans frequently condemned our inclusion of their least favorite pollster. But I felt then and now that the only way to know if a pollster is a consistent outlier is to show the data against all other pollsters. The bad ones tend to stand out and a bit of analysis can highlight that. But you can’t see that if you simply exclude a pollster a priori.
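That kind of analysis can be quite simple in spirit: a pollster’s “house effect” is just its systematic deviation from what everyone else is finding on the same question. Here is a minimal sketch of the idea, using entirely synthetic numbers (no real pollsters or polls are represented):

```python
# Minimal sketch of spotting a pollster "house effect": compare each
# pollster's average reading to the cross-pollster average. All numbers
# below are synthetic illustrations, not real poll results.
from statistics import mean

# candidate support (%) reported by each pollster across several surveys
polls = {
    "Pollster A": [47, 48, 46, 47],
    "Pollster B": [46, 47, 47, 48],
    "Pollster C": [52, 53, 51, 52],  # a consistent outlier
}

overall = mean(x for readings in polls.values() for x in readings)
for name, readings in polls.items():
    house_effect = mean(readings) - overall
    print(f"{name}: {house_effect:+.1f} points vs. the field")
```

A real analysis would control for timing and question wording (e.g., deviations from a common trend line rather than a raw average), but the point stands: the consistent outlier only becomes visible when its results are plotted against everyone else’s, which is impossible if it is excluded a priori.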

So I think it is time for a serious discussion by the keepers of standards in all media: evaluating contemporary polling requires recognizing that new methods are unavoidable and that a blanket exclusion of IVR or internet methods is no longer tenable. This may make it harder to be a responsible gatekeeper over what is reportable. I don’t necessarily care to argue that networks or newspapers must include everyone. I can include everyone here at PollsAndVotes because I’m NOT the New York Times or NBC, and if I’m wrong to include someone it affects at most a handful of sophisticated readers who are able to make their own judgments. More to the point, by being transparent about who is included and providing analysis of pollster house effects, I’m essentially in the business of illustrating how pollsters perform, among other things.

Finally, this post should not be read as an attack on Todd or NBC’s standards or those of anyone else. I have a lot of respect for the people at all the networks and major papers with in-house polling experts. But the old model that decided what is reportable based on a single view of “good” methodology is in need of reconsideration. When the elephant is in the room, dominating perceptions of a race, not mentioning the pachyderm serves no one well. Brand the polls as IVR. Caution readers or viewers. And evaluate pollsters’ proven past performance, and say so.

But let’s not pretend we don’t have evidence that Santorum is leading Romney in Michigan.

Addendum: @postpolls tweets that they have updated their IVR policy:

@pollsandvotes we updated approach to IVRs in late 2011, explained here: wapo.st/ww8Fha // also w/ 4 VA polls in ’09 no coverage issue

The link to Post Polling Director Jon Cohen’s piece is well worth reading for a thoughtful approach to IVR polls and their use in the paper. We could debate how far to go or not go in using IVR but it is great that the Post is grappling with the issue and writing about it openly.