Demystifying the Science and Art of Political Polling - By Mark Blumenthal

January 05, 2005

The "Smoking Gun" Part II

Today's must read for polling junkies is undoubtedly the exchange between Warren Mitofsky and Mickey Kaus (on the "smoking gun" item I blogged on yesterday). I cannot do it justice with a quick excerpt -- you should definitely read it all -- but here is the text of Mitofsky's email reply to Kaus:

The so-called smoking gun you wrote about was in the hands of every subscriber to our national election poll throughout election day. What took you and the others two months to locate it? At least a dozen news organizations have had this smoking gun since 11/2.

Second, the complex displays you ridicule, which were the source used by the leakers for the numbers that got posted by bloggers on election day, are not the tables you and others discovered. I stand by my original statement. Had you asked me I would have told you as much.

Third, if you doubt that we warned the NEP members on election day why don't you ask one of them? Or is ridicule with your eyes closed your preferred method of sounding smart?

And lastly, if my clients were as misinformed as you seem to think how come none of them announced an incorrect winner from the 120 races we covered that day? It seems that the only ones confused were the leakers and the bloggers. I guess I should include you in that list, but I'll bet you don't make mistakes. We have never claimed that all the exit polls were accurate.

Then again, neither is your reporting.

warren mitofsky

Ouch. I definitely have some thoughts on this one, but unfortunately, my day job prevented me from writing more today. I'll try to update this post later tonight. For now, read the Kaus item in full (and if any of this exchange needs additional "demystifying," please post a comment and I'll try to clarify).

---------

UPDATE: First, because at least one highly valued reader heard it differently than I intended, let me clarify what I meant above by "ouch." It was that pained feeling I had watching someone I admire - and I'm talking about Mitofsky here - do something so obviously inappropriate. It was the way I would imagine Ohio State fans must have felt 26 years ago watching their legendary coach Woody Hayes punch that Clemson defensive player.

I'll come back to my reaction to Mitofsky's email, but the heart of this exchange is a question I considered a few weeks ago, "were the exit polls really wrong?" Looking back, I realize that I would have been well served by a good editor on that post, because while I asked a provocative question, I never made it clear where I stood. Moreover, by putting quotation marks around the word wrong (and then using the same phrase as the title of my exit poll FAQ), one could conclude I saw nothing "wrong" at all.

The point I wanted to make then was that the exit polls were obviously wrong in some ways, not so wrong in others. Everyone, even Mitofsky, concedes that the just-before-poll-closing exit polls had an average "error" (or, to some, a "discrepancy") of roughly 2% in Kerry's favor compared to the actual count.

Where the exit polls were right - or at least, not quite "wrong": The errors were too small to achieve statistical significance in all but a handful of statewide polls. They were not large enough to give Kerry a lead beyond sampling error in any states that he ultimately lost, and not large enough to result in any wrong calls on election night.

[In an update, Kaus suggests a failing I overlooked: Projections in states like South Carolina might have been called earlier on actual vote returns but for exit poll errors in Kerry's favor that implied those races would be close. As he writes: "The purpose of exit polls is obviously not simply to prevent the announcement of an 'incorrect winner.' It's also to allow the earlier announcement of the correct winner"].

Where the exit polls were obviously wrong: As the Washington Post's Richard Morin put it in November, the errors were "just enough to create an entirely wrong impression about the direction of the race in a number of key states and nationally." And Kaus is right -- it wasn't just bloggers, but sophisticated journalists and political insiders who reached the wrong conclusion looking at those numbers on Election Night.

Of course, supporting the official network projections is only one mission, and arguably the least important. The exit poll subscribers also pay to get (a) some early indication of the outcome on Election Day so they can plan their coverage and (b) data to support analytical stories written on Election Night that explain the outcome and characterize the race among demographic subgroups. Here the exit polls obviously failed. News organizations planned coverage on the assumption that Kerry would win. Some stories based on the early evening cross-tabs apparently had to be rewritten. As John Ellis -- a former analyst for both NBC and Fox News -- wrote on his blog shortly after the election:

The lost productivity at places like The New York Times and The Washington Post, where literally hundreds of reporters and editors spent the equivalent of an 8-hour work day writing and preparing fiction...all of the consumers of this content have to be asking themselves: "why in the world do we pay for this?"

So who is to blame for that wrong impression? That is the central argument between Mitofsky and Kaus and others. Was it Edison/Mitofsky for how it managed and disseminated the results? Should the networks have spent more to assure better interviewing and coverage? Were they both wrong to resist disclosure of basic methodological details that might have helped reporters and editors and even bloggers better understand the limits of exit polls? Should those editors, reporters, and bloggers have known better? I tend to exempt the consumers, but otherwise, I find it difficult to place all the blame in one place, especially given how little we really know about what went wrong and why.

Having said all this, I will admit that I may have a bit of a blind spot with respect to Warren Mitofsky. He is, deservedly, a living legend in the field of survey research. In the 1960s and 1970s, working with a small group of colleagues at CBS News, he helped invent not only the exit poll, but also the CBS likely voter model and a practical methodology for random digit dial (RDD) telephone surveys that remains in use to this day. He also spearheaded the creation of the disclosure standards that explain the ubiquity of the "margin of error" in news stories about polling.

Of course, he also has a notoriously thin skin about criticism. In this regard, unfortunately, the email to Kaus speaks for itself.

I am willing to cut Mitofsky some slack -- at least until I know more -- about the nuts and bolts of why the exit polls were off. However, I tend to agree with his critics in one respect: The lack of transparency about basic methodology, the instinct to deny obvious problems and then blame the bloggers, and the habit of lashing out in anger at criticism are all at odds with Mitofsky's well-deserved reputation and stature.

Comments

- Kaus is a jerk. (Given the amount of traffic he sends towards this site, I don't expect this sentiment to be replicated outside comments...)

- Mitofsky is also a jerk.

- Mitofsky's exit polls are weak.

The first and third conclusions are not anything new.

I'd expect that the weaknesses of Mitofsky's results are directly tied to his budget. If his clients want better accuracy, I'd expect they'll have to pony up more dollars.

But low budget or not, given that Mitofsky has had a decades-long monopoly on exit polls, and given that his results are inevitably weak, if I were the client, I'd think it was time to give someone new a shot.

Mitofsky is "someone new", even though he goes back a long way in the EP biz. Mitofsky was uninvolved in the VNS era and the 2000 meltdown. (I don't recall if he was implicated in the abortive 2002 effort to build "Son of VNS".)

Posted by: RonK, Seattle | Jan 5, 2005 9:14:40 PM

"Mitofsky is "someone new", even though he goes back a long way in the EP biz. Mitofsky was uninvolved in the VNS era and the 2000 meltdown."

And perhaps I'm an uninformed jerk as well...

I was under the impression that Mitofsky has been the guy in charge of ALL American national exit polling since he invented the genre over 20 years ago.

Or the exit polls were RIGHT, Kerry won, but Bush stole another election through voter intimidation and fraud.

Posted by: aaa | Jan 6, 2005 8:00:03 AM

Petey, RonK:

You're both right (though I'll stay out of the "jerk" exchange): Mitofsky was not formally part of VNS in the last few election cycles. He did help set up the first "network pool" exit poll, "Voter Research and Surveys" which later evolved into VNS. And in 2000 (perhaps earlier, I'm not sure) he and his current partner Joe Lenski served as exit poll/Election Night consultants for CBS and CNN. The leadership at VNS consisted of his principal deputies. See the bio at the Edison/Mitofsky site:
http://www.exit-poll.net/election-night/aboutmitofsky.html

I think there are a couple of real problems with exit polls as they are done in this country, and they are related. One is that the pollsters don't make a real attempt to sample each state in such a way as to accurately measure the demographics. The second is that they go a long way to hide as many details as possible, and to make the process as opaque as they can imagine. I suppose this is intended to forestall criticism. Good criticism would either improve the polling methodology, or it would make the customers realize what a pathetic piece of crap they are really purchasing.

All this aside, it is my understanding that they use the "Zogby method" of adjusting the demographics of their unweighted sample to either a prior demographic model, or they use the final results from the election to adjust the demographics. (Hence, if you look at the final exit polling numbers, magically they are equal to the final vote percentages.)
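The adjustment described above can be sketched in a few lines. This is a hypothetical illustration of the idea only, not NEP's actual procedure: it simply rescales each respondent's weight so the weighted candidate shares match the official tally. The `reweight_to_tally` helper and the 52/48 numbers are made up for the example.

```python
# Hypothetical sketch of reweighting exit-poll responses so the weighted
# candidate shares match the official vote tally. NOT NEP's real algorithm;
# just the simplest version of the idea.

def reweight_to_tally(responses, tally):
    """responses: list of (candidate, weight); tally: {candidate: official share}."""
    raw = {}
    for cand, w in responses:
        raw[cand] = raw.get(cand, 0.0) + w
    total = sum(raw.values())
    # Scale each respondent's weight by (official share / raw poll share).
    factors = {c: tally[c] / (raw[c] / total) for c in raw}
    return [(c, w * factors[c]) for c, w in responses]

# Toy example: raw poll shows 52/48, official count was 48.5/51.5.
poll = [("Kerry", 1.0)] * 52 + [("Bush", 1.0)] * 48
adjusted = reweight_to_tally(poll, {"Kerry": 0.485, "Bush": 0.515})
kerry_share = sum(w for c, w in adjusted if c == "Kerry") / sum(w for _, w in adjusted)
print(round(kerry_share, 3))  # 0.485 -- the weighted share now "magically" matches the tally
```

This is why, as the comment notes, the final published exit-poll numbers equal the final vote percentages: the match is imposed by the weighting, not discovered by the poll.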

I've heard Andrew Kohut of the Pew Research Center discuss the exit poll methodology. It seemed to me in listening to him that he could barely keep from holding his nose while talking about the methodology being used (though he admitted that he didn't know all of the details due to the lack of transparency).

Posted by: Krusty Krab | Jan 7, 2005 2:04:46 AM

"... or they use the final results from the election to adjust the demographics"

That would be just ... wrong! I don't see how that could possibly be justified.

Anybody?

Posted by: Thor's Hammer | Jan 7, 2005 8:25:24 AM

Could somebody please explain to me why we need exit polls at all? To me, their results are too shaky and questionable to be worth the huge problems they have caused in the recent elections. When wrong results get out, as just happened, they shake people's faith in the election results--a far worse consequence, if you ask me, than the wasted time of a few journalists who wrote the wrong stories on Election Night. When election results are in question, network commentators pretend they can't tell us what the exit polls show, while winking and mugging through the cameras to make sure we understand the real message. And when polls are correct, as far as they go, but the results get out too early, as happened a couple of election cycles ago, the results of the election itself are compromised when those in Western time zones are discouraged from going to the polls at all. Why do we need this confusing hassle? Why can't the media just wait for the actual election results like all the rest of us, and write their stories after the polls close? Seems to me this is what they ought to be doing anyway, since the early results, even when weighted, are so patently unreliable that anybody who leaks them or relies on them is derided as foolish. Why is this waste of time and money considered such a good idea?

Posted by: beatrix | Jan 7, 2005 8:49:08 AM

Thor's Hammer writes: '... or they use the final results from the election to adjust the demographics"

That would be just ... wrong! I don't see how that could possibly be justified.'

I've made the same point, and asked the same question. The only answer I've got back is the rather Stalinist one that the tallies are, ipso facto, correct. Not being a Stalinist, it's hard for me to swallow that, but evidently other people aren't similarly burdened.

Posted by: Mairead | Jan 7, 2005 9:07:15 AM

Beatrix writes: "When wrong results get out, as just happened, they shake people's faith in the election results"

Personally, I'd rather not rely on 'faith' when it comes to elections. I'd rather rely on something more substantial, such as an independent verification that the results are what officials claim they are.
Exit polls are one way to get that verification.

Posted by: Mairead | Jan 7, 2005 9:14:11 AM

Not when the exit polls are so obviously wrong that they have to be "adjusted" (read "cooked") to match the demographics of the actual results. I'd agree with you, Mairead, if there were any actual reason to believe that exit polls accurately reflect the actions of voters. But there's no reason to think they do. The pollsters themselves concede that the polls do not do this when they use this last-minute "adjustment" to repair their results.

Posted by: beatrix | Jan 7, 2005 9:19:55 AM

Exit polls are one way to get that verification.

One thing to remember is that exit polls are just that, polls. They have a margin of error associated with them. While you can use them as rough guides, you have to remember that they can be skewed one way or another due to random chance.

With that in mind, and for the record on how all of the exit polls have leaned in past elections, please read a previous post at
http://www.mysterypollster.com/main/2004/12/have_the_exit_p.html

With the money quote:
"In short, Mitofsky and Lenski have reported Democratic overstatements to some degree in every election since 1990. Moreover, all of Lenski and Mitofsky's statements were on the record long before Election Day 2004."

Posted by: Mark S. | Jan 7, 2005 10:29:31 AM

Mitofsky and Edelman's review of the 1992 Exit polls also said that there was Democratic bias in 1984 and 1988 as well...

"The difference between that final margin and the VRS estimates (in 1992) was 1.6 percentage points. VRS consistently overstated Clinton’s lead all evening…Overstating the Democratic candidate was a problem that existed in the last two presidential elections" (Mitofsky and Edelman, 1995, pp91-92).

So... the "problem" has been in every election since 1984.

Were they off before 1984? Perhaps, but I'd have to go re-read the texts a bit closer.

So... the polls have been overstating the Democratic candidate's proportion at least since 1984. Does this mean there is fraud in the election tally in every election since 1984? What else could explain such systematic bias?

The issue here is that the average of the states or the national exit poll has NEVER been this far off. A casual look at the Freeman data tells you that, while ~40 of the 50 states were within the margin of error, the bias of the polls was pretty solidly toward Kerry. And the national popular vote poll was WAY WAY off.

What can explain this? It could be either: 1) sampling error; 2) non-sampling error; or 3) inaccurate tally.

Given that it is highly improbable (don't believe the probability calcs you've heard from Simon/Baiman and Freeman, but it is still highly improbable) that the discrepancies could have occurred by chance alone, I suspect it's a combination of #1 and #2. The tally may be inaccurate as well, but how can we tell this from the exit poll? I don't think you can.

beatrix,
You are mistaken: the exit polls were not skewed due to random chance, they were skewed due to bad methodology. Exit polls can in fact be done extremely accurately... in fact, BYU did an exit poll in Utah that was extremely accurate.

Posted by: Brian Dudley | Jan 7, 2005 10:45:13 AM

Hey Mark,
Do you know what plans, if any, there are to improve the exit poll methodology for future elections? I have personally been concerned that I haven't heard any suggestions from pollsters on improving the exit polls.

Posted by: Brian Dudley | Jan 7, 2005 10:50:30 AM

I don't think I said random chance caused the bad results--instead, I questioned the methodology that has the pollsters tinkering with their results to match the elections. Frankly, I have no idea what causes bad results in exit polls; I am no statistician, just a voter who's watched the messes caused by exit polls in the last three presidential elections in a row. No matter what causes the bad results, the polls seem to be generating them. My actual question was why the polls are accepted as a necessity in the first place. And I'm still wondering.

Posted by: beatrix | Jan 7, 2005 10:56:03 AM

Beatrix:

The exit polls are the best way to get a sense of the voter profile. Read this blog's Exit Poll FAQ for the type of data unique to exit polls.

The problem is that while valuable for voter profile data, it is true that exit poll vote counts probably should not be trusted with current methodologies. Of course, we should ask how accurate voter profiles themselves are in cases of such large discrepancies. This doesn't mean the vote tallies are super accurate or worthy of your faith, IMO. It is not either/or, both can be wrong, and believing vote tallies in close elections without independent verification is not my cup of tea.

BUT don't forget that the exit polls CAN be done accurately, as Mairead pointed out. And there may be other ways to audit or otherwise independently verify close election results. Sadly, I think the motivation to do so on a national scale, or even to duplicate BYU's exit poll methodology nationally, does not yet exist.

Posted by: Alex in Los Angeles | Jan 7, 2005 1:01:50 PM

"You are mistaken, the exit polls were not skewed due to random chance, they were skewed due to bad methodology. Exit Polls can in fact be done extremely accutately..in fact BYU did an exit poll in utah that was extremely accurate."

Actually, they could be skewed due to random chance AND bad methods (AND vote fraud). Likewise the BYU poll could have "nailed" the election result based on skewed random chance AND bad methods.

The BYU exit poll had some margin of error. I imagine it couldn't have been better than +/-2%.

That means that 95 times out of 100, the poll result will fall within this range - due to random chance alone. Therefore, assuming a PERFECT poll methodology (and a perfect vote count), the poll could still be +/-2% from the election tally.

Suppose the poll underestimated Bush's percent by 2% - is this "accurate"? Statistically, yes. According to the methods, it's the best that could be done.

A little tougher to measure is the effect of the methods. E.g., what if sampling error alone yielded a 2% skew toward Bush, but then there was something wrong with the methods (either coverage error, differential non-response, poor training of pollsters, coding error, etc.) that skewed the result BACK 2% to the center.

The result would be an exit poll that "nailed" the election tally, but can we really say that the exit poll was "extremely accurate"?

The problem is, statistically, we don't know if the BYU exit poll was "extremely accurate" or if some combination of random sampling error and non-sampling error can explain the "extremely accurate" outcome.

But consider that the exit poll for every presidential election since 1984 (that we know of) has had Democratic bias. That is like saying you flip a fair coin 6 times and get all 6 tails (not considering the significance of the discrepancy here - only the odds that it falls on one side of the distribution or another). Can random chance alone explain this? Sure.

But this is likely a simplistic way to look at this question. Meaning, analysis of the state-by-state or precinct-by-precinct variance has probably led Mitofsky and Edelman to conclude that 100% of the 1992 discrepancy could not be explained by sampling error alone.
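The coin-flip analogy above can be checked directly. Assuming, for the sake of argument, that the direction of error in each of the six presidential-year exit polls since 1984 were determined by a fair coin flip, the chance of every one landing on the Democratic side is easy to compute:

```python
# Sign-test intuition: if the direction of exit-poll error were a fair
# coin flip each cycle, how often would six elections in a row all skew
# the same (Democratic) direction?
p_all_dem = 0.5 ** 6                 # all six on one pre-specified side
p_one_sided_either = 2 * p_all_dem   # ...or all six on either one side

print(p_all_dem)           # 0.015625 (about 1 in 64)
print(p_one_sided_either)  # 0.03125  (about 1 in 32)
```

So yes, random chance alone *can* explain a six-for-six run, but only about 1.6% of the time for the Democratic side specifically; this ignores the size of each discrepancy, which is why the precinct-level variance analyses mentioned above matter more.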

Rick writes: "But consider that the exit poll for every presidential election since 1984 (that we know of) has had Democratic bias."

How do we know that? Because it differs to the tallies. How do we know the tallies are correct? We don't!

Therefore, we also don't know that there was any 'Democratic bias'.

All we *know* is that something in the system is broken somewhere. Everything else is either a pious hope or a tinfoil-hat theory, depending on the state of one's liver.

To have that be the limit of our knowledge is *not* good enough in a nation that purports to be a democracy. This should be the political equivalent of a ten-alarm fire. That it's not is a betrayal of democracy on many levels, beginning with those who spin the situation with bland, reassuring words that aren't supported by the evidence.

Posted by: Mairead | Jan 7, 2005 2:52:09 PM

Beatrix writes: 'Not when the exit polls are so obviously wrong that they have to be "adjusted" (read "cooked") to match the demographics of the actual results. I'd agree with you, Mairead, if there were any actual reason to believe that exit polls accurately reflect the actions of voters. But there's no reason to think they do. The pollsters themselves concede that the polls do not do this when they use this last-minute "adjustment" to repair their results.'

It's worse than that, tho, Beatrix--they're 'repairing' their numbers using tally numbers they have no good reason to believe are correct. The exit polls might be much closer to the truth than the tallies--they might well be *reducing* the accuracy when they 'repair' the exit numbers. And other evidence suggests that that's exactly what's been happening.

Our problem is that assumptions are being pushed that are very serviceable to the few in power, but harmful to us--the majority of people whose nation it is. And there are a lot of professionals who, whether innocently or maliciously, are colluding in that and betraying the scientific principles they have an ethical obligation to uphold.

Posted by: Mairead | Jan 7, 2005 3:11:27 PM

Rick,
First off, the margin of error for the Utah poll was pretty small - .2% or something like that, if I recall - because they used HUGE samples, and they got a result pretty close to the money.

Second, the national exit poll also has some small margin of error, like .2%, because it involves a huge sample... but it differed by much more than that... there is no doubt that skew in the exit poll due to random chance was a small and insignificant percentage of the skew.

I'm not saying they need to do a Utah-size sample exit poll in every state... that would be hugely expensive... I'm saying they need to use methods which remove all the systemic error... so that when they look at the national exit poll, it will be right on the money.

They may be repairing their numbers, but not to trick you. The "repairing" is not nefarious and hidden, it is a standard procedure, we all know about it, and Mark discusses it here:
http://www.mysterypollster.com/main/2004/11/the_difference_.html

Also, Mark is not claiming that reweighting exit poll data to match the vote tally in any way proves the vote tally. The claim is that reweighting serves other purposes.

Unfortunately, the NEP does not release the unweighted data, so that confuses many people and makes things seem more sinister than they are.

I point this out to you, because I otherwise agree with the principles in your posts. Modern, national election standards are sorely needed in this country, I agree.

Posted by: Alex in Los Angeles | Jan 7, 2005 4:14:57 PM

Brian, you wrote:

"First off, the margin of error for the Utah Poll was pretty small .2% or something like that if I recall because they used HUGE samples. and they got a result pretty close to the money."

I was just guessing about the CI. BTW - to get +/-.2% at a 95% confidence level, they would have had to sample roughly 240,000 voters assuming SRS (at n this high, the clustering effect is virtually non-existent). Did they? [s.e. = MoE/z = .002/1.96 = .00102; n = .25/(s.e.^2) = .25/.00000104 ≈ 240,100.] Unless I did the math too quickly, I get about 240,000 for +/-.2% at a 95% CL. The same formula puts +/-1% at a 95% CL around 9,600.

So, if you know the sample size, you can calculate the margin of error using the above formulas (again, assuming SRS; if you know the reported CI, you can calculate an estimate of the design effect!).
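The back-of-the-envelope sample-size formula above can be wrapped in a small function. This is a sketch assuming simple random sampling, p = 0.5 (the worst case), and a 95% confidence level; `srs_sample_size` is just an illustrative name:

```python
import math

# Required SRS sample size for a given margin of error:
#   n = (z / moe)^2 * p * (1 - p)
# where z = 1.96 for a 95% confidence level and p = 0.5 is the
# conservative (maximum-variance) assumption for a proportion.
def srs_sample_size(moe, z=1.96, p=0.5):
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(srs_sample_size(0.002))  # +/-0.2% -> about 240,100 voters
print(srs_sample_size(0.01))   # +/-1%   -> about 9,604 voters
```

Note that halving the margin of error quadruples the required sample, which is why sub-half-percent margins are so expensive to buy.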

"Second the national Exit poll also has some small margin of error like .2% because it involves a hugh sample...but differed by much more than that...there is no doubt that skew in the exit poll due to random chance was a small and insignificant percentage of the skew."

Go to the NEP web-site. The 2004 national exit had a CI of +/-1%. If you assume an SRS, it would have been +/-.8%. This shows that at around 13,000 interviews, the design effect of the 2004 US exit polls virtually evaporated. But for the Ukrainian exits of similar sample size, the design effect was MUCH larger (I've done some analysis of this if you care to read it - it was linked to by The Daou Report).

"Professional pollster Mark Blumenthal started Mystery Pollster to provide better interpretation of polling results and methodology... offers much needed help to Political Wire readers" - Political Wire