Demystifying the Science and Art of Political Polling - By Mark Blumenthal

November 17, 2004

The Freeman Paper

And speaking of MIT-educated PhDs...

The latest "must read" among those who want to pursue theories that the vote count was wrong and the exit polls were right (or who want to debunk them) is a paper released by an MIT PhD named Stephen F. Freeman, now a Visiting Scholar in Organizational Dynamics at the University of Pennsylvania. His report, entitled "The Unexplained Exit Poll Discrepancy," is available for download here and here.

Freeman's paper makes one very helpful contribution to this debate. He reports exit poll results captured by CNN just after midnight on Election Night. He extrapolates from vote-by-gender tabulations posted for 11 battleground states that appear to be the last available before the complete samples were weighted to conform to the reported results (although the sample sizes are slightly lower than those now posted online). Given how late they appeared on the CNN website, they are presumably weighted by actual turnout, although absent confirmation from the National Election Pool (NEP) we will never know for certain.

Click on table for full size version

Freeman's data confirms the consistent skew to Kerry evident in leaked exit poll numbers posted on blogs earlier in the day (see my earlier post on this topic). "In ten of eleven consensus battleground states," Freeman writes, "the tallied margin differs from the predicted margin, and in every one, the shift favors Bush."

[An aside: Freeman justified his list of battleground states with a footnote: "These eleven states are classified as battleground states based on being on at least 2 of 3 prominent lists, Zogby's, MSNBC, and the Washington Post." Okay, fair enough, but if Freeman has data for other states, why not release it all? Or would that make the pattern less consistent?]

But Freeman is not content to confirm the small but consistent skew to Kerry in the exit polls. His paper makes three arguments: (1) Exit polls can "predict overall results" in elections "with very high degrees of certainty," (2) the odds against unusual "anomalies" in just three states -- Florida, Pennsylvania and Ohio -- "are 250 million to one" and (3) none of the official "explanations" (his quotations, not mine) for the discrepancies are persuasive. So while he cautions against "premature" conclusions of "systematic fraud or mistabulation," he nonetheless sees vote fraud as "an unavoidable hypothesis."

I have problems with all three arguments. Let me take them one at a time.

1) It's easy to get a statistically valid sample; and there is not a problem with figuring out who is going to vote - or how they will vote.

Freeman quotes two "experts": Dick Morris, who says "exit polls are almost never wrong," and Thom Hartman, who says German exit polls "have never been more than a tenth of a percent off." He then cites an exit poll conducted by students at BYU that was off by only two tenths of a percent this year.

Whoa, whoa, whoa.

I can set aside, for a moment, my qualms about Dick Morris as an expert on exit poll methodology, and I will suspend disbelief about Hartman's claims about the German exit polls until I learn more. However, Freeman's assertion that it is "easy" for an exit poll to get a statistically valid sample is unconvincing.

It is true that exit polls have no problem identifying "likely voters," but they trade that problem for a huge set of logistical challenges. The national exit polls hire 1500 interviewers for just one day of work every two years and deploy them to randomly chosen precincts nationwide. Telephone surveys can train and supervise interviewers in a central facility. No such luck for exit polls. They depend on interviewers with relatively little prior experience or training. This year, in fact, NEP conducted most of its interviewer training by telephone. Yes, exit pollsters can easily draw a statistically valid sample of precincts, but some interviewers will inevitably fail to show up for work on Election Day. NEP tries to deploy substitutes to fill the gaps, but some precincts inevitably go uncovered. In 2000, 16 percent of sampled precincts were uncovered (Konner, 2004; although this statistic may have applied to those covering both the exit poll and sampled "key precincts").

Next, consider the challenges facing each interviewer as they attempt to randomly select voters emerging from the polling place (some of which I learned about in recent emails from NEP interviewers): Interviewers typically work each precinct alone, soliciting participation from every "nth" voter to exit the polling place (the "n" interval is typically between 3 and 5). But these interviewers must also break away to tabulate responses and call in results three separate times during the day. They make their last call about an hour before the polls close and then stop interviewing altogether. If too many voters emerge from the polling place at once, they will miss some potential respondents. If polling place officials are not cooperative, the interviewer may have to stand so far from the polling place that they cannot intercept voters or are lost in the inevitable gaggle of electioneering partisans. And if several precincts vote at a single polling place, the interviewer has no way to identify voters from the specifically selected precinct and must sample from all of those who vote at that polling place.

All of these real world factors make it hard, not easy, for an exit poll to get a "statistically valid sample." That's why Warren Mitofsky, the NEP official who helped invent the exit poll, describes them as "blunt instruments," and why Joan Konner, former dean of the Columbia School of Journalism, concluded in a review last year for Public Opinion Quarterly that "exit polls do not always reflect the final margin" (Konner, 2003, p. 10).

Remember, the networks use exit polls to project the outcome only in states where a candidate leads by a margin far in excess of mere sampling error, states like New York or Utah. They did not depend on exit polls alone to call any of the 11 battleground states in Freeman's table because they know that exit polls lack the laser precision that Freeman implies. And discrepancy or not, they called every state right.

2) The odds against the unusual "anomalies" in just three states -- Florida, Pennsylvania and Ohio -- "are 250 million to one."

The important point here is that everyone, even the officials from NEP, now concedes that the exit polls showed a small but statistically significant bias in Kerry's direction across most states in 2004 before they were weighted to match the actual results. Freeman's data show Kerry doing an average of 1.9 percentage points better than the actual count in the 11 states for which he has data. In a public appearance last week, Joe Lenski of the NEP reported that the exit polls had "an average deviation to Kerry" of 1.9 percentage points - exactly the same number. Warren Mitofsky confirmed Lenski's comments in an email to me over the weekend.

Also, as I noted here on November 4, Kerry's standing in exit polls exceeded the actual result in 15 of the 16 states for which Slate's Jack Shafer posted results at 7:38 EST on Election Night. Freeman's data show the same pattern in 10 of 11 states. This is akin to flipping a coin and having it come up heads 10 of 11 times, an outcome with a probability of 0.6% or 167 to 1.
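The coin-flip arithmetic is easy to check. Here is a minimal sketch (in Python; my addition, not Freeman's or Shafer's) of the probability that at least 10 of 11 fair coin flips land the same prespecified way:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability that 10 or 11 of 11 states skew toward the same candidate by chance
p_skew = prob_at_least(10, 11)
print(f"{p_skew:.4%}")           # 0.5859%
print(f"1 in {1 / p_skew:.0f}")  # 1 in 171
```

The exact figure is 12/2048, about 0.59 percent - the same ballpark as the 0.6% cited above.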

Freeman is right when he says it is nearly impossible to explain these discrepancies by sampling error alone. Having said that, his 250 million to 1 statistic is exaggerated. The reason is that Freeman assumes "simple random sampling" (see his Footnote 15). Exit polls are well known to use "cluster sampling." They first select precincts, not people, and then try to randomly select multiple voters at each cluster. While NEP reports only minimal information about sampling error ("4% for a typical characteristic from...a typical state exit poll"), an analysis of the 1996 exit polls by those who helped conduct it estimated that the cluster sample design adds "a 30 percent increase in the sampling error computed under the assumption of simple random sampling" (Merkle and Edelman, 2000, p. 72). That study is useful because the 1996 state exit polls involved roughly the same number of precincts (1,468) as this year's polls (1,480). Merkle and Edelman also provided a table of the estimated "clustered" sampling error that I have adapted below.
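To make the clustering point concrete, here is a minimal sketch (my own; the sample size is an assumed round number, as the actual NEP state sample sizes vary) of how the Merkle & Edelman 30 percent inflation widens a conventional margin of error:

```python
import math

DESIGN_INFLATION = 1.30  # ~30% increase from the cluster design (Merkle & Edelman, 2000)

def srs_moe(p, n, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

n = 2000  # assumed state exit-poll sample size (hypothetical)
moe_srs = srs_moe(0.5, n)  # worst case, p = 0.5
moe_clustered = moe_srs * DESIGN_INFLATION
print(f"SRS MOE:       +/-{moe_srs:.1%}")        # +/-2.2%
print(f"Clustered MOE: +/-{moe_clustered:.1%}")  # +/-2.8%
```

Under these assumed numbers, a single state's 1.9-point deviation would not by itself be significant; the force of Freeman's argument comes from the consistency of the direction across states.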

Having said that, the observed discrepancies from the actual count in Freeman's data still appear to be statistically significant using the Merkle & Edelman margins of error in Ohio, Florida and Pennsylvania. If NEP were to provide the actual "p-values" (the probability of each discrepancy occurring by chance) for all three states, and we multiplied them as Freeman did, the real odds that this happened by chance alone are still probably at least 1,000,000 to 1. In a business where we are typically "certain" when there is a 5% chance of an error (i.e., 1 in 20), one in a million is still pretty darn certain. Still, you can decide for yourself why Freeman chose to ignore a well-known facet of exit poll design and report the most sensational number available.
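The multiplication step itself is trivial. A sketch (the per-state p-values below are invented for illustration; NEP has not released the real ones):

```python
# Invented per-state p-values for illustration -- NEP has not released the real ones
p_values = {"Ohio": 0.01, "Florida": 0.02, "Pennsylvania": 0.005}

combined = 1.0
for state, p in p_values.items():
    combined *= p  # valid only if the three state samples are independent

print(f"Combined p: {combined:.2e}")            # 1.00e-06
print(f"Odds: about 1 in {1 / combined:,.0f}")  # about 1 in 1,000,000
```

Note that multiplying raw p-values this way answers "what is the chance all three discrepancies occur together by chance," not the more standard combined-test question (that would be something like Fisher's method); it is shown here only to mirror Freeman's calculation.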

3) None of the official "explanations" are persuasive

Freeman notes the claim by the New York Times' Rutenberg that NEP's internal report had "debunked" theories of vote fraud (something I wrote about here) and laments, "it does not explain beyond that declaration how the possibility was debunked." That is correct. I can add one new wrinkle: A reporter who had been working on the story shared a rumor that the Times story mischaracterized the NEP report, that it never used the word "debunked" to describe theories about vote fraud. I put this question to Warren Mitofsky via email, and he refused to characterize the report in any way, except to describe it as confidential.

Freeman argues that pollsters can magically weight away differences caused by non-coverage or non-response. But since the only measure of the demographics of actual voters on Election Day is the exit poll itself, what exactly would they weight to?

Regarding the possibility that the polls sampled too many women, he quotes Dick Morris:

The very first thing a pollster does is weight or quota for gender. Once the female reaches 52 percent of the sample, one either refuses additional female respondents or weights down the ones subsequently counted. This is, dear Watson, elementary.

It may be elementary to Watson, but it is flat wrong to those who know exit polls. Telephone surveys typically set quotas for gender (because women are more likely to answer the phone), but exit polls do not. That's why the exit polls report different percentages of men and women from state to state. So much for Dick Morris, exit poll methodologist.

Freeman also dismisses the theory suggested by NEP's Warren Mitofsky, that "Kerry voters were more anxious to participate in our exit polls than the Bush voters" as a mere hypothesis:

The problem with this "explanation" or even one that would have considerably more "face validity" (which means that it makes sense on the face of it)...is that it is not an explanation but rather a hypothesis. It's apparent that "Kerry voters were much more willing to participate in the exit poll than Bush voters" only given several questionable assumptions. An explanation would require independent evidence.

[The NEP's Joe] Lenski told me that such a probe [of what went wrong] is currently underway; there are many theories for why the polls might have skewed toward Kerry, Lenski said, but he's not ready to conclude anything just yet. At some point, though, he said we'll be able to find out what happened, and what the polls actually said.

Let's hope that happens soon. For now, consider whether any of the following adds "face validity" to the notion that "Kerry voters were much more willing to participate than Bush voters:"

a) This discrepancy favoring Democratic candidates is not new.

Consider this excerpt from a report by Warren Mitofsky published last year in Public Opinion Quarterly:

An inspection of within-precinct error in the exit poll for senate and governor races in 1990, 1994 and 1998 shows an understatement of the Democratic candidate for 20 percent of the 180 polls in that time period and an overstatement 38 percent of the time...the most likely source of this error is differential non-response rates for Democrats and Republicans (Mitofsky, 2003, p. 51).

So the state exit polls overstated the Democratic candidate's performance nearly twice as often as they understated it.

Or consider this from Joan Konner's report published in the same issue:

A post-election memo from Mitofsky and Joe Lenski, Mitofsky's associate and partner on the election desk, stated that on election day 2000, VNS's exit poll overstated the Gore vote in 22 states and understated the Bush vote in nine states. In only 10 states, the exit polls matched actual results. The VNS post-election report says its exit poll estimates showed the wrong winner in eight states (Konner, 2003, p. 11).

So much for the previously "high degrees of certainty" Freeman told us about.

b) Exit poll response rates have been declining.

The average response rates on the VNS exit polls fell from 60% in 1992 to 55% in 1996 to 51% in 2000 (Konner, 2003). NEP has not released a response rate for this year, but there has certainly been a downward trend over the last three elections.

Given the overall response rate of roughly 50%, differences in response between Bush and Kerry supporters would not need to be very big to skew the results. Let me explain: I put the vote-by-party results for Ohio into a spreadsheet. I can replicate the skew in Ohio (one that makes Kerry's vote 3 percentage points higher than the count and Bush's 3 percentage points lower) by assuming a 45% response rate for Republicans and a 55% response rate for Democrats. Not a big difference.
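The mechanism is simple enough to sketch. The numbers below are hypothetical stand-ins (I am not reproducing the actual Ohio vote-by-party crosstabs, and this simplified version applies the response rates directly to vote choice rather than to party, so the shift comes out somewhat larger than the spreadsheet calculation above), but they show how a 10-point gap in response rates skews the poll:

```python
# Hypothetical true vote shares (remainder to third parties) -- illustrative only
true_bush, true_kerry = 0.510, 0.485
rr_bush, rr_kerry = 0.45, 0.55  # assumed differential response rates

# Each candidate's voters enter the poll in proportion to their response rate
resp_bush = true_bush * rr_bush
resp_kerry = true_kerry * rr_kerry
total = resp_bush + resp_kerry

poll_bush = resp_bush / total
poll_kerry = resp_kerry / total
print(f"Bush:  true {true_bush:.1%} -> poll {poll_bush:.1%}")    # 51.0% -> 46.2%
print(f"Kerry: true {true_kerry:.1%} -> poll {poll_kerry:.1%}")  # 48.5% -> 53.8%
```

Even a modest gap in willingness to respond, compounded across every precinct, is enough to produce the kind of consistent skew Freeman observed.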

c) Perceptions of news media bias are consistently higher among Republicans and rising.

According to a study conducted in January 2004 by the Pew Research Center, 42% of Republicans believe news coverage of the campaign is biased in favor of Democrats, compared to only 29% of Democrats who believe news coverage is biased in favor of Republicans. The overall percentage that believes the news is free of bias has declined dramatically over the last seventeen years: 67% in 1987, 53% in 1996, 48% in 2000 and 38% this year.

Now consider that when exit poll interviewers make their pitch to respondents, they are supposed to read this script (the text comes from NEP training materials shared via email by an interviewer):

Hi. I'm taking a short confidential survey for the television networks and
newspapers. Would you please take a moment to fill it out?

I am taking a public opinion survey only after people have voted and it is completely anonymous. It is being conducted for ABC, the Associated Press, CBS, CNN, Fox and NBC, not for any political candidate or party.**

The questionnaire they presented, and the identifying badge they wore, were both emblazoned with this logo:**

So to summarize: [If you want to explain the exit poll discrepancy] Absent further data from NEP, you can choose to believe that an existing problem with exit polls got worse this year in the face of declining response rates and rising distrust of big media, that a slightly higher number of Bush voters than Kerry voters declined to be interviewed. Or, you can believe that a massive secret conspiracy somehow shifted roughly 2% of the vote from Kerry to Bush in every battleground state, a conspiracy that fooled everyone but the exit pollsters - and then only for a few hours - after which they deliberately suppressed evidence of the fraud and damaged their own reputations by blaming the discrepancies on weaknesses in their data.

Please.

Don't get me wrong. I am disturbed by the notion of electronic voting machines with no paper record, and I totally support the efforts of those pushing for a genuine audit trail. If Ralph Nader or the Libertarians want to pay for recounts to press this point, I am all for it. I know vote fraud can happen, and I support efforts to pursue real evidence of such misdeeds. I am also frustrated by the lack of transparency and disclosure from NEP, even on such simple issues as reporting the sampling error for each state exit poll. Given the growing controversy, I hope they release as much data as possible on their investigation as soon as possible. The discrepancy also has very important implications for survey research generally, and pollsters everywhere will benefit by learning more about it.

Finally, I understand completely the frustration of Democratic partisans with the election results. I'm a Democrat too. Sure, it's tempting to engage in a little wishful thinking about the exit polls. However, to continue to see evidence of vote fraud in the "unexplained exit poll discrepancy" is more than wishful. It borders on delusional.

[11/19 - Clarification added in the third to last paragraph. See some additional thoughts here]

Update: Mayflower Hill has an exclusive interview with Warren Mitofsky conducted earlier today. Using the type of analysis anticipated previously on this site, Mitofsky explains that his data show no evidence of fraud involving electronic voting machines.


**Correction/Update - 8/15/2006 - The introduction by interviewers originally included in this post was the one intended for interviewers to use to introduce themselves to polling place officials, not to introduce themselves to voters. Also the logos displayed on the questionnaires were black and white, not color.

Comments

Thanks for the only coherent, well-informed, and non-hysterical examination of the exit poll discrepancies I've seen. I appreciate your taking the time to explain the exit polling process so thoroughly and at such length, and to counter some of the buck-passing and conspiracy-mongering that is going around.

Posted by: aj | Nov 17, 2004 7:36:14 PM

Thank you Mark. I'm not disappointed. Your experience makes you much more capable of saying these things than I am.

The only point I covered that you avoided was the BYU studies. I claim those studies had much greater coverage with a much simpler questionnaire.... in a very homogeneous state.

From the form of the survey results I conclude that the questionnaire must have had at least 54 questions (not counting the one on cell phones that you reported to us.) No wonder only 76,000 (including telephone responses for early or absentee votes) results are given when Mitofsky expected 150,000. Looks like another 50% return to me.

Posted by: esp | Nov 17, 2004 9:29:00 PM

Tremendous post that should be noted far and wide. Some journalists would do well to learn from your obvious respect for your readers, the point about delusion notwithstanding. I feel we'll be well prepared to read the NEP report when it comes out.

Question: Are "key precincts" data kept separate from the random precincts data? It seems problematic to mix them, as "key precincts" are presumably not randomly selected.
You write:
"(Konner, 2004; although this statistic may have applied to those covering both the exit poll and sampled "key precincts")."

Most satisfying about your posts is they cover seemingly all exit poll issues expertly, if not conclusively. One exception appears to be the effect of spoiled/provisional ballots on exit polls. It certainly isn't a new issue so it may be covered in the reports you cite, and I will check.
Greg Palast calls this a major travesty in this and most elections:
http://www.inthesetimes.com/site/main/article/1686/

I wonder how many people who answered the exit poll said they voted for Kerry when they actually voted for Bush. There are many reasons why a voter might do this. For example, consider this: on Election Day, I went door to door in New Hampshire to "Get Out the Vote". There were many feet on the ground that day from a variety of organizations, including the Democratic party and America Votes, all working the same territory. In fact, this territory had been heavily worked over by many organizations for months leading up to the election, and it quickly became obvious that many people I talked to were fed up with all the vote lobbying they had received, and were just telling me whatever they thought I wanted to hear to get rid of me as quickly as possible. This makes me wonder how many voters at the exit polls answered Kerry (because that is what they were trained to answer in order to appease people who were lobbying for their vote), even though they cast their actual vote for Bush.

Posted by: Alan | Nov 18, 2004 12:05:50 AM

A reminder of the importance of vote spoilage in our elections.
http://macht.arts.cornell.edu/wrm1/overvotes.pdf

Of course, this is not to argue against the compelling case for non-respondent bias.

Vote spoilage is an additional factor I am curious about.

Posted by: Alex in Los Angeles | Nov 18, 2004 12:23:21 AM

I was wondering if anyone could tell me why only one organization conducted exit polls this election? Wouldn't exit polls conducted by several independent organizations give the voters more faith in their legitimacy? At the very least, accusations that "pollsters can magically weight away differences" would be less credible if they were conducted by multiple independent organizations instead of one umbrella group, wouldn't they?

My second question concerns point "a) This discrepancy favoring Democratic candidates is not new" where you state that previous statistics "showed twice as many state exit polls overestimating the Democratic candidate performance nearly twice as often as they underestimated it". Were these exit polls also done by one organization? If there has been either some sort of exit poll manipulation or voting fraud wouldn't that make the evidence you cite suspect?

Personally, from looking in detail at how this election has been (mis)handled, how many voters were evidently deliberately disenfranchised, how partisan many election officials are (not to mention the owners and CEOs of the voting machine manufacturers) and how little scrutiny all of this has received in the mainstream media, my faith that previous US elections or exit polls were legitimate is shattered.

Posted by: aaa | Nov 18, 2004 12:34:43 AM

Alan, the quickest way to get rid of the pollsters is to tell them you're not interested in taking a poll.

I don't see why Bush supporters would be less interested in talking to pollsters than Kerry supporters. In fact, considering the truism that many people want to be on the side of "the winner" and vote accordingly, it would surprise me very much if Bush supporters wanted to make Kerry appear to be winning.

Also, if one candidate appeared to be winning decisively, many people might have just thrown in the towel and decided not to vote, thinking their vote would be wasted.

It just doesn't make sense that Bush supporters would try to hasten their own candidate's demise by not showing their support of Bush in the media.

Posted by: aaa | Nov 18, 2004 12:49:08 AM

The question I have is what do folks like Stephen Freeman and Sam Wang have in mind when they produce their pseudo-science reports?

These guys are trained and accredited statisticians. Are they really this clueless, or is the concept to produce propaganda no matter the facts?

I understand that many math folks get into deep trouble about polling because they don't understand that the numbers are only as reliable as the rough edges of the number gathering.

But you'd think these folks would bother to understand the most elementary basics of the area they are trying to expertly comment on.

-----

Mark writes: "However, to continue to see evidence of vote fraud in the "unexplained exit poll discrepancy" is more than wishful. It borders on delusional."

This is actually the least harmful part of the delusional behavior to come. Next up will be two years of claims of the Dolchstosslegende by "Democratic Party Insiders".

"I don't see why Bush supporters would be less interested in talking to pollsters than Kerry supporters."

An obvious possibility: Because Bush supporters (or at least enough of them to make a difference) are convinced the pollsters belong to the "biased librul media." (Anyone who doesn't think a lot of conservatives believe this is invited to listen to right-wing talk radio for a few hours...) To say that strategically it would make sense to talk to pollsters even if you think they're biased may be true, but most people do not think in such "strategic" terms. They just don't want to talk to people who they think are politically hostile. This may also explain why the last pre-election polls also slightly underrated Bush's showing, though not as badly as the exit polls. (Remember, it's not necessary for all Bush supporters to behave this way: just enough to make the polls inaccurate by a few points.)

Anyway, that sounds more plausible to me than a vast conspiracy that somehow got the exit polls to understate Bush's showing both in paper-trail and in non-paper-trail states--and in states which don't use Diebold at all...

Posted by: David T | Nov 18, 2004 8:55:35 AM

Two notes: VNS exit polling data from 1990 to 1998 was accurate. It was only in 2000 that it became problematic, and in 2002 it was so far off that reporting of results was canceled. I would only note that the proliferation of e-Voting machines corresponds with the sudden "failure" of exit polling methodology.

Secondly, the makers of these e-Voting machines are a highly-partisan group (Diebold, ES&S, Sequoia & SAIC), and have a history of using highly questionable people to develop these "trade secret" software applications. Would you trust unauditable machines designed by people convicted of fraud via installing computer "back-doors"? Do you have a Firewall on your PC at home, or do you "trust" that nobody is going to bother your PC - or your election?

http://angelingo.usc.edu/issue02/politics/a_evotes.htm

'The Politics of Businessmen who sell these Equipments'

"Given the inability to inspect machines and audit election results, the political ties of executives at electronic voting machine manufacturers are troubling. The crowd is full of businessmen with strong financial ties to the Republican Party. In one case a company, Global Election Systems (recently acquired by Diebold) has had a tendency to hire ex-convicts. Ironically, some members of the management of G.E.S. have criminal records that would probably prevent them from voting on their own machines in seven different states.

Michael K Graye, a former G.E.S. director, was arrested in 1996 in Canada for tax-fraud and money-laundering schemes that involved $18 million. Before he could be sentenced though, he was indicted in the US for stock fraud. G.E.S. also hired Jeffrey Dean as a senior VP after he finished serving time for 23 felony embezzlement counts. Court documents describe these offenses as having, "a high degree of sophistication and planning in the use and alteration of records in the computerized accounting system that the defendant maintained for the victim." (Note: he built a hard to detect "back-door" into the application....and I'm not kidding.)

"After Diebold acquired G.E.S., a prison friend of Dean, John Elder, was hired as a consultant. They had met while Elder was serving five years for cocaine trafficking at the same time that Dean was incarcerated. Although their direct involvement in actual elections is unclear, Diebold claims Dean spent most of his time supervising ballot printing, the fact remains that the choice of a company writing software and building computer systems that count votes to hire a senior VP with a history of manipulating computer systems is certainly questionable."

(Food for thought)

Posted by: Observer | Nov 18, 2004 9:19:22 AM

Observer: "I would only note that the proliferation of e-Voting machines corresponds with the sudden 'failure' of exit polling methodology."

The problem with this, as I indicated, is that the exit polls also failed in non e-Voting machine states (or states which use them but require a paper trail).

Posted by: David T | Nov 18, 2004 10:11:39 AM

Re: Comment from Petey

<"The question I have is what do folks like Stephen Freeman and Sam Wang have in mind when they produce their pseudo-science reports?

These guys are trained and accredited statisticians. Are they really this clueless, or is the concept to produce propaganda no matter the facts?">

--Not sure where you're coming from on this; if you check Sam Wang's site you'll find that he's long maintained that the evidence for fraud is inconclusive. He notes a statistically significant deflection toward Bush in counties with a high Democratic registration that overlaps with the use of optical scanners, but accepts the theory that the scanners were used in rural counties where the voters still maintain Democratic registrations but vote Republican. In fact, his meta-statistical analysis of the polling results before the election showed a clear Bush lead, enabling him to make a guess of the final electoral count that proved very accurate. Professor Wang has actually done more with clear analysis to debunk election fraud theories than any number of pundits, so disparaging remarks about his "pseudo-science" are perhaps misplaced.

Posted by: dave | Nov 18, 2004 10:17:28 AM

Great post!

"I would only note that the proliferation of e-Voting machines corresponds with the sudden "failure" of exit polling methodology."

Wait a minute! I thought Kathy Dopp had proven that actually the e-Voting machine counties were more accurate? (Heh).

You make a mistake in the coin flip issue. The chances of 10 of 11 states going for Bush is not the same as flipping the coin eleven times and it coming out heads 10. That would be true if you were starting from the position of Bush and Kerry being tied. Actually Kerry was ahead in all of these states so the odds of a complete switch in 10 of 11 states are much higher.

A second question. The only evidence you use to debunk exit polling data is from 2000 (and again I think you are not completely honest here - the issue wasn't "difference" as you suggest in your post; the issue was a complete switch of positions, which statistically is much different). But exit polls are used in many international elections. I think it might be a good idea to go to the Carter Center and ask what they think about the reliability of exit polls (I think you can get a non-partisan answer from them). If you can't trust exit polls, then what you are saying is there is absolutely no way to monitor an election anywhere in the world. Are we really ready to say that?

Finally, your claims of face validity do not stand up (and you do not explain that face validity is by far the weakest type of validity). There is nothing in what you say to suggest that there is a difference between how often Democrats respond and how often Republicans respond. Before the election, people were actually hypothesizing that Democrats responded less to pollsters (remember that?). This whole idea that Republicans respond less sounds awfully convenient (you cite one very unreliable statistical report - why is this right when the much more reliable exit polling must be wrong?)

Posted by: Wilbur | Nov 18, 2004 11:57:42 AM

dave,

As to where I'm coming from:

Sam Wang is another guy who doesn't understand the first thing about polling, who brandishes unrelated credentials to give credence to his ramblings.

You refer to his predictive abilities, but Wang's final pre-election prediction assigned a 98% chance of a Kerry win. Think about that, given the polling data available on the eve of the election.

This was similar to his absurd methodology throughout the campaign season, which at one point assigned Kerry a 99.9% chance of winning at a time when Kerry was tied or up by a point or two in the national polls.
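To see why a one-point national lead should not translate into anything like a 99.9% win probability, here is a rough normal-approximation sketch (the 1-point lead and 3-point margin of error are illustrative numbers, not Wang's actual inputs):

```python
from math import erf, sqrt

def win_prob(lead_pts, moe_pts):
    """Normal-approximation probability that a candidate leading by
    lead_pts (on the margin) actually wins, given a 95% margin of
    error of moe_pts on that margin."""
    se = moe_pts / 1.96  # convert a 95% MOE to a standard error
    return 0.5 * (1 + erf(lead_pts / (se * sqrt(2))))  # normal CDF

# A 1-point lead with a 3-point MOE is roughly a 3-in-4 proposition:
print(round(win_prob(1.0, 3.0), 2))  # → 0.74
```

Even this simple model treats the MOE as the only source of error; real polling adds house effects, turnout modeling, and late movement, which push the probability closer to a coin flip, not further from it.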

The guy doesn't have the first clue about ANY of the issues of how to read polling data, or what polling data means. I don't think comments about his "pseudo-science" are misplaced in the least.

His comments about the exit polls show his usual lack of comprehension of the issues involved.

With respect, please allow me to deconstruct the flawed logic of your critique of Freeman's paper. I will address the specific statements below:

"So to summarize: Absent further data from NEP, you can choose to believe that an existing problem with exit polls got worse this year in the face of declining response rates"

...well, we do not know if the exit polling response rates went up or down in 2004, so suggesting "declining response rates" is conjecture unsupported by any empirical evidence....(we do know that voter turnout was higher this year)...

"and rising distrust of big media,"

...while I *personally* agree with that, this is again conjecture that in no way relates to Dr. Freeman's or anyone else's statistical analysis of the unexplained variances between the exit polling data and the vote tabulations....

"that a slightly higher number of Bush voters than Kerry voters declined to be interviewed."

...That is not supported by facts. Actually, and to the contrary, the exit polling data was correct in MOST states, but not in the "critical battleground" states listed in Freeman's study. Thus, in order to believe this "chattiness theory," one also has to believe that exit poll respondents who voted for Kerry were more "chatty" than those who voted for Bush AND that this unique "personality" factor applied in ALL 50 states. The data does not support this, as the variances outside the MOE occur in *only certain contested states*...and besides, this is again conjecture for which no data exists regarding the 2004 election.

"Or, you can believe that a massive secret conspiracy somehow shifted roughly 2% of the vote from Kerry to Bush in every battleground state, a conspiracy that fooled everyone but the exit pollsters - and then only for a few hours"

...well, perhaps history repeats itself? In 2002, VNS held an emergency meeting on the afternoon of Election Day trying to figure out why their exit polling data did not agree with the machine counts in states like Georgia and Minnesota, and so they made an announcement:

VNS issued a statement saying it was "not satisfied with the accuracy of today's exit poll analysis and will not be in a position on election night to publish the results of state and national surveys of voter attitudes."

"CNN.com is committed to bringing you full exit poll results when available. However, Due to problems with exit polls from Voter News Service (VNS), no exit poll data is available for the 2002 elections."

###

....and now, for your final statement, I shall offer a different hypothesis:

"after which they deliberately suppressed evidence of the fraud and damaged their own reputations by blaming the discrepancies on weaknesses in their data."

...I don't think "damage to their reputation" is what's restraining NEP or the media executives from addressing these obviously critical issues about the 2004 election. Maybe the NEP/media doesn't want to release the data because if systemic proof emerged that our national election was indeed "hacked," the fallout would be unpredictable....

Perhaps Watergate times ten, or maybe times 100? After all, systemic fraud would not result in the simple resignation of the President and Vice President as in 1974, but would likely entail the dissolution of the ENTIRE Executive Branch of our government. Then what? Another possibility if fraud is exposed - widespread civil unrest?

This is serious stuff.

Regardless, the exit poll variances as analyzed by Freeman and others would likely cause the election results to be annulled in many third-world countries. But too many of us live inside the bubble of "American Exceptionalism" - where "it can't happen here" (whether the subject is war or elections). That, my friend, is naive, and I don't think this issue is going away...

WAS IT HACKED?
http://www.orlandoweekly.com/news/Story.asp?ID=4688

Posted by: Observer | Nov 18, 2004 12:42:05 PM

Freeman's fever is evident in his willingness to grant "space aliens" the same face validity as "response bias".

On the cluster sampling issue, I would be interested in knowing how accurately 100% samples of the ~30 (per state) polled precincts would have predicted statewide outcomes.

This test is not hard to conduct in theory, and probably not in practice either, at least for Mitofsky -- since he knows which precincts they are, and (with subjective probability 99.97%, give or take any delays in final certification) already has these on spreadsheets.

OT - Petey - where ya hangin' out these days? Folks back at the old joint are still trying to prove Gallup made up their top line numbers first, and then invented raw data to support them. ;-)

Posted by: RonK, Seattle | Nov 18, 2004 1:41:11 PM

Petey:

Again, Sam Wang makes clear that he posted two predictions:

1. Meta-analysis of Straight Polls, which predicted the electoral college accurately.

2. Meta-analysis of Straight Polls plus a factor for undecideds and turnout favoring Kerry. This he admits was wrong, and he was always clear that these adjustments were experimental. In fact, he cites Mystery Pollster on the Incumbent Rule. He believed them, but admits his belief was probably biased.

So his meta-analysis method was accurate but his experimental adjustments were wrong. He clearly documented and qualified his adjustments.

It's all on his site today, so while you may attack his bias, the contribution of his meta-analysis method is clear. Pseudo-science, hardly.
http://election.princeton.edu/

Posted by: Alex in Los Angeles | Nov 18, 2004 2:36:18 PM

Ruy Teixeira weighs in on the Freeman paper. He criticizes Freeman on the grounds of the general inaccuracy of exit polls since 1988. He cites "raw exit poll" figures:

Isn't this misleading? Are these "raw exit polls" unweighted by turnout and non-respondents?

Ruy does go into weighting, and somewhat qualifies the above numbers. But it still seems misleading.

However, he lists three weighting stages:

1. Samples are weighted to correct for oversampling of precincts (for example, exit polls have historically selected minority precincts in some states at higher rates than other precincts) and for nonresponse bias (exit poll interviewers try to keep track of refusers by sex, race, and age).
2. Samples are weighted to correct for changing turnout patterns in the current election, since the sample design is based on past turnout behavior.
3. Samples are, in the end, simply weighted to correspond to the actual election results. This is done by first weighting exit poll results in sample precincts to the true precinct results, as they are known, and then weighting the overall sample to the overall election result, once it is known.
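(As an aside, that third stage - forcing the sample to match a known total - is standard post-stratification. A toy sketch, with made-up numbers rather than NEP's actual procedure:)

```python
# Toy post-stratification sketch (all numbers made up, not NEP's actual
# procedure): rescale respondent weights so the weighted sample matches
# a known official result.

respondents = [            # (candidate, initial_weight)
    ("Kerry", 1.0), ("Kerry", 1.0), ("Kerry", 1.0),
    ("Bush", 1.0), ("Bush", 1.0),
]                          # raw sample: 60% Kerry, 40% Bush

official = {"Kerry": 0.48, "Bush": 0.52}   # hypothetical certified result

totals = {}
for cand, w in respondents:
    totals[cand] = totals.get(cand, 0.0) + w
grand = sum(totals.values())

# New weight = old weight * (official share / weighted sample share)
reweighted = [(cand, w * official[cand] / (totals[cand] / grand))
              for cand, w in respondents]

total_w = sum(w for _, w in reweighted)
print({cand: round(sum(w for c, w in reweighted if c == cand) / total_w, 2)
       for cand in official})  # weighted shares now match the official result
```

The mechanics are simple; the controversy is entirely about what the *unweighted* (or turnout-weighted-only) numbers looked like before this final stage.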

These don't match exactly what we've learned here. Isn't his #1 factor, oversampling of precincts, something new? You've indicated precincts are randomly selected, although "key precincts" have not been explained on this blog.

Thank you!

Posted by: Alex in Los Angeles | Nov 18, 2004 3:38:26 PM

Alex in Los Angeles,

"It's all on his site today, so while you may attack his bias, the contributions of his Meta-analysis method is clear. Psuedo-science, hardly."

While Sam Wang's bias has always been obvious, it's the least of his problems.

What has always made his "scientific" analysis laughable is the utter quackery of his methodology.

As mentioned above, his finest moment was back in July when he extrapolated a basic tie in the national polls to a 99.9% chance of a Kerry win.

I was able to follow how he got to that conclusion, but his reasoning was "interesting", to say the very least. It involved wonderful concepts like: if the last 3 polls in Iowa all show Kerry up by 4 points with a 3% MOE, then Kerry has a 100% chance of winning Iowa. And his rationales and methodologies got even more "interesting" from there.

If you look up the word "charlatan" in a dictionary, you'll see photos of both Sam Wang and Stephen Freeman. But if you enjoy learning about political polling from a molecular biologist who knows some math, be my guest...

---

RonK,

I hang out at various spots around town. I miss the structure of the old place, but I don't miss the content these days - the current insanity over ballot fraud theories, to be soon followed by the coming insanity about trying to purify the party.

Here's a bit more detail about Sam Wang, if you're interested. When he was first publishing his math games back in the summer, I came up with a test case for him to try, hoping it would clearly expose for him the problems with his methodology:

Suppose that if you averaged all the recent polls in every state, each individual state showed the identical thing - Kerry up by 1%. Under these circumstances, what would the chances be of Kerry winning at least 270 EV's?

Wang ran this test through his methodology and it determined that there would be a 99.1% chance of Kerry winning.

He then ran the same test with Kerry up by 0.2% in polling in each individual state, and determined a 75% chance of Kerry winning.

Perhaps you see the problem(s) now? Sam Wang didn't.

Almost every crucial assumption Wang makes about what polling numbers mean is wrong. And this is just as true for his post-election musings.

"if the last 3 polls in Iowa all show Kerry up by 4 points with a 3% MOE, then Kerry has a 100% chance of winning Iowa."

Assuming you quoted him accurately, can you explain what is wrong with that concept? Obviously, 100% is wrong but it looks above 95% to me. Obviously, future polls could and did show different numbers, but, on that day, above 95% sounds right. I'm not sure what you find strange, but maybe you can explain it to me.

I think email would be best so feel free to email me.

Posted by: Alex in Los Angeles | Nov 18, 2004 5:18:38 PM

"Assuming you quoted him accurately, can you explain what is wrong with that concept? Obviously, 100% is wrong but it looks above 95% to me."

First of all, for the purposes of Wang's calculations, the difference between 95% and 100% makes a big difference in the final numbers he gets. He's multiplying the results of his assumptions in many states together, and thus multiplying his many errors together to arrive at a much larger error.

But more importantly, the 100% figure shows how Wang is fundamentally misunderstanding the meaning of polling data. He thinks he is working with the pure numbers of mathematics, rather than the rather ragged numbers of polling, where the numbers are only approximating an outside reality.

You can see this in his post-election conclusions about how odd it seems to him that Bush could have won Florida despite Kerry having been ahead in some polls.

---

And as to your supposition, if candidate A is ahead in three polls on the eve of an election by 4% in each poll, with a MOE of 3% in each poll, and knowing nothing else about the race, I'd guess the odds of candidate A winning the election to be substantial, but less than 95%.
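The gap between these two intuitions can be made concrete. Treating the three polls' margin of error as pure sampling error and averaging them yields a probability well above 95%; adding even a modest shared non-sampling error term (house effects, late movement, likely-voter modeling - the 2.5-point figure below is a pure assumption) pulls it under. A sketch, also assuming the MOE applies to the margin itself:

```python
from math import erf, sqrt

def win_prob(lead, se):
    """Normal-approximation win probability given a lead and the
    standard error of that lead (both in percentage points)."""
    return 0.5 * (1 + erf(lead / (se * sqrt(2))))

se_one_poll = 3.0 / 1.96             # one poll's 95% MOE as a standard error
se_sampling = se_one_poll / sqrt(3)  # averaging three independent polls

# Sampling error only: the 4-point lead looks close to a sure thing.
print(win_prob(4.0, se_sampling) > 0.95)  # → True

# Add a shared non-sampling error (assumed 2.5 pts) that does NOT
# shrink as more polls are averaged:
se_total = sqrt(se_sampling**2 + 2.5**2)
print(win_prob(4.0, se_total) > 0.95)     # → False
```

So both positions are defensible under different error models; the dispute is really over how much non-sampling error pre-election polls carry.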

What if he really did get 18 percent - or 20 - or 22? This would go a long way toward explaining the "across the board" error.

I assume that responding to an exit pollster is not a secret to anyone within earshot. Would not a certain percentage of African American voters prefer not to risk potential ridicule from their peers inherent in proclaiming a vote for Bush?

"Professional pollster Mark Blumenthal started Mystery Pollster to provide better interpretation of polling results and methodology... offers much needed help to Political Wire readers" - Political Wire