Friday, January 27, 2006

Palestinian Exit Poll Errors on National List Vote

The three Palestinian Legislative Council election exit polls tended to overestimate Fatah's vote in the national party list voting, but all three seriously underestimated Hamas' strength. As a result, all three got the leader wrong, and the error was beyond the margin of error of two of the three surveys (and at the extreme end of the MOE of the third, and least precise, survey). The bottom line: the preliminary vote count produces results that fall outside the margin of error of the exit polls. Technical issues aside, the serious underestimate of the Hamas vote raises interesting questions, both technical and political, about why it happened.

The national party list vote is the basis of a proportional representation allocation of 66 seats in the Legislative Council. While the district races pose serious challenges for exit polls (see here), there is nothing in principle so difficult about conducting an exit poll for a party list ballot such as this. The problems with this part of the exit poll are therefore different from those I discussed before concerning the district votes.

(This part gets really geeky. Fun for me, but perhaps not for anyone else! Read at your peril.)

The exit polls differ in the clarity of their technical details. The PSR poll is described in the most detail on their web page. The exit poll interviewed 17,573 respondents at 242 polling centers out of a total of 1,014 centers. The margin of error for the national party list percentage is reported as 4%. Given the clustering of the sample, this strikes me as a reasonable estimate, and PSR deserves praise for being clear on these technical but statistically crucial issues.

The DSP exit poll is less clear on the matters of respondents and margins of error. Their web site includes a PowerPoint presentation of their results, produced on election night. Unfortunately it fails to give any estimate of the margin of error for reported vote or seat estimates. I have to rely instead on an Associated Press story from election night which quotes DSP pollsters as saying their poll interviewed 8,000 voters at 232 polling stations. The AP report quotes them as saying there is a "one-seat margin of error. Pollsters did not give the margin in percentage points." That is a bit of a puzzle. The "Exit Poll Fact Sheet" posted at the DSP website says they planned to interview 15,000 respondents, though it does not indicate the expected margin of error for their sample design. If the AP story is correct and they completed just over half the expected sample, that is bad news. At the same time, a "1-seat margin of error" is hard to understand. The Palestinian national party list vote is a strongly proportional system with a 2% threshold for winning seats, which means seats track votes very closely. Since the list vote allocates 66 of the Council's 132 seats, one seat corresponds to roughly 1.5 percentage points of the list vote, so the "seat margin of error" should be only slightly larger than the vote-share margin of error. But 1 seat, or roughly 1.5%, seems very optimistic.
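To see how tightly seats track votes under rules like these, here is a small sketch. The 66 list seats and the 2% threshold come from the discussion above; the largest-remainder (Hare quota) allocation method and the vote shares are my own illustrative assumptions, not a claim about the actual Palestinian formula.

```python
# Illustrative only: 66 list seats and the 2% threshold are from the post;
# largest-remainder (Hare quota) allocation and the vote shares below are
# assumptions used to show the near-linear votes-to-seats mapping.

SEATS = 66
THRESHOLD = 0.02

def allocate(votes):
    """votes: dict of party -> vote share. Returns dict of party -> seats."""
    eligible = {p: v for p, v in votes.items() if v >= THRESHOLD}
    total = sum(eligible.values())
    quotas = {p: SEATS * v / total for p, v in eligible.items()}
    seats = {p: int(q) for p, q in quotas.items()}
    leftover = SEATS - sum(seats.values())
    # hand out remaining seats by largest fractional remainder
    for p in sorted(eligible, key=lambda p: quotas[p] - seats[p],
                    reverse=True)[:leftover]:
        seats[p] += 1
    return seats

print(allocate({"Hamas": 0.44, "Fatah": 0.41, "Others": 0.15}))
print(allocate({"Hamas": 0.45, "Fatah": 0.40, "Others": 0.15}))
```

Shifting one point from Fatah to Hamas moves about one seat each way, which is the author's point: under strong PR, a seat margin of error translates almost directly into a vote-share margin of error.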

So I've made my own calculation of a plausible margin of error for their survey. Based on a sample size of 15,000, and using the same design effect as in the PSR survey, I estimate a margin of error of 4.3% for the DSP study. If their sample size were really only 8,000, the MOE rises to 5.9%. In the graphs I adopt the estimated MOE based on their "Exit Poll Fact Sheet".

The An-Najah exit poll is the most poorly documented. Their web page makes no mention of the exit poll, so only press accounts are available. An Associated Press article (not the same as the one mentioned above) quotes an anonymous An-Najah pollster as saying the exit poll interviewed 6,500 voters with a margin of error of 5%. That is possible, though for that sample size, with the same design effect as the PSR poll, the margin of error would be more like 6.5%.
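These figures can be reproduced in a few lines. The sample sizes and the PSR 4% MOE are from the post; backing a design-effect factor out of the PSR report and applying it to the other polls is my own rough reconstruction, not the pollsters' method.

```python
import math

# PSR reported n = 17,573 and a 4% MOE. Back out the implied design-effect
# inflation factor sqrt(deff), then apply it to the other polls' sample
# sizes. Rough reconstruction; the actual designs may differ.

Z = 1.96   # 95% confidence
P = 0.5    # worst-case proportion

def srs_moe(n):
    """Margin of error for a simple random sample of size n."""
    return Z * math.sqrt(P * (1 - P) / n)

deff_factor = 0.04 / srs_moe(17573)   # roughly 5.4

def clustered_moe(n):
    """MOE inflated by the PSR-implied design effect."""
    return deff_factor * srs_moe(n)

print(f"DSP, planned n=15,000: {clustered_moe(15000):.2%}")  # ~4.3%
print(f"DSP, reported n=8,000: {clustered_moe(8000):.2%}")   # ~5.9%
print(f"An-Najah, n=6,500:     {clustered_moe(6500):.2%}")   # ~6.5-6.6%
```

The outputs match the 4.3%, 5.9%, and roughly 6.5% figures discussed above.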

The reason I'm using the PSR design effect is that there isn't much magic available for exit polls. The design effect depends on the number of clusters (polling places) in your sample, the number of voters interviewed in each, and any efficiency you can squeeze out by stratifying on relevant political variables, such as past vote or known regional differences in voting patterns. The PSR account is the most thorough and clear, and from it I can calculate a design effect: the inflation of sampling variance under the clustered design relative to a simple random sample (its square root inflates the margin of error correspondingly). While different designs might differ slightly, I doubt that any decent design would vary much from any other in this case. Hence I use the PSR calculation (which is well documented) and apply it to the DSP and An-Najah polls, where I have less documentation.
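For the curious, the PSR report implies a concrete design effect. The interview and cluster counts are from the post; the decomposition deff = 1 + (m − 1)ρ is the standard cluster-sampling approximation, and the implied intraclass correlation ρ is my own back-of-the-envelope inference (PSR's actual design may also use stratification).

```python
import math

# PSR figures from the post: 17,573 interviews at 242 polling centers,
# reported MOE of 4%. deff = 1 + (m - 1) * rho is the textbook
# cluster-sampling approximation; treat rho as a rough implied value.
n, clusters, reported_moe = 17573, 242, 0.04

srs_moe = 1.96 * math.sqrt(0.25 / n)   # simple-random-sample MOE (~0.74%)
deff = (reported_moe / srs_moe) ** 2   # variance inflation from clustering
m = n / clusters                       # average interviews per center
rho = (deff - 1) / (m - 1)             # implied intraclass correlation

print(f"deff ~ {deff:.0f}, avg cluster size ~ {m:.0f}, implied rho ~ {rho:.2f}")
```

An implied ρ near 0.4 is high, but not implausible where voting is strongly polarized by locality, as it is here.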

Which is a long way of saying that for the graph above, I've used the PSR and An-Najah reported margins of error (4% and 5%, respectively) and my estimate of the MOE for DSP based on the planned 15,000-voter sample with the PSR design effect, or a MOE of 4.3%. I should stress that my estimates here are just that, and I will gladly revise them if I can find more documentation from the pollsters. However, I'd be surprised if any of these polls had margins of error much beyond the 4-6% range, given their sample sizes and similar levels of clustering.

(OK, back to the substance here.)

In the graph above, the striking thing is the underestimate of the Hamas vote. All three estimates are substantially too low. The truth (the vertical red line) lies outside the confidence intervals for PSR and DSP, and just barely touches the extreme high end of the confidence interval for An-Najah. It is fair to conclude that all three polls seriously underestimated the Hamas vote, and that the errors were due to something systematic, not random sampling variation.

So what could be the reason for this? One possibility is that Hamas voters were (a) less willing to talk to the pollsters or (b) less willing to admit to a Hamas vote when they did fill out the exit survey. The first possibility is similar to the problem US exit pollsters had in 2004, where the evidence is that Republican voters were somewhat less willing to respond to the exit survey, producing an underestimate of the Bush vote. In this case, if we imagine Hamas voters as very unhappy with the status quo and with the Palestinian "establishment", it is plausible that this unhappiness might translate into less willingness to cooperate with pollsters from "established" Palestinian institutions. The errors would then be due to systematic non-response, a problem that bedevils all surveys. The second possibility seems less plausible to me. While Hamas has been an "outlaw" faction in Palestinian politics, that does not seem to me a stigma that would lead voters to underreport their support. Hamas did quite well in recent local elections, for example. Its legitimacy would seem well established by that success and by its very willingness to take part in these elections, having boycotted the 1996 legislative elections. I don't claim to be any kind of expert on Palestinian politics or culture, so I may be missing something here, but it does not seem to me that reporting a vote for Hamas would be so socially unacceptable as to produce this large a bias in the estimated Hamas result.

There is a third possible reason for the underestimate of the Hamas vote. Very little historical data on Palestinian elections exists that could be used to create turnout estimates. We saw in another post here that turnout was up substantially from the 2005 presidential election, and Hamas and others boycotted the 1996 legislative elections. So it is also possible that the exit polls were making assumptions about turnout that were not well founded. If turnout in pro-Hamas areas was unexpectedly high, turnout assumptions based on past election data would underrepresent those areas, and this in turn could lead to underestimating the Hamas vote.

Whichever of these three (or other) explanations accounts for it, the fact remains that the problem with these exit polls came primarily in the failure to accurately estimate the strength of Hamas.

If we turn to Fatah, the confidence intervals for all three polls at least touch the actual outcome. An-Najah and DSP overestimate Fatah strength, with the truth falling only at the very low end of their confidence intervals. PSR hits the Fatah vote on the nose, squarely in the middle of its confidence interval. Not great, but not bad estimates under the circumstances. The implication is that Fatah voters were willing to be interviewed, that Hamas supporters might possibly have misreported Fatah votes, and again that the turnout estimates might have inflated Fatah's strength a bit.

But the proof of the pudding is in the estimated gap between Hamas and Fatah. This is the statistic that tells you who is ahead, by how much, and whether the lead is statistically significant. Here two of the polls clearly miss the mark, and An-Najah barely touches the truth (in part thanks to its larger margin of error). All three estimate Fatah leading by 6-7%, when the truth was a Hamas lead of 3.27%. This can again be explained by the factors that drove the Hamas underestimate, combined with any systematic factors inflating the Fatah vote.

The bottom line is that the exit poll errors cannot be explained by random sampling variability. Systematic response errors, faulty turnout estimates, or non-response are the likely culprits. In principle, an exit poll should have been able to detect the Hamas lead. With the sample designs used here, and their associated margins of error, it is unlikely any of them could have concluded that Hamas' lead was statistically significant. But getting the direction right was a possibility.
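A quick sanity check of that last claim: the 3.27-point lead and the per-share MOEs are from the post, while the factor inflating a share MOE into a gap MOE is an assumption on my part (for two negatively correlated leading shares it typically falls between √2 and 2; 1.7 is used here purely for illustration).

```python
# True Hamas lead (3.27 points) and per-share MOEs are from the post.
# The gap between two shares from the same sample has a larger MOE than
# either share alone; the 1.7x inflation factor is an assumed
# illustrative value (the exact factor depends on the shares' covariance).
true_gap = 0.0327
polls = [("PSR", 0.04), ("DSP", 0.043), ("An-Najah", 0.05)]

for name, share_moe in polls:
    gap_moe = 1.7 * share_moe
    significant = true_gap > gap_moe
    print(f"{name}: gap MOE ~ +/-{gap_moe:.1%}, "
          f"3.27-point lead significant? {significant}")
```

Under this rough assumption, every poll's gap MOE exceeds the true lead, so none could have declared Hamas significantly ahead even with a perfectly unbiased sample.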

One refreshing aspect of these exit poll problems is that they do not easily lend themselves to the conspiracy theory interpretations common after the 2004 U.S. presidential election. With ballot counting conducted under the Palestinian Authority, any fraudulent counting would seem more likely to favor Fatah than Hamas. Sometimes the exit polls are just wrong. We should all remember that lesson (and continue to strive to improve the science of exit polls).

About Me

I am co-founder of Pollster.com and founder of PollsAndVotes.com.
I am also a professor of political science at the University of Wisconsin, where I teach statistical analysis of polls, public opinion and election results. Data visualization is central to my approach to analysis.