November 8, 2008

Was the passage of Prop. 8 always a foregone conclusion, despite poll results throughout the summer and early fall showing most likely voters opposed it?

Or were the major polls correct, and the sentiment of California voters actually shifted in the weeks leading up to Election Day, from opposition to support?

Some Prop. 8 supporters maintained throughout the campaign that survey results consistently understated support for their side because many respondents wished not to appear bigoted to a pollster. They cited a “study” they had conducted with polling data from previous state anti-equality campaigns to support their argument.

“There is typically a 7-10 point difference between what people tell the pollster about their views on LGBT rights and how they really vote. In other words, 7-10% say they believe in equality but actually vote against us.”

The existence of a racial Bradley effect — i.e., a pattern in which polls’ accuracy suffers because significant numbers of racist White respondents lie to pollsters, saying they will vote for a Black candidate when they actually won’t — has been widely disputed, and it wasn’t evident in polling this year.

Now that the election results are in, we can compare the actual vote tallies to the last Field poll before Election Day. My own reading of the data is that they reveal no evidence that survey respondents said they would vote No when they actually supported the measure.

The final Field Poll, conducted about one week before the election and released on October 31, indicated that 49% of likely voters opposed the measure. The margin of error was +/- 3.3 points, so the poll’s estimate was that opposition in the population of California voters ranged between 46% and 52% at that time.
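The relationship between a poll’s sample size and its margin of error is simple to sketch. A minimal illustration follows; note that the sample size of roughly 880 likely voters is my own assumption, back-computed from the reported +/- 3.3 points, not a figure taken from the poll itself:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    Illustrative only: real polls also adjust for weighting and
    design effects, which widen the interval somewhat.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical sample of ~880 likely voters reproduces the
# reported +/- 3.3 points:
moe = 100 * margin_of_error(880)      # about 3.3

# The resulting interval around the 49% No estimate:
low, high = 49 - moe, 49 + moe        # roughly 46% to 52%
```

This is why a 48% actual No vote is entirely unremarkable next to a 49% poll estimate.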

The official tally so far (which still excludes some absentee and provisional ballots) puts the actual No vote at 48%, well within the range of the Field Poll’s estimate.

The same Field Poll found that 44% of likely voters supported Prop. 8 a week before the election, and another 7% were undecided. Factoring in the margin of error, the poll’s estimate of the actual population proportions ranged as high as 47% for the Yes side, with anywhere from 4-10% undecided.

These numbers fall short of the final Yes tally, but it isn’t difficult to construct a scenario in which they are consistent with the Prop. 8 win on Election Day.

First, if most of the undecideds ultimately voted Yes (the pattern that apparently occurred with antigay Prop. 22 in 2000), the result would have been close to what happened on Tuesday.

Add to this the fact that the No vote was trending downward in the weeks leading up to the election. Polls by Field and the Public Policy Institute of California (PPIC) estimated that opposition was around 55% in September. But it declined to 52% in an October PPIC poll and 49% in the final Field Poll a week later. To the extent that those declining numbers indicated that “soft” No voters were in the process of switching to the Yes side, it would help to account for the Election Day outcome.
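Combining those two mechanisms, the arithmetic is straightforward. In this sketch, the 52-48 final split and the one-point figure for late switchers are illustrative assumptions, not measured quantities:

```python
yes_poll, no_poll, undecided = 44.0, 49.0, 7.0   # final Field Poll
actual_yes, actual_no = 52.0, 48.0               # approximate tally

# Scenario 1: every undecided voter ultimately breaks Yes,
# as apparently happened with Prop. 22 in 2000.
yes_1, no_1 = yes_poll + undecided, no_poll              # 51 vs. 49

# Scenario 2: the same break, plus one point of "soft" No
# voters switching sides in the final week.
yes_2, no_2 = yes_poll + undecided + 1.0, no_poll - 1.0  # 52 vs. 48
```

Either scenario lands within a point or two of the actual result without anyone having lied to a pollster.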

Finally, turnout in this election was unusually high, especially among groups that historically haven’t voted in large numbers. This created unique challenges for pollsters in identifying likely voters. To the extent that the criteria used by the various survey organizations to estimate turnout in advance of Election Day were inaccurate — especially among key voter groups — their figures would have missed the mark.

In the final Field Poll, for example, African Americans were projected to constitute about 6% of likely voters, and a plurality of about 49% said they supported Prop. 8; another 9% (roughly) were undecided. These figures were derived from interviews with a fairly small number of Black respondents, so the margin of error was substantial (perhaps as much as +/- 12 points) and generalizing from them is risky. If we simply take them at face value, they suggest that Blacks’ contribution to the total vote a week before the election was about 3 points on the Yes side, and slightly less on the No side. Taking the margin of error into account, however, Blacks’ support for Prop. 8 could have ranged as high as 60%. And the undecideds could have subsequently added even more to that total — especially if they were persuaded to vote Yes by appeals from the pulpit on the Sunday before Election Day.
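The subgroup arithmetic can be sketched as follows. The subsample size of ~66 respondents is a guess chosen to be consistent with the roughly +/- 12 point margin mentioned above, not a reported figure:

```python
import math

share, p_yes = 0.06, 0.49   # projected electorate share; Yes support
n_sub = 66                  # hypothetical subsample size

# Contribution to the statewide total, in percentage points:
yes_points = 100 * share * p_yes       # about 2.9 points Yes

# Margin of error for the subgroup estimate on its own:
moe_sub = 1.96 * math.sqrt(p_yes * (1 - p_yes) / n_sub)   # ~0.12

# So support within the group could plausibly have been as high as:
upper = p_yes + moe_sub                # about 0.61, i.e., ~60%
```

The point is that subgroup estimates from a statewide poll carry far wider uncertainty than the topline number.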

Exit polls were consistent with the latter scenario, finding that about 70% of Blacks ultimately voted Yes. Moreover, they constituted 10% of voters — not 6% — making the impact of their support for the measure considerably stronger.

It’s important to remember that exit polls — like any survey based on a sampling of the population — have a margin of error associated with their estimates. And the margin can be large for relatively small groups. In the case of Blacks’ votes, the exit poll’s error is probably +/- 5 to 6 points, and there remains the bigger question of whether the specific precincts that were sampled yielded an accurate reflection of African Americans statewide.
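The same check applies to the exit poll’s subgroup estimate. Assuming, hypothetically, around 220 Black respondents in the exit-poll sample:

```python
import math

p_yes, n_sub = 0.70, 220    # 70% Yes; subsample size is a guess
moe_exit = 1.96 * math.sqrt(p_yes * (1 - p_yes) / n_sub)   # ~0.06

# The 70% estimate could thus plausibly sit anywhere from about
# 64% to 76% -- before even asking whether the sampled precincts
# represented African American voters statewide.
low, high = p_yes - moe_exit, p_yes + moe_exit
```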

Nevertheless, it seems safe to assume that Blacks ultimately provided substantial support for the Yes side — perhaps enough to account for the election outcome. Most likely, there are other groups for whom turnout projections were also incorrect and, in combination with the downward trend in No voters and last-minute decisions by undecideds, these factors can probably account for the disparities between pre-election polling and the actual outcome.

Thus, it’s difficult to conclude that significant numbers of Prop. 8 supporters lied to pollsters and said they were planning to vote No. Perhaps some Yes voters disingenuously told researchers they were undecided, but it’s equally plausible that most undecideds truly didn’t make up their minds until late in the campaign, perhaps not until Election Day.

The proponents’ “study,” however, suffered from several methodological flaws. They ignored the polls’ margin of error: in more than 40% of the polls it cited, the discrepancy between the poll estimate and the actual vote was 5 percentage points or less, which for many statewide polls is within the margin of error.

They noted only undercounts in the anti-equality vote, implying that all discrepancies resulted from voters telling pollsters they supported the right of same-sex couples to marry but then voting against marriage equality. But in many of the polls listed in their spreadsheet, the actual vote against the anti-gay measures was also higher than the polls’ estimates. How can this be? The answer lies with the undecided poll respondents: they had to make a decision in the voting booth, and they tended to favor the winning side — which was the anti-equality side in every case except the 2006 Arizona campaign.

They included polls that were conducted weeks (in some cases, months) before the election. As all pollsters know, surveys generally become more accurate the closer they are completed to voting time, yet the “study” included polls published more than a month before Election Day.

They were selective in the polls they included. For example, for the 2004 Arkansas election on Amendment 3, the “study” used an October Zogby poll, which indicated that 65% of respondents supported the amendment. But an Opinion Research poll released in late October found that 77% of Arkansas voters supported Amendment 3, slightly more than the share that actually voted for it.

In summary, I don’t believe that the findings of the PPIC and Field Polls leading up to the election were wrong. Rather, I suggest we assume that a majority of typical California voters truly were opposed to eliminating the right of same-sex couples to marry throughout the summer, but their numbers began eroding by October. Among actual voters, supporters of Prop. 8 came to outnumber opponents by Election Day, albeit by a surprisingly small margin. (Recall that just 8 years ago Prop. 22 won by more than 20 points.)

Thus, we can use the PPIC and Field Poll data as a tool for better understanding how the various strategies pursued by each side between May and November ultimately affected the outcome of the election.