<h1>Pollster.com: Guest Pollsters</h1>
<h2>Wilson: O'Donnell's Delaware Win About Turnout and Message</h2>
<p>by Guest Pollster</p>
<p><em>David C. Wilson is a professor of Political Science and International Relations, and Psychology, at the University of Delaware. He studies public opinion, polling and survey methods, and political psychology. His research has appeared in the Journal of Applied Psychology, Public Opinion Quarterly, and the Du Bois Review.</em></p>
<p>Christine O'Donnell's win over the long-tenured U.S. Representative Mike Castle, 53% to 47% (a 6-point margin), might have been a shocker to most, but what really happened, and what most observers missed, was that turnout was higher than normal in lower Delaware (Kent and Sussex Counties) and average in upper Delaware (New Castle County). </p>
<p>Polls underestimated these levels for most of the campaign and thus missed the trend. Moreover, the lack of in-state polling provided no clues about the sources and substance of information that mobilized voters. It turns out that lower Delaware's counties, which are traditionally Republican, are losing their liberal and moderate appeal. This suggests that the GOP leadership may not be as in touch with its constituents as it thinks. And questions abound about the existing state GOP leadership's ability to mobilize support given the shock of the O'Donnell win. In sum, the evidence points to a geo-political realignment of the GOP within Delaware.</p>
<p>Castle won New Castle County 58% to 42%, but lost Kent and Sussex counties 64% to 36%. O'Donnell's support in both Kent and Sussex was nearly twice Castle's. It appears that Castle failed to mobilize liberal and moderate Republicans and relied too heavily on the state party for his campaigning. Although Castle was well funded, O'Donnell's last-minute support from outside sources allowed her to communicate her message and get out the vote, and it paid off.</p>
<p>Segue to the polls. The last poll conducted before the election (Public Policy Polling, 9/11-9/12) showed O'Donnell with a 47% to 44% advantage over Castle, with 8% undecided and a margin of error of roughly 4%. So how did O'Donnell beat her estimates? It could be that the 8% of formerly undecided voters broke for O'Donnell over Castle. However, I think the answer is probably turnout.</p>
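<p>For readers unfamiliar with where that roughly 4% margin of error comes from: under simple random sampling it implies a sample of around 600 respondents, which also means O'Donnell's 3-point edge fell inside the poll's uncertainty. A minimal sketch (the sample size of 600 is a back-of-envelope assumption for illustration, not a figure reported by the poll):</p>

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion under simple
    random sampling, at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 600 yields the ~4-point margin the poll reported.
print(round(margin_of_error(600) * 100, 1))   # 4.0
# O'Donnell's 47%-44% edge (3 points) sits inside that margin.
```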
<p>Approximately 57,582 registered Republicans voted in Tuesday's primary. An estimated 27,021 voted for Castle and 30,561 voted for O'Donnell--a difference of 3,540 votes (6% points). Interestingly enough, Castle received far more actual votes in the 2008 general election for Representative than O'Donnell received for Senate that same year, suggesting that Delawareans voted for Castle and Biden (or Castle and not O'Donnell). This splitting of the ticket in 2008 raises questions about how turnout might affect the state's mid-terms, especially across counties. O'Donnell should expect that her win will move some Castle supporters to her Democratic opponent, New Castle County Executive Chris Coons.</p>
<p>I think turnout will be the key in November because some of the popular media arguments about what's going on in the state are somewhat untenable. The September PPP poll found that only 24% of Republicans consider themselves "members of the Tea Party," and a plurality of 47% felt the Republican Party was "about right" in terms of their ideology; 17% felt they were "too conservative." Approximately 42% of Republicans said that a Sarah Palin endorsement would not make a difference in their vote for a candidate, and 24% said it would make them "less likely" to vote for a candidate. Thus, I see no big Tea Party movement in terms of attitudes and beliefs. However, Tea Party funding <em>is </em>related to turnout.</p>
<p>According to the state of Delaware's Elections Commissioner, the 2010 Republican primary produced a 32% turnout rate. On the surface this might seem low; however, the turnouts for past Republican primaries were 16% in 2008, 8% in 2006, 12% in 2004, 14% in 2002, and 16% in 2000. Thus, the 2010 primary doubled Republican turnout.</p>
<p>The PPP polling likely underestimated this higher-than-usual turnout when constructing its likely-voter screen or weighting its final estimates. So what does this mean going forward? It's likely that O'Donnell will continue to run the same type of campaign but receive more outside funding and attention. The interesting part will be how the electorate in Delaware, and the nation, responds to the results. Mid-term turnout percentages in the state usually hover around the mid-to-upper 40s, while in presidential election years turnout is in the mid-to-high 60s.</p>
<p>Coons has been leading in the polls in all head-to-head match-ups against O'Donnell. And in the general election, O'Donnell will have to convince independent voters, moderate Republicans, and Castle supporters that she will represent their interests. This will be an uphill battle given that she has already indicated she feels she can win without "them," referring to the Republican Party organization, and has suggested the GOP might be too lazy to help her. </p>
<p>All of this bodes well for Coons, who will certainly win the Wilmington area and much of the Wilmington suburbs, which make up the largest portion of the state's electorate. But it's tough to gauge Democratic turnout in the state because Coons did not have a primary challenger, and thus we cannot use primary numbers as an indicator of enthusiasm. Traditionally, Republican turnout during the primaries is slightly higher than Democratic turnout, but in 2008 the latter was 12% points higher than the former. O'Donnell's win could actually work to mobilize support for Coons. It will also be interesting to see if Castle's supporters, and perhaps Castle himself, will remain loyal to the party or decide to support Coons because he has governing experience and is not considered an outsider candidate.</p>
<p>According to 2008 exit poll data on that year's Senate race, 75% of Republicans voted for O'Donnell, while about 25% voted for Joe Biden, who was also running for Vice President. Biden won the contest by nearly 30% points, 64% to 35%. More telling, approximately 38% of Democrats voted for Mike Castle over his Democratic challenger, Karen Hartley-Nagel. Half of the individuals who say they voted for Castle in 2008 also voted for Democrat Joe Biden. In fact, 36% of Democrats who voted for Biden also voted for Castle. This all suggests that Castle has good standing among Democrats, which could help Coons, who, according to Public Policy Polling, held a 31% approval rating in early August, with 39% saying they were "unsure" about their approval of him.</p>
<p>What does all of this signal? </p>
<p>First, the media will heavily scrutinize the race and the candidates. O'Donnell is particularly vulnerable because she is a woman (yes, sexism still exists), she has no governing experience, she is not well known or at least revered by the state and national GOP, and there are many questions about her personal and campaign finances, educational background, ethics issues related to non-profit work, past gender discrimination lawsuits, and her personal relationships. O'Donnell does appear to be media savvy, but as things heat up, those skills will be tested. </p>
<p>Second, Coons' single most important priority will need to be turnout. If he can mobilize support among the electorate in New Castle County, especially the suburbs of Wilmington, he will win the election. He should not ignore Kent and Sussex counties either; they hold more opportunities than barriers to his election. His message must be at least two-fold: he can govern, and he will represent Delawareans with pride and uphold the reputation of the state. How he frames and packages those messages will be up to his campaign.</p>
<p>O'Donnell's single most important priority will be to somehow move slightly more to the ideological and political center, and make friends with the state and national party. The September PPP poll showed O'Donnell having strong support only among self-described conservatives. Conservatives make up the largest portion of the Republican Party in DE, but they are heavily outnumbered in the state when moderate Republicans are combined with all Democrats regardless of ideology. </p>
<p>Also, the outside funding by the Tea Party movement may become a problem if Delawareans, who traditionally like to handle their own politics, perceive too much outside influence. O'Donnell must now come up with solid policy proposals that will show she can actually be effective in the male-dominated, seniority-ruled world of the Senate. She also has weak support among seniors, who heavily favored Castle.</p>
<p>Finally, regardless of the outcome, Delaware will elect someone other than Joe Biden for the first time in almost four decades. That's big.</p>
<p><em>Posted September 16, 2010</em></p>
<h2>McGoldrick: What Voters Expect of a GOP Majority</h2>
<p>by Guest Pollster</p>
<p><em>Brent McGoldrick is a Senior Vice President with FD, a communications strategy consulting firm. He leads public affairs research for FD's Washington, D.C. office.</em></p>
<p>In the last week, polling junkies and reporters alike have been delving into a fresh batch of post-Labor Day polls and debating just how big a majority the Republicans will win in the House of Representatives in November.</p>
<p>Last week my company, the communications and strategy consulting firm FD, fielded several questions on a <a href="http://big.assets.huffingtonpost.com/FDNATIONALSURVEY.pdf" target="_hplink">national survey</a> that presupposed Republicans would win majority control of the House. The question we wanted to answer was: "How do Americans feel about that prospect?" Like other polls, our polling finds news to cheer the GOP. But we also find a note of caution about taking a potential takeover as a mandate.</p>
<p>Namely, in our poll, we find that voters generally believe:</p>
<ol>
<li> A GOP majority in the House will improve overall economic conditions;</li>
<li> A GOP House would do a better job than past GOP-controlled Congresses (i.e., the party has learned its lesson);</li>
<li>But, voters want a GOP Congress to work with President Obama and Democrats, as opposed to pursuing their own agenda.</li>
</ol>
<p>Let's take each of these one by one.</p>
<p><strong>1. More voters think economic conditions will improve as a result of a Republican takeover of the House.</strong></p>
<p>Our polling finds that 47% of voters think economic conditions will significantly or somewhat improve as a result of GOP control of the House, while 38% think conditions will significantly or somewhat worsen. Among those "very likely" to vote, 49% say conditions will improve and 39% say conditions will worsen.</p>
<p><strong>2. More voters think a Republican-controlled House will do a better job than past Republican Congresses.</strong></p>
<p>Specifically, our poll finds that 49% of voters say that a Republican-controlled Congress would do a better job than past Republican Congresses, while 36% say it would do a worse job. Among "very likely" voters, a majority (51%) say that a Republican-controlled Congress would do a better job than previous Republican Congresses, while 37% say it would do a worse job.</p>
<p>Interestingly, this finding clearly signals that the GOP has begun to repair its "brand" in less than two years. Additionally, taken together, the similar double-digit margins on these questions suggest to me that the double-digit GOP lead on the generic ballot that we have seen in other polls might not be far off.</p>
<p><strong>3. That said, voters want a Republican Congress to work with President Obama and Democrats.</strong></p>
<p>When asked which approach they would prefer a hypothetical GOP-controlled Congress take, a whopping 71% of voters say they would prefer them to "compromise and work with President Obama to get things done." Only 27% of voters would want Republicans to "pursue their own agenda to get things done."</p>
<p>Among "very likely" voters, 68% want to see the two parties work together, while 27% want the GOP to pursue its own agenda. (I won't know until I field it, but my bet is that if we had asked voters whether a Republican victory in November is a signal to President Obama and Democrats that it is time to compromise, we would see similar numbers.)</p>
<p>Most significantly, even among Republican "very likely" voters, while 50% say they want Republicans to pursue their own agenda, a sizeable 47% say they want Republicans to work with President Obama and Democrats.</p>
<p>So, what do all of these data tell us? By a significant margin, voters appear poised to vote for divided government, with the expectation that it will improve the economy. But they also expect that the two parties will work together to solve economic challenges.</p>
<p>It seems like we hear that message in every election. But I would posit that, in the face of such dire economic conditions, the data show us that the limits of either party's "base" strategy have been reached. The Great Recession has added an "or else" to what seems to be the electorate's biennial plea, and the failure of a party in power (or perceived to be in power) to heed that message carries major electoral risks.</p>
<p><em>Posted September 13, 2010</em></p>
<h2>Berinsky: Poll Shows False Obama Beliefs a Function of Partisanship</h2>
<p>by Guest Pollster</p>
<p><em>Adam J. Berinsky is associate professor of political science at the Massachusetts Institute of Technology and is the author of </em>Silent Voices: Public Opinion and Political Participation in America<em> and </em>In Time of War: Understanding American Public Opinion from World War II to Iraq<em>.</em></p>
<p>In politics, as in life, where you stand depends upon where you sit. Recent polling I have conducted demonstrates that what people believe to be true about the political world is in large part a function of whether they are a Democrat or a Republican.</p>
<p>Last month the Pew Center for the People and the Press conducted a poll which found that <a href="http://pewforum.org/Politics-and-Elections/Growing-Number-of-Americans-Say-Obama-is-a-Muslim.aspx" target="_hplink">almost 20 percent of Americans mistakenly believe that President Obama is a Muslim, and another 43 percent cannot identify his religion</a>. Recently released polls by <a href="http://www.time.com/time/nation/article/0,8599,2011799" target="_hplink">Time </a>and <a href="http://nw-assets.s3.amazonaws.com/pdf/1004-ftop.pdf" target="_hplink">Newsweek</a> confirm the prevalence of this false information.</p>
<p>These findings have sparked a flood of analysis. Some commentators have rightly pointed out that large numbers of Americans believe a number of crazy things. <a href="http://www.slate.com/id/2264539/" target="_hplink">For instance, according to Gallup, 18 percent of Americans believe the sun revolves</a> around the earth. <a href="http://www.brendan-nyhan.com/blog/2010/08/pundits-blame-the-victims-on-obama-muslim-myth-.html" target="_hplink">Others</a> have argued that Republican politicians and conservative media sources have helped perpetuate the myth of Obama's religious identity. Recent polling I have conducted seems to support the latter view. There is a strong political component to misinformation about Obama's beliefs and identity. But politically motivated misinformation is not limited to Republicans. Some Democrats are quite willing to believe false information about Republican politicians. The politics of misinformation, it seems, is not so much a product of direct reactions to Obama as of the polarized nature of the current political times.</p>
<p>At their heart, questions about Obama's religion are critical because they are tied into broader questions about his character and ability to lead. As part of a larger project on the political consequences of misinformation, I measured belief in another controversy that gets to the heart of Obama's identity as an American - whether people believe that he is a citizen of the United States.</p>
<p>I contracted with Polimetrix/YouGov to conduct a national internet survey of 800 Americans from July 8th to July 15th, 2010. I asked, "Do you believe that Barack Obama was born in the United States or not?" Consistent with other polls on the "birther" controversy, I found that 27 percent of respondents said that Obama was not born in the U.S. and another 19 percent did not know whether he was. These findings paint a picture similarly unsettling to the Pew polling - misinformation about Obama's national and religious identity is pervasive.</p>
<p>My results raise a number of important questions. One question is whether some people are simply ignorant about politics - as they are about other aspects of the world (as the Gallup question mentioned above would suggest) - or if instead the uncertainty about Obama's background is politically motivated. </p>
<p>To adjudicate as best I could between these two explanations, I asked a follow-up question of those people who said that Obama was not born in the U.S. or were unsure about where he was born. Specifically, I gave them a multiple-choice question: "Where do you think Obama was born: Indonesia, Kenya, the Philippines, Hawaii, or some other place?"</p>
<p>I picked this multiple-choice question rather than an open-ended question in part because it was easier to ask the question this way, but also to see how the story dominant among "<a href="http://en.wikipedia.org/wiki/Barack_Obama_citizenship" target="_hplink">birthers</a>" (Obama was born in Kenya) fared in relation to other possibilities, including one that could be derived from general ignorance (Hawaii was made a state in 1959; Obama was born in 1961).</p>
<p>The vast majority of these respondents subscribed to the dominant conspiracy story, choosing Kenya as Obama's birthplace. Among the 46 percent of respondents who either said that Obama was not born in the U.S. or were unsure if he was, two thirds said he was born in Kenya. This pattern was especially pronounced among those who said that Obama was not born in the U.S. - almost three-quarters of these respondents said he was born in Kenya.</p>
<center><img src="http://big.assets.huffingtonpost.com/chart.png"></center>
<p>There is some evidence that, since the beginning of the year, the story about Obama's citizenship has become clearer. Earlier in the year, in January 2010, I designed the follow-up question described above for inclusion on a survey conducted by Angus Reid Global Monitoring. In that poll, the distribution of beliefs about Obama's citizenship was roughly similar to what it is now - 25 percent said that he was not born in the U.S. and 20 percent were not sure where he was born. However, the follow-up looked very different - only 41 percent chose Kenya (the dominant "birther" story), while 25 percent chose Hawaii (a clear demonstration of ignorance). Thus, over the last seven months, it seems that the "birther" story has become more pervasive.</p>
<p>Partisan differences in beliefs about Obama's citizenship also indicate that the uncertainty about Obama's background is politically motivated. Though it has been said before, the difference between partisans in their beliefs about Obama's citizenship is striking. As the data show, the vast majority of Democrats say that Obama was born in the U.S. and a plurality of Republicans say that he was not. Similar patterns emerge when beliefs are broken down by approval for Obama; the President's supporters think he is a natural-born citizen and his opponents do not. Put simply, on the question of Obama's citizenship, where you stand depends on where you sit.</p>
<center> <img src="http://big.assets.huffingtonpost.com/berinsky1.png"></center>
<p>This pattern of partisan misperception is striking and carries over to other political rumors. On the July Polimetrix/YouGov survey, I also asked my respondents whether they thought that the changes to the health care system enacted by Congress and the Obama administration create "death panels," and whether John Kerry lied about his actions during the Vietnam War in order to receive medals from the U.S. Navy. </p>
<p>The large partisan gaps found in the acceptance of false beliefs about Obama's citizenship, not surprisingly, extended to rumors about Obama's policies. But they also extended to rumors about other Democratic politicians as well - a majority of Republicans said that Kerry lied to receive medals and a majority of Democrats said that he did not. </p>
<p> <center><img src="http://big.assets.huffingtonpost.com/berinsky2.png"></center></p>
<p>The pervasiveness of politically motivated perceptions of reality is not limited to Republicans. On my survey I also asked respondents if they thought that "people in the federal government either assisted in the 9/11 attacks or took no action to stop the attacks because they wanted the United States to go to war in the Middle East." The overall acceptance of this particular piece of misinformation was lower than the Obama citizenship case - 18 percent thought that government officials were aware of the attack beforehand and another 18 percent were unsure - but the accusation here is certainly more severe. What is important for present purposes is that partisan differences in acceptance of this statement were large, as shown in this graph (which has been placed on the same scale as the birther graph above to facilitate comparisons).</p>
<center> <img src="http://big.assets.huffingtonpost.com/berinsky3.png"></center>
<p>These same differences do not, however, extend to rumors that are not grounded in partisan politics. I also asked respondents a question that has been asked on several surveys in the past, "Do you believe that a spacecraft from another planet crashed in Roswell, New Mexico in 1947?" As the graph below shows, the stark partisan differences found on the other questions do not emerge in the case of beliefs about alien life.</p>
<center> <img src="http://big.assets.huffingtonpost.com/berinsky4.png"></center>
<p>All these results raise the question of what can be done to correct these persistent misperceptions. The answer is difficult, largely because incorrect beliefs about politics are as much a function of partisan perceptions as of genuine ignorance. </p>
<p>Clearly, some people hold false beliefs because they do not pay much attention to the political world. Providing these individuals with greater knowledge of politics might improve the situation. In order to assess the impact of general ignorance, I measured how much my respondents knew about politics by asking them a series of three factual questions about political figures and political processes. </p>
<p>The results here are somewhat heartening. I found that the more of these factual questions the respondents got right, the more likely they were to think that Obama was a citizen. <a href="http://www-personal.umich.edu/~bnyhan/health-care-misinformation.pdf" target="_hplink">Contrary to the findings of some scholars who examined beliefs about rumors concerning death panels</a>, I found that information had the same effect for both Democrats and Republicans. However, the news is not all rosy on this score; information can only get us so far. There were large differences between the beliefs of Democrats and Republicans at all levels of political attentiveness, and even among Republicans who got all three of my factual questions right, 27 percent believed that Obama was not born in the U.S. </p>
<p>So what can be done? In a recently published paper that has received a great deal of <a href="http://www.boston.com/bostonglobe/ideas/articles" target="_hplink">deserved attention</a>, <a href="http://www.springerlink.com/content/064786861r21m257/?p=3da72999788a46bea1d812a8a07e8c8d&pi=0" target="_hplink">Brendan Nyhan and Jason Reifler</a> hold out little hope for the possibility of correcting false beliefs. In fact, they argue that providing misinformed people with the truth can exacerbate the problem, because these people just cling more firmly to their false beliefs. In a project associated with the Polimetrix/YouGov survey, I have begun to explore other possibilities, and I remain hopeful. Still, given the nature of the current political climate, it may be a long road to find a common political reality that everyone can believe in.</p>
<p><em>Posted September 13, 2010</em></p>
<h2>Abramowitz: Registered vs. Likely Voters - How Large a Gap?</h2>
<p>by Guest Pollster</p>
<p>
According to several recent national polls, Democrats may be headed toward their worst showing in a congressional election since World War II. A new NBC/Wall Street Journal Poll has Republicans leading Democrats on the generic House ballot by 9 points among likely voters while a new Washington Post/ABC News Poll has Republicans with an astonishing 13 point lead. The most recent Rasmussen weekly tracking poll has Republicans with a 12 point lead among likely voters.</p>
<p>If these polls prove to be accurate, Republicans could achieve their biggest popular vote margin since the 1920s. In 1946, Republicans won the national popular vote for the House of Representatives by a margin of about 9 points and that was their biggest win in the past 64 years. The Republicans' second biggest popular vote margin was 7 points in 1994.</p>
<p>What would such a popular vote margin mean in terms of seats? In 1946, Republicans won 246 seats in the House--a gain of 56 seats over their previous total of 190. A 12 or 13 point Republican margin would likely produce close to 260 Republican seats--a gain of about 80 seats over their current total of 179. That would be the biggest seat swing in a House election since 1932 when Republicans lost 101 seats. It would dwarf the 1994 shift when Democrats lost 52 seats, their worst showing since 1946.</p>
<p>It is very likely that Republicans will make substantial gains in this year's midterm election. Democrats are defending many seats in Republican-leaning districts that they picked up in 2006 and 2008, Americans are very anxious about the condition of the economy, and President Obama's approval rating has fallen into the low-to-mid 40s in recent weeks. My own forecasting model now has Republicans gaining between 40 and 50 seats in the House. But how realistic are polls that show Republicans winning the national popular vote by a double digit margin-- enough to produce record-setting Democratic losses?</p>
<p>There is one reason to be skeptical about some of these recent poll results--they reflect an enormous gap between the preferences of registered and likely voters. Rasmussen does not release generic ballot results for registered voters, nor do they provide any information about how they identify likely voters. But the recent NBC/Wall Street Journal Poll reported a tie on the generic ballot among registered voters. Likewise, the new Washington Post/ABC News Poll reported only a 2 point Republican advantage among registered voters.</p>
<p>It is not surprising that Republicans would be doing better among likely voters than among all registered voters, especially in a low turnout midterm election. Republicans generally turn out in larger numbers than Democrats because of their social characteristics and this year Republicans appear to be especially motivated to get to the polls to punish President Obama and congressional Democrats. But a double-digit gap between the preferences of registered and likely voters is unusually large.</p>
<p>According to data compiled by the Gallup Poll, in the 13 midterm elections between 1950 and 2006 for which relevant data were available, the average gap between the preferences of registered and likely voters was 5 points. Only once, in 2002, did the gap reach double digits: that year Democrats had a 5 point lead among registered voters, but Republicans led by 6 points among likely voters. The gap in party preference between registered and likely voters did reach 9 points in 1962 and 8 points in both 1974 and 1982, and in every one of these years the preferences of Gallup's likely voters were closer to the actual election margin than the preferences of registered voters. In fact, across all 13 midterm elections, the Democratic margin among likely voters differed from the actual Democratic margin in the national popular vote by an average of only 2.1 percentage points, while the Democratic margin among registered voters differed from the actual margin by an average of 6.5 percentage points.</p>
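<p>Gallup's comparison boils down to a simple error statistic: average the absolute difference between each poll's margin and the actual national margin. A sketch of that calculation, using made-up margins purely for illustration (the real Gallup series is not reproduced here):</p>

```python
def mean_abs_error(poll_margins, actual_margins):
    """Average absolute gap, in percentage points, between polled
    Democratic margins and the actual Democratic vote margins."""
    pairs = list(zip(poll_margins, actual_margins))
    return sum(abs(p - a) for p, a in pairs) / len(pairs)

# Hypothetical three-election illustration (not Gallup's actual data):
actual     = [-5.0, 2.0, -7.0]   # actual Democratic margins
likely     = [-3.5, 1.0, -6.0]   # likely-voter poll margins
registered = [ 1.0, 8.0, -1.0]   # registered-voter poll margins

print(mean_abs_error(likely, actual))      # small error: likely voters track the result
print(mean_abs_error(registered, actual))  # larger error: registered voters miss it
```

On numbers like these, the likely-voter screen comes out far closer to the outcome, which is the pattern Abramowitz reports in Gallup's history.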
<p>These results appear to support two conclusions. First, while a double-digit gap between the preferences of registered and likely voters is unusual, based on the history of Gallup's generic ballot polling, it is not unprecedented. Second, the result of the final Gallup generic ballot among likely voters has been a very good predictor of the national popular vote for the House of Representatives. If that poll finds Republicans with a double-digit margin, Democratic losses in November could be substantially greater than those the party suffered in 1994.</p>
<p><em>Posted September 10, 2010</em></p>
<h2>Bafumi, Erikson, and Wlezien: A Forecast of the 2010 House Election Outcome</h2>
<p>by Guest Pollster</p>
<p><em>Joseph Bafumi is an assistant professor in the government department at Dartmouth College. Robert S. Erikson is a professor in the political science department and faculty fellow at the Institute for Social and Economic Research and Policy at Columbia University. Christopher Wlezien is a professor in the political science department and faculty affiliate in the Institute for Public Affairs at Temple University.</em></p>
<p>How many House seats will the Republicans gain in 2010? To answer this question, we have run 1,000 simulations of the 2010 House elections. The simulations are based on information from past elections going back to 1946. Our methodology replicates that of our ultimately successful forecast of the 2006 midterm. Two weeks before Election Day in 2006, we <a href="http://www.pollster.com/blogs/bafumi_erikson_wlezien_forecas.php">posted</a> a prediction that the Democrats would gain 32 seats and recapture the House majority. The Democrats gained 30 seats in 2006. Our current forecast for 2010 shows that the Republicans are likely to regain the House majority.</p>
<p>Our preliminary 2010 forecast will appear (with other forecasts by political scientists) in the October issue of PS: Political Science &amp; Politics. By our reckoning, the most likely scenario is a Republican majority in the neighborhood of 229 seats versus 206 for the Democrats, a 50-seat loss for the Democrats. Taking into account the uncertainty in our model, the Republicans have a 79% chance of winning the House.</p>
<p>The model has two steps. Step 1 predicts the midterm vote division from only two variables, the generic poll result and the party of the president. With this estimate of the partisan tide in place, step 2 forecasts the winners of 435 House races using separate statistical models for open seats and races with incumbent candidates. At each step, the forecast takes into account uncertainty about the inputs.</p>
<p>First, we simulate 1,000 separate outcomes of the national vote. The pooled generic polls conducted 121 to 180 days in advance of the 2010 election show a very close division of 49.1% Democratic and 50.9% Republican. But a near tie in the polls in mid-summer projects to a significant vote plurality for the Republicans in November, close to a 53%-47% split. This prediction is not due to any bias in the polls, but rather stems from the electorate's tendency in past midterm cycles to gravitate further toward the "out" party over the election year--ultimately gaining about two extra points beyond what summer polls would otherwise show.</p>
<p>The national vote only tells us part of the story, and we still need to determine how it would translate into seats. For each of the 1,000 simulated values of the national vote, we simulate the outcome in 435 congressional districts. Open seats and incumbent seats are treated separately. Open-seat outcomes are estimated based on the simulated national vote swing plus the 2008 presidential vote in that district. Outcomes with the incumbent on the ballot are estimated based on the simulated national swing plus the incumbent's vote margin in 2008 and whether the incumbent is running as a freshman. The weight that these variables are given in predicting the final outcome depends on their explanatory power in past elections. Full details are presented in our forthcoming <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1641945">PS paper</a>.</p>
<p>To sum up, first, we generated 1,000 simulations of the national vote. Then, we applied each of the 1,000 simulated national outcomes to each congressional district, noting the party of the "winner." For each of the 1,000 simulated outcomes of the national vote, we project the partisan division of the 435 congressional districts.</p>
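As an illustration only, the two-step procedure can be sketched in a few lines of Python. Every number below (the out-party drift, the error variances, the district offsets) is a hypothetical stand-in, not the authors' fitted values, which appear in their PS paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SIMS = 1000

# Step 1: simulate the national Democratic vote share 1,000 times.
# 49.1% is the pooled generic-ballot reading; the drift toward the
# "out" party and the forecast error are hypothetical stand-ins.
generic_dem = 0.491
out_party_drift = -0.02          # assumed movement away from the president's party
forecast_sd = 0.02               # assumed root-mean-square forecast error
national_dem = generic_dem + out_party_drift + rng.normal(0, forecast_sd, N_SIMS)

# Step 2: for each simulated national vote, simulate 435 districts.
# The district offsets stand in for district partisanship (2008
# presidential vote, incumbent margin, freshman status, etc.).
district_offset = rng.normal(0, 0.12, 435)   # hypothetical district leanings
district_sd = 0.05                           # hypothetical district-level noise

dem_seats = np.empty(N_SIMS)
for i in range(N_SIMS):
    district_dem = national_dem[i] + district_offset + rng.normal(0, district_sd, 435)
    dem_seats[i] = (district_dem > 0.5).sum()

rep_seats = 435 - dem_seats
rep_majority_prob = (rep_seats >= 218).mean()
print(f"mean Republican seats: {rep_seats.mean():.0f}")
print(f"P(Republican majority): {rep_majority_prob:.0%}")
```

Because the district inputs here are invented, the seat totals this sketch prints will not match the authors' numbers; the point is only the structure: national-vote uncertainty in step 1 propagates into a full distribution of seat outcomes in step 2.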
<p>The figure below displays the range of simulated results. As can be seen from the predominance of red bars, the Republicans win the majority of seats in 79% of the trials. On average, the Republicans win 229 seats, 23 more than the Democrats and 11 more than the 218 needed for a majority. However, the simulations yield considerable variation, with a 95% confidence interval of 176 to 236 Republican seats.</p>
<p><a href="http://www.huffingtonpost.com/theblog/archive/BEWmodel.html" onclick="window.open('http://www.huffingtonpost.com/theblog/archive/BEWmodel.html','popup','width=632,height=459,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://images.huffingtonpost.com/2010-08-26-BEWmodel-thumb.png" width="450" height="326" alt="" /></a></p>
<p>This prediction comes with important caveats. Applying our model to 2010 assumes that the forces at work in 2010 are unchanged from past midterm elections. However, we should be wary of the possibility that the underlying model of the national vote works differently in 2010 or is influenced by variables we have not taken into account. Because the 2010 campaign started to heat up earlier than usual, the usual tilt toward the out party may already be complete, with no further drift to the Republicans. It is also uncertain how voters will react to the tea-party movement as the public face of the Republican Party.</p>
<p>The key will be to follow the generic polls from now to November. If the polls stay close, the Democrats have a decent chance to hold the House. But if the polls follow the past pattern of moving toward the "out" party and move further toward the Republicans--even by a little--the Republicans should be heavily favored.</p>
<p><em>Posted Fri, 27 Aug 2010.</em></p>
<p><strong>Abramowitz: OMG! GOP Up by 7 in Gallup Tracking Poll</strong><br />
by Guest Pollster</p>
<p><i><a href="http://polisci.emory.edu/faculty%20pages/abramowitz.htm">Alan I. Abramowitz</a> is the Alben W. Barkley Professor of Political Science at Emory University in Atlanta, Georgia. He is also a frequent contributor to <a href="http://www.centerforpolitics.org/crystalball/">Larry Sabato's Crystal Ball</a>.</i></p>
<p>If you heard a loud thump on Monday afternoon, it just may have been the sound of worried Democrats hitting the panic button. That's when the latest Gallup weekly tracking poll was released, showing Republicans with their largest lead yet on the generic ballot--7 points. It's the third consecutive week that Republicans have had a significant lead, following a 5-point lead two weeks ago and a 6-point lead last week. And that's among all registered voters, not just those likely to vote in November. Once Gallup begins screening for likely voters, the GOP lead will almost certainly grow, since registered Republicans traditionally turn out at a higher rate than registered Democrats and this year Republicans are more enthusiastic about voting than Democrats.</p>
<p>But do Gallup's latest results actually mean that Republicans are likely to maintain a significant advantage on the generic ballot? Not necessarily. A closer examination of Gallup's weekly generic ballot data indicates that the current GOP advantage is likely to shrink over the next few weeks. In fact almost all of the week-to-week change in the standing of the parties appears to be due to random variation. There is little evidence of any real trend, at least so far. </p>
<p>Over the past 18 weeks, from April 12-18 through August 8-15, Republicans have received an average of 46% of the vote to 45% for Democrats on the generic ballot. There has been considerable week-to-week variation, from a 6-point Democratic lead only four weeks ago to the current 7-point Republican lead, but no clear trend. Over this period, the correlation between the week of the survey and the size of the GOP lead is a very small and statistically insignificant .14. </p>
<p>Figure 1 displays both the week-to-week and the five-week running averages for the Republican margin on the generic ballot between week 5 and week 14 of the Gallup weekly tracking poll. While the weekly average has shown considerable volatility, the five-week running average has been fairly stable, fluctuating between a 2-point Democratic lead and a 2-point Republican lead with no clear trend. </p>
<div style="text-align: center;"><img src="http://big.assets.huffingtonpost.com/generic_0.PNG" width="400" height="380"></div>
<p>The results in Figure 1 suggest that the weekly fluctuations in the generic ballot results are largely random. This conclusion is reinforced by the fairly large negative correlation of -.55 (p < .025) between the size of the GOP lead in a given week and the change in that lead the following week: the larger the GOP margin one week, the more it tends to shrink the next. This pattern, too, points to week-to-week variation that is largely random. </p>
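That negative lead-to-change correlation is exactly what regression to the mean predicts. A quick simulation on purely synthetic weekly readings (not Gallup's data) shows that when the true margin never moves and all week-to-week movement is sampling noise, the correlation is pinned near -1/&radic;2 &asymp; -.71:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated weekly GOP leads: the true margin is fixed at zero, and
# all week-to-week movement is independent sampling noise with a
# 2-point standard deviation (both numbers are illustrative).
n_weeks = 10_000
lead = rng.normal(0, 2.0, n_weeks)

change_next_week = lead[1:] - lead[:-1]
r = np.corrcoef(lead[:-1], change_next_week)[0, 1]
print(f"corr(lead, next-week change) = {r:.2f}")   # close to -1/sqrt(2)
```

The observed -.55 is somewhat weaker than the -.71 that pure noise would produce, consistent with week-to-week movement that is largely, though perhaps not entirely, random.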
<p>Of course, the fact that the current 7-point Republican lead on the generic ballot is likely to shrink doesn't alter the fact that Republicans are poised to make substantial gains in the midterm election. Even a tie on the generic ballot, given normal turnout patterns, is good news for the GOP. So while it may not yet be time for Democrats to hit the panic button, there is plenty of reason for them to be worried. </p>
<p><em>Posted Tue, 17 Aug 2010.</em></p>
<p><strong>Rivers: Random Samples and Research 2000</strong><br />
by Guest Pollster</p>
<p><em><a href="http://www.polimetrix.com/company/team.html#douglas">Douglas Rivers</a> is president and CEO of <a href="http://www.polimetrix.com/">YouGov/Polimetrix</a> and a professor of political science and senior fellow at Stanford University's Hoover Institution. <a href="http://pollster.com/pollster-bio/">Full disclosure</a>: YouGov/Polimetrix is the owner and principal sponsor of Pollster.com.</em></p>
<p>I am, like most in the polling community, shocked by the recent accusations of fraud against Research 2000. <a href="http://www.dailykos.com/story/2010/6/29/880179/-Research-2000:-Problems-in-plain-sight">Marc Grebner, Michael Weissman, and Jonathan Weissman convincingly demonstrate</a> that something is seriously amiss with the research reported by Research 2000, which may well be due to fraud.</p>
<p>But some of the claims by the critics, such as <a href="http://www.fivethirtyeight.com/2010/06/nonrandomness-in-research-2000s.html">Nate Silver's post this morning</a> on FiveThirtyEight.com (as well as part of the Grebner et al. analysis), exhibit a common misunderstanding about survey sampling: "random sampling" does not necessarily mean "simple random sampling." I do not know what Research 2000 did (or claimed to do), but very few surveys actually use simple random sampling.</p>
<p>To recapitulate Nate's argument: if you draw a simple random sample of size 360 from a population of 50% Obama voters and 50% McCain voters, the day to day variation in the Obama vote percentage in the sample should be approximately normal, with mean 50% and standard deviation 2.7%. (Nate gets this by simulating 30,000 polls and rounding the results, but most students in introductory statistics would just calculate the square root of 0.5 x 0.5 / 360, which is about 2.6%.) This would give you the blue line in Nate's first graph, reproduced below.</p>
<p><span class="mt-enclosure mt-enclosure-image" style="display: inline;"><a href="http://www.pollster.com/blogs/obr2k.php" onclick="window.open('http://www.pollster.com/blogs/obr2k.php','popup','width=443,height=334,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.pollster.com/blogs/obr2k-thumb-550x414.png" width="550" height="414" alt="obr2k.png" class="mt-image-center" style="text-align: center; display: block; margin: 0 auto 20px;" /></a></span></p>
<p>However, what happens if the poll is not a simple random sample? Suppose (and this is entirely hypothetical) that you polled off of a registration list composed of 50% Democrats and 50% Republicans (to keep things simple, let's pretend there are no independents). Further, suppose that 90% of the Democrats support Obama and 90% of the Republicans support McCain, so it's still 50/50 for Obama and McCain in the population. Instead of drawing a simple random sample, we draw a "stratified random sample" with 180 Democrats and 180 Republicans each day. That is, we draw a simple random sample of 180 Democrats and a simple random sample of 180 Republicans and combine them. What should the distribution of daily poll results look like?</p>
<p>I should caution that there is a little math in what follows, but nothing hard. The variance (the square of the standard deviation) of each subsample is 0.90 x 0.10 / 180 = 0.0005. The combined sample mean is just the average of these two independent subsamples, so its variance is 0.0005/2 or 0.00025, and the standard deviation is the square root of 0.00025, or approximately 1.6%, not the 2.6% that Nate thought it should be. This distribution is shown in the figure below as a green line, which is a lot closer to the suspicious red line in Nate's graph showing the Research 2000 results.</p>
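The arithmetic is easy to verify, both analytically and by simulating the hypothetical stratified design directly:

```python
import math
import numpy as np

n = 360

# Simple random sample from a 50/50 population:
srs_sd = math.sqrt(0.5 * 0.5 / n)          # about 0.026 (2.6 points)

# Stratified sample: 180 Democrats (90% Obama) plus 180 Republicans
# (10% Obama); the combined mean averages two independent halves.
var_half = 0.9 * 0.1 / 180                 # variance of each subsample
strat_sd = math.sqrt(var_half / 2)         # about 0.016 (1.6 points)

print(f"SRS sd: {srs_sd:.4f}  stratified sd: {strat_sd:.4f}")

# Simulation check of the stratified design:
rng = np.random.default_rng(2)
obama_votes = rng.binomial(180, 0.9, 100_000) + rng.binomial(180, 0.1, 100_000)
polls = obama_votes / 360
print(f"simulated stratified sd: {polls.std():.4f}")
```

The simulated standard deviation lands on the analytic 1.6%, illustrating why day-to-day results from a stratified design cluster more tightly than a simple-random-sample calculation would suggest.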
<p><span class="mt-enclosure mt-enclosure-image" style="display: inline;"><a href="http://www.pollster.com/blogs/riversgraphic1.php" onclick="window.open('http://www.pollster.com/blogs/riversgraphic1.php','popup','width=692,height=693,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.pollster.com/blogs/riversgraphic-thumb-550x550.png" width="550" height="550" alt="riversgraphic.png" class="mt-image-center" style="text-align: center; display: block; margin: 0 auto 20px;" /></a></span></p>
<p>Does this absolve Research 2000 of fraud? Of course not. There are other factors (such as weighting) that usually increase the variability, so Nate is right that the Research 2000 results look suspicious. But we should be a little more cautious before convicting on the basis of this sort of evidence.</p>
<p><em>Posted in Sampling Issues, Wed, 30 Jun 2010.</em></p>
<p><strong>Murray: Are Nate Silver's Pollster Ratings 'Done Right'?</strong><br />
by Guest Pollster</p>
<p><em>Patrick Murray is director of the Monmouth University Polling Institute.</em></p>
<p>The motto of Nate Silver's website, <a href="http://www.fivethirtyeight.com">www.fivethirtyeight.com</a>, is "Politics Done Right." Questions have been raised about whether his latest round of <a href="http://www.fivethirtyeight.com/2010/06/pollster-ratings-v40-results.html">pollster ratings</a> lives up to that claim.</p>
<p>After Mark Blumenthal noted errors and omissions in the data used to arrive at <a href=http://www.pollster.com/blogs/transparency_and_pollster_rati.php>Research 2000's rating</a>, I asked to examine Monmouth University's poll data. I found a number of errors in the 17 poll entries he attributes to us - including six polls that were actually conducted by another pollster before our partnership with the Gannett New Jersey newspapers started, one eligible poll that was omitted, one incorrect candidate margin, and even two incorrect election results that affected the error scores of four polls. <i>[Nate emailed that he will correct these errors in his update later this summer.]</i></p>
<p>In the case of prolific pollsters, like Research 2000, these errors may not have a major impact on the ratings. But just one or two database errors could significantly affect the ratings of pollsters with relatively limited track records - such as the 157 (out of 262) organizations with fewer than 5 polls to their credit. Some observers have called on Nate to <a href="http://politicalwire.com/archives/2010/06/09/wheres_the_transparency_in_pollster_ratings.html">demonstrate transparency</a> in his own methods by releasing that database. Nate has refused to do this (with a somewhat dubious justification), but at least he now has a process for pollsters to <a href="http://www.fivethirtyeight.com/2010/06/fivethirtyeight-is-pleased-to-let.html">verify their own data</a>.</p>
<p>Basic errors in the database are certainly a problem, but the issue that has really generated buzz in the polling community is his new "transparency bonus." This is based on the premise that pollsters who were members of the <a href="http://www.ncpp.org">National Council on Public Polls</a> (NCPP) or had committed to the <a href="http://aapor.org/AAPOR_Transparency_Supporters.htm">American Association for Public Opinion Research (AAPOR) Transparency Initiative</a> as of June 1, 2010, exhibit superior polling performance. These pollsters are awarded a very sizable "transparency bonus" in the latest ratings.</p>
<p>Others have remarked on the apparent <a href=http://www.pollster.com/blogs/yost_borick_the_silver_standar.php>arbitrariness of this "transparency bonus" cutoff date</a>. Many, if not most, pollsters who signed onto the initiative by June 1, 2010 were either involved in the planning or attended the AAPOR national conference in May. A general call to support the initiative did not go out until June 7.</p>
<p>Nate claims that, regardless of how a pollster made it onto the list, these pollsters are simply better at election forecasting, and he provides the results of a <a href=http://1.bp.blogspot.com/_5ieXw28ZUpg/TAvyOjEZqPI/AAAAAAAABtA/lYhbqsQoVYg/s1600/rawscore.png>regression analysis</a> as evidence. The problem is that the transparency variable falls short of the conventional threshold for statistical significance (p<.05). In fact, of the three variables in his equation - transparent, partisan, and Internet polls - only partisan polling shows a significant relationship. Yet his Pollster Introduced Error (PIE) calculation rewards "transparent" polls and penalizes Internet polls, but leaves partisan polls untouched. Moreover, his model explains only 3% of the total variance in pollster raw scores (i.e., polling error).</p>
<p>I decided to run some ANOVA tests of the effect of the transparency variable on pollster raw scores, both for the full list of pollsters and for sub-groups at various levels of polling output (e.g., pollsters with more than 10 polls, pollsters with only 1 or 2 polls, etc.). The F values for these tests range from only 1.2 to 3.6, and none is significant at p<.05. In other words, there may be more that separates pollsters <i>within</i> the two groups (transparent versus non-transparent) than there is <i>between</i> the two groups.</p>
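An ANOVA of this kind is easy to reproduce. The sketch below computes the one-way F statistic from scratch on entirely synthetic error scores (Nate's database is not public), with group sizes echoing the 27 "transparent" and 235 other pollsters; it illustrates how a real gap in group means can still yield a small F when within-group spread is large:

```python
import numpy as np

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic, computed from scratch."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(3)
# Entirely synthetic raw error scores: a modest gap in group means
# swamped by large within-group spread (all four numbers are
# illustrative stand-ins, not values from Nate's database).
transparent = rng.normal(-0.6, 4.0, 27)
other = rng.normal(0.7, 4.0, 235)

F = one_way_anova_f(transparent, other)
print(f"F = {F:.2f}")
```

With only two groups, this F is the square of the pooled two-sample t statistic, so the test comes down to whether the gap in group means is large relative to the within-group noise.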
<p>I also ran a simple means analysis. The average error among all pollsters is +.54 (positive error is bad, negative is good). Among "transparent" pollsters, the average score is -.63 (se=.23), while among other pollsters it is +.68 (se=.28). A potential difference, to be sure.</p>
<p>I then isolated the more prolific pollsters - the 63 organizations with at least 10 polls. Among this group, the 19 "transparent" pollsters have an average error score of -.32 (se=.23) and the other 44 pollsters average +.03 (se=.17). The difference is now less stark.</p>
<p>On the flip side, organizations with fewer than 10 polls to their credit have an average error score of -1.38 (se=.73) if they are "transparent" - all 8 of them - and a mean of +.83 (se=.28) if they are not. That's a much larger difference. Could it be that the real contributing factor to pollster performance is the number of polls conducted over time?</p>
<p>Consider that 70% of "transparent" pollsters on Nate's list have 10 or more polls to their credit, but only 19% of the "non-transparent" organizations are equally prolific. In effect, "non-transparent" pollsters are penalized for being grouped with a large number of organizations that have only a handful of polls to their name - i.e., pollsters who are prone to greater error.</p>
<p>To assess the tangible effect of the transparency bonus (or non-transparency penalty) on pollster ratings, I re-ran Nate's PIE calculation using a level playing field for all 262 pollsters on the list to rank order them. [I set the group mean error to +.50, which is approximately the mean error among all pollsters.] Comparing the relative pollster ranking between his and my lists produced some intriguing results. The vast majority of pollster ranks (175) did not change by more than 10 spots on the table. On its face, this first finding raises questions about the meaningfulness of the transparency bonus.</p>
<p>Another 67 pollsters moved between 11 to 40 ranks between the two lists, 11 shifted by 41 to 100 spots, and 9 pollsters gained more than 100 spots in the rankings, solely due to the transparency bonus. Of this last group, only 2 of the 9 had more than 15 polls recorded in the database. This raises the question of whether these pollsters are being judged on their own merits or riding others' coattails, as it were.</p>
<p>Nate says that the main purpose of his project is not to rate pollsters' past performance but to determine probable accuracy going forward. The complexity of his approach boggles the mind - his <a href=http://www.fivethirtyeight.com/2010/06/pollster-ratings-v40-methodology.html>methodology statement</a> contains about 4,800 words including 18 footnotes. It's all a bit dazzling, but in reality it seems like he's making three left turns to go right.</p>
<p>Other poll aggregators use less elaborate methods - including straightforward means - and have been just as, or even more, accurate with their election models (see <a href=http://www.pollster.com/blogs/bowers_vs_538_vs_pollster.php>here</a> and <a href=http://election.princeton.edu/2008/11/11/post-election-evaluation-part-2>here</a>). I wonder if, with the addition of this transparency score, Nate has taken one left turn too many.</p>
<p><em>Posted in Pollsters, Fri, 18 Jun 2010.</em></p>
<p><strong>Yost &amp; Borick: The Silver Standard</strong><br />
by Guest Pollster</p>
<p><em>This guest pollster contribution comes from <a href="http://www.fandm.edu/x7172">Berwood Yost</a>, director of the Floyd Institute for Public Policy at Franklin and Marshall College, and <a href="http://www.muhlenberg.edu/main/aboutus/polling/staff/borick.html">Christopher Borick</a>, director of the Muhlenberg College Polling Institute.</em></p>
<p>Nate Silver's compilation of performance data for election polling in the United States and his <a href="http://www.fivethirtyeight.com/2010/06/pollster-ratings-v40-results.html">ratings</a> of polling organizations should be applauded for increasing the public's ability to judge the accuracy of the ever-increasing number of pre-election polls. Helping the public determine the relative effectiveness of polls in predicting election outcomes can be compared to Consumer Reports equipping individuals with information about which products meet minimum standards for quality. As with the work of Consumer Reports, Mr. Silver is explicit in his <a href="http://www.fivethirtyeight.com/2010/06/pollster-ratings-v40-methodology.html">methodology</a> and provides substantial justification for the assumptions he adopts in his calculations. But as is the case in the construction of any measure, there are some reasonable questions that can be raised about what was included in those calculations. One such question has to do with the "affiliation bonus."</p>
<p>Silver's decision to include an "affiliation bonus" for pollsters that are either in the NCPP or have joined AAPOR's Transparency Initiative has significant consequences for his final ratings. Table 1 provides two pollster-introduced error (PIE) estimates for a sub-group of academic polling organizations, one that uses the calculation for all telephone pollsters and the other that uses the calculation for those pollsters who receive the "affiliation bonus." We chose this group because all of the organizations, regardless of their affiliation with NCPP or the AAPOR Transparency initiative, consistently release full descriptions of their methodology and provide detailed breakdowns of their results. The scores highlighted in yellow are those reported for each pollster on Silver's site. As Table 1 shows, the rankings are substantially different depending on whether a firm receives the "affiliation bonus."</p>
<p>[<em>Editor's note: Chris Borick informs us that Muhlenberg College has signed on to the AAPOR Transparency Initiative, but did so after June 1, so they were not classified as a participant in Silver's ratings. Berwood Yost tells us that Franklin and Marshall intends to sign on, but has not done so yet</em>].</p>
<p><span class="mt-enclosure mt-enclosure-image" style="display: inline;"><a href="http://www.pollster.com/blogs/2010-06-14-borick-Yost-538scores.php" onclick="window.open('http://www.pollster.com/blogs/2010-06-14-borick-Yost-538scores.php','popup','width=695,height=289,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.pollster.com/blogs/2010-06-14-borick-Yost-538scores-thumb-550x228.png" width="550" height="228" alt="2010-06-14-borick-Yost-538scores.png" class="mt-image-center" style="text-align: center; display: block; margin: 0 auto 20px;" /></a></span></p>
<p>As part of his rating methods, Mr. Silver discounts the "raw scores" for polls despite noting that those scores are the most "direct measure of a pollster's performance." His primary justification is that his project aims "not to evaluate how accurate a pollster has been in the past--but rather, to anticipate how accurate it will be going forward" (taken from Silver's methodological discussion). Those who read his rankings should take care to understand the distinction Silver is making between past performance and expected future performance. We are not sure why scores based on past performance are inferior to PIE, and he does not make a sufficiently strong case for the very heavy discount he applies to those scores in his calculations. It would be valuable to see more evidence about what makes PIE a better indicator of polling performance. The "affiliation bonus" may indeed be correlated with the performance of polls, but is it actually the affiliations that lead to better performance, or is some other unmeasured variable at work? Silver's calculations show that the "affiliation bonus" explains only three percent of the variance in his regression equation and has a p value greater than .05. One may ask whether that is sufficient evidence to grant such a strong advantage to some pollsters.</p>
<p>In closing, we would once again like to applaud Mr. Silver for taking on the important task of applying solid methods to the evaluation of pollster accuracy. The public needs such efforts in order to more effectively sift through the avalanche of polls that greets them every election season. Our intention is simply to note that the scores produced by Silver should be evaluated in terms of both their strengths and limitations.</p>
<p><em>Posted in Pollsters, Tue, 15 Jun 2010.</em></p>
<p><strong>Lundry: Twitter as Pollster</strong><br />
by Guest Pollster</p>
<p><em>Alex Lundry is Vice President and Director of Research for <a href="http://www.targetpointconsulting.com/">TargetPoint Consulting</a>, a conservative political polling, microtargeting, and knowledge management firm. You can connect with <a href="http://www.twitter.com/alexlundry">him on Twitter</a>, where he expresses his opinions with great clarity so as to avoid confounding CMU's sentiment analysis.</em></p>
<p>Researchers at Carnegie Mellon have shown that unstructured text data pulled from Twitter can in some instances be used as a reliable substitute for opinion polling (<a href="http://bit.ly/cNI6wh">link to study PDF</a>). The results are impressive, and though pollsters needn't start looking for another line of work, I think they ignore this study at their peril. </p>
<p>Using very simple tweet selection mechanisms along with measures of the tweet's sentiment ("Obama's awesome" = approve, "Obama sucks" = disapprove), these researchers were able to:</p>
<ul>
<li>extract an alternate measure of consumer confidence that was very highly correlated (r=73.1%) with the standard poll-derived confidence metric, </li>
<li>use this Twitter-derived measure of consumer confidence to accurately forecast the results of the consumer confidence poll, and</li>
<li>measure President Obama's job approval rating and correlate it with Gallup's daily tracker at a level of r=72.5%. </li>
</ul>
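A minimal sketch of this kind of single-keyword selection plus word-list sentiment scoring follows. The word lists and tweets here are tiny stand-ins of my own; the study used a full subjectivity lexicon and far more data:

```python
# Tiny stand-in word lists; the study used a full subjectivity lexicon.
POSITIVE = {"awesome", "great", "good", "love"}
NEGATIVE = {"sucks", "bad", "terrible", "hate"}

def sentiment_ratio(tweets, topic):
    """Positive-to-negative word-count ratio among tweets mentioning topic."""
    pos = neg = 0
    for tweet in tweets:
        words = tweet.lower().split()
        if topic not in words:
            continue                 # single-keyword selection, as in the study
        pos += sum(w in POSITIVE for w in words)
        neg += sum(w in NEGATIVE for w in words)
    return pos / neg if neg else float("inf")

tweets = [
    "obama is awesome",
    "obama sucks",
    "the economy is terrible",
    "i love obama",
]
print(sentiment_ratio(tweets, "obama"))   # 2 positive words vs 1 negative -> 2.0
```

Tracked day by day, a ratio like this is the raw series that the researchers smoothed and correlated against the poll-based consumer confidence and job approval numbers.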
<p>However, the same methodology failed miserably when it came to the 2008 presidential horse race, obtaining a correlation of only r=-8% with Obama's level of support in the Gallup tracker. </p>
<p>It seems, then, that aggregate Twitter sentiment shows great promise as a polling substitute for high-volume and relatively binary opinions and attitudes: are you hot or cold on the economy, do you like or dislike the President? But the multinomial nature of items like a campaign horse race or the health care debate makes it difficult to extract meaningful opinions amid a crush of unstructured data. </p>
<p>Yet this is no reason for pollsters to shrug away these results. There is great predictive power hidden away inside this sort of latent data just waiting for the extraction of opinions, attitudes and trends in voter sentiment. Pollsters would be wise to begin incorporating these data into their work: analyzing <a href="http://www.google.org/flutrends/">Google Trends</a> search data, counting <a href="http://www.politico.com/pdf/PPM130_social_media_and_the_2010_us_senate_elections_draft.pdf">Facebook friends</a>, YouTube views and web traffic, or simply doing more with the <a href="http://www.wordle.net/">rich verbatim data</a> we typically capture in our surveys and focus groups. (And it's not just politics where this is applicable; tweet volume and sentiment have also been shown to be an <a href="http://www.fastcompany.com/1604125/twitter-predicts-box-office-sales-better-than-anything-else">incredibly accurate predictor</a> of a movie's box office returns). </p>
<p>This study also highlights a debate the polling community must have sooner or later: can the shortcomings of dirty data be overcome by a mix of sheer volume, sound data preparation/manipulation and savvy analysis? In this new era of IVR, online panels, social media and <a href="http://www.economist.com/specialreports/displaystory.cfm?story_id=15557443">big data</a>, the answer is increasingly pointing to yes - especially when you consider the advantages of speed, cost and access that these non-traditional data collection methods enjoy. </p>
<p>Finally, it's worth taking a moment to consider just how stunningly impressive these results are. What level of precision might there have been with a more sophisticated methodology? Tweets were selected for study based merely upon the presence of a single word - imagine the accuracy if selection allowed for the use of synonyms, alternate spellings or <a href="http://en.wikipedia.org/wiki/Boolean_logic">Boolean</a> operators. Moreover, as the researchers themselves point out, there were no geographical restrictions and no consideration of either online idioms or the practice of retweeting. </p>
<p>This is an exciting, important study, and the polling community should be taking it very seriously. It is well worth your time to <a href="http://bit.ly/cNI6wh">read the whole thing</a>, and I'm very curious to hear your take on it in the comments section below. </p>
<p><em>Posted in Polls in the News, Thu, 13 May 2010.</em></p>