Category: Voting and elections

In empirical research in political science and public policy, we often need estimates of the political positions of governments (cabinets) and the salience of different issues for those governments. Data on policy positions and issue salience is available, but typically at the level of political parties. One prominent source of data for issue salience and positions is the Manifesto Corpus, a database of the electoral manifestos of political parties. To ease the aggregation of government positions and salience from party-level Manifesto data, I developed a set of functions in R that accomplish just that, combining the Manifesto data with data on the duration and composition of governments from ParlGov.
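The core aggregation step can be sketched in a few lines. This is an illustrative sketch in Python, not the R functions described above; the function name and the seat-weighting rule are my assumptions about the simplest sensible aggregation (a cabinet's position as the seat-weighted mean of its member parties' positions).

```python
# Hypothetical sketch of aggregating party-level Manifesto-style positions to
# the cabinet level, weighting each coalition party by its parliamentary seats.
# Names and the weighting rule are assumptions, not the author's R code.

def cabinet_position(parties):
    """parties: list of (position, seats) pairs for the parties in one cabinet.

    Returns the seat-weighted mean position of the cabinet.
    """
    total_seats = sum(seats for _, seats in parties)
    return sum(pos * seats for pos, seats in parties) / total_seats

# Example: a two-party coalition on a left-right scale
coalition = [(-1.0, 200), (0.5, 100)]  # (position, seats)
print(cabinet_position(coalition))  # -0.5
```

The same weighted-mean logic applies to issue salience scores; the real functions additionally have to match parties across the Manifesto and ParlGov datasets and handle cabinets that span multiple elections.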

The results from the British elections last week already claimed the heads of three party leaders. But together with Labour, the Liberal Democrats and UKIP, there was another group that lost big time in the elections: pollsters and electoral prognosticators. Not only were polls and predictions way off the mark in terms of the actual vote shares and seats won by the different parties; crucially, their central expectation of a hung parliament did not materialize, as the Conservatives cruised to a small but comfortable majority of the seats. Even more remarkably, all polls and predictions were wrong, and they were all wrong in pretty much the same way. Not pretty.

This calls for reflection upon the exploding number of electoral forecasting models that sprang up during the build-up to the 2015 national elections in the UK. Many of these models were offered by political scientists and promoted by academic institutions (for example, here, here, and here). At some point, it became passé to be a major political science institution in the country and not have an electoral forecast. The field became so crowded that the elections were branded as ‘a nerd feast’ and the competition of predictions as ‘the battle of the nerds’. The feast is over and everyone lost. It is the time of the scavengers.

The massive failure of British polls and predictions has already led to a frenzy of often vicious attacks on the pollsters and prognosticators coming from politicians, journalists and pundits, in the UK and beyond. A formal inquiry has been launched. The unmistakable smell of schadenfreude is hanging in the air. Most disturbingly, some respected political scientists have voiced a hope that the failure puts a stop to the game of predicting voting results altogether and dismissed electoral predictions as unscientific.

This is wrong. Political scientists should continue to build predictive models of elections. This work has scientific merit and it has public value. Moreover, political scientists have a mission to participate in the game of electoral forecasting. Their mission is to emphasize the large uncertainties surrounding all kinds of electoral predictions. They should not be in the game in order to win, but to correct others’ all-too-eager attempts to mislead the public with predictions offered with a false sense of precision and certainty.

The rising number of electoral forecasts done by political scientists has more than a little bit to do with a certain jealousy of Nate Silver – the American forecaster who gained international fame and recognition with his successful predictions of the US presidential elections. (By the way, this time round, Nate Silver got it just as wrong as the others.) For once, there was something sexy about political science work, but the irony was that political scientists were not part of it. And if Nate, who is not a professional political scientist, can do it, so can we – academic experts with life-long experience in the study of voting and elections and hard-earned mastery of sophisticated statistical techniques. So academia was drawn into this forecasting thing.

And that’s fine. Political scientists should be in the business of electoral forecasting because this business is important and because it is here to stay. News outlets have an insatiable appetite for election stories as voting day draws near, and the release of polls and forecasts provides a good excuse to indulge in punditry and sometimes even meaningful discussion. So predictions will continue to be offered and if political scientists move away somebody else will take their place. And the newcomers cannot be trusted to have the public interest at heart.

Election forecasts are important because they feed into the electoral campaign and into the strategic calculations of political parties and of individual voters. Voting is rarely an act of naïve expression of political preferences. Especially in an electoral system that is highly non-proportional, like the one in the UK, voters and parties have a strong incentive to behave strategically in view of the information that polls and forecasts provide. (By the way, ironically, the one prognosis that political scientists got relatively right – the exit poll – is the one that probably matters the least, as it only spares us the impatience of waiting a few more hours for the official electoral results.)

Hence, political scientists as servants of the public interest have a mission to offer impartial and professional electoral forecasts based on state-of-the-art methodology and deep substantive knowledge. They must also discuss, correct and, when appropriate, trash the forecasts offered by others.

And they have one major point to make – all predictions have a much larger degree of uncertainty than what prognosticators want (us) to believe. It is a simple point that experience has proven right time and again. But it is one that still needs to be pounded over and over, as pollsters, forecasters and the media easily get carried away.

It is in this sense that commentators are right: predictions, if not properly bracketed by valid estimates of uncertainty, are unscientific and pure charlatanry. And it is in this sense that most forecasts offered by political scientists at the latest British elections were a failure. They did not properly gauge the uncertainty of their estimates and as a result misled the public. That they didn’t predict the result is less damaging than the fact they pretended they could.

Since the bulk of the data doing the heavy-lifting in most electoral predictive models is poll data, the failure of prediction can be traced to a failure of polling. But pollsters cannot be blamed for the fact that prognosticators did not adjust the uncertainty estimates of their predictions. The tight sampling margins of error reported by pollsters might be appropriate to characterize the uncertainty of polling estimates (under certain assumptions) of public preferences at a point in time, but they are invariably too low when it comes to making predictions from these estimates. Predictions have other important sources of uncertainty in addition to sampling error and by not taking these into account prognosticators are fooling themselves and others. Another point forecasters should have known: combining different polls reduces sampling margins of error, but if all polls are biased (as they proved to be in the British case), the predictions could still be seriously off the mark.
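The last point can be made concrete with a minimal simulation (all numbers hypothetical): averaging many polls shrinks the sampling margin of error around the poll average, but a bias shared by all the polls passes through the average untouched.

```python
# Hypothetical illustration: 20 polls of 1,000 respondents each, where every
# pollster underestimates a party's true share by the same 3 points. Averaging
# the polls gives a tight estimate -- of the wrong quantity.
import random

random.seed(1)
true_share = 0.37   # true vote share of a party (assumed)
bias = -0.03        # shared bias: all polls understate the share by 3 points
n_polls, n_respondents = 20, 1000

polls = []
for _ in range(n_polls):
    # each poll is binomial sampling around the biased share
    hits = sum(random.random() < true_share + bias for _ in range(n_respondents))
    polls.append(hits / n_respondents)

poll_average = sum(polls) / n_polls
# The sampling error of the average over 20,000 respondents is tiny (~0.3
# points), yet the average still sits roughly 3 points below the truth.
print(round(poll_average, 3), round(true_share - poll_average, 3))
```

Reporting only the sampling margin of error of the poll average would thus grossly understate the real uncertainty of any prediction built on these polls.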

Offering predictions with wide margins of uncertainty is not sexy. Correcting others for the illusory precision of their forecasts is tedious and risks being viewed as pedantic. But this is the role political scientists need to play in the game of electoral forecasting, and being tedious, pedantic and decidedly unsexy is the price they have to pay.

Political Data Yearbook Interactive is a new source for data on election results, turnout and government composition for all EU and some non-European countries. It is basically an online version of the yearbooks that the ECPR has printed as part of the European Journal of Political Research for many years now.

The interactive online tool has some (limited) visualization options and can export data in several formats.

“The populist radical right constitutes the most successful party family in postwar Western Europe.” (Cas Mudde, Stein Rokkan Lecture published in the latest issue of the European Journal of Political Research)

I hope this is a typo or some other type of unintentional misunderstanding. How can the populist radical right be the most successful party family when they have never gotten more than 16% of the votes outside Austria and Switzerland (according to Table 1 in the same lecture)?

Mudde, Cas. “Three Decades of Populist Radical Right Parties in Western Europe: So What?” European Journal of Political Research 52, no. 1 (2013): 1–19.

Abstract: The populist radical right constitutes the most successful party family in postwar Western Europe. Many accounts in both academia and the media warn of the growing influence of populist radical right parties (PRRPs), the so-called ‘verrechtsing’ (or right turn) of European politics, but few provide empirical evidence of it. This lecture provides a first comprehensive analysis of the alleged effects of the populist radical right on the people, parties, policies and polities of Western Europe. The conclusions are sobering. The effects are largely limited to the broader immigration issue, and even here PRRPs should be seen as catalysts rather than initiators, who are neither a necessary nor a sufficient condition for the introduction of stricter immigration policies. The lecture ends by providing various explanations for the limited impact of PRRPs, but it is also argued that populist parties are not destined for success in opposition and failure in government. In fact, there are at least three reasons why PRRPs might increase their impact in the near future: the tabloidisation of political discourse; the aftermath of the economic crisis; and the learning curve of PRRPs. Even in the unlikely event that PRRPs will become major players in West European politics, it is unlikely that this will lead to a fundamental transformation of the political system. PRRPs are not a normal pathology of European democracy, unrelated to its basic values, but a pathological normalcy, which strives for the radicalisation of mainstream values.

This is a guest post by Markus Haverland, Professor at Erasmus University Rotterdam and author of a recent book on research methods.
***

Causal knowledge about the world proceeds by testing hypotheses. The context of discovery precedes the context of justification. We all know that journalists and pundits often do it the other way around: providing an explanation after the fact.

A particularly hilarious example can be found in today’s issue of “Spits”, a Dutch daily newspaper. Anticipating that the result of the US presidential election would arrive after the newspaper went to press, it prepared for both outcomes by turning the back page into a second front page. Depending on the result, the reader is advised to read either the front page or the back page. On both pages the well-known Dutch journalist Charles Groenhuijsen, a former correspondent in Washington, analyses the results. On the “Obama wins” page he explains that it was evident that Obama would win, because he is a better campaigner and Romney’s economic program is inconsistent. On the “Romney wins” page he explains this outcome by stating that, ultimately, the US is a conservative country; that voters were afraid of a turn to the left, of laws against gun possession, and of tolerance towards gay marriage; and that voters thought Obama was not dealing effectively with the economic crisis.

Over the last year, two major Hollywood movies that touch upon the use of big data and sophisticated data analysis hit the big screen. Which, of course, is two more than the mean (or was that the median). Moneyball shows how crunching numbers helps win baseball games and Margin Call shows how crunching numbers helps ruin financial firms. It’s kind of fun to see Brad Pitt and Kevin Spacey stare at spreadsheets and nod approvingly while having some statistical subtleties explained to them. But watching someone stare at somebody else’s spreadsheets quickly becomes tiresome … which probably explains why Regressing with the Stars, Dotchart Master, and America’s Next Multilevel Model haven’t yet taken over reality TV.

So I was really disappointed to see that a third 2011 movie – The Ides of March – misses a golden opportunity to show the use of big data and sophisticated analysis for winning elections. The movie revolves around the primary presidential campaign of George Clooney (pardon, Governor Mike Morris) and the dirty politics behind the scenes. But for Hollywood in 2011, electioneering is still a game of horse-trading, media spinning and good-ol’ stabs in the back. All these things about election campaigns are probably true, but I was disappointed that there were no fancy graphs plotting approval ratings and prediction market quotes, no real-time election forecasts (or nowcasts) at which George Clooney to stare and nod approvingly, no GIS-supported campaign targeting, not even focus groups, twits, facebook pages, not to speak of google circles. Now, I have never been involved in an election campaign but I would have guessed that some of what political scientists are doing to analyze election outcomes and the effects of various elements of election campaigns has filtered through to campaign managers. But according to The Ides of March, electioneering is still stuck in the 1990-s. Someone get Hollywood a subscription to Political Analysis.