We got Gemini when I was 14, and she was my family's cat for 18 years. She had a nice and very simple life, enjoying going out to the garden and sitting on the front step sunning herself, as well as the occasional plate of tuna. In cat years, she was 88 years old, so it was hardly a surprise when she fell ill in November of last year and there was nothing the vets could do. But that does not mean she is not very missed. In fact, I was back at the family home just this weekend, and it seemed very empty without her. No matter how old they are, losing a pet (especially the pet you grew up with) is sad.

And then we discovered something brilliant. Actually credit for this goes to my brother. He was browsing Google Street View and, as you do, started to look at addresses he knew. And then he noticed something.

As those of you who follow me on Twitter will know, I have largely been avoiding all social media for the past few months, as I have been on a research-focused sabbatical since September. This has provided a fantastic period for me to reflect and start new research projects, but now I am back, feeling refreshed and ready for what promises to be one of the most exciting years in British politics in recent decades.

One of my new year’s resolutions was to blog a bit more, as I think it is a great way to get early versions of research work or just odd ideas down on (virtual) paper. In that spirit, I wanted to reflect briefly on a few things that strike me about the upcoming election. No particular order, but five points of interest that could become more prominent in the coming months:

The focus of the political class has never been so divided. Yesterday, it seemed that both major political parties were running with two versions of the same election slogan, roughly “You have four more months to save…” For the Conservatives, the conclusion of the sentence was the economy. For Labour, it was the NHS. Within a few hours, predictably, Nigel Farage popped up saying that the election was really about immigration. There is a very clear battle to set the agenda for the election, and the result might well play a defining role in who wins it.

This election will be about segmentation, both geographical and social. My guess is that the total number of votes won by the various parties will play only a limited role in generating the outcome of the election. What might be far more important is how efficiently parties are able to concentrate their votes in the particular places where they need them to win seats. Which leads me to the next important point…

Opinion polling is going to change. It was great to see Chris Hanretty’s excellent election prediction work featured on Newsnight yesterday evening. The decline of uniform national swing has opened the door for a whole host of new prediction techniques – most famously espoused by Nate Silver in the US – that draw on more complex statistical models and broader datasets. The coverage these methods are getting really demonstrates that the Gallupian paradigm of public opinion research (purportedly, seeking to sample the voice of the nation) is under attack. Instead, there is a growing interest in sub-samples and specific groups deemed to be of importance to the outcome. Prediction is also increasingly probabilistic in nature.

That said, the mass is not quite dead. Labour has claimed that it will base its campaign around talking to people and will employ social media to mobilise activists. This is certainly a good approach for a party that lacks the financial resources of its rival. However, Labour risks neglecting the important lesson of Obama’s use of activism in the US. His success lay not just in mobilising activists, but in building links between his keenest supporters and the apolitical mainstream. This worked at two levels. The obvious tangible example was in fundraising, where Obama harnessed his support-base’s willingness to give as a mechanism to compete for mainstream voters. He also effectively mobilised his activists to do direct contact campaigning. But additionally, and as importantly, he built symbolic links between what his activists felt about the campaign and what mainstream America felt about politics. Of course, we can question the extent to which Obama actually “did this”, as opposed to it being created by broader political, economic and social patterns. But it was vital.

Pre-election will matter post-election. The Liberal Democrats might argue that the public haven’t been entirely fair to them this parliament. After all, no modern Westminster politicians have any real experience of coalition government, which inevitably involves compromises. And, given the number of 2010 Liberal Democrat voters likely to move to Labour in 2015, the supreme irony is that Ed Miliband’s chances of ending up in Downing Street are only really still standing because of Nick Clegg’s veto of boundary changes. Yet the Liberal Democrats were astonishingly naïve. During the 2010 election, they emphasised policies (notably tuition fees) which it very soon became clear were not their top priorities in coalition negotiations. One lesson from the 2010–15 parliament is that parties will need to think a lot more carefully about their post-election game plan, and how this links to what they say during the campaign.

As I said, just some sketchy thoughts. But I am going to try to blog more in the coming months.

I will also be posting some provisional findings from my sabbatical work in the next few weeks, a big data analysis of 37 million words published by UK think tanks in the past decade.

Another day, another poll appears for the Scottish referendum. This time it is an ICM poll with the Guardian, and – among those who have made their mind up – it puts No at 51 per cent and Yes at 49 per cent. 17 per cent of those polled remain undecided. The past week of the campaign has been notable for the huge role played by polls in driving both the news and the political agenda. Indeed, the devo-max offer from the three major parties at the beginning of the week largely seemed to occur because a YouGov poll put the Yes camp in the lead for the first time.

I have recently been writing on the idea of public opinion, and one really interesting thing that comes across from the literature is the tension between polling as a science and an art. As Susan Herbst details in her book Numbered Voices, an account of the early days of the modern opinion polling industry in the United States, one of the great rhetorical innovations by George Gallup and his contemporaries was arguing that public opinion could be measured in a scientific way, certainly in comparison with older methods such as straw polls. But the truth is that, no matter how rigorous the method, opinion polling has always required a healthy dose of creative thinking and skilful judgement.

This truth is especially evident in the case of the referendum, as there are so many factors which might have an impact on the final result. Going into the last few days of the campaign, I would list five unknowns that mean we should take all polls, no matter how well constructed, with a big pinch of salt:

What does “don’t know” actually mean? Journalist Dan Hodges was quick to tweet after the release of today’s poll that “don’t know” was a euphemism for “no”. The theory here seems to be quite close to the shy Tory factor or Bradley effect, namely that people have already made up their minds on voting no, but are not willing to admit it publicly. This may or may not be true, but it certainly seems that – even if people are genuinely undecided – they might make their minds up in a predictable way, which would seem to make it more likely they would support the status quo.

Modelling the electorate is fiendishly difficult. General elections are relatively easy to model, as pollsters have a wealth of data on previous contests. When an interviewee says they are “quite likely” to vote, for example, then how that is understood is based on a range of pre-existing polling and turnout data. But the referendum is a unique case. Partially, this is because it is asking an unprecedented question. In addition, allowing 16 and 17 year olds the vote adds a completely new cohort of would-be voters. But perhaps most importantly, the 97 per cent registration rate reported earlier this week is completely unprecedented. This means that many more members of the public will be eligible to go to the polling booths on the day of the vote than has ever previously been the case (whether they do or not is a very different matter. See point 4 below).

What impact could postal voting have? Sky News is already reporting stories about postal voters who are “regretting their choice”. Nearly 800,000 people will vote by post. This has a couple of practical ramifications. The first is that postal voters, obviously, will be immune from events in the last days of the campaign. The second is that postal voting of this kind presents a methodological issue for pollsters. Polls are a snapshot of public opinion at the time they are taken, so we would assume that the polls immediately before the referendum will offer the most accurate predictions. However, when nearly 20 per cent of the electorate have already cast their votes, even those final polls might not accurately reflect them.

Who will be able to get their vote out? A lot has been made of the Yes campaign's grassroots mobilisation, as distinct from the more traditional top-down approach of Better Together. These characterisations are probably a bit glib, but there is no doubt that, while Better Together can fall back on the organisational muscle of the Labour Party, the Yes campaign has linked itself with a broad range of civic and political groups. Given the very high registration rate among voters and the seeming closeness of the race, effective get out the vote efforts from either side might carry the day.

How do we understand Labour supporters moving into the Yes camp? Slightly less of a polling issue this one, but one really interesting element of the referendum campaign thus far has been the number of Labour supporters who are moving into the Yes camp (as Peter Kellner notes in his commentary on the recent YouGov poll that put Yes in the lead). There are interesting parallels here with Rob Ford and Matthew Goodwin’s important revisionist work on UKIP, where they argue that a significant part of UKIP electoral support is coming from former and natural Labour voters. Ford and Goodwin’s argument is that this UKIP-supporting group feels estranged from the modern Labour Party and Westminster politics, and is deeply economically insecure – many of the same characteristics that are driving the movement towards the Yes camp in Scotland.

Of course, it may be that, come next Friday, the pollsters have got it dead right. It may also be that a last minute swing to one side makes these variables of academic interest only. But, in the meantime, spare a thought for the pollsters grappling with one of the most difficult challenges they have ever faced.

Last week, I was lucky enough to go to Seattle to present a paper at the International Communication Association annual conference. This was my first ICA, and I enjoyed the experience greatly. I featured on a wonderful panel entitled Really Useful Analytics and the Good Life with my colleague Nick Couldry, as well as Helen Kennedy and Giles Moss from Leeds University, and Caroline Bassett from Sussex University.

The slides from my presentation are available here, but in this blog entry, I just wanted to outline the core shape of my argument, which will hopefully provide a framework for future work.

The first thing to say is that this paper was rather different to the work I have previously done in this area. With Ben O'Loughlin, I have written a lot about what we have termed semantic polling (Anstead and O'Loughlin, Forthcoming). In these pieces, we worked to both understand and theorize about new research techniques that harvest vast amounts of data from social media (normally Twitter) to understand how the public are reacting to specific events or politicians. In those earlier papers, Ben and I tried to think about different understandings of public opinion – outside the dominant opinion polling paradigm established in the 1930s – and thought about how they problematized the arguments related to semantic polling.

The datasets used by semantic pollsters are certainly big, maybe running into many millions of tweets. However, for the paper at ICA, I wanted to draw a distinction between big data (defined simply through the size of the dataset or number of data points being worked with) and Big Data. The latter is distinct because it employs a fundamentally different epistemological framework to traditional social science research methods. This argument is clearly put in a couple of places. Most famously (or infamously, depending on perspective) is Chris Anderson’s claim that theory is now irrelevant.

“Out with every theory of human behaviour, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves” (Anderson, 2008).

More recently Mayer-Schonberger and Cukier have argued that:

“The era of big data challenges the way we live and interact with the world. Most strikingly, society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what” (Mayer-Schonberger and Cukier, 2013).

Such arguments have proved to be very divisive for obvious reasons (Couldry, 2013), yet their ramifications are certainly worth considering. Clearly government, political parties and other civic organisations have a great interest in big data and what it can tell them about the public. At the same time, traditional methods for understanding public opinion are, for various reasons that I detail below, struggling or at least evolving rapidly. So the question is: do we need a new theory of public opinion to cope with these developments?

As noted by Herbert Blumer as far back as the 1940s (Blumer, 1948), public opinion research has always been rather averse to theory, instead focusing its energies on practical methodological issues. However, one rather useful, historically grounded theoretical framework has been outlined by the American academic Susan Herbst. Employing the idea of what she terms infrastructures of public opinion, Herbst argues two things: first, that the definition of public opinion varies across time and place; and second, that the definition actually has three components. These are shown, with historical examples, in Table 1 below (derived from Herbst and Beniger, 1994, Herbst, 2001).

Table 1: Two previous examples of public opinion infrastructures, derived from the work of Susan Herbst

An infrastructure of public opinion therefore consists of a method for measuring public opinion; an understanding of politics which shapes that public and how it is conceived; and forums in which public opinion is discussed. This tripartite model has taken quite distinctive forms in different historical periods and geographies, as the comparison between pre-revolutionary France and the mid-twentieth century United States in the table indicates.

Before discussing how we might fit the development of Big Data research into this model, it is also worth noting something about more traditional techniques and understandings of public opinion. In many ways, the mid-twentieth century US paradigm described above persists, at least in the way we talk about public opinion. However, there are a number of reasons to suggest that this infrastructure of public opinion is in decline. These include:

The growing role for qualitative research. While opinion polling still plays a huge role in the development of political strategy, recent decades have seen growing prominence for qualitative researchers. While most researchers would claim that both techniques have to be combined for a rich understanding of public opinion, it is interesting to note that the most famous political researchers in the UK in recent decades have tended to be more associated with qualitative research than with polling, while the focus group has taken on a hugely important symbolic significance in contemporary politics (Gould, 2011, Mattinson, 2010, Schier, 2000).

Declining response rates to telephone surveys. This is a much considered problem, especially for American pollsters. It is now not uncommon to get response rates in the single digits, which is undermining traditional methodological approaches to public opinion research (Groves, 2011).

The development of internet panel surveys. The development of new online methods has challenged traditional telephone and face-to-face methods, and changed the marketplace for public opinion research (AAPOR, 2009).

The use of more complex statistical modelling techniques. Partially as a result of lower response rates and partially because of internet panel surveys, it can now be argued that pollsters have moved from sampling the population to modelling it. In short, the poorer quality of the raw data going in (be this because of the inherent biases of online panel polls or lower response rates for telephone samples) means that more statistical jiggery-pokery is required to create representative numbers (Groves, 2011).

The rise of alternative metrics and predictors of public opinion. Opinion pollsters no longer have the field to themselves. Most famously, Nate Silver employs Bayesian predictive modelling to predict US elections, while new social media research techniques have claimed to reflect public opinion (Anstead and O'Loughlin, Forthcoming, Silver, 2012).

If we want to bind many of these trends together in an over-arching narrative, it perhaps relates to the decline of mass society. Traditional opinion polling, certainly as conceived by George Gallup and his contemporaries (and characterised by Herbst), was focused on understanding the political nation as a singular entity. However, as the political nation has become more complex and differentiated, this model has started to look a lot less applicable. Therefore, as we start to sketch out an infrastructure of public opinion in which Big Data is becoming more influential, it is also important to hold in mind that this is not a wholly revolutionary development but is also continuous with other, older changes in the measurement and use of public opinion.

So what might a Big Data infrastructure of public opinion look like? One thing to note is that it is not really clear yet – we are still in the very early stages of the use of Big Data. What follows therefore is a slightly speculative attempt to start to answer this question.

Perhaps the easiest place to start is with an epistemology of public opinion and Big Data. I made a few points in my presentation. These are perhaps the most important:

As outlined above, Big Data approaches are correlative, meaning they are more interested in “what” questions than “why” questions.

Opinion polling is technically probabilistic in nature (hence the focus on margin of error). However, probability becomes far more important with Big Datasets, especially when the aim of the activity is prediction. As such, the very nature of the output analysis that is presented to politicians and the public might be different (Silver, 2012).

Big Data is integrative. In particular, Big Data techniques often seek to use multiple datasets – both structured and unstructured – from a variety of sources. This represents a dramatic shift in the kind of information that can be processed and used to construct public opinion (Mayer-Schonberger and Cukier, 2013).

Another important consequence of Big Datasets is that they can be more effectively sub-divided. Recent years have seen a rise in the so-called super-poll (in the UK, this technique is most famously used by Lord Ashcroft) where a sample of 25,000 is taken. The reason for this is that sub-samples can be more easily extracted from the dataset, without greatly increasing the margin of error. This would not work with a traditional 1,000 person poll. Big Data is also immune from this problem, and can very easily be organised in a way that allows for specific groups to be studied.
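The arithmetic behind the super-poll is worth spelling out. As a rough sketch (assuming simple random sampling, which real polls only approximate, and the conventional 95 per cent confidence level), the margin of error shrinks with the square root of the sample size, which is why sub-samples drawn from a 25,000-person poll remain usable while sub-samples of a 1,000-person poll do not:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a simple random sample at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5, an evenly split question)
print(margin_of_error(0.5, 1_000))   # roughly +/- 3.1 percentage points
print(margin_of_error(0.5, 25_000))  # roughly +/- 0.6 percentage points
print(margin_of_error(0.5, 2_500))   # a 10% sub-sample of the super-poll: still ~+/- 2 points
```

A comparable 10 per cent sub-sample of a traditional 1,000-person poll would contain only 100 respondents, with a margin of error near ten points, which is why sub-group analysis of small polls is so unreliable.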

What though of ontology? What idea of the public might be embedded in Big Data?

One optimistic reading of this turn of events is that measurement of public opinion will become more conversational, rather than being simply about atomised individual opinion. This may even have the consequence of decentralising power as the tools for measuring public opinion become more accessible.

More pessimistically, big data techniques may alienate citizens even more from public opinion collection by harvesting unconscious expressed preferences, drawing on what has been termed “data exhaust”.

So this raises a question: how would this model work with classic liberal democratic ideas? If citizens are engaging in democracy but don't know they are, what does this mean? Are they really citizens anymore? Certainly, the liberal idea of participation as an educative moment, one which embeds the individual more deeply in the political system, would no longer make sense.

Finally, in what forums might public opinion be discussed in a Big Data infrastructure of public opinion?

Big data is already starting to bleed over into mainstream political journalism (as Ben and I have detailed in our work), but is still something of a novelty. As yet, it is not as respected as more traditional public opinion research methods.

However, it is questionable how much big data analysis citizens will get access to, and how transparent its construction will be. This is especially true if we are talking about data held in the private sector, such as social networks or health companies.

So this suggests a potentially interesting double standard: the public might be given access to more frivolous analysis (what Big Data says about a reality TV show, for example), while important information is held by government and corporations (how Big Data is used to influence healthcare policy, for example).

But it is important not to suggest that government is a singular entity. Some parts of government are clearly interested in big data, but it is not clear what legitimacy various policy actors attribute to it (for example, whether the civil service, executive, MPs or local councils have a great interest in it). What interest will MPs have in Big Data, for example? Will they take it more seriously than half a dozen letters from constituents?

These are really just some provisional ideas which I hope to fashion into something more substantial in the next few months. But any comments or questions are very welcome indeed!

I rarely do this, but it seemed worth preparing a brief methodological description of the data in my second Clegg versus Farage blog entry, which has just been published. I do this for two reasons. First, because it is good to be transparent when it comes to these kinds of data. Second, because I am starting to work with some of these new techniques that allow for the analysis of bigger textual datasets. The Farage-Clegg dataset is reasonably small (approximately 30,000 words). In theory, however, the same techniques could be scaled quite effectively to much larger datasets running into millions of words. So watch this space!

Constructing the dataset

For the PSA blog post, I was interested in examining how post-debate coverage presented the performance of the two men. In order to do this, I first used Lexis Nexis to gather a sample of all British newspaper articles that made major mentions of “Clegg”, “Farage” and “debate” between 27th March 2014 (the day after the first debate) and 4th April 2014 (two days after the second debate). You can find the results of this search in this document. In total, it includes approximately 480 articles, or 178,000 words.

Generating the tag cloud

In an attempt to have a first look at the data, I entered it into the tag cloud generation site TagCrowd. I then cleaned the output, removing, for example, any words that had been artificially created by Lexis Nexis.

This generates quite an aesthetically pleasing tag cloud, but its usefulness for this kind of exercise is actually quite limited. Why is this? Two reasons, really. First, the tag cloud chews through the whole document. It tells us how often words appear in the coverage, but tells us little about how those words relate to each other. Second, the impression given by the tag cloud can be quite artificial. One obvious point to make: the size of a word reflects not just the number of appearances it makes, but also the length of the word.
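For what it is worth, the computation underlying a tag cloud is just a case-insensitive word-frequency count, which can be sketched in a few lines (the stop-word list below is an illustrative stub, not the one TagCrowd actually uses):

```python
import re
from collections import Counter

# Illustrative stop-word list only; real tools ship much longer ones
STOP_WORDS = {"the", "and", "a", "of", "to", "in", "that", "it", "was", "he"}

def word_frequencies(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count word occurrences, ignoring case and a small stop-word list."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = "Clegg and Farage debated Europe. Farage attacked; Clegg defended Europe."
print(word_frequencies(sample, 3))  # [('clegg', 2), ('farage', 2), ('europe', 2)]
```

Note that the output carries no information about which words co-occurred in the same sentence, which is exactly the limitation described above.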

Cleaning the dataset to generate Clegg and Farage specific sentences

In particular, I wanted to examine what qualities were attributed to the two men’s performance by the media after the debates. So I now turned to two text analysis tools, QDA Miner and WordStat. In the first instance, I used QDA Miner to search for any sentences in the corpus that featured Clegg or Farage, and auto-coded them accordingly. I then exported these to WordStat, where I could analyse the make-up of the two datasets and, most interestingly, compare them.

It should be noted that some sentences may feature twice in the dataset, as they could have featured both Clegg's and Farage's names. There is an option to exclude these double references, but since this was just a quick and dirty analysis, I let them appear twice.
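This auto-coding step can be approximated outside QDA Miner with a few lines of Python (the naive full-stop sentence splitter here is an illustrative stand-in for the software's own segmentation):

```python
import re

def code_sentences(text: str, names: list[str]) -> dict[str, list[str]]:
    """Bucket each sentence under every name it mentions (case-insensitive)."""
    # Naive splitter: break after ., ! or ? followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    coded: dict[str, list[str]] = {name: [] for name in names}
    for sentence in sentences:
        for name in names:
            if name.lower() in sentence.lower():
                # A sentence naming both men lands in both buckets,
                # mirroring the double-coding described above
                coded[name].append(sentence)
    return coded

text = "Mr Clegg spoke first. Mr Farage replied. Clegg and Farage then clashed."
buckets = code_sentences(text, ["Clegg", "Farage"])
print(len(buckets["Clegg"]), len(buckets["Farage"]))  # 2 2
```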

I used WordStat to pull out the 250 most frequently used words, measured as a percentage of all words in each dataset. The tables below show the calculations used to rank the words. The most distinctive Farage word is “PUTIN”: it accounted for 0.30 per cent of the words in the Farage dataset but only 0.18 per cent of the words in the Clegg dataset, a difference of 0.12 percentage points.
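The ranking calculation itself is straightforward and can be sketched as follows (the "putin" figures come from the text above; the other frequencies are invented purely for illustration):

```python
def distinctive_words(freq_a: dict[str, float],
                      freq_b: dict[str, float]) -> list[tuple[str, float]]:
    """Rank words by how many percentage points more frequent they are in A than B."""
    words = set(freq_a) | set(freq_b)
    diffs = {w: freq_a.get(w, 0.0) - freq_b.get(w, 0.0) for w in words}
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)

# Per cent of all words in each dataset; only "putin" taken from the post
farage_pct = {"putin": 0.30, "immigration": 0.25, "europe": 0.20}
clegg_pct = {"putin": 0.18, "immigration": 0.20, "europe": 0.40}
print(distinctive_words(farage_pct, clegg_pct))  # "putin" ranks first at ~0.12
```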

A couple of weeks ago, I blogged on the insight blog about the incentives for both the Liberal Democrats and the United Kingdom Independence Party to take part in the two-way Nick Clegg vs. Nigel Farage debates. Broadly, the argument was that – as the two parties were not really competing for the same base of voters – the debate would ultimately serve the interests of both parties.

It is too early to say yet whether that prediction is true. Post-broadcast polls following both debates suggested that, in the eyes of the audience at least, Farage had won a comfortable victory over Clegg (see here for public reaction to the first debate, and here for reaction to the second debate). Broadly, we should not be surprised that Clegg came off worse in these snap polls. As he himself knows from his 2010 debate experience, novelty is a powerful weapon, at least in the short term.

However, despite the heavy coverage they received in the immediate aftermath, the televised debates still seem to have had little impact on the polls. Most surveys conducted after the debates, whether asking about potential general election voting intention or European parliament preferences, showed no real discernible change in the support levels for the two parties (the one exception being a Sunday Times / YouGov poll which had the Liberal Democrats down two points, and UKIP up by five).

However, and as importantly for how politics is going to play out in the future, the debates did make very evident some of the rhetorical dividing lines that currently exist in British politics. In order to better understand this, I conducted a very quick study of post-debate media coverage.* Broadly, the sample I worked with was newspaper coverage of the debate published between 27th March 2014 (the day after the first debate) and 4th April 2014 (two days after the second debate). You can find the results of this search in this document. In total, it includes approximately 480 articles, containing some 178,000 words.

A very simple way of visualising this is a tag cloud. With this method, the size of a word equates with how frequently it occurs in the text. A tag cloud of all the newspaper article text is shown below. This diagram tells us a few things about post-debate discussion. Certainly, questions of identity were heavily emphasised (English, Britain, British, Europe, European etc.). Additionally, what political scientists Frank Esser and Paul D’Angelo term meta-coverage seems to be at the forefront of media reporting. This is when political reporters focus on who has won or lost, and the political strategies that have led to these outcomes (so in this tag cloud, words such as per cent and polls might represent meta-coverage. References to the other absent party leaders might also fit into this category). While they do feature, words like immigration and jobs are surprisingly small.

However, it should be noted that, as a method to understand large bodies of text, tag clouds have a number of important limitations. The first issue is a simple presentational point. The size of words reflects not only the frequency of their use, but also the length of the word (so in the above example, the size of Debate and Mr reflects a similar number of uses, even though the former is far more prominent). Additionally, tag clouds only tell us how often words were used in a piece of text, but fail to tell us much about the relationships that exist between words.

Table 1: Most distinctive words in Clegg and Farage focused sentences

In order to overcome this difficulty, I deployed a second method using the text analysis software package QDA Miner / WordStat. First, I extracted every sentence in the dataset that referenced Farage or Clegg. I then had two datasets which I could compare. There are a number of things that can be done with this kind of data, but a simple and quick way of examining it is to look for the largest discrepancies between them, i.e. words that appear a lot more in one dataset than in the other. I did this for both politicians. The findings are presented in Table 1.

This offers us a few insights that are not available from the tag cloud. In particular, it suggests that there were two quite different debates going on, with Clegg and Farage talking across each other. Ironically, considering he was facing the leader of the United Kingdom Independence Party, it is Clegg who is most frequently referenced in conjunction with the EU, European, membership, and Britain. In other words, the Deputy Prime Minister’s performance in the debate does seem to have been reported through the prism of Europe. In contrast, there are two distinct strands to the coverage of Farage’s performance: first, a focus on his comments about Russia, Ukraine and Vladimir Putin (Putin, admire, president and Russia); and second, more populist political issues (immigration, white and working).

This divergence is interesting. Recent research, notably Robert Ford and Matthew Goodwin’s Revolt on the Right, argues that the success of UKIP has very little to do with popular feeling about the European Union, and much more to do with economic insecurity and a broader alienation from the political class. Therefore to make UKIP all about Europe – and also to try to argue against them on those terms – is never going to work. In this context even the attacks on Farage about his alleged support for Putin will likely have little impact, with voters interpreting them as being either highly abstract, or an attempt to smear the party by a combination of established politicians and the mainstream media.

It should be noted there are huge limitations to the “quick and dirty” method I have employed here. The dataset does not examine what the candidates actually said, but instead only media coverage of the debates. Sadly, there is no full transcript of the debates available at the moment. Furthermore, the analysis excludes social media commentary (although the think tank Demos had an excellent go at doing some of this kind of analysis on the debate night itself). The method I have used is also relatively crude, and could be improved by either more rigorous quantitative significance testing or more qualitative human engagement with the raw data.

Nonetheless, the results do point towards something interesting. Arguably the reason that Clegg lost both the debates was not because the British public disagree with him on Europe. In fact, polling evidence would suggest a majority of them more closely identify with his position than with UKIP’s (even if they do regard Europe as being a relatively insignificant issue). Rather, Clegg seems to have lost the debates because he was perceived to be the representative of the political class against Farage’s plucky everyman. Breaking this dynamic is the real challenge for mainstream politicians.
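For anyone curious, the "quick and dirty" co-occurrence approach described above can be sketched in a few lines of Python. To be clear, this is only an illustration of the general technique, not my actual analysis code: the sample articles, the window size and the tokenisation are all invented for the example.

```python
from collections import Counter
import re

def cooccurrence_counts(articles, leader, window=10):
    """Count words appearing within `window` tokens of a leader's name."""
    counts = Counter()
    for text in articles:
        # Crude tokenisation: lowercase alphabetic runs (plus apostrophes).
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == leader:
                lo = max(0, i - window)
                # Count every neighbour in the window, excluding the name itself.
                for neighbour in tokens[lo:i] + tokens[i + 1:i + 1 + window]:
                    if neighbour != leader:
                        counts[neighbour] += 1
    return counts

# Two invented snippets standing in for the real press-coverage dataset.
articles = [
    "Clegg defended EU membership while Farage attacked immigration.",
    "Farage praised Putin; Clegg warned about leaving the EU.",
]
print(cooccurrence_counts(articles, "clegg").most_common(3))
```

A real version would want stop-word removal and some significance testing, as noted above, but the core idea is just this windowed counting.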

---

*So as not to disrupt the flow of this blog post, I have produced a separate and much more detailed methodological discussion of my analysis on my own blog here. You will also find a more complete explanation of the Clegg- and Farage-specific word lists and how they were generated in this post.

The run-up to the European Parliamentary elections on 22nd May will see two live debates between leader of the Liberal Democrats Nick Clegg and leader of the United Kingdom Independence Party Nigel Farage. The first debate will be broadcast on LBC Radio on 26th March, with the follow-up contest appearing on BBC television on 2nd April. The original idea for the debates came when Clegg challenged Farage to a joint appearance in February. After a few days' consideration, UKIP accepted the proposal, with the parties ultimately agreeing on the two-debate format.

Why was the proposal made and accepted? By way of explanation, a few general points should first be made. First – and fairly obviously – politicians only ever agree to take part in live debates when they feel they have something to gain from them. However, the cost-benefit calculation is complicated by the high stakes at play in live debating. Simply put, when politicians put themselves in this situation a lot more can go wrong than can go right. It was for this reason that American political scientist Alan Schroeder called his history of American Presidential debates Fifty Years of High Risk Television.

In practice, the politician with the greatest incentive to debate is likely to be trailing in the polls. After all, they stand to benefit from shaking the contest up with a good performance and also have little to lose in the event of a bad performance. However, since a debate requires at least two participants, the poll-leader is likely to face exactly the opposite equation (i.e. since they are already winning they have little to gain from a good performance, while a bad performance could really undermine their chances). As such, they are likely to veto any debate proposals. This is one of the reasons why it took so long for the United Kingdom to have pre-election Prime Ministerial debates. While numerous invitations were offered over the years by the parties playing catch-up, the idea was always nixed by the party that was leading in the polls, and thus had less to gain. Similarly, while vast quantities of ink have been spilt creating the mythology of the 1960 American Presidential debates, it is worth noting that there was not a repeat performance until the Carter-Ford contest of 1976, precisely because the 1960 debate became so linked to Nixon’s defeat.

The Clegg-Farage agreement to debate reflects this basic logic, at least to some extent. This is most obvious in the case of Nick Clegg. Ever since the early months of the coalition government, the Liberal Democrats' poll ratings have struggled, while UKIP’s rise has regularly placed the Liberal Democrats in fourth position, trailing the anti-European Union party. This is shown in Figure One, which is based on ICM polling data from the start of the 2010 election until the present (the raw data for the graph is available from The Guardian. Note that the ICM dataset does not actually include the polling share for UKIP, but only the three major parties and “others”. However, the vast bulk of this group indicates support for UKIP).
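The treatment of the ICM data described above amounts to a simple proxy calculation. The sketch below uses entirely invented numbers, purely to show the logic of reading "others" as a stand-in for UKIP support; the real figures are in The Guardian's ICM dataset.

```python
# Hypothetical ICM-style poll rows: shares (%) for the three major parties
# plus "others", which the post treats as a rough proxy for UKIP support.
polls = [
    {"date": "2013-11", "con": 32, "lab": 38, "lib": 12, "others": 18},
    {"date": "2014-02", "con": 33, "lab": 37, "lib": 10, "others": 20},
]

def ukip_ahead_of_libdems(poll):
    """True when the 'others' proxy for UKIP exceeds the Lib Dem share."""
    return poll["others"] > poll["lib"]

for poll in polls:
    print(poll["date"], "Lib Dem:", poll["lib"],
          "UKIP (proxy):", poll["others"], "->",
          "UKIP ahead" if ukip_ahead_of_libdems(poll) else "Lib Dems ahead")
```

The obvious caveat, flagged in the text, is that "others" also contains Greens, nationalists and the rest, so the proxy overstates UKIP somewhat.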

UKIP too have an incentive to agree to the debate. While their poll ratings are buoyant and suggest a good performance in the European election, they still remain a fringe party in British politics. As such, they have a lot to gain from the exposure offered by prime time media coverage.

There is also an additional factor in play which may have encouraged both parties to agree to the debate. In reality, they are not in direct competition with each other. Realistically, there are very few voters who are going to spend the next few weeks weighing up the relative merits of a vote for the Liberal Democrats or UKIP. According to research done by YouGov for Prospect Magazine, only 15 per cent of citizens who say they currently support UKIP claim to have voted Liberal Democrat at the last election. By challenging Farage so directly, Clegg seems to be trying to cast himself as the authoritative voice of British Euro-enthusiasm – the politician who is unafraid to take on the little Englander tendency. While such a position might not be very popular with many among the electorate, a full-throated attack on Farage might pull a few points back to the Liberal Democrats (especially when both David Cameron and Ed Miliband are struggling to clearly articulate their positions on Europe). Similarly, Clegg would seem to be the perfect target for Farage’s strongest rhetorical device – an attack on a self-interested political class disconnected from the concerns and values of ordinary voters. So the debate might be a rare win-win scenario for both parties.

What does the broadcast of this debate mean for the future? There may be some interesting ramifications for any potential 2015 election televised debate. In 2010, the debate was restricted to the three major parties. Obviously, this approach does create certain problems in a parliamentary democracy with a complex party system. For example, nationalist parties are excluded even though they might be major parties or even parties of government in their region. Whether to include UKIP in 2015 could present an even bigger problem. On the one hand, the party will likely still have no seats in Westminster. However, it might – if it finishes top of the polls in May – have won a nationwide election, and could continue to score highly in opinion polls. It will likely also be fielding candidates across the country. At the very least, the negotiation process will be a lot more complex than the discussions in the past month, as the Conservatives and Labour will also be involved, and bring a much more complex tapestry of interests to the table.

One of the wonderful things about working at the LSE is the fantastic guest speakers we are able to get to come and visit us. It really is a great privilege. Yesterday was an especially exciting day, as political theorist Professor Nancy Fraser came and spent the afternoon at a symposium organised by my colleague Professor Nick Couldry along with colleagues at Goldsmiths.

Professor Fraser is perhaps best known – among many great achievements, it should be said – for her critical perspective on Jurgen Habermas’s work. In particular, Fraser famously argued that Habermas’s original historically grounded construction of the public sphere, articulated in The Structural Transformation of the Public Sphere (1962 / 1989 in English translation), was highly exclusionary, leaving out many people, especially women.

The event concluded with remarks from Professor Craig Calhoun, the Director of the LSE, responding to Fraser and the comments made by other speakers over the course of the afternoon. Craig’s remarks were interesting for a number of reasons, but one particular comment he made stuck with me (in fact, to the point that I asked a question about it in the subsequent Q and A). This was about the idea of political imagination, and in particular the failure of political imagination in the public sphere. This is an interesting comment in its own right, but actually seems to echo concerns I have encountered from other quarters in different ways in the past few months. My co-author on many pieces of work, Professor Ben O’Loughlin of Royal Holloway, spent a significant proportion of his inaugural professorial lecture last year talking about the death of imagination (you can watch a video of the lecture here). Ben’s argument was slightly different – that our obsession with recording the present is undermining our ability to think about the future, so we struggle to dream of a different and better world, but many of the ramifications are the same. Similarly, Professor Justin Lewis of Cardiff University has just published a new book entitled Beyond Consumer Capitalism: Media and the Limits to Imagination, dealing with many of these themes.

All this got me thinking: what role does imagination play in contemporary politics? Well, one thing to note is that political communication does still seem to rely to a great extent on imagination, just not of the optimistic kind. Many of the most famous American political ads – such as Daisy, the infamous Willie Horton spot, or Wolves – rely on tapping into fears that voters might have about the future. But what is much harder to find is a positive vision or the articulation of alternatives. So imagination is used to promote inertia rather than change.

But it does strike me that there is one place in British politics today where imagination is central to an important debate, and this is the discussion surrounding the Scottish referendum on independence. Now the idea that discussions about nationalism lend themselves to imagination is hardly news. Benedict Anderson famously argued that nationality was constructed around imagined communities. That argument is about creating a shared past retrospectively. But what nationalism also does is offer a chance to dream, to imagine a future where a different kind of society can be built. Nationalist projects also allow radically divergent visions of future societies to be bedfellows, while differences can be effectively papered over, in a manner that normal politics does not allow.

There are of course very substantive issues involved in the independence debate. Indeed, one reading of last week’s debate about European Union membership and currency arrangements was that the debate had suddenly been elevated to include some very practical and important aspects of the independence question. Politically, the pro-Yes campaign has tried to walk a tightrope during the campaign, arguing for the dramatic change of independence but also stressing continuity (Scotland will enter into a currency union with the UK, European Union membership is unaffected, the Queen stays as Head of State, for example). The tactics of the Better Together campaign in the past few days seem to have been to push the Yes campaign off their tightrope.

And maybe political imagination offers one explanation for this. The no campaign has long been termed (allegedly because of an internal nickname) Project Fear. In contrast, the yes campaign has licence to be unrelentingly positive about a new Scottish future, painted in broad and non-alienating strokes. The national project has broken the log-jam of the positive political imagination, allowing people to, rightly or wrongly, conceptualise the future in a different way.

While a positive political imagination is clearly a good thing, two important observations follow from this. The first is that, while nationalism might promote optimistic visions of the future, it is still not necessarily a good thing. The classic argument against nationalism is that it is a distraction from other political projects and visions of society, as it is based on exclusion rather than solidarity. The second point is perhaps more directly significant to the debate around independence in Scotland. Why is it that the yes campaign has a monopoly on optimism and the future? Part of the problem is that the no campaign has completely failed to articulate a positive vision of what the United Kingdom might look like in the future, why the British project is worth continuing with and how the union might evolve to meet the concerns of those who now have doubts about it. In other words, they have had a failure of political imagination, and only seem able to offer arguments based on why an independent Scotland would be a bad thing. If Scotland does vote for independence – and it should be said that the odds remain that it will not – this might end up being one of the major reasons why.

Last week, I spoke at the Media and Communication Department research dialogue on the subject of Image. I was a last minute addition to the programme, so decided to take the opportunity to flesh out an idea I had been pondering for a while. I was very struck a few months ago when it occurred to me that London's theatres simultaneously contained two plays that offered a take on how Britain is governed, and in particular how our institutions cope with change and crisis. At the National Theatre, This House dealt with the tumultuous politics of the mid-to-late 1970s, and the struggle between the Labour and Conservative Whips' offices as James Callaghan's majority dwindled, then vanished. On the other side of the river in the West End, Helen Mirren was reprising the role she won an Oscar for in The Queen, this time in The Audience, a play which focused on the weekly (and highly confidential) meetings between the Monarch and Premier in Buckingham Palace.

The argument in the paper - which I outline in more depth below - is that both plays reflect classic thinking and questions on the British constitutional settlement. The Audience though offers a more Whiggish reading of the system, strongly echoing ideas espoused by the Victorian constitutionalist Walter Bagehot about the role of the dignified elements of the constitution. In contrast, This House is more ambiguous in its message, but engages with the debate - most famously articulated by Edmund Burke in 1790 - between government based on human nature and government based on human rationality. While the play text articulates arguments for both positions, my reading is that it ultimately highlights the weaknesses of government based on human nature, and thus offers a space for opposing the Victorian constitution fetishised in The Audience.

You can listen to a podcast of my talk below, or alternatively watch the video which has audio and slides. A PDF of the slides is also available here.

The first thing to say is that I do not think it is a coincidence that these plays have been so successful, both critically and in terms of drawing an audience, at this moment in time. If one thinks of the Scottish independence debate, the potential EU-exit referendum, the failure of the electoral system to create governing majorities, and the fracturing of the party system most evident in the rise of UKIP, it quickly becomes clear that the British political system is in a state of extreme flux. Constitutional scholar Anthony King has recently gone as far as to talk of the British constitution in its current state as being "a mess" and it is hard to disagree with him. Couple this with political institutions' inability to cope with the financial crisis, and it is unsurprising that people are looking back to the economic and political dislocation of the 1970s with interest.

So what do these plays attempt to tell us or ask about our political institutions? The first thing to note is that they both share a trick in common - they take us to places that we are not normally permitted to enter, either the party Whip's office or the audience between Queen and Prime Minister. This actually draws on a long-established idea that the British constitution has its secret elements. In The English Constitution, Bagehot talks a lot about the secrets and mystery of the constitution, while more recently the scholar Peter Hennessy wrote a book on what he termed the “hidden wiring” of the British system.

But our role as the audience is slightly different in the two plays. In the original run of This House in the Cottesloe Theatre at the NT, the whole auditorium was rebuilt as a replica House of Commons. Audience members were sitting on the green benches and even interacting with the cast. As such, they were complicit in the processes ongoing in the play. In contrast, in The Audience, the audience is positioned much more as an intruder, and possibly even an unwelcome one, a point made clear when the young Princess Elizabeth (who appears in a spectre-like fashion at various points during the play to interact with her older self) appears to look towards the audience and then recoils with a fear of being seen. This difference would suggest that the plays have quite different attitudes to hierarchy and social ordering.

Perhaps the clearest indication of constitutional doctrine is found in the conclusion of The Audience though, in a monologue delivered by Elizabeth.

“No matter how old-fashioned, expensive or unjustifiable we are, we will still be preferable to an elected president meddling in what they [Prime Ministers] do. Which is why they always dive in to rescue us every time we make a mess of things. If you want to know how it is that the monarchy in this country has survived as long as it has – don’t look to its monarchs, look to its Prime Ministers” (Morgan, 2013: 88).

This directly echoes Bagehot's claim that the purpose of monarchy and other ceremonial aspects of the constitution is to act as a disguise for the real business of politics and, as such, it serves a useful function for the political class, who have a vested interest in preserving it. Thus the way through crisis presented in The Audience is essentially conservative: it relies on service, order and long-established precedents.

In contrast, This House offers a far more ambiguous reading of the constitutional settlement. It enters into a debate that has been going on for a very long time, perhaps most famously articulated by Edmund Burke, who argued that constitutions must be based “not on human reason, but on human nature” (1790). At its heart, this debate comes down to the question of whether constitutions can be designed (in other words, be a product of reason) or whether they should be arrived at through shared memory, experience and values (and thus be the product of human nature).

This House asks us to empathise with MPs. In the post-expenses scandal world, this certainly seems like quite an unusual thing to do. But far more importantly, This House seems to question a fundamental idea embedded in British constitutional thinking - namely, that shared values and established practices are, by themselves, enough to get through any period of crisis. As such, it is rather different to the far more conservative The Audience, and certainly a play for our times, as much as a play about an important period of political history.

I live-blogged Professor Lakoff's discussion at the LSE yesterday. There can be no doubting the importance of his body of work, and the huge influence it has had on politics generally and American politics in particular. Certainly, the study of metaphors and their seeming power poses a huge challenge to more rational perspectives on political life and debate, and I mean that in both the Anthony Downs and Jurgen Habermas senses of the term rational.

Indeed, for me Professor Lakoff's view of emotion in politics was perhaps the most striking idea he offered last night, in that it amounted to almost a post-revisionist perspective on the relationship between rationality and emotion. As you can see from the liveblog, Lakoff was highly critical of enlightenment views of rationality. This is not a wholly unique perspective. Many scholars focusing on deliberation (such as John Dryzek, for example) have argued that an overly prescriptive definition of "good" deliberation, which excludes emotions such as anger and humour, is not very helpful. But where Lakoff took this a step further was in drawing on research from the field of neuroscience, and in particular arguing that because of the way the human brain is wired, the distinction between rationality and emotion is false. Put another way: if you take away people's emotion, they do not become wholly rational. In reality, rationality and emotion are wholly symbiotic. This is a very challenging insight for political scientists used to arguing about the relative merits of rational and emotional debate.

I was left with more questions on the relationship between metaphor and ideology. Perhaps Professor Lakoff's most famous idea is derived from two models of the family and how they relate to political world views. There is the nurturing family, where the assumption is that parents are equals and seek to bring out the best traits in their offspring, who they assume to be inherently good. This view is equated with progressive and liberal thought. Alternatively, there is the family model based on the strong and domineering father-figure, who commands his children, assuming them to be unruly and misguided. Only if they follow his instructions can they then be reformed. If they do not, they are guilty of a moral failing and the family's moral responsibility ceases. This metaphor is associated with a conservative political worldview.

But there is a great tension in these metaphors and their political ramifications, I felt. On the one hand, Professor Lakoff was keen to stress the permanent and geographically unlimited spread of metaphors (the examples given in the lecture were the link between increase and up, and affection and warmth). The reason is that metaphors are grounded in lived experiences, constantly creating and solidifying those neural networks. In contrast though, the ideological consequences of the family metaphors are clearly grounded in the recent American experience of the past thirty years or so. These metaphors become far more problematic if we consider different strands of conservative thought, for example. How, for instance, would we think of Bismarck? A stern father-figure, certainly, but also the founder of the modern welfare state model. Harold Macmillan presents another interesting challenge, as his ideology was the very model of conservative paternalism, but bears no relationship to the harshness of the contemporary US right. Even Richard Nixon, who might be regarded as the founder of the modern US conservative movement, is a problematic figure. His administration fits well with the model in some ways, but also attempted to expand healthcare greatly.

This leads to a broader question about the family metaphor and ideology: what is in the service of what? Put another way, does the metaphor shape the ideology, or does the ideology employ the metaphor, or are both these processes occurring at once?

It has taken a long time to get here, but at last David Cameron has delivered his big Europe speech. Judging by the generally broad grins of Tory Eurosceptics being wheeled out on rolling news channels, he at least seems to have been successful in appeasing elements of his party. Whether Cameron's strategy is ultimately successful though - and how it will influence his page in the history books - is an entirely different matter.

It has now become very apparent that there are two very distinct David Camerons. What is interesting about today's events is that both of them were prominently on display. The first is an idealist, a man who sees himself as the visionary leader of a one nation party rooted in the liberal-conservative centre of British politics. This version of Cameron thrives on big gesture politics, and has been most evident during the 2005 leadership campaign, and then again when making the coalition offer to the Liberal Democrats in 2010. In contrast, the second David Cameron is more instinctively conservative, risk-averse and focused on relatively short term electoral and partisan calculations. This version of David Cameron has perhaps been most evident in his dealings with his own party.

That both David Camerons played a role today is evident in the rather prescient comment made in The Guardian liveblog on the speech that while this was probably the most Eurosceptic speech ever made by a British Prime Minister, it was also probably the most pro-European speech made by David Cameron. And while short-term electoral and partisan calculations are clearly involved in what Cameron has argued, there is also a more idealistic idea being articulated as well - namely the desire to win an in-out referendum (after a successful renegotiation process, obviously), in the process laying to rest the most virulent strand of Tory Euroscepticism that has dogged Conservative leaders for a quarter of a century, and settling the European question for at least a generation.

First, Marquand notes that British diplomacy has traditionally failed in Europe because it has not appreciated the weakness of its position. This goes back as far as the original European Coal and Steel Community, and Britain's refusal to join, followed by subsequent attempts to join the EEC under Macmillan. The error made here was to attempt negotiations from the ground up, with Britain's equal status assumed. This neglected the fact that other members of the EEC had already been through an extended process of negotiation while Britain stood aloof. It is no coincidence, Marquand argues, that the British application was only successful when Heath took a new approach - broadly accepting the rules of the club as fixed in order to win admission.

But Marquand also offers a possible course of action for Cameron. After all, we have been here before. Prior to the two elections of 1974, Harold Wilson's Labour Party promised a renegotiation of Britain's membership of the EEC if elected, followed by a referendum on the outcome. Much like Cameron, Wilson claimed to be pro-European, and took this stance to appease the ideologues in his own party. In practice though, Wilson's renegotiation amended tiny details of Britain's relationship with the EEC (tariff exceptions for the import of New Zealand butter and suchlike), yet was hailed as a massive coup by Wilson. Whether Cameron could pull a similar trick is open to question, but this might be one possible course of action open to the Conservatives after a 2015 election.

But this is a dangerous game of political brinkmanship. And indeed, this may prove to be the supreme irony of Cameron's premiership. Writing about recent history, Marquand reminds us just what a constitutionally radical government Labour offered, especially between 1997 and 2001, when devolution occurred and House of Lords reform took place. This constitutional reform was far from perfect; indeed, in some ways it was downright flawed - the West Lothian question rumbles on, unaddressed, and House of Lords reform remains a half finished work-in-progress. Yet, in broad terms, Labour undoubtedly achieved what it set out to do.

Labour were followed in office by Cameron, the self-confessed constitutional conservative. Yet, thanks to one referendum, he might be the unionist who oversees the dissolution of the union, and thanks to another referendum, he might be the premier who, while doubtless a sceptic, claims to favour Britain's membership of the European Union yet leads Britain out of the EU. It is one thing to achieve constitutional goals and leave some mess behind, as Labour did. It is quite another to make a mark on the history of the British constitution wholly through unintended consequences.

On Saturday, I went to the National Theatre to see what The Telegraph had termed the riskiest piece of theatre of the year, DV8’s Can We Talk About This? The performance occupies an unusual genre, halfway between theatre and dance (what is normally termed physical theatre), but also taps into the recent trend towards verbatim performances (such as London Road and The Colour of Justice), which draw on interviews or transcripts of public hearings. In this case, this approach is used to dramatise an historical narrative going back to 1985. The various examples offered construct a polemic - seemingly - arguing that liberalism (or perhaps more accurately, liberals) has failed to respond effectively to the challenge posed by radical Islam. Instead, in the UK, state multiculturalism has led only to hand-wringing and an inability to respond directly to a challenge posed to fundamental rights, notably freedom of speech.

The play has generally got quite good reviews.[1] I can certainly understand the critics' admiration of the performers' skills - there is something truly remarkable about watching some of the movement that they undertake, while retaining the modulation and tone of an interview with, for example, Ann Cryer MP. The embedded videos give an indication as to the kind of feats the actors achieve.

But the good reviews are not just about the artistic merits of the piece. A number of reviewers have suggested that the piece is politically hugely important. When the piece opened in Sydney, for example, the city's edition of Time Out branded it "[O]ne of the most important works of our age".[2] Certainly, Lloyd Newson, the Artistic Director of DV8, sees the play's purpose as political, judging by interviews he has given on the subject.

Here though, I think I would have to contest the position of many of the reviewers, and offer the counter opinion that Can We Talk About This? struck me as being - at best - deeply politically flawed, and - at worst - dangerous to the very liberalism it claims to espouse.

There are a few problems. First, the central conceit of the play - that it is saying the unsayable - seems overstated and not really related to reality. In fact, the arguments in the play actually feel a bit dated. Journalists such as Nick Cohen and the late Christopher Hitchens adopted similar positions a number of years ago.[3] So it is certainly not a radically new position intellectually. More importantly, state multiculturalism has now been attacked by David Cameron, Nicolas Sarkozy and Angela Merkel.[4] It is rather harder to claim to be radical and liberal when your position is also backed up by the three most powerful leaders in Europe, who also happen to be the three most powerful conservative politicians in the world. Certainly, it should come as no surprise that the biggest cheerleaders for Can We Talk About This? in the British press were the Telegraph and the Mail. Indeed, the arguments made by the play have some far more extreme fellow travellers, as the liberal values vs. Islam clash is now a staple element of rhetoric for Europe's far right, including the BNP in the UK, the Front National in France and the various far-right Dutch factions that have been electorally successful since 2000. The lack of sophistication and nuance in the way the play handles its topic can only provide succour to such groups, and seems perhaps the most irresponsible thing about the whole enterprise.

All
that said, there are still important questions to be asked about the
future of liberalism, and there certainly is space for thoughtful
consideration of these issues. Sadly, we do not get that from this play.
The narrative structure that is offered draws together distinct events,
creating a false synchronicity between them. This is perhaps the most
pernicious aspect of the play. Can we really link a debate about
education in 1985, abandoned plans to show Geert Wilder's far right
short film Fitna at the House of Lords, the Pakistani
government's stance at the UN, and forced marriage in contemporary
Britain? All are distinct issues, yet are shown with a moral equivalence
(expressed symbolically through writing words and phrases on a wall at
the back of the stage), placed into a broader thesis about the failure
of British policy to cope with radical Islam. What is perhaps most
disturbing about this is that it binds the various actors - Muslim
parents in Bradford, Muslim members of the House of Lords, the Pakistani
government, etc. - together, casting Islam as a conspiratorial ideology,
an enemy within. There is little or no acknowledgement of nuance or
complexity, and certainly no reference to the Arab Spring, which may yet
lead to the most significant challenge to radical politicised Islam. As
such, the Islam portrayed in Can We Talk About This? is
monolithic and only contested internally in a limited (and brutally
repressed) fashion. Sarfraz Manzoor is therefore right to note that it
becomes a bit like an all-dance version of Melanie Phillips's Londonistan: all paranoia and generalisation.[5]

These
content issues are deeply troubling, placing the play closer to the
forces of reaction than of liberalism. But suppose we take Newson at his
word when he claims he is seeking to defend liberalism? Measured in
those terms, is the play likely to be successful? Not really, I would
contend, because it fails to systematically address the complexities of
liberalism, either historically or theoretically. Historically, the
liberal bargain has always been a balancing act, integrating seemingly
mutually exclusive groups and beliefs. As such, liberalism is always a
work in progress, being redefined to meet the needs of each age.[6] But one would not get this sense
from DV8's work. Instead, liberalism seems as monolithic and fixed
as Islam. This historical misrepresentation heightens the idea, embedded
in the narrative, that Islam is somehow alien to, or incompatible with, innate
western values, making this a Huntingtonesque piece of work.[7]

Even theoretically, I am not sure the model of liberalism to which Can We Talk About This? inevitably
leads is very attractive. It bears a close resemblance to what the
scholar Christian Joppke has termed "repressive liberalism".[8] This
model of liberalism, now an important element of western debate, is
more muscular and combative.

Crucially, repressive liberalism sees
no contradiction in using illiberal laws to defend liberalism. The
classic example of such a measure is the French government's ban on
burqas and niqabs. This measure was defended on the grounds that dress
of this kind was used to repress women. The liberal philosopher Will Kymlicka has pointed out the flaw in this argument, however: such laws are themselves inherently illiberal, as they remove a woman's right to choose to wear particular forms of Islamic dress in public. Essentially, one illiberal arrangement (familial power forcing women to
wear certain clothes) is replaced with another (the state using the
full force of its legal authority to prevent people wearing certain
clothes). Neither arrangement is liberal. Instead, the challenge for a
liberal society is to ensure that everyone has the autonomous freedom to
choose what to wear. The whole Clash of Civilisations narrative promulgated in Can We Talk About This? moves debate away from that outcome, not towards it.