
Today the polling inquiry under Pat Sturgis presented its initial findings on what caused the polling error. Pat himself, Jouni Kuha and Ben Lauderdale all went through their findings at a meeting at the Royal Statistical Society – the full presentation is up here. As we saw in the overnight press release the main finding was that unrepresentative samples were to blame, but today’s meeting put some meat on those bones. Just to be clear, when the team said unrepresentative samples they didn’t just mean the sampling part of the process, they meant the samples pollsters end up with as a result of their sampling AND their weighting: it’s all interconnected. With that out of the way, here’s what they said.

Things that did NOT go wrong

The team started by quickly going through some areas that they have ruled out as significant contributors to the error. Any of these could, of course, have had some minor impact, but if they did it was only minor. The team investigated and dismissed postal votes, falling voter registration, overseas voters and question wording/ordering as causes of the error.

They also dismissed some issues that had been more seriously suggested. The first was differential turnout reporting (i.e. Labour supporters overestimating their likelihood to vote more than Conservative supporters): in vote validation studies the inquiry team did not find evidence to support this, suggesting that if it was an issue it was too small to be important. The second was the mode effect – ultimately whether a survey was done online or by telephone made no difference to its final accuracy. This finding met with some surprise from the audience, given there were more phone polls showing Tory leads than online ones. Ben Lauderdale of the inquiry team suggested that was probably because phone polls had smaller sample sizes and hence more volatility, so they spat out more unusual results… but the average lead in online polls and the average lead in telephone polls were not that different, especially in the final polls.
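Lauderdale’s volatility point is simple sampling arithmetic: the standard error of a poll lead shrinks with the square root of the sample size, so smaller phone samples bounce around more without being any more biased. A minimal sketch (the vote shares and sample sizes here are illustrative, not the inquiry’s figures):

```python
import math

def lead_standard_error(p_con, p_lab, n):
    """Standard error of the (Con - Lab) lead for a simple random sample
    of size n, from the variance of a difference of multinomial shares."""
    # Var(p1 - p2) = (p1 + p2 - (p1 - p2)^2) / n
    var = (p_con + p_lab - (p_con - p_lab) ** 2) / n
    return math.sqrt(var)

# Illustrative shares, roughly the 2015 result: Con 37%, Lab 30%
for n in (500, 1000, 2000):
    se = lead_standard_error(0.37, 0.30, n)
    print(f"n={n}: lead standard error ≈ {100 * se:.1f} points")
```

With a sample of around 500 the lead alone wobbles by roughly ±7 points at 95% confidence, so the occasional outsized Tory lead from a small phone sample is unsurprising even from an unbiased method.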

On late swing the inquiry said the evidence was contradictory. Six companies had conducted re-contact surveys, going back to people who had completed pre-election surveys to see how they actually voted. Some showed movement, some did not, but on average they showed a movement of only 0.6% to the Tories between the final polls and the result, so it can only have made a minor contribution at most. People deliberately misreporting their voting intention to pollsters was also dismissed – as Pat Sturgis put it, if those people had told the truth after the election it would have shown up as late swing (but did not); if they had kept on lying it should have affected the exit poll, BES and BSA as well (it did not).

Unrepresentative Samples

With all those things ruled out as major contributors to the poll error the team were left with unrepresentative samples as the most viable explanation for the error. In terms of positive evidence for this they looked at the differences between the BES and BSA samples (done by probability sampling) and the pre-election polls (done by variations on quota sampling). This wasn’t a recommendation to use probability sampling (while they didn’t make recommendations at this stage, Pat did rule out any recommendation that polling switch wholesale to probability sampling, recognising that the cost and timing were wholly impractical, and that the BES & BSA had been wrong in their own ways, rather than being perfect solutions).

The two probability based surveys were, however, useful as comparisons to pick up possible shortcomings in the sample – so, for example, the pre-election polls that provided precise age data for respondents all had age skews within age bands: specifically, within the oldest age band there were too many people in their 60s and not enough in their 70s and 80s. The team agreed with the suggestion that samples were too politically engaged – in their investigation they looked at likelihood to vote, finding most polls had samples that were too likely to vote, and didn’t have the correct contrast between young and old turnout. They also found samples didn’t have the correct proportions of postal voters for young and old respondents. They didn’t suggest all of these errors were necessarily related to why the figures were wrong, but that they were illustrations of the samples not being properly representative – and that ultimately led to getting the election wrong.

Herding

Finally the team spent a long time going through the data on herding – that is, polls producing figures that were closer to each other than random variation suggests they should be. On the face of it the narrowing looks striking – the penultimate polls had a spread of about seven points between the poll with the biggest Tory lead and the poll with the biggest Labour lead. In the final polls the spread was just three points, from a one point Tory lead to a two point Labour lead.

Analysing the polls earlier in the campaign, the spread between different polls was almost exactly what you would expect from a stratified sample (what the inquiry team considered the closest approximation to the politically weighted samples used by the polls). In the last fortnight the spread narrowed though, with the final polls all close together. The reason for this seems to be methodological change – several of the polling companies made adjustments to their methods during the campaign or for their final polls (something that has been typical at past elections; companies often add extra adjustments to their final polls). Without those changes the polls would have been more variable… and less accurate. In other words, some pollsters did make changes in their methodology at the end of the campaign which meant the figures were clustered together, but they were open about the methods they were using and it made the figures LESS Labour, not more Labour. Pollsters may or may not, consciously or subconsciously, have been influenced in the methodological decisions they made by what other polls were showing. However, from the inquiry’s analysis we can be confident that any herding did not contribute to the polling error – quite the opposite: all the pollsters who changed methodology during the campaign were more accurate using their new methods.
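The “expected spread” benchmark can be approximated with a quick Monte Carlo: under pure sampling error, a set of independent polls should disagree by noticeably more than the final 2015 polls did. The true lead, sample size and poll count below are illustrative assumptions, and plain random sampling stands in for the stratified approximation the inquiry actually used:

```python
import random
import statistics

def expected_lead_spread(n_polls=9, sample_size=1500, p_con=0.37,
                         p_lab=0.30, trials=5000):
    """Monte Carlo estimate of the expected max-min spread of (Con - Lab)
    leads across n_polls independent polls, using a normal approximation
    to the sampling distribution of the lead (in percentage points)."""
    se = 100 * ((p_con + p_lab - (p_con - p_lab) ** 2) / sample_size) ** 0.5
    spreads = []
    for _ in range(trials):
        # 6.5 points = illustrative "true" Conservative lead
        leads = [random.gauss(6.5, se) for _ in range(n_polls)]
        spreads.append(max(leads) - min(leads))
    return statistics.mean(spreads)

random.seed(42)
print(f"expected spread under pure sampling error ≈ "
      f"{expected_lead_spread():.1f} points")
```

This sketch gives a spread of around six points for nine independent polls, so the three-point spread of the actual final polls does look tight – which is why the inquiry scrutinised the late methodology changes so closely.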

For completeness, the inquiry also took everyone’s final data and weighted it using the same methods – they found a normal level of variation. They also took everyone’s raw data and applied the weighting and filtering the pollsters said they had used to see if they could recreate the same figures – the figures came out the same, suggesting there was no sharp practice going on.
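The replication exercise described above is essentially post-stratification: compute a weight for each demographic cell as (target population share / raw sample share), then recompute vote shares using those weights. A toy sketch, with invented groups, targets and field names:

```python
from collections import Counter

def weighted_shares(respondents, targets):
    """Re-weight raw poll responses so weighted demographic shares match
    population targets, then recompute vote shares. A toy version of the
    inquiry's replication exercise; field names are illustrative."""
    raw = Counter(r["group"] for r in respondents)
    n = len(respondents)
    # cell weight = target population share / raw sample share
    weights = {g: targets[g] / (raw[g] / n) for g in targets}
    totals = Counter()
    for r in respondents:
        totals[r["vote"]] += weights[r["group"]]
    total_w = sum(totals.values())
    return {party: w / total_w for party, w in totals.items()}

# Toy sample: too many young respondents relative to the target shares
sample = (
    [{"group": "18-34", "vote": "Lab"}] * 30
    + [{"group": "18-34", "vote": "Con"}] * 10
    + [{"group": "35+", "vote": "Lab"}] * 20
    + [{"group": "35+", "vote": "Con"}] * 40
)
print(weighted_shares(sample, {"18-34": 0.28, "35+": 0.72}))
```

In this toy sample the raw responses split 50/50, but the weighted figures come out Con 55 / Lab 45, because the over-sampled younger group leaned Labour.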

So what next?

Today’s report wasn’t a huge surprise – as I wrote at the weekend, most of the analysis so far has pointed to unrepresentative samples as the root cause, and the official verdict is in line with that. In terms of the information released today there were no recommendations; it was just about the diagnosis – the inquiry will be submitting their full written report in March. It will have some recommendations on methodology – though no silver bullet – but with the diagnosis confirmed the pollsters can start working on their own solutions. Many of the companies released statements today welcoming the findings and agreeing with the cause of the error; we shall see what different ways they come up with to solve it.

224 Responses to “What the Polling Inquiry said”

One point that I noticed in the presentation was the tendency of postal voters to vote Conservative. From regular attendance at election counts and PV openings, I am aware that postal voters have a greater likelihood to vote than polling station voters. Yet I can never remember YouGov asking me if I have a postal vote when polling me about my politics. Is this a question that should be asked, and should the sample of respondents be separated into postal voters and polling station voters, with different LTVs for each?

As postal voters normally get their postal votes a fortnight before polling day, it might also be worth asking them whether they have already voted, for polls conducted in the last two weeks before polling day. In my experience, a large proportion of postal voters return their postal votes within a few days of receiving them.

Your point about the bookies being more united than the pollsters is well made: of course they were. If they weren’t then they would quickly lose a lot of money.

But the fact is that the betting public took a significantly more positive view as to the prospects for the Tories than did the pollsters. When you take into account that one of the main drivers of political betting must be the polls themselves, then that variation is significant.

Not only were the polls wrong, but the average man in the street knew they were, and was sufficiently confident as to risk his money.

@Millie,
It wasn’t just the man in the street. Quite a number of people here and in the media posited the notion that incumbents, and Conservatives, often outperform their poll ratings. They were generally pooh-poohed as partisans (or in the case of ChrisLane and JimJam as defeatist) and ignorant of the evidence.

It took someone of NumberCruncher’s intellect to provide the analysis to support the doubters, but it came too late in the day to change expectations.

A review of the discussions here would provide quite a good anthropological/psychological study into the mechanics of “herding”.

As others have suggested I think it highly likely that the polling industry had a major impact on the result in 2015.

By showing throughout the campaign that it looked highly likely that we would end up with a Labour minority government kept in office by the SNP it played right into the hands of the Conservative narrative of a Scottish tail wagging the English Labour dog – with all that implies. In reality the Conservatives were well ahead.

Take, for example, centre-right voters who wished to support a popular hardworking LibDem MP (despite that party’s general unpopularity!) but were swayed by the message of the polls into voting Conservative. In other words the polls forced them to choose what they believed to be safety over what would have been their primary choice of their sitting LD MP.

I wonder how many LD/Con marginals were swung to Conservative gains as a result of the misinformation from the polls and the subsequent Media/Conservative narrative?

If the polls had showed the Conservatives home and dry, I wonder if we might have ended up with another Con/LD Coalition instead?

Well I wasn’t herded, and didn’t go along with the polls. I was explicit that it was possible for the polls to be wrong and an outright Tory victory still possible.

But you have to bear in mind that people were in fact aware that Tories can outperform polls. But they, including the pollsters, thought the pollsters had allowed for this – and indeed at the previous election, polls underestimated Labour more than the Conservatives.

So you can see why peeps were taken in, even when aware of the underestimation thing.

More generally, peeps tend to rely on heuristics, shortcuts. Sometimes the shortcuts work and peeps think they know what’s going on. But quite often, the shortcuts are not entirely reliable.

Not many do Numbercruncher’s kind of thinking where you try and consider all the evidence or indicators you can, and then try and bring it all together. Because it’s difficult…

There is a way to test the hypothesis of the Inquiry. Adjust the polling done during the election to ensure the correct demographics are replicated, and then see what the polling results would have found.

If that does not shift what pollsters found then their hypothesis is wrong and not replicable, and they have wasted a huge amount of money.

My own experience on election day in Patcham Ward in Brighton Pavilion was that the Conservatives were very good at pulling their vote; I even worked with one of their scrutineers who was in her 90s.

In contrast, in the Canadian federal election in 2015 the higher turnout in Kootenay Columbia, rising from 63.45% to 74.02%, was enough to help the social democratic NDP candidate take the seat from the Conservatives for the first time since 1984.

As a study by Elections BC found after the 2009 provincial election, depending on the election the cohort of registered voters who show up changes by as much as half the electorate.

I therefore think it is extremely simplistic to simply ask “are you likely to vote?” – rather, what needs to be asked is which elections you have voted in in the past, as consistency in voting is a far better predictor of likelihood to vote.
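One way to operationalise this suggestion is to weight each respondent by their past voting record rather than a self-reported likelihood, shrinking thin histories toward a baseline turnout rate. The baseline and shrinkage strength here are illustrative assumptions, not anything a pollster is known to use:

```python
def turnout_score(past_voted, past_eligible):
    """Illustrative turnout weight: share of past eligible elections the
    respondent actually voted in, shrunk toward an assumed baseline
    turnout rate so short histories aren't taken at face value."""
    prior_turnout, prior_strength = 0.66, 2  # assumed baseline + shrinkage
    return (past_voted + prior_turnout * prior_strength) / (
        past_eligible + prior_strength
    )

print(turnout_score(4, 4))  # consistent voter
print(turnout_score(0, 4))  # consistent abstainer
print(turnout_score(1, 1))  # new voter with little history
```

Under these assumptions a consistent voter scores about 0.89, a consistent abstainer about 0.22, and a first-time voter lands in between, reflecting the uncertainty of a short history.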

“By showing throughout the campaign that it looked highly likely that we would end up with a Labour minority government kept in office by the SNP it played right into the hands of the Conservative narrative of a Scottish tail wagging the English Labour dog – with all that implies.”

And why should a Scottish tail be less important than a South of England tail? How often has the majority in Parliament been decided by a minority geographical and cultural group?

One of the implications was, of course, that Labour cared more for Middle England than it did for Scotland – and the results followed on inevitably from that implication. Labour lost my vote when Miliband said that, under no circumstances would Labour pay any attention to how the Scots voted.

Not that it would have made much difference over all in the UK, I suppose, but at least Labour would not now be in the ridiculous situation of having only one MP from north of the Border!

@”the Conservative narrative of a Scottish tail wagging the English Labour dog ”

Er-if memory serves-that was the SNP narrative-we were told relentlessly by Sturgeon & Salmond that they intended to ensure that there would be no Conservative policies enacted at Westminster. That the Labour Party got itself on the hook of refuting the SNP claim that they would control proceedings, whilst trying to persuade Scots to vote Labour-and failing in both efforts-was the fault of the Labour Party.

For sure the Cons latched onto this-why wouldn’t they?

But the “narrative” you refer to was the SNP’s, and the polls indicated that Labour had failed to counter its attraction for Scots voters. The polls were right.

@Andy Shadrack
Good point. Labour were slightly unrealistic about their ground war (round here anyway) because we rarely saw tories oot and aboot and the few Tory activists all seem to be about 90 (cf teenage Lab, where few are much over 70!)
I ‘supervised’ the postal vote opening in a council by-election shortly after the GE and observed that (although you’re not allowed to count!) the Tories were ahead about 70/30 in a safe Labour seat, which was very worrying for Labourites. As it turned out Labour got well over 50% of the vote and the Tories were nowhere but it demonstrated to me that the Tories are MUCH better at getting their PVs done. This is contrary both to a sometimes complacent Lab belief and to the Daily Mail view of life where postal votes are semi-fraudulent mass Lab voting by Asians.
Given that (if the polls have any credibility at all) Lab are ahead amongst people who work (and maybe are too busy/knackered to vote) and given that it is well established that Lab have a lower propensity to vote it seems to me that Lab should seek to have a comfortable advantage in PV registration, just to level the playing field a bit.

The Times last week also threw into the mix that the Tories outspent Lab by around £3.5m, and it seems the Tories were also quite handy at the Facebook thing, in contrast perhaps to the vague impressions given beforehand by the hiring of Axelrod etc.

John B – “And why should a Scottish tail be less important than a South of England tail?”

Well, for 13 years no-one minded in the least the heavy Scottish influence in New Labour (Gordon Brown, John Reid, Robin Cook etc). Because it was taken as given that they’d govern for the UK as a whole.

What people objected to was the SNP tail. Particularly when Salmond was talking so contemptuously about making Miliband “dance to his tune”. People thought, OMG, if they’re that contemptuous of people from the centre-left how will they behave towards people in the centre and centre-right?

Then factor in how the SNP behaved during the referendum – screaming “smear” when their opponents mentioned the volatility of the oil price and the financial dangers it would bring. It was clear to objective people that they were trying to trick the vulnerable into voting for their own destitution with fairy tales, at which point they’d have said, Ha-ha, tricked ya, and you can’t do a thing to reverse it.

People so careless of the welfare of Scots would be even more careless of the welfare of the English and Welsh. Ergo it was not safe to have them in government and the voters duly kept them out.

I’m beginning to wonder if the whole concept of polling during something as complex as a General Election is fatally flawed.

By way of analogy: as physics developed from Newtonian mechanics to quantum mechanics it was realised that you couldn’t know everything about everything… uncertainty was built in and physicists had to start talking in terms of probabilities.

The idea that there is a magic formula and that you just have to find the correct “fiddle factor” is starting to sound all wrong.

Maybe pollsters would be better off looking at what characteristics of an election made the difference between getting it right or wrong, and then finding a new language to describe things in terms of confidence and probabilities.

Haven’t posted for a while, but I have to say, when the Argentine Ambassador describes Jeremy Corbyn as having decisive leadership, I do wonder if there is any more severe kiss of death for a UK politician.

Numbercruncher’s strong argument is that Lib Dem votes going to the Tories and Labour votes going to UKIP was something that the sampling and weighting couldn’t grasp – especially when regional factors were so important and yet missed in the lack of regional sampling.

This explains why the polls were correct in Scotland.

Tory voters in 2015 were more “socially liberal” than had been predicted. Outside of London – and in some cases even there – the shift of Lib Dem voters to the Tories had not been predicted to the extent that clearly happened.

One problem then is that in so many ways the weighting failed to capture the likelihood that varied groups of socially liberal and explicitly left wing voters would not actually vote Labour (some went to the Tories and some went to UKIP).

If this is the case, then it is very tricky to see how future polls can be more accurate unless they capture much more of the English regional variation in voter and non-voter behaviour.

If Numbercruncher is correct then it is surely almost certain that the misleading polls accelerated the tendency for Lib Dem voters to switch to the Tories. The public opinion impact of the polls was to cast into sharp and exaggerated relief, for an economically liberal and well-heeled squeezed middle, the negative risk of Miliband and an SNP-supported minority government.

If this process was present during, say, six weeks of the live campaign, is it any wonder that a significant late swing has not been detected?

I think it’s the same poll as last week; the tables didn’t have VI questions. It was an odd sample: very heavily upweighted SNP, downweighted Con, i.e. the raw sample was very Con. And only 47% voted SNP at the GE as opposed to 50%, so slightly too few SNP.

Not sure what the evidence is that Labour votes would have gone to UKIP, as Labour’s share of the vote went from 29% in 2010 to 30.4% in 2015. The Conservative share of the vote went from 36.1% to 36.9%.
The Liberal vote went somewhere – I expect some to the Greens and the rest split between Labour and the Conservatives. It is of course entirely possible some Labour and Conservative voters went to UKIP; as to how they split, I think it is difficult to tell.

@NEILJ, The point made by Numbercruncher is that the national sampled stats (and even the polled stats before the election) hid what was happening on a regional basis – the explanation for shifts in a number of key constituencies is a shift away from the Lib Dems that did not benefit Labour and a shift to UKIP that did not benefit Labour (Labour voters).

This is largely what led him to call the result the day before the election. So, yes, the increase in Labour’s share of the vote actually proves the salience of regional patterns that were not picked up by the sampling and weighting.

Tory support proved to be far, far more resilient to the concerns about social justice that, it was assumed, would clinch the argument for Labour among Lib Dem supporters.

Where Cable and Ashdown are, it pains me to say it, correct is that the inaccurate message of the pre-election polls almost certainly had an impact on this significant group of Lib Dem voters. They were much less likely to stick with the assumed futility of voting Lib Dem and instead went for the Tory option.

Where Cable and Ashdown are wrong is in completely failing to notice that the Lib Dem Party itself had eroded the distinction between Tory and Lib Dem, making such a shift much more palatable. It’s unlikely that such a shift would have been possible, on the scale it happened, without the Coalition experience.

Evidence for this was not just the doorstep mood music on the day but, perhaps, the fact that there has been no rebound in Lib Dem support since the election – the reasons for this are no doubt complex (the current Lib Dem leadership is perhaps at least partly responsible) but there has been no loyalty based rebound. The Tory support has been “sticky” – and the Labour Party has certainly not benefitted.

“A post-mortem into the polling industry failing to predict the general election result, published this week, underlined how vociferous online support for Labour and left-wing policies fails to translate into electoral success.”

@Gaz
I see the point you are making now. I rather suspect the reason the Conservatives did well was that the collapse in the Liberal vote happened mostly in constituencies where the Conservatives were the only real contender, such as the South West. Labour were never going to win there.

Interesting that LibDem coalition with the Tories in England de-toxified the Tories for LibDem voters whereas in Scotland partnership with the Tories toxified Labour in the eyes of Labour voters. Lucky Tories

I also think that people were actually voting for the Coalition, which the rUK were reasonably happy with. Not necessarily consciously, but it was the Coalition record they were voting on. Unfortunately for the LibDems the Tories managed to claim that record and the LibDems were very much sidelined in the GE.

If the polls had been more accurate, showing Tory largest party/Tory majority territory, the GE speculation would have been whether the Tories would get a majority or whether they would need LibDem support in another coalition, putting the LibDems centre stage.

Also, it’s interesting to see that the variation in polls for Holyrood seems to be driven by methodologies rather than movements in VI, because TNS seem to give a significantly higher % for the SNP and lower % for the Tories than anyone else.

A bit off topic, but I saw in the Telegraph that Donald Trump has only spent $2 million of his own money thus far. Very impressive, even for those of us who don’t like him.

that you quoted, I wonder if any analysis has been done on how many voters actually have active Facebook or Twitter accounts? I don’t know anyone who has a Twitter account, and though I and some I know have Facebook Accounts, I only look at mine about every month. Although these users are easy to contact, political parties and pollsters will miss very many voters if they rely on these routes.

I think that I am like most people. If anybody rings me that I don’t know to conduct any sort of “survey” I just hang up. Mobile or landline.

Combine that with the death of the landline and the fact that unscrupulous robots randomly ring all the time, and polling by phone and getting any sort of sample becomes difficult. Online is surely self-selecting?

Who’d be a pollster? Add in the dubious motivations and agendas of the people who actually commission the polls, which call into question just how objective a picture we are getting (“do you agree that David Cameron is a patriotic leader?”, “many people say that Ed Miliband is a tosser who can’t eat a bacon sandwich, do you agree?”) – and that’s even before the actual selective reporting, spin and downright lying begins.

I am now inclined to the view that we should ban polls during the actual election period.

The implication for me is that there is something about the demographic of Labour supporters which tends to make them high-level users of social media – e.g. that they are younger, more politically active, more inclined to voice political views.

Given that social media can therefore be an “echo chamber” , or perhaps more unkindly, a place to go & rant-is it a good indicator of broadbased public opinion?

Given the relative preponderance of Labour supporters in cyberspace-a question for Corbyn rather than Cameron.

My impression would be that, given Corbyn’s background of protest politics, analysis of political demographics in social media users will be attractive to him. It’s in his comfort zone – mobilising “The People”… but are the Twitterati “The People”?

It might also be that ‘the relative preponderance of Labour supporters in cyberspace’ reflects ‘the relative preponderance of Conservative and New Labour views’ in the mainstream media and on the BBC.

For example, Liz Kendall, Jacqui Smith and Jess Phillips are invariably the female replacements for Diane Abbott on the sofa with Andrew Neil’s ‘This Week’… none of whom reflect the views of a majority of LP members.

“I am now inclined to the view that we should ban polls during the actual election period.”

This isn’t as outlandish as some may claim, and well-established and reputable democracies like France, India, Italy and Spain already do so.

We shall never know what effect the constant, and ultimately misleading, polling had on the May 2015 General Election result, but our politically skewed media does accentuate the danger of polls being misused and manipulated during an election campaign. Did they make the running and create the weather? Did they cause us to debate fantasy issues as opposed to genuine ones? How much did they influence the parties’ campaign strategies? How much did they skew media coverage? How much did they influence voter behaviour? All very interesting questions, and ones no doubt debated in France, India, Italy and Spain before they decided on their polling bans. They obviously drew their own conclusions.

@Syzygy

“For example, Liz Kendall, Jacqui Smith and Jess Phillips are invariably the female replacements for Diane Abbott on the sofa with Andrew Neil’s ‘This Week’… none of whom reflect the views of a majority of LP members.”

I’ve seen Tom Watson make the odd appearance in the past, and Alan Johnson too, but you’re right that the female replacements do tend to be of the Blairite variety. That said, I wouldn’t take Neil’s evening show too seriously, if I were you. It’s become self-parody and no place for either serious or interesting political discussion. It usually ends up with Neil arguing rather peevishly with his guest presenters, certainly if they are, how shall we say, of a slightly different political hue to his own.

Neil’s omnipresence on BBC’s political programming is another reminder of how jaded and dated it has become. Every time I tune in and see Dimbleby and Neil, or listen to Dimbleby’s brother and Humphrys, I keep thinking I’m reprising old 1960s recordings. Maybe Bob McKenzie and Robin Day will make a comeback and appear in a programme entitled “Sunday Politics from Beyond the Grave”. It would certainly beat Neil’s show. :-)

Isn’t it time we moved on from these tired old defunct relics from bygone ages and injected some youth and vitality into it all?

“It might also be that ‘the relative preponderance of Labour supporters in cyberspace’ reflects ‘the relative preponderance of Conservative and New Labour views’ in the mainstream media and on the BBC.”

—————

This would also chime with the density of the Nats online too, given that they say their press is a bit more pro Union. Compensating for an imbalance?

If we’re gonna consider this, the impact or otherwise of social media, it isn’t enough to just note that it may only be a small, somewhat unrepresentative subset chuntering away online.

Because they may have a disproportionate effect in a variety of ways. They may put across ideas that others reading pick up on. Or they may perhaps be opinion formers who have influence in the real world. There is also some research I read about recently which suggests that posting in support of summat does seem to have an effect on others overall.

I’m not saying this is the case, just that it needs to be taken into account. Or to put it another way, given the media situation, it’s possible things could be even worse for Labour were it not for twitterati etc.

Several of my less left-wing friends have retreated from Facebook, or left it completely, on account of the preponderance of “protest politics” memes on there. Myself, I’ve just switched off the news feeds of most of my friends.

I wonder if it’s not just that loud left-wingers are more attracted to social media in the first place, but that their abundance makes it less appealing to more right-wing people and it becomes a self-fulfilling prophecy?

I don’t know what others experience on social media, but I know that for myself, it opens up a wealth of information that I would either not know about or not be able to access.

Obviously, it can be used to swap cat videos or it can be used to confirm previously held positions. However, I like cats… and my experience is that there is usually a variety of differing views being discussed or challenged. Furthermore, there is a wealth of academic input and specialist knowledge which is not being presented in the MSM.

I appreciate the temptation to dismiss the online world as twitterati, a ‘small, somewhat unrepresentative subset chuntering away’ to itself, but I believe Carfew is right when he suggests that they ‘may perhaps be opinion formers who have influence in the real world’.

The only party to clearly gain on a substantial swing in support was the SNP who went from 6 to 56 seats.

The Conservatives, on a minimal swing of 0.4%, picked up 24 more seats net: they gained 27 from the Liberal Democrats and 8 from Labour, but lost 10 to Labour and 1 to UKIP.

In contrast Labour, in addition, lost 40 seats to the SNP and gained only 12 from the Liberal Democrats.

Very early on I read an article in either the Times or Telegraph on how the Tories intended to smash the Liberal Democrats in the 2015 election.

The logic of this move, politically, is that knowing they were going to lose a portion of their vote to UKIP, the Tories focused on the fears of the remaining Lib Dems who had not already gone to Labour or Green.

I think it is too simplistic to say that Lib Dem voters went to UKIP. Rather, what needs to be tracked is how voters moved between 2010 and 2015 – through local government, Scottish, Welsh and European elections.

Further, as can be seen above, % changes in votes do not matter much under FPTP; what matters is getting the largest number of votes in as many constituencies as possible.

Clearly in 2015 the Conservatives were the masters of that game, and in that sense the pollsters were an irrelevant smoke and mirror backdrop.

I am not convinced that any pollster could have found the correct answers when Lord A did all that constituency polling and could not correctly predict that the LDs were going to have their clocks cleaned by the Tories.

As for Scotland in 2016, again the devil will be in the details and I have not seen any sub-regional polls yet that have any credibility.