
Today we’ve had the first two polls asking people whether they’d support The Independent Group were they to stand candidates.

Survation in the Daily Mail asked how people would vote if there were “a new centrist party opposed to Brexit”, producing voting intention figures of CON 39%, LAB 34%, LDEM 6%, “New centrist party” 8%, UKIP 5%. In comparison, the normal voting intention figures in the poll were CON 40%, LAB 36%, LDEM 10%, UKIP 5%, suggesting the new party would take some support from both Labour and the Conservatives, though the largest share of its votes would come from the Liberal Democrats. Tables are here.

SkyData, who do not typically publish voting intention figures, asked how people would vote if the “new Independent Group of former Labour MPs” were standing, and found voting intention figures of CON 32%, LAB 26%, TIG 10%, LDEM 9%, UKIP 6%. We don’t have standard voting intention figures to compare here, but on the face of it, it also looks as if support is coming from both Labour and Conservative, though the level of Lib Dem support appears to be holding up better than in the Survation poll. Note that the lower figures overall appear to be because of an unusually high figure for “others” (possibly because SkyData do not offer respondents the ability to answer don’t know). Tables are here.

These polls are, of course, still rather hypothetical. “The Independent Group” is not a political party yet (assuming it ever becomes one). It doesn’t formally have a leader yet, or any policies. We don’t yet know how it will co-exist with the Liberal Democrats. As of Tuesday night it only has former Labour MPs, though the rumour mill expects some Conservative MPs to join sooner rather than later.

Nevertheless, it is more “real” than the typical hypothetical polls asking about imaginary centrist parties. Respondents do at least have some names, faces and context to base their answers on, and it gives us a baseline of support. We won’t really know for sure until (and unless) the Independent Group transforms into a proper party and becomes just another option in standard voting intention polls.

Rather than their usual poll for the Times, this week YouGov have released a full MRP model of voting intention (that is, the same method that YouGov used for their seat projection at the general election). Topline voting intention figures from the YouGov MRP model are CON 39%, LAB 34%, LDEM 11%, UKIP 5%. Fieldwork was conducted from Sunday to Thursday last week, with just over 40,000 respondents.

The aim of an MRP model is not really the national vote shares, though; the whole point of the technique is to project shares down to seat level, and to project who would win each seat. The model currently has the Conservatives winning 321 seats, Labour 250, the Liberal Democrats 16 and the SNP 39. Compared to the 2017 election the Conservatives would make a net gain of just 4 seats, Labour would lose 12 seats, the Liberal Democrats would gain 4 and the SNP would gain 4. It would leave the Conservatives just shy of an overall majority (though in practice, given Sinn Fein do not take their seats and the Speaker and Deputies don’t vote, they would have a majority of MPs who actually vote in the Commons). Whether an extra four seats would really help that much is a different question.

The five point lead it shows for the Conservatives is a swing of 1.4% to the Conservatives – very small, but on a pure uniform swing it would be enough for the Tories to win a proper overall majority. The reason they don’t here is largely because the model shows Labour outperforming in the ultra-marginal seats they won from the Conservatives at the last election (a well-known phenomenon: they gain the personal vote of the new Labour MP and lose any incumbency bonus of the former Tory MP. It is the same reason the Conservatives failed to gain a meaningful number of seats in 2001, despite a small swing in their favour).
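For readers who want the arithmetic, conventional two-party (“Butler”) swing is just the average of one party’s gain and the other party’s loss in vote share. A minimal sketch, using the MRP toplines above against approximate 2017 national shares (the 42.4/40.0 baseline here is illustrative, not necessarily the exact baseline used in the post):

```python
def butler_swing(con_now, lab_now, con_then, lab_then):
    """Two-party ("Butler") swing in percentage points: the average of
    the Conservative change and the (negated) Labour change.
    Positive values indicate a swing towards the Conservatives."""
    return ((con_now - con_then) - (lab_now - lab_then)) / 2

# MRP toplines (Con 39, Lab 34) against approximate 2017 shares
swing = butler_swing(39, 34, 42.4, 40.0)
print(round(swing, 1))  # ~1.3, in the same ballpark as the 1.4% quoted
```

The gap between this back-of-envelope figure and the 1.4% in the post just reflects which baseline shares (GB vs UK-wide) are used.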

For those interested in what MRP actually is, YouGov’s detailed explanation from the 2017 election is here (Ben Lauderdale & Jack Blumenau, who created the model for the 2017 election, also carried out this one). The short version is that it is a technique designed to allow projection of results at smaller geographical levels (in this case, individual constituencies). It works by modelling respondents’ voting intention based on their demographics and the political circumstances in each seat, and then applying the model to the demographics of each of the 632 seats in Great Britain. Crucially, of course, it also called the 2017 election correctly, when most of the traditional polls ended up getting it wrong.
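The post-stratification step described above can be illustrated with a toy sketch. Everything below is hypothetical (a real MRP model uses many more variables, and the cell predictions come from a fitted multilevel regression rather than being typed in by hand), but it shows the mechanics: predicted support per demographic cell, averaged using each constituency’s own demographic mix.

```python
# Toy illustration of the post-stratification ("P") step of MRP.
# A fitted model would supply a predicted Conservative probability for
# each demographic cell; a constituency estimate is then simply the
# population-weighted average of those cell predictions.

# Hypothetical predicted Con probability by (age band, education) cell
predicted_con = {
    ("18-39", "degree"):    0.22,
    ("18-39", "no degree"): 0.35,
    ("40+",   "degree"):    0.40,
    ("40+",   "no degree"): 0.55,
}

# Hypothetical cell counts for one constituency (e.g. from census data)
cell_counts = {
    ("18-39", "degree"):    12000,
    ("18-39", "no degree"): 18000,
    ("40+",   "degree"):    15000,
    ("40+",   "no degree"): 25000,
}

total = sum(cell_counts.values())
con_share = sum(predicted_con[c] * n for c, n in cell_counts.items()) / total
print(round(con_share, 3))  # 0.41 for this invented constituency
```

A constituency with a younger, more graduate-heavy population would get a lower estimate from the very same model, which is how one national sample yields 632 different seat projections.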

Compared to more conventional polling the Conservative lead is similar to that in YouGov’s recent traditional polls (which have shown Tory leads of between 5 and 7 points of late), but has both main parties at a lower level. Partly this is because it’s modelling UKIP & Green support in all seats, rather than in just the constituencies they contested in 2017 (when the MRP was done at the last election it was after nominations had closed, so it only modelled the actual parties standing in each seat) – in practice their total level of support would likely be lower.

The Times’s write-up of the poll is here, details from YouGov are here and technical details are here.

There are two new voting intention polls out today – YouGov for the Times, and Ipsos MORI’s monthly political monitor in the Evening Standard.

Ipsos MORI’s topline figures are CON 38%(nc), LAB 38%(nc), LDEM 10%(+1), UKIP 4%(nc). Fieldwork was between Friday and Tuesday (1st-5th), and changes are from MORI’s last poll back in December.

YouGov’s topline figures are CON 41%(+2), LAB 34%(nc), LDEM 10%(-1), UKIP 4%(-2). Fieldwork was on Sunday and Monday, and changes are from YouGov’s last poll in mid-January.

This does not, of course, offer us much insight on what is really happening. At the weekend a lot of attention was paid to a poll by Opinium showing a big shift towards the Conservatives and a 7 point Tory lead. Earlier in the week Opinium also published a previously unreleased poll conducted for the People’s Vote campaign the previous week, which showed a four point Tory lead, suggesting their Observer poll was more than just an isolated blip. Today’s polls do little to clarify matters – MORI show no change, with the parties still neck-and-neck. YouGov show the Tories moving to a seven point lead, the same as Opinium, but YouGov have typically shown larger Tory leads of late anyway, so it doesn’t reflect quite as large a movement.

I know people look at polls hoping to find some firm evidence – the reality is they cannot always provide it. They are volatile, they have margins of error. Only time will tell for sure whether Labour’s support is dropping as events force them to take a clearer stance on Brexit, or whether we’re just reading too much into noise. As ever, the wisest advice I can give is to resist the natural temptation to assume that the polls you’d like to be accurate are the ones that are correct, and that the others must be wrong.

Opinium’s fortnightly poll in the Observer today has topline voting intention figures of CON 41%(+4), LAB 34%(-6), LDEM 8%(+1), UKIP 7%(nc). Fieldwork was between Wednesday and Friday, and changes are from Opinium’s previous poll in mid-January, conducted straight after May lost her vote on the deal, but won her no confidence vote.

A seven point Conservative lead is the largest since the election. While it is not significantly larger than the 5 or 6 point leads YouGov have been showing this month, it’s a noticeable change from Opinium’s recent polls, which have tended to show Labour and Conservative roughly neck-and-neck.

As ever, one should be a little cautious about reading too much into a single poll. Survation’s poll for Thursday’s Daily Mail had fieldwork conducted on Wednesday, so actually overlaps the fieldwork period for this poll and showed a one point Labour lead with no meaningful swing from Labour to Conservative. It would be wise to wait and see if subsequent polls confirm whether public opinion has shifted against Labour, or whether this is just an outlier.

Also, be cautious about reading too much into what has caused the change. We really don’t know if there has been a change yet, let alone exactly where it has come from and why (not that it will stop people assuming things). It has been two weeks since Opinium’s last poll, and an awful lot has happened – so one cannot pin the change on any one specific event. Neither can cross-breaks really give much guidance (as Michael Savage notes in the Observer, Labour are down among both remainers and leavers… though discerning any signal from the noise of crossbreaks would be difficult even if the change was all on one side).

Looking across the polls as a whole Conservative support appears to be dropping a little, though polls are still ultimately showing Labour and Conservative very close together in terms of voting intention. As ever there are some differences between companies – YouGov are still showing a small but consistent Tory lead, the most recent polls from BMG, Opinium and MORI had a tie (though Opinium and MORI haven’t released any 2019 polls yet), while Kantar, ComRes and Survation all showed a small Labour lead in their most recent polls.

Several people have asked me about the reasons for the difference between polling companies’ figures. There isn’t an easy answer – there rarely is. The reality is that all polling companies want to be right and want to be accurate, so if there were easy explanations for the differences and it was easy to know what the right choices were, they would all rapidly come into line!

There are two real elements that are responsible for house effects between pollsters. The first is the things they do to the voting intention data after it is collected and weighted – primarily how they account for turnout (to what extent they weight down or filter out people who are unlikely to vote), and what they do with people who say they don’t know how they’ll vote (do they ignore them, or use squeeze questions or inference to try and estimate how they might end up voting). The good thing about these sorts of differences is that they are easily quantifiable – you can look up the polling tables, compare the figures with turnout weighting and without, and see exactly the impact they have.
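As a rough illustration of how a turnout adjustment moves topline figures, here is a toy sketch (all respondents invented) in which each answer is weighted by a self-reported 0–10 likelihood of voting – a simplified stand-in for the various real models pollsters use:

```python
# Toy sketch of likelihood-to-vote weighting. Each respondent's weight
# is scaled by their self-reported chance of voting on a 0-10 scale.
# The respondent data is entirely invented for illustration.

respondents = [
    {"vote": "Con", "likelihood": 10},
    {"vote": "Con", "likelihood": 9},
    {"vote": "Lab", "likelihood": 10},
    {"vote": "Lab", "likelihood": 5},   # much less certain to vote
    {"vote": "LD",  "likelihood": 8},
]

def shares(data, turnout_weighted):
    """Vote shares in percent, with or without turnout weighting."""
    totals = {}
    for r in data:
        w = r["likelihood"] / 10 if turnout_weighted else 1.0
        totals[r["vote"]] = totals.get(r["vote"], 0.0) + w
    n = sum(totals.values())
    return {party: round(100 * t / n, 1) for party, t in totals.items()}

print(shares(respondents, turnout_weighted=False))  # Con and Lab tied
print(shares(respondents, turnout_weighted=True))   # Con pull ahead
```

The unweighted figures have Con and Lab level on 40% each; after weighting, the less-certain Labour voter counts for half as much and the Conservatives move ahead – exactly the kind of adjustment whose impact you can read straight off a polling table.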

At the time of the 2017 election these adjustments were responsible for a lot of the difference between polling companies. Some polls were using turnout models that really transformed their topline figures. However, those sorts of models also largely turned out to be wrong in 2017, so polling companies are now using much lighter touch turnout models, and little in the way of reallocating don’t knows. There are a few unusual cases (for example, I think ComRes still reallocate don’t knows, which helps Labour at present, but most companies do not. BMG no longer do any weighting or filtering by likelihood to vote, an adjustment which for other companies tends to reduce Labour support by a point or two). These small differences are not, by themselves, enough to explain the differences between polls.

The other big differences between polls are their samples and the weights and quotas they use to make them representative. It is far, far more difficult to quantify the impact of these differences (indeed, without access to raw samples it’s pretty much impossible). Under BPC rules polling companies are supposed to be transparent about what they weight their samples by and to what targets, so we can tell what the differences are, but we can’t with any confidence tell what the impact is.

I believe all the polling companies weight by age, gender and region. Every company except for Ipsos MORI also weights by how people voted at the last election. After that polling companies differ – most weight by EU referendum vote, some companies weight by education (YouGov, Kantar, Survation), some by social class (YouGov, ComRes), income (BMG, Survation), working status (Kantar), level of interest in politics (YouGov), newspaper readership (Ipsos MORI) and so on.

Even if polling companies weight by the same variables, there can be differences. For example, while almost everyone weights by how people voted at the last election, there are differences in the proportion of non-voters they weight to. It makes a difference whether targets are interlocked or not. Companies may use different bands for things like age, education or income weighting. On top of all this, there are questions about when the weighting data is collected: for things like past general election vote and past referendum vote, there is a well-known phenomenon of “false recall”, where people do not accurately report how they voted in an election a few years back. Hence weighting by past vote data collected at the time of the election, when it was fresh in people’s minds, can be very different to weighting by past vote data collected now, at the time of the survey, when people may be less accurate.
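Weighting to separate (non-interlocked) targets is usually done by “raking” – iterative proportional fitting, adjusting the weights to match each set of targets in turn until they all hold at once. A minimal sketch with invented respondents and targets:

```python
# Minimal sketch of "raking" (iterative proportional fitting), the
# standard way a sample is weighted to match several non-interlocked
# demographic targets. All respondents and targets are invented.

respondents = [
    {"age": "18-39", "gender": "F"},
    {"age": "18-39", "gender": "M"},
    {"age": "18-39", "gender": "M"},
    {"age": "40+",   "gender": "F"},
    {"age": "40+",   "gender": "M"},
]

targets = {
    "age":    {"18-39": 0.35, "40+": 0.65},
    "gender": {"F": 0.51, "M": 0.49},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # iterate until both sets of margins converge
    for var, target in targets.items():
        total = sum(weights)
        for level, share in target.items():
            idx = [i for i, r in enumerate(respondents) if r[var] == level]
            current = sum(weights[i] for i in idx)
            factor = (share * total) / current
            for i in idx:
                weights[i] *= factor

# After raking, the weighted sample matches both sets of targets
total = sum(weights)
age_share = sum(w for w, r in zip(weights, respondents)
                if r["age"] == "18-39") / total
print(round(age_share, 2))  # ~0.35, the age target
```

Interlocked targets would instead specify a share for every age-by-gender combination directly – a single pass, but one that needs reliable target figures for each combined cell, which is exactly the sort of design choice that produces quiet house effects.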

Given there isn’t presently a huge impact from different approaches to turnout or don’t knows, the difference between polling companies is likely to be down to some of these factors, which are – fairly evidently – extremely difficult to quantify. All you can really conclude is that the difference is probably down to the different sampling and weighting of the different companies, and that, short of a general election, there is no easy way for either observers or pollsters themselves to be sure what the right answer is. All I would advise is to avoid the temptation of (a) assuming that the polls you want to be true are correct… that’s just wishful thinking, or (b) assuming that the majority are right. There are plenty of instances (ICM in 1997, or Survation and the YouGov MRP model in 2017) when the odd one out turned out to be the one that was right.