Should we believe opinion polls?

On Facebook, activists regularly suggest that opinion polls should be ignored, because they are politically biased, or because they have proved inaccurate in the past. As I explain below, this is not true.

Opinion polls are a much more systematic measurement of public attitudes than anything else – certainly there is nothing representative about those who answer the door to canvassers, are stopped in the street by journalists, or attend political meetings. This does not mean the polls are absolutely accurate, nor that bias is not introduced by journalists reporting the results.

There is no reason to believe that poll results are biased by the views of the owners of the polling companies, or by the clients who commission the polls. Polling organisations earn their living from a wide range of clients, public and commercial, by telling the truth as far as they can. No commercial advertiser or political party will pay serious money just to have its views or products flattered. For this reason, polling companies have a professional code. They publish their questionnaires and sampling methods and their data. Anyone can look for bias: if you find it, call it out!

Most polls are accurate within their stated limits. All polling companies publish their margins of error, though these are rarely reported in the press. In the case of the EU referendum, YouGov consistently predicted a result within this margin throughout the campaign, and the final result matched this. In the 2017 General Election, YouGov predicted the Conservative vote share (42%) exactly, but underestimated the extent of the swing to Labour during the election campaign, which went on after the last pre-election poll. Of 41 UK election and referendum polls conducted by YouGov up to 2016, their predictions were only twice out by more than 3%, and in 29 cases they were within 2%.
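Where does a margin of error come from? For a simple random sample, it can be approximated with a standard formula. A minimal sketch (not any polling company's actual method, which also involves weighting):

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    Uses the normal approximation; proportion=0.5 gives the
    worst-case (widest) margin.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical national poll of 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # about 3.1 percentage points
```

This is why a reported 2-point lead from a 1,000-person poll tells you very little on its own: it sits comfortably inside the margin of error.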

Professional opinion polls should not be confused with “polls” on websites and Facebook posts, or with surveys on specific topics. The former tell us little or nothing, since participation is entirely voluntary and depends on how widely the invitation is circulated. The latter are different, in that they usually seek the views of a specific population on a specific issue (like approval for a road scheme); responding is voluntary and response rates are typically low, but they can give a broad idea of the extent of support. By contrast, a professional opinion poll (especially on political attitudes) collects the views of a representative sample of the population. Pollsters typically draw on a large panel of volunteers (800,000 in the case of YouGov), from which they invite responses from a sample selected to be representative of the broad population in gender, age, location, previous voting behaviour and current voting intention. They adjust the results to reflect what is known about the behaviour of “don’t knows” and people who refuse to answer, and these assumptions are updated whenever an event like an election provides a test of the match between polling and the real vote.
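The adjustment step can be illustrated with a minimal weighting sketch. All the numbers here are invented: the point is that if a group is under-represented in the sample relative to the population, its responses are weighted up, and vice versa.

```python
# Invented figures for illustration only.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.20, "35-64": 0.50, "65+": 0.30}

# Weight = how much each group's answers count, relative to raw sampling.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Raw support for some proposition within each sampled group:
raw_support = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}

# Unweighted estimate: averages by the sample's own composition.
unweighted = sum(raw_support[g] * sample_share[g] for g in sample_share)

# Weighted estimate: corrects to the true population composition.
weighted = sum(raw_support[g] * sample_share[g] * weights[g] for g in weights)

print(round(unweighted, 3), round(weighted, 3))
```

Here the unweighted figure understates support, because the most supportive group (the young) was under-sampled; weighting corrects for that.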

There are three particular reasons why polls sometimes get the answers wrong. Firstly, some events (like referenda) are very rare, so there is little historical evidence to test whether key assumptions (like the behaviour of don’t knows) are correct. Secondly, the speed of change in people’s views during an election campaign can vary (as in the Labour swing in 2017). Thirdly, a national poll cannot tell anything about the distribution of votes by constituency. Under first past the post elections, this is critical, since large numbers of votes for a party in a “safe” seat will still only return a single MP, whereas the same number of votes distributed evenly might return two or three, or none. In the 2017 election, the average Conservative MP was elected by 42,000 votes, but 507,000 Green Party votes returned only one, and 550,000 UKIP votes returned none. The same effect can be seen in the 2016 US Presidential Election, where Hillary Clinton won the majority of votes, but lost the Election because the vote was distributed unevenly between States.
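The disproportion in those 2017 figures can be made explicit with a little arithmetic, using only the numbers quoted above:

```python
# Votes cast per MP returned under first past the post, 2017 figures
# as quoted in the text.
results = [
    ("Conservative (average)", 42_000, 1),
    ("Green Party", 507_000, 1),
    ("UKIP", 550_000, 0),
]

for party, votes, mps in results:
    if mps:
        print(f"{party}: {votes // mps:,} votes per MP")
    else:
        print(f"{party}: {votes:,} votes, no MP at all")
```

A Green MP thus "cost" roughly twelve times as many votes as an average Conservative one, which is exactly the information a national vote-share poll cannot capture.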

Constituency level polling is, of course, extremely expensive. A sample of 1000 has been shown to be reliable for a national poll, but this means fewer than 2 responses from the median constituency. For this reason YouGov developed their MRP methodology, which combines the known attitudes of particular groups (drawn from large national samples) with the demography of individual constituencies, to predict at constituency level. Thus we have a fairly clear idea of the proportion of Remain voters among, say, white, 35-year-old, university-educated women in high-earning groups. The proportion of the electorate in each constituency who fall into that group is known from census and other data. By applying this to all the groups in each constituency it is possible to predict a result for each constituency. In the 2017 general election this method predicted a national Conservative lead of 3.5%, against the result on the day of 2.4%.
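The core of that approach (known as post-stratification) can be sketched in a few lines. This is a simplification of what MRP actually does, and every number below is invented: national group-level support estimates are weighted by each constituency's demographic make-up.

```python
# Invented figures for illustration only.
# Estimated support for a party within each demographic group,
# drawn (in reality) from a large national sample:
group_support = {
    "young_graduate": 0.30,
    "young_non_graduate": 0.42,
    "older_graduate": 0.45,
    "older_non_graduate": 0.58,
}

# Share of each group in one constituency, known (in reality)
# from census and other data:
constituency_demographics = {
    "young_graduate": 0.25,
    "young_non_graduate": 0.20,
    "older_graduate": 0.15,
    "older_non_graduate": 0.40,
}

def predict_constituency(support, demographics):
    """Weight each group's support by its share of the local electorate."""
    return sum(support[g] * demographics[g] for g in demographics)

print(predict_constituency(group_support, constituency_demographics))
```

Repeating this calculation for every constituency, with many more and finer-grained groups, is what lets a national sample yield seat-level predictions.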

All this suggests that one should believe polls within their advertised margins of error, but beware of late swings, and of how the results may be spun by journalists. It is also worth noting trends: when a single company using the same methodology and questions over time shows a steady change, it is more likely to be real than a shift in a one-off poll, particularly one carried out just after some major news event.

On this basis, the polls should not be dismissed. Of more than 60 polls on Brexit in the last 18 months, none has shown a leave majority, and the trend has been steadily towards remain, especially in the Labour seats which voted most strongly for leave.