A Note on Methodology for our Constituency Phone Polls

As we have mentioned before, over the last year we have conducted a general review of how we do constituency polling, and we have decided to stop using 2010 past vote weighting for all constituency polls we publish. There are several good reasons why past vote weighting might worsen rather than improve accuracy in constituency polls, and we outline them below.

Firstly, data from the ONS implies that approximately 5% of the population of an average constituency might move out of the area each year. With the 2010 general election now three and a half years in the past, and with new electors coming of age and older ones passing away, that suggests that 15-20% of the people resident in an average constituency today might not have been resident there at the time of the last election, making past vote weighting targets much less accurate at constituency level than they are nationally.
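The migration part of that arithmetic can be sketched as a quick compounding calculation. The 5% annual rate is the approximate ONS-derived figure above; treating moves as independent year on year is our simplifying assumption for illustration:

```python
# Rough sketch: what fraction of today's residents were not resident
# at the last election, assuming an independent 5% annual move-out rate.
def resident_since_election(annual_move_rate: float, years: float) -> float:
    """Fraction of today's residents still in place since the election."""
    return (1 - annual_move_rate) ** years

moved = 1 - resident_since_election(0.05, 3.5)
print(f"Moved since 2010: {moved:.1%}")  # Moved since 2010: 16.4%
```

Adding new electors coming of age and older ones passing away on top of the roughly 16% from migration alone is what takes the total into the 15-20% range.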

Secondly, we have reason to believe that there is a substantial degree of false recall in these telephone polls when people are asked how they voted at the last election. In most constituencies we have polled over the last year, the proportion of people saying they voted UKIP in 2010 was higher than the actual recorded percentage from the last election. I cannot think of a plausible reason why, having corrected for age, gender and ward, we would actually have over-sampled past UKIP voters so significantly and so consistently. It seems far more likely that these additional “past UKIP” voters, virtually all of whom say they are currently planning to vote UKIP, are either consciously or subconsciously altering their response to make their views sound more consistent, or else are confusing the 2010 general election with a different election, perhaps the last local elections, in which they did actually vote UKIP (in South Thanet, for instance, UKIP came top in the 2013 local elections). To consistently depress the UKIP vote by downweighting these respondents, without a plausible hypothesis for why they are being “over-sampled”, would be a serious mistake.
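To see how the downweighting works mechanically, here is a sketch with entirely hypothetical numbers (not taken from any of our polls): past vote weighting gives each recall group a weight equal to its actual 2010 share divided by its share of the sample's recalled vote, so inflated UKIP recall translates directly into a heavy cut to current UKIP supporters.

```python
# Illustrative sketch, hypothetical figures throughout: how past vote
# weighting interacts with false recall of a 2010 UKIP vote.
recall_share = {"Con": 0.38, "Lab": 0.28, "LD": 0.18, "UKIP": 0.16}  # sample recall
target_share = {"Con": 0.42, "Lab": 0.30, "LD": 0.22, "UKIP": 0.06}  # actual 2010 result

# Past vote weighting assigns each recall group the weight target / recall.
weights = {party: target_share[party] / recall_share[party]
           for party in recall_share}

# Recalled UKIP support (16%) far exceeds the recorded 2010 result (6%),
# so every respondent recalling a UKIP vote -- nearly all of whom say
# they intend to vote UKIP now -- is scaled down sharply:
print(f"UKIP weight: {weights['UKIP']:.3f}")  # UKIP weight: 0.375
```

If the excess “past UKIP” recallers are really false-recalling current UKIP supporters rather than a genuine over-sample, that 0.375 weight suppresses a real part of the current UKIP vote.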

Finally, in places with a high proportion of “refused” or “can’t remember” responses to the past vote question (which tends to be higher in phone polls than online ones), there is a risk that the people for whom we have no past vote data are disproportionately drawn from one party rather than another. For instance, as can be seen from our data tables in Newark, these respondents were mainly older people, who we know are far more likely to have voted Conservative than for any other party. Weighting the people who actually admitted voting Conservative up to the full Conservative 2010 target would therefore overstate them relative to other parties.
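A simplified sketch of that overstatement, again with hypothetical numbers rather than the actual Newark figures, and ignoring the weighting that would also apply to the other recall groups:

```python
# Illustrative sketch, hypothetical figures: weighting admitted past
# Conservative voters up to the full 2010 target when some Conservative
# past voters have refused the past vote question.
sample = 100                  # respondents
true_con_2010 = 40            # of whom actually voted Con in 2010
refused_con = 8               # Con past voters who refused / can't remember
admitted_con = true_con_2010 - refused_con   # 32 admit a Con vote

# Weight the admitters up to the full 40 Conservative target:
w = true_con_2010 / admitted_con             # 1.25

# The refusers keep weight 1 but still lean Conservative, so the
# weighted sample now contains too many Con past voters:
effective_con = admitted_con * w + refused_con              # 48
total_weight = (sample - admitted_con) + admitted_con * w   # 108
share = effective_con / total_weight
print(f"Weighted Con past vote share: {share:.1%}")  # 44.4%, versus the true 40%
```

The admitted Conservatives are scaled up to the full target even though some of the missing past Conservative vote is already sitting, unweighted, in the refusal column.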