On systematic biases of Canadian polling firms

Now that the federal election campaign is underway, we're being bombarded with polls. One of the questions that preoccupies me these days is the extent to which there are systematic differences between the results produced by the various polling firms. Here is a graph of the reported estimates for the lead of the Conservatives over the Liberals over the past few months, for the four polling firms that have been most active over this period:

If I were in the mood to try to model this data - say, in terms of a state-space model - I would have a heck of a time justifying the assumption that the various polling firms were using the same data-generating process. For example, it's not at all clear that Nanos and Angus Reid are talking to the same people; the differences in their estimates are consistently in the 8-10 percentage point range.
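To make the idea of a firm-specific bias concrete, here is a minimal sketch of estimating "house effects". The poll numbers below are invented for illustration, and taking each firm's mean deviation from the all-firm average is a crude stand-in for a proper state-space model, which would also let the underlying lead move over time.

```python
# Sketch: estimating per-firm "house effects" from hypothetical poll data.
# Each poll reports the Conservative lead over the Liberals; we treat each
# firm's reading as a shared underlying lead plus a firm-specific offset.
# All numbers are made up for illustration.

from statistics import mean

# (firm, reported Conservative lead in percentage points) - hypothetical
polls = [
    ("Nanos", 2.0), ("Nanos", 3.0), ("Nanos", 2.5),
    ("AngusReid", 11.0), ("AngusReid", 10.0), ("AngusReid", 12.0),
    ("Ekos", 6.0), ("Ekos", 7.0),
    ("Ipsos", 7.5), ("Ipsos", 6.5),
]

# Crude stand-in for the true lead: the average across all polls.
overall = mean(lead for _, lead in polls)

def house_effect(firm):
    """Mean deviation of a firm's polls from the all-firm average."""
    firm_leads = [lead for f, lead in polls if f == firm]
    return mean(firm_leads) - overall

for firm in ("Nanos", "AngusReid", "Ekos", "Ipsos"):
    print(f"{firm}: {house_effect(firm):+.2f}")
```

With these invented numbers, Nanos and Angus Reid sit roughly 8.5 points apart, matching the kind of persistent gap described above; a real analysis would have to decide whether that gap reflects sampling frames, question wording, or something else.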

But the more I think about the question, the thornier it becomes. I'm going to set aside the hypothesis that Angus Reid and/or Nanos (or any other firm) are torquing the data in favour of one party or another; political polling is a small part of what they do, and their business depends on their reputation for providing honest analyses of the data, not on telling clients what they want to hear.

So where does this difference come from? The way they generate their lists of people to call? Their way of dealing with people on the list that they can't contact? Their way of phrasing questions? What?

Comments


Have you read Pickup and Johnston on this? I think they've done a fairly good job of documenting the various house biases.

If I were looking for causes, I would look to two: question phrasing and ordering, and the field house under contract. I think the former matters rather substantially, because it shapes which considerations are primed before respondents are asked their vote choice. And I think the latter varies rather dramatically in terms of both quality and sampling technique.

Most of it seems to be question order and whether they prompt the respondent with the party name.

One firm during the year asks whether the country is on the right track or wrong track before asking voting preference. This creates a difficulty for some people who answer "right track" but would otherwise not have expressed support for the incumbent.

Nanos does not prompt with party names in his poll. Others prompt with the main four, and now sometimes five, on a rotating basis (prompting for the Greens creates their polling bump that doesn't show up at the ballot box).

Some firms now also use a hybrid of online and phone polling, and these seem to produce the furthest outliers.

A good reference check for Nanos (when it was SES) is the 2006 election. Their pre-election poll nailed the popular vote. Check it out here:

http://www.sfu.ca/~aheard/elections/results.html#2004

That site has some excellent historical election data, too. One takeaway is that the Conservatives are basically at their historical level of between 35% and 40% of the popular vote (even in the PC and Reform days, their combined vote was in that range). What is unusual is the relative collapse of the Liberal party.

A recent EconTalk podcast with Doug Rivers covered how different methodologies can lead to different results - largely due to how the sample is weighted to be representative, and how firms define "representative." I am not sure how these factors apply to the Canadian polling companies, but it was very informative.

Another variable is whether the questions were part of a dedicated political survey or part of a regular "omnibus" survey. If the latter, there is no way to know which questions preceded the political ones. In omnibus surveys, multiple clients each buy a few questions, which are assembled into a single questionnaire. As a respondent in an omnibus study, you can be asked about anything from your favourite blue jeans to how often you ride the bus, and then "who would you vote for today?"