The chart below tracks how each pollster tends to lean when calculating support levels for the various parties, as compared to the average polling results from other pollsters each month. This does not necessarily equate to a deliberate bias, but instead is more reflective of the polling methods used. This is also not a scientific calculation of any kind, but it does give an indication of how each pollster tends to compare to others.

The following chart shows each pollster's average variation from other polling firms. The numbers are the number of percentage points by which a particular pollster favours or disfavours a party compared to other pollsters over a similar period of time.
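The "average variation" figure described above can be sketched in a few lines: a pollster's support number for a party, minus the average of the other pollsters' numbers over the same period. The pollster names and poll values below are invented for illustration; this is a minimal sketch of the idea, not the site's actual calculation.

```python
# Hypothetical same-month support numbers for one party, by pollster.
polls = {"PollsterA": 34.0, "PollsterB": 31.0, "PollsterC": 32.5}

def house_effect(name, polls):
    """Pollster's number minus the average of all other pollsters' numbers."""
    others = [v for k, v in polls.items() if k != name]
    return polls[name] - sum(others) / len(others)

# PollsterA runs 2.25 points above the others for this party.
print(round(house_effect("PollsterA", polls), 2))  # → 2.25
```

A positive result means the pollster tends to show that party higher than its peers do; averaging this over many months gives the lean shown in the chart.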

Sure, but if the XXXX-Reids are using the same basic methodology, then the results should be similar, and as you point out, they basically are. What it looks like to me is several sort-of "front" companies running the same infrastructure.

Regarding the idea that the developed world owes the developing world something, haven't we already given them quite a lot? It was the rational Enlightenment in the first world that produced things like the ability to synthesize ammonia, thus allowing the production of fertilizer. That discovery alone is responsible for millions of people in the developing world being alive today.

The list of things that we've developed from which they benefit is staggering. What more do they want?

Éric, please excuse if this has been asked before, but what would account for consistent differences between pollsters? Assuming we can discount lying/bias, what would be the cause? One theory was about whether the questioner asks about parties by name or not. That seemed to be mostly to explain Greens differences, but not Liberals vs. CPC.

Could it be differences in the way they select their samples? Do they publish that? The infamous "Dewey Defeats Truman" episode comes to mind, though the classic telephone-subscriber sampling failure was actually the Literary Digest's 1936 poll; Gallup's 1948 miss came from quota sampling and from stopping polling too early.

I'm curious to see how the various pollsters compared to the elections of the last few years. It would be a mistake to only look at the last election to determine if a predictable 'house effect' exists.

Regarding int'l aid: We don't really owe anybody anything. But as a matter of humanitarian assistance and economic development, we shouldn't turn from Africa towards S. America, for a few reasons.

Sure, S. America is closer (for whatever that matters), but they're better off and we don't have much history in development there. Africa and the Caribbean don't need the help any less, and we're more familiar with those regions. We may as well continue there.

Regarding the Congo, I haven't thought all that much about it. But if our efforts in Afghanistan are ending, and the UN's efforts in the Congo have so far been effective, it seems like a sensible move.

This is an interesting assumption. The polls that are taken with an actual ballot are all outliers.

A poll can be manipulated by any organization that is funding it. The risk is that the organization can lose credibility if caught. If PETA or the Fraser Institute has a study, I would look for potential bias in its design.

Angus-Reid did a recap in 2008 and was the most accurate. I have been reading about their investments in technology. Some interesting articles.

I look at the summary included from each pollster and look for language that may hint at bias. A scathing or unkind comment is usually a dead giveaway.

Eric states

"This is also not a scientific calculation of any kind, but it does give an indication of how each pollster tends to compare to others."

Angus-Reid has been lauded a few times in comments for coming so close to the actual results in the 2008 election. Indeed, the sum of their errors was 4.4 points comparing their last survey to the actual election results. The other pollsters' error sums were 7.3 for EKOS, 8.8 for Harris-Decima, 9.0 for Nanos, 9.2 for Asking Canadians and 10.8 for Strategic Counsel. (Source: last poll listed for each pollster on this Wiki page. I encourage somebody to check my arithmetic.)

So Angus-Reid is the best hands-down, right? Well, maybe Angus is. For the 2006 election (from this Wiki page, same request for a sanity check) the results are Nanos 1.5%, Strategic Counsel 6.4%, Ekos 7.2% and Ipsos-Reid 8.4%.
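The "sum of errors" arithmetic used in these comparisons can be sketched as follows. For each party, take the absolute gap between a pollster's final survey and the election result, then add the gaps up. The election-night percentages below are the 2008 federal results; the poll numbers are made-up placeholders, not any real pollster's final survey.

```python
# Hypothetical final-poll numbers for one pollster (illustrative only).
final_poll = {"CPC": 36.0, "LPC": 28.0, "NDP": 18.0, "BQ": 10.0, "GRN": 8.0}

# Actual 2008 federal election results (percent of the vote).
result = {"CPC": 37.7, "LPC": 26.3, "NDP": 18.2, "BQ": 10.0, "GRN": 6.8}

# Sum of absolute per-party errors, in percentage points.
total_error = sum(abs(final_poll[p] - result[p]) for p in result)
print(round(total_error, 1))  # → 4.8
```

The same calculation applied to each pollster's last pre-election survey yields the figures quoted above.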

In other words, Ipsos-Reid was the worst of the regular pollsters in 2006. The last shall be first, although the first weren't actually last.

Everyone should entertain the thought that pollsters may be very good at what they do, or they may just be lucky. To investigate further I invite somebody to make the same calculation for 2004. It's far from clear that either the 2006 or 2008 results will be useful predictors.

Oh, and to confound another assertion, in 2006 Ipsos-Reid lowballed the Tories just as much as Ekos and Strategic Counsel. Nanos nailed the number. (Errors in those statements are at most 0.1%.)

Shadow: Pollsters may have a more accurate record for a certain party, say they get the CPC number right every time, but they get the left wing vote wrong.

Or maybe they're bang on for the major parties and get the lesser parties wrong. Or other quirks like that.

Or maybe their errors are just due to statistical variance. I think our society significantly underestimates the amount of noise in a poll. (This is especially true for numbers like Ekos's Saskitoba under-25 demographic which is sometimes based on a single-digit sample; I'd put more faith in a horoscope.)

This is a less troublesome hypothesis than an alternative one: that the polls are accurate and voters' fickle preferences vary by government-changing amounts from day to day. Fortunately, statistics is a mature science and the margin of error statements are the tip of an iceberg of understanding. That "nineteen times out of twenty" statement really does mean that there's a probability tail out there, not a wall. Five percent of the time the prognostications will be seriously out to lunch.
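That "nineteen times out of twenty" wording is just a 95% confidence interval. For a simple random sample, the margin of error on a proportion p with sample size n is roughly z·sqrt(p(1-p)/n) with z ≈ 1.96; the sample sizes below are illustrative, not taken from any particular poll.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party at 35% in a typical n=1000 national poll: about +/-3 points.
print(round(100 * margin_of_error(0.35, 1000), 1))  # → 3.0

# The same 35% in a single-digit subsample (n=9): about +/-31 points,
# which is why tiny demographic crosstabs are mostly noise.
print(round(100 * margin_of_error(0.35, 9), 1))  # → 31.2
```

And even that interval only covers 95% of cases: one poll in twenty will miss by more than the stated margin through sampling chance alone.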

The least likely hypothesis is that pollsters have Hidden Agendas.

I've held forth previously on the difference between voter preferences and predicted election results, so I'll give that dead horse a chance to regrow some skin.

Does anyone have a list of polls taken just before the 2008 election, and when they were in the field? I can't find anything on this. I think the Dion blooper reel of October 9 had a strong effect that was mainly captured on election day, October 14. Being the day after Turkey day, I, and probably many others, wasn't getting my usual news sources due to family activities. Plus I recall otherwise reasonable people telling me not to vote for that Frenchman.

Details on the methodology of the poll aggregation and seat projections are available here and here. Methodology for the forecasting model used during election campaigns is available here.

Projections on this site are subject to the margins of error of the opinion polls included in the model, as well as the unpredictable nature of politics at the riding level. The degree of uncertainty in the projections is also reflected by the projections' high and low ranges, when noted.

ThreeHundredEight.com is a non-partisan site and is committed to reporting on polls responsibly.