
[Photo: Ontario Premier Dalton McGuinty, who has ridden his straitlaced, father-knows-best image to back-to-back majority governments and will be looking to secure a rare third straight term this fall, serves a hotdog to four-year-old Tirza Aver at a barbecue for forest fire evacuees in Thunder Bay, Ont., on Friday, July 22, 2011. Frank Gunn/The Canadian Press]

Typically in an election, polls chug along inside a band, jostling up and down but generally tracking the conventional wisdom. Sometimes one comes out that runs against the conventional wisdom sharply enough to be openly questioned.

Twitter is abuzz about this Harris/Decima poll, with talk about the period in the field, “rogue polls” and sample sizes. One of the arguments being thrown around is that the survey was taken before the election call, and is thus less important.

Frankly, I find that argument baseless.

The survey period was Aug. 26 to Sept. 6, making it the most recent poll out of the field (although that is a long survey period, and other firms were in and out of the field during the same window). Furthermore, the dropping of the writ is now a minor event in a world of fixed election dates. There is an argument that the electorate is still mostly ignoring the conversation about the election, and won’t tune in until the debate or another event draws its attention.

If anything, all the polling until now should be taken with a grain of salt.

There are numerous federal events that are likely colouring the results, including the recent election and Jack Layton’s passing. As attention moves briefly to the provincial scene, opinion will likely shift. Now that the election is beginning, we will see numbers that slowly begin to reflect a more alert electorate, but that process takes time.

There is some thinking in election survey research that opinion will revert to the last election when a jurisdiction with low saliency for the electorate (like provincial politics in Ontario) suddenly leaps back to the front page in an election.

Voters who have not been paying much attention will start reporting a voting intention based on the last election rather than on recent events, because the last election was the last time they were thinking about provincial politics at all, rather than thinking about federal politics or giving an opinion on the government of the day without the other parties as an alternative.

This could also be part of that general trend, with some rallying back to the Liberals as a result. We saw a similar effect in 2003, with Ernie Eves rising to lead the polls during the first week of the campaign, after years of the PC Party trailing.

Rogue polls are a funny concept. The jargon at the bottom of most surveys says that the margin of error is X percentage points, 19 times out of 20. This is often interpreted to mean that the twentieth poll can be downright bizarre and wildly inaccurate.

Actually, most of the time, “rogue polls” will have results that are just a shade beyond the margin of error. Sampling error is normally distributed, meaning that the likelihood of a survey coming back saying 99 per cent of people think Lindsay Lohan is qualified to babysit their children is infinitesimal, not the 5 per cent implied by that misunderstanding of the rogue poll.

Surveys can be wrong, and sometimes wildly wrong. But you don’t see them so wrong that they suddenly show the Greens in first and the PCs at 1 per cent support, because when polls fall outside the margin of error, it is usually only by a little.
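That normal distribution of sampling error is easy to sketch numerically. The snippet below (an illustration using the standard textbook formulas, not any pollster's exact method) shows that while one poll in 20 lands beyond the margin of error by construction, landing beyond twice the margin of error is vanishingly rare:

```python
import math

def upper_tail(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Chance a poll result lands beyond the margin of error (1.96 standard
# errors): 5 per cent by construction -- the "1 time out of 20".
beyond_moe = 2 * upper_tail(1.96)

# Chance it lands beyond TWICE the margin of error: roughly 1 in 11,000.
beyond_2x_moe = 2 * upper_tail(2 * 1.96)

print(f"beyond 1x margin of error: {beyond_moe:.3f}")
print(f"beyond 2x margin of error: {beyond_2x_moe:.6f}")
```

In other words, the twentieth poll is overwhelmingly likely to miss by a shade, not by a mile.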

The issue of sample size requires more unpacking. Polls that run against the conventional wisdom are sometimes dismissed with the “small sample size” argument, which usually proves the rule that knowing a little bit about something is more dangerous than knowing nothing.

First of all, the margin of error – which is usually what people mean when they talk about sample size – is still fairly robust at 650 respondents. It works out to 3.84 percentage points at a 95 per cent confidence interval, which means the Liberal lead could be as narrow as 3.6 points or as wide as 18.4. But in any event, the Liberals would hold a lead regardless of the sample size.
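The 3.84 figure falls out of the standard worst-case formula, as a quick check confirms (a sketch, not necessarily the pollster's exact calculation):

```python
import math

n = 650          # reported sample size
z = 1.96         # 95 per cent confidence level -- "19 times out of 20"
p = 0.5          # worst-case proportion, which maximizes the error

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: {moe * 100:.2f} points")  # 3.84
```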

According to this handy “ballot lead calculator,” there is a 100 per cent chance the Liberals have a lead in this survey.

The Nanos survey taken at a similar time shows 35.4 for the PCs, compared to 31.9 for the Liberals. The sample size was 1,000. Using the calculator, you get a probability of a PC lead of 87 per cent.
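The article doesn't spell out how the linked calculator works, but the idea can be sketched with a normal approximation to the difference between two shares drawn from the same sample. This is an assumption about the method, so the output lands in the same ballpark as, rather than exactly matching, the calculator's figures:

```python
import math

def lead_probability(p_a, p_b, n):
    """Approximate probability that party A truly leads party B, given
    their poll shares and the sample size.  Uses the normal approximation
    to the difference of two multinomial proportions."""
    lead = p_a - p_b
    se = math.sqrt((p_a + p_b - lead ** 2) / n)
    z = lead / se
    return 0.5 * math.erfc(-z / math.sqrt(2))  # standard normal CDF at z

# Nanos: PC 35.4 per cent, Liberal 31.9 per cent, n = 1,000
print(f"chance of a true PC lead: {lead_probability(0.354, 0.319, 1000):.0%}")
```

This rough version puts the PC lead probability around 90 per cent, close to the calculator's 87; the gap would come down to method (a calculator may simulate draws or handle undecideds differently).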

This isn’t to say that Harris/Decima is right and Nanos is wrong simply because there is a 100 per cent chance the Liberals are leading in one and only an 87 per cent chance the PCs are leading in the other.

My point is that surveys are more useful for their trends than their absolutes. Movement up and down that goes beyond the significant range is more conclusive than fixed points, because surveys are based on probability, not absolute measurement.

At the end of the day, all survey research includes “house effects.” A house effect is the difference between various polls taken at the same time; the current gap between Harris/Decima and Nanos is a good example.

They are caused by the peculiarities of question wording, question order, sampling and interview methodology that result in variance between polls taken at the same time. For instance, one pollster may use bilingual interviewers – which would capture more francophones, who tend to lean away from the Conservatives more than average.

Another cause of house effects is the difference in how pollsters decide which respondents count as likely voters in the final results. Some use a much tighter definition than others, and that cut-off point can be crucial.

“House effect” can be thought of as the variance from the average of all the polls out there. So the Harris/Decima survey looks like an outlier because it is different from what we have seen before.
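That variance-from-the-average view is easy to make concrete. With hypothetical shares for one party from several firms polling the same week (the firm names and numbers below are invented for illustration):

```python
# Hypothetical shares for one party from firms in the field the same week.
polls = {"Firm A": 0.38, "Firm B": 0.32, "Firm C": 0.34, "Firm D": 0.36}

industry_average = sum(polls.values()) / len(polls)

# House effect: each firm's deviation from the pack's average.
house_effects = {firm: round(share - industry_average, 3)
                 for firm, share in polls.items()}
print(house_effects)  # Firm A runs 3 points "hot", Firm B 3 points "cold"
```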

However, there is also “bias” and that is perhaps more important.

Statistical bias is not bias in the pejorative sense of intentionally loading the survey one way or the other. It has nothing to do with ideology or malevolence. It’s a technical term meaning the difference between expected and actual results. An example would be the poll predictions compared to the actual election. The difference between them is statistical bias.
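In code, the distinction is just a subtraction against a known truth. With made-up numbers for a final poll and an election result (both hypothetical):

```python
# Hypothetical final-poll estimates versus the actual election result.
final_poll = {"PC": 0.36, "Liberal": 0.34, "NDP": 0.22}
election   = {"PC": 0.34, "Liberal": 0.38, "NDP": 0.23}

# Statistical bias: estimate minus truth.  Positive means the poll
# overstated that party; it says nothing about intent or ideology.
bias = {party: round(final_poll[party] - election[party], 3)
        for party in final_poll}
print(bias)  # e.g. this poll overstated the PCs by 2 points
```

House effect is measured against the pack; bias can only be measured once the truth (the election) arrives.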

“House effects” matter because a poll that varies from the pack upsets the horse-race coverage; it comes out of left field. A polling company could report a higher number for one candidate than the pack does, yet that survey can actually have less “bias” than the pack, and be capturing true public opinion at that moment more accurately.

The Canadian polling blog ThreeHundredeight.com does a good job of tracking house effects by company on the national scene.

Here you can see a breakdown of the federal house effects for each company. However, these are for the federal scene, so one shouldn’t even begin to apply them to Ontario polling.

Again, they are not about ideology or party. They are about wording and method, so there is no point in trying to translate to a different milieu.

I have no idea what the house effects are for Nanos or Harris/Decima, and I am not going to blow a day doing the math. But I can tell you what is likely, looking at the polling over the past several years.

Certainly, the long-term trend is clear: a year ago, the PCs led by around 10 points. Today, as the election gets under way, that lead is smaller, possibly down to nothing. The NDP continues to bounce around between the mid-teens and mid-20s.

The picture will sharpen with time, but the idea that this election would be a cakewalk for the PCs seems unlikely now. If there were partisans in any party who thought this election was foreordained, this survey should serve as a wake-up call.

But no one in any party should be resting on their laurels. As always, politicians should use polls the way drunks should use lampposts: for illumination, not support.