October 28, 2012

An anonymous pollster on the whole cell-phone, land-line thing

The problem with calling cell phones doesn’t really lie in the cost of calls. For a polling company, calling a cell phone doesn’t cost that much more than calling a landline. The problem is the complexity and cost of employing dual sampling frames when the proportion of cell phone users without a landline is still very low. If the purpose of calling cell phones is to reduce non-coverage of likely voters, then you may actually need to ‘screen out’ those you call on cell phones who also have a landline (because they are already covered by the landline sample frame).

If we assume 6% of eligible voters have a cell phone and no landline, then roughly 94% of the people you reach on a cell phone will not be eligible to take part (again, because they are already covered by the landline sample frame). This is where the cost would really begin to build up – all those interviewer hours required just to screen people out (eek!).
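To put a rough number on that screening overhead, here is a back-of-the-envelope sketch in Python using the 6% figure above; the target of 100 completed interviews and the idea of counting "wasted" contacts are invented purely for illustration:

```python
# Rough screening-cost sketch for a cell phone frame.
# Uses the post's 6% cell-only figure; the target interview
# count below is a made-up illustration.

CELL_ONLY_RATE = 0.06      # share of cell contacts with no landline
TARGET_INTERVIEWS = 100    # completed cell-only interviews wanted

# Expected number of people you must reach on a cell phone
# before you accumulate the target number of eligible respondents:
contacts_needed = TARGET_INTERVIEWS / CELL_ONLY_RATE
screened_out = contacts_needed - TARGET_INTERVIEWS

print(f"Contacts needed: {contacts_needed:.0f}")  # ~1667
print(f"Screened out:    {screened_out:.0f}")     # ~1567 'wasted' contacts
```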

This is not the only way to reduce non-coverage – but it’s actually one of the more straightforward and ‘statistically pure’ ways. The alternative is to develop some sort of weighting scheme, but the more you weight, the greater the design effect, which increases the margin of error and decreases the accuracy of a poll (see the sketch below).
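For the curious, here is a minimal sketch of how weighting eats into precision, using Kish’s common approximation for the design effect; the weights themselves are invented for illustration:

```python
import math

def kish_design_effect(weights):
    """Kish's approximation: deff = 1 + (cv of weights)^2,
    which is equivalent to n * sum(w^2) / sum(w)^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Illustrative weights: most respondents near 1, a few heavily up-weighted.
weights = [1.0] * 900 + [3.0] * 100

deff = kish_design_effect(weights)
n_eff = len(weights) / deff           # effective sample size
moe = 1.96 * math.sqrt(0.25 / n_eff)  # worst-case (p = 0.5) margin of error

print(f"deff  = {deff:.2f}")   # 1.25: greater than 1, so the more you weight...
print(f"n_eff = {n_eff:.0f}")  # 800: ...the smaller your effective sample
print(f"MoE   = {moe:.1%}")    # 3.5%, versus 3.1% for an unweighted n = 1000
```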

To make things more complex:

– Some people have more than one cell phone, meaning that the probability of them being called is higher, so additional weighting would need to be applied to adjust for the probability of selection (you may notice that some polls weight by household size and the number of landlines connected to a house – this is adjusting for the probability of selection; there’s a small sketch of this after the list)

– There are a lot of cell phone numbers that are out of use, but when they are called they still go through to a voice mail. Unlike landlines (which you can ‘ping’ to test the connection), it is very difficult (ie, near impossible) to determine if there is actually an eligible person at the end of a number, so you’ve got no measure of the success rate of your sampling approach (ie, refusal rates, response rates, qualifier rates etc).

– At the moment such a small proportion of New Zealanders have a cell phone with no landline that party support would need to be DRAMATICALLY different among those people for this particular type of non-coverage to influence the poll results for party vote (eg, support for Labour among cell phone only voters may need to be TWICE what it is among landline voters for the party vote result to shift by more than, say, the margin of error – this arithmetic is worked through after the list)
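Here is the selection-probability weighting idea from the first bullet as a small sketch; the household figures are invented, and a real poll would layer further adjustments on top of these base weights:

```python
# Sketch of base weights adjusting for unequal selection probabilities.
# The household/landline logic mirrors the weighting described above;
# the example figures are invented.

def landline_base_weight(eligible_adults, landlines):
    """One adult is interviewed per contacted household, so the chance
    of selection rises with the number of landlines into the house and
    falls with the number of eligible adults sharing them."""
    return eligible_adults / landlines

def cell_base_weight(cell_phones):
    """A person with two cell numbers is twice as likely to be dialled,
    so they get half the weight."""
    return 1 / cell_phones

print(landline_base_weight(eligible_adults=3, landlines=1))  # 3.0
print(cell_base_weight(cell_phones=2))                       # 0.5
```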
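And here is the non-coverage arithmetic from the last bullet, worked through with illustrative numbers (the 30% landline support figure is assumed purely for the example):

```python
# Worked version of the 'twice the support' example above, using
# illustrative numbers (30% landline support, 6% cell-only share).

CELL_ONLY = 0.06
p_landline = 0.30
p_cell_only = 2 * p_landline  # cell-only support at TWICE landline support

covered_only = p_landline     # what a landline-only poll would measure
true_overall = (1 - CELL_ONLY) * p_landline + CELL_ONLY * p_cell_only

bias = true_overall - covered_only
print(f"Non-coverage bias: {bias:.1%}")
# 1.8%, still under the commonly quoted ~3.1% maximum MoE for n = 1000
```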

When the proportion of people with a cell phone and no landline is considerably larger than it is today (as it already is in some other countries), it will definitely make sense to employ a dual sampling frame approach. In NZ though (at least in 2011) most pollsters got things pretty close to the election day result, which would suggest non-coverage of cell phone only voters isn’t a big issue just yet. If cell phone plans get cheaper, then polling approaches will probably need to change to keep up.


> In NZ though (at least in 2011) most pollsters got things pretty close to the election day result

That’s simply not true. Every single public poll in 2011 predicted National would govern alone. Not one poll – even a “rogue” – got “pretty close to the election day result”.

Under MMP, this really matters. Yes, the polls consistently said National would get far more votes than Labour, and so it proved. But National are one MP – or a Peter Dunne hissy fit – away from losing their majority.

If polling can’t cope with minor parties, then it can’t handle our voting system. Sure, it may have nothing to do with landlines, cellphones or whatever, but it’s still a major defect in our democracy.

> Unlike landlines (which you can ‘ping’ to test the connection), it is very difficult (ie, near impossible) to determine if there is actually an eligible person at the end of a number

Eh? Apart from getting the telco to run a test, I don’t know any easy way to tell if a POTS (plain old telephone service) line is functional. Even if it is, it might be a fax or modem. Or it might belong to someone who just has a landline for internet and never gives out their number, and so doesn’t answer calls.

Thanks for commenting here Andrew. Given that it’s probably not the mobile vs landline issue, do you have a theory for why some polls (One News and TV3 in particular) consistently overestimate National’s support?

I’m afraid I don’t have a straightforward answer to your question though. I think the reasons for over/under-estimation are probably different for every survey (as an example, the two polls you mention use quite different sampling and selection approaches). The potential sources of error in surveys are endless. To name a few…

Each of these may exert a very small influence on a survey’s results, some may cancel each other out, and (if left unchecked) some may exert a larger influence. In my view it’s the job of a good survey methodologist to follow best practices and (as best they can) try to identify all sources of error and reduce them as much as practically possible. It’s not easy, because changing one thing can influence some other aspect of the survey. The search for survey error needs to be continuous and systematic.

The biggest issue for telephone surveys in New Zealand (again, in my own personal opinion) is declining response rates. Surveys assume that non-respondents are the same as respondents, and this is a big assumption.

> Given that it’s probably not the mobile vs landline issue, do you have a theory for why some polls (One News and TV3 in particular) consistently overestimate National’s support?

Another potential issue I’ve seen mentioned is the time calls are made, usually weekday evenings, which could disadvantage families where the adults don’t work “normal” 9 to 5 hours – and I’m sure there are a lot of other seemingly-unremarkable issues which could have a similar effect one way or the other.

Absolutely QoT – that’s another issue to consider, and the reason why it’s SO important to call back households at different times and on different days (rather than just calling a new household and interviewing someone who happens to be home).

I can’t speak for all polls – but with good fieldwork practices I think you can still achieve 30-35% for an unnamed RDD telephone survey (ie, cold calling) and a short interview duration. That’s also assuming you’re using the AAPOR approach to calculating response rates. Be very wary of anyone who says their response rate is a lot higher, because I’ve seen some ‘interesting’ response rate formulas used in some very public NZ surveys. Response rates can be increased dramatically (ie, 60%+) when calling named individuals, or when the call is preceded by a pre-notification letter.

I think it would be great if the AAPOR approach was mandatory in New Zealand, so that everyone had to use the same response rate formula. Getting a good response rate for a political poll can be particularly challenging because they are carried out over five or so days (leaving less time for call backs).
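For anyone unfamiliar with it, AAPOR’s Response Rate 1 is the most conservative of their standard formulas: completed interviews divided by everything, including all the cases where eligibility is unknown. A minimal sketch, with invented disposition counts:

```python
# Minimal sketch of AAPOR Response Rate 1. The disposition
# counts in the example call are invented for illustration.

def aapor_rr1(I, P, R, NC, O, UH, UO):
    """I: complete interviews, P: partials, R: refusals/break-offs,
    NC: non-contacts, O: other non-interviews,
    UH: unknown if household, UO: unknown, other.
    Unknown-eligibility cases count in full, which is what makes
    RR1 the most conservative of the AAPOR rates."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

rate = aapor_rr1(I=350, P=20, R=300, NC=250, O=30, UH=100, UO=50)
print(f"{rate:.1%}")  # 350 / 1100 = 31.8%, in the 30-35% band mentioned above
```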

It sounds like the statisticians running the polls are doing a very good job. My (admittedly simplistic) analyses suggest that their methods give pretty consistent results, which means that the trends are reliable (and that’s the most important thing, imho). It’s a pity that they get let down by their PR people, journalists, and bloggers who aren’t Danyl or Scott.

I’ve got a few more questions, if we haven’t worn down your patience yet:

Are the quoted percentages intended to represent “likely voters”, or “eligible voters”?

How are changes to methodology implemented: incrementally, or in big “step changes”? If it’s the latter, are the changes made after elections (in which case pre- and post-election polls aren’t comparable)?

Is there any “homework” that you wish poll watchers would do? I’ll try to have a good look at the AAPOR material when I find the time.

But actually that does raise another interesting challenge to employing a dual sample frame – you’d need to know that percentage so that you could determine the number of people within each frame that should be interviewed.
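As a hypothetical illustration of that allocation problem, assuming a screening design like the one described in the post (the cell frame only keeps cell-only respondents) and assuming the cell-only share is known:

```python
# Hypothetical sketch of splitting a target sample across two frames
# in proportion to the populations each frame is meant to cover.
# Assumes the cell frame covers only the cell-only population (6%,
# per the post) and the landline frame covers everyone else.

def allocate(total_interviews, cell_only_share):
    """Proportional allocation between the two frames."""
    cell = round(total_interviews * cell_only_share)
    return {"landline": total_interviews - cell, "cell_only": cell}

print(allocate(1000, 0.06))  # {'landline': 940, 'cell_only': 60}
```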

> Every single public poll in 2011 predicted National would govern alone. Not one poll – even a “rogue” – got “pretty close to the election day result”.

I know it isn’t a poll, but iPredict got the results almost exactly right. I’d presume all the punters were watching the polls and doing their own version of what Danyl does, though, correcting for the consistent overestimation of National’s support that seems to come in all these polls.