A Researcher's Perspective on Current Events

February 04, 2010

A Socratic Dialogue: Non-Probability Sampling

Post by Humphrey Taylor, Chairman of The Harris Poll

Introduction

Each year, Harris Interactive publishes many telephone and online polls of public opinion. More than 200 of these published polls are conducted online with people who have joined our panels of cooperative respondents in the United States, Canada, Britain, France, Italy and Spain. We also conduct online surveys in many other countries. In addition to the U.S.-based Harris Poll which is widely distributed in this country, our clients for whom we regularly conduct opinion polls online include such respected media as the BBC, the Financial Times, France 24 and the International Herald Tribune. We have conducted many online polls of the public for The Wall Street Journal Online. In addition, The Economist regularly commissions online polls conducted by YouGov.

When Harris started to publish online polls in 2000, there was some skepticism as to whether our methodology would produce reliable and accurate information. This skepticism largely disappeared after the 2000 U.S. presidential election. However, we are aware of three news media in the United States that still prefer not to publish, and often censor the publication of, the results of our online polls. As far as we know, no media outside the United States have any reluctance to publish good online polls. Neither do the overwhelming majority of U.S. media. Indeed, our online polls are reported frequently in several hundred media in the U.S.

The dialogue that follows is an imaginary conversation between Socrates and an editor of one of the three media that usually block the publication of our online poll results, in which Socrates addresses the criticisms that we hear from those who oppose their publication.

A SOCRATIC DIALOGUE: NON-PROBABILITY SAMPLING

(With Apologies to Plato)

A conversation between an editor and Socrates

Q: Do you think the media should or should not report the findings of opinion polls when they think they are newsworthy?
A: Yes, of course they should, if they think the polls are reliable.

Q: What about online opinion polls where the samples are drawn from a panel of people that is not a probability sample of the total adult population?
A: No. They should not publish them. And my organization does not.

Q: Why not?
A: Because these surveys do not use probability samples and are therefore not “scientific.”

Q: How would you define “scientific”?
A: By scientific I mean they use methods which allow the calculation of sampling error. For example, with a probability sample of 1,000 interviews one can say with 95% certainty that no errors are greater than ±3 percentage points.
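The ±3-point figure the editor cites comes from the standard worst-case formula for the sampling error of a simple random sample. A quick sketch of the arithmetic (assuming p = 0.5, which maximizes the margin, and a 95% confidence level):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error for a simple random sample.

    p = 0.5 maximizes p * (1 - p), giving the largest possible margin;
    z = 1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: ±{margin_of_error(1000) * 100:.1f} points")  # ±3.1 points
```

As the dialogue goes on to argue, this formula describes sampling error only, and only for a true probability sample with full response; it says nothing about non-response or measurement error.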

Q: Do you think that most opinion polls that are conducted by telephone are scientific and make it possible to calculate a margin of error?
A: Well, not a margin of error but the sampling error.

Q: Some important peer-reviewed journals refuse to publish papers presenting survey results where the response rate is less than 50%. They adopt this policy because it is not possible to calculate the sampling error, and because many of the people who were not interviewed might differ from those who were, creating a potentially large non-response error. Do you think that the media should follow these rules?
A: No. If they did so, they would not publish any opinion polls.

Q: Why do you think the media should publish the results of telephone polls with low response rates when peer-reviewed academic journals are often unwilling to do so?
A: Well, I guess the media’s standards are lower, but they trust telephone polls in general because they have a long history of producing accurate results.

Q: What do you mean by “accurate results”?
A: Generally speaking, the ability to predict elections reasonably accurately. It is the best test we have.

Q: If the polls had a record of producing inaccurate results, do you think the media would have continued to publish them?
A: Probably not. Their track record is important.

Q: Do you think it is possible to calculate the margin of error, or a theoretical sampling error, for a typical telephone survey with, say, a 20% response rate?
A: I’m not sure... but I guess not.

Q: What do you think are the major sources of error that cannot be calculated?
A: There are several. The main ones are non-response error, interviewer bias, questionnaire design, question order, and inaccurate responses (for example, people saying that they will vote when they do not).

Q: Is it possible to calculate the probability of errors for any of these sources of error?
A: Not as far as I know.

Q: What about sampling error? Is it possible to calculate a sampling error with a known probability (e.g., ±3% with 95% confidence) for a typical telephone survey with a 20% response rate?
A: No. Those calculations only apply to random samples with 100% response rates.

Q: Do you think that polling errors due to factors other than sampling error, such as interviewer bias, non-response bias, inaccurate responses and questionnaire design, may be larger than the errors due to sampling for a particular sample size?
A: Yes, I think that is very possible, but you can’t make any estimates of the size of these errors.

Q: So why do the media report them?
A: Because of their track record. Polls are not perfect, but they are better than all the other ways of estimating public opinion.

Q: Should the media report the results of the Consumer Confidence Index, which is published by the Conference Board?
A: Yes, of course.

Q: Are you aware that it is also based on an opt-in panel of people who have volunteered to participate as panel members?
A: No, I was not aware of that.

Q: Given that the Conference Board’s Consumer Confidence Index is based on an opt-in panel, do you still think that the media should publish its results when they are newsworthy?
A: Yes, I think they should, because these are well established and well respected measures that have been widely used for many years. And, frankly, we would be at a competitive disadvantage if we did not do so when other media did.

Q: So you think that the media should report these because they are widely accepted as important even though they are based on volunteer panels?
A: Yes.

Q: Do you think the media were right to report the results of the Gallup Poll predictions in the 1930s and 1940s, including the famous Gallup Poll that predicted Roosevelt’s reelection in 1936, when the Literary Digest predicted that he would lose?
A: Yes, absolutely. The 1936 Gallup Poll established the credibility of scientific polling.

Q: Are you aware that all of the Gallup Polls used to predict elections in the 1930s and 1940s used quota sampling rather than probability sampling, with interviewers asked to choose respondents who fitted demographic quotas?
A: No, I was not aware of that.

Q: Knowing that, do you still think that the media were right to publish the Gallup Poll’s election forecasts in the 1930s and 1940s?
A: Maybe not. I’m not sure. I would have to think about it.

Q: In medical research used to justify the approval of new pharmaceuticals, it is almost always necessary to use “double blind” clinical trials comparing the new drug to a placebo or another drug. Why do you think this is?
A: Because that is the gold standard, and because people participating in the trials are assigned randomly to one sample or the other. This makes it possible to calculate the sampling error.

Q: Do you think that smoking causes cancer, heart disease, emphysema and other diseases?
A: Yes, of course.

Q: Why do you believe this?
A: Because there has been a huge amount of medical research showing a high correlation between smoking and different diseases.

Q: Do you know if anyone has done a “double blind” randomized trial of smokers and non-smokers to measure the health effects of smoking?
A: No, that would not be possible.

Q: Do you know how scientists determined that smoking causes many diseases?
A: They studied populations of smokers and non-smokers and used propensity score matching (or related techniques) to adjust for as many relevant variables as they could. They then found that smokers were more likely to have these diseases.

Q: Are you aware that the theoretical underpinning of this method is the same as that used in panel-based online polls that also use propensity score weighting?
A: No, I was not aware of that.
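The propensity score weighting mentioned above can be illustrated with a toy example. The sketch below is a minimal illustration under invented assumptions, not Harris Interactive's actual procedure: it simulates a single covariate on which a hypothetical opt-in panel is biased, fits a logistic model of panel membership against a reference sample, and reweights panel members by their inverse propensity odds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a probability-based reference sample and an opt-in
# panel that over-represents people with high values of a single
# standardized covariate x (a real study would use many covariates).
reference = rng.normal(0.0, 1.0, 2000)
panel = rng.normal(0.8, 1.0, 2000)  # panel is biased upward on x

# Pool the samples; z = 1 marks panel membership.
x = np.concatenate([panel, reference])
z = np.concatenate([np.ones(len(panel)), np.zeros(len(reference))])

# Fit a logistic regression P(z = 1 | x) by plain gradient descent.
X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    beta -= 0.1 * (X.T @ (p - z)) / len(z)

# Each panel member's propensity to be in the panel rather than the
# reference sample; weighting by the inverse odds makes the weighted
# panel resemble the reference population.
p_panel = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * panel)))
weights = (1.0 - p_panel) / p_panel

raw_mean = panel.mean()  # biased: close to 0.8, not 0
weighted_mean = np.average(panel, weights=weights)
print(f"unweighted panel mean:    {raw_mean:.2f}")
print(f"propensity-weighted mean: {weighted_mean:.2f}")
```

The weighted mean moves back toward the reference mean of zero. The same inverse-propensity logic underlies both the observational smoking studies and online panel weighting, which is the parallel Socrates is drawing.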

Q: Given that there have been no randomized trials of smoking, did the tobacco industry perhaps have a good case when it said that nobody had proved that smoking causes disease?
A: No, I don’t think so.

Q: Given that the medical research showing a strong link between smoking and disease was never based on randomized trials, do you think the media were right to publish the results of this research?
A: Yes, and they have been shown to be right over time.

Q: So this is another case where you think that the media should publish the results of surveys that do not use probability sampling.
A: I guess so.

Q: Are there any other reasons why you oppose the publication of survey results that are not based on probability sampling?
A: Yes, there is one. They may produce accurate results, but there is no theory to explain why their results are accurate. With probability sampling, there is a theoretical underpinning.

Q: Leaving aside, of course, all of the potential sources of error, including non-response bias, that cannot be quantified?
A: Yes, leaving those aside.

Q: When Isaac Newton wrote the Principia, setting out his laws of motion and gravity, do you know if he or anyone else had a theory to explain gravity, why the sun attracts the planets or why the apple fell from the tree?
A: No, he had no such theories. His theories just explained what happened.

Q: So why did people believe his theories?
A: His laws were credible because they worked; that is, they explained how gravity influenced the planets and the apple. They worked in practice, even though there was no theory to explain why they worked.

Q: When Copernicus wrote that the earth revolved around the sun, and Kepler wrote that the planets had elliptical orbits with the sun as one focus of the ellipse, did they know how gravity worked and why the sun attracted the planets?
A: No, I don’t think they did.

Q: So why do we believe that Copernicus, Kepler and Newton, and for that matter Galileo, were essentially correct?
A: Because their theories were used to predict the movements of the planets with great accuracy.

Q: So the proof of their laws, such as Newton’s laws of motion and gravity, was that they worked in practice to predict the future, rather than that they explained gravity or how and why objects attract each other?
A: Yes, I guess so.

Q: In the 20th century, Einstein’s theories of special and general relativity were accepted as better than Newton’s laws of gravity and motion, even though most people have great difficulty understanding the concepts of space and time used by Einstein. Why do you think this is?
A: Because Einstein’s theories were used to predict events more accurately than Newton’s laws.

Q: So the reason why we believe Einstein’s theories is empirical: they work in practice and can be used to make accurate predictions?
A: Yes.

Q: Many people define the scientific method as the development of hypotheses which are then tested by experiments. If enough different experiments validate a hypothesis, we generally accept it as scientific until a better hypothesis, that is, one that works better in practice, comes along. Do you accept this definition of scientific?
A: Yes.

Q: Before there were telephone polls, when polls were conducted face-to-face, the overwhelming majority of opinion polls conducted outside North America used quota sampling rather than probability sampling. Do you know whether they were generally accurate or not?
A: My understanding is that these polls worked pretty well for many years, at least in their ability to predict election results in many countries.

Q: So do you think the media were wrong to publish the results of these opinion polls based on quota sampling?
A: No. I think they were right to do so because they had a long track record of reasonably accurate predictions.

Q: So, if online polls using non-probability samples could be shown to produce accurate election predictions over many elections, do you think the media ought to start reporting their results?
A: Yes, I guess so.

Q: How many elections do you think you would need before you would have enough confidence to publish the results of these non-probability surveys?
A: Perhaps 50?

Q: Are you aware that Harris Interactive has used these methods to predict almost 80 elections, and that in more than 50 of these there were telephone surveys whose results could be compared with those of the Harris Poll?
A: No. I didn’t realize that they have done that many.

Q: In the more than 50 elections where the predictions of the Harris Poll using non-probability samples can be compared with telephone surveys, the real margin of error (i.e., the difference between the forecast and the results) was significantly lower for the Harris Polls than for the telephone polls. Given this, do you still think that the media should or should not publish the results of online polls using non-probability sampling?
A: No, I still oppose their publication.

Q: Why?
A: Because my mind is made up and nothing you can say will change it.

Comments


I've always tried to base decisions on research design on whether they are fit for purpose - usually that's a case of whether the design will give a commercial client sufficiently good information to base a business decision.

Which leads to a further question (though not in Socratic form!): to what extent does private polling make use of online samples, either for political parties, lobby groups or businesses and financial institutions that may wish to plan around future governments and policy decisions?

By definition, most polling for candidates and parties is proprietary, so we do not know a lot about what is being done. And of course Harris, as a matter of policy, does not do this work. However, gossip and anecdotal evidence suggest that there has been a substantial increase in the use of online methods, both qualitative and quantitative, in this field. But it’s likely that most proprietary polls are still done on the phone.