The first question I always get from clients interested in conducting a survey is about sample size. Many confuse sample size with representativeness. They are related, but not the same, particularly if convenience samples are used.

In random samples, as we increase the sample size, each member of the target population has a greater chance of being selected, and consequently more segments of the population are likely to be represented. This assumes we have a list of all the population members (the population frame) and know each member's probability of being chosen. This could be the case with a customer database or list, if that is our population of interest.

In convenience samples, the population frame becomes the pool of individuals in the sample source (e.g. online panels), which may not include all segments of the target population, or may include only a few members of certain segments, depending on how the sample source is built. In this case, sample quotas, weighting schemes, and mixed-mode data collection methods (online/phone/intercepts) are often used in an effort to achieve representativeness.

Assuming that we are able to pull a representative sample of the target population by whatever affordable means are available to us, we need to give serious consideration to sample size. This is a case where size matters (pun intended). Why?

It is all about precision, tolerance for risk and cost. For samples smaller than 1,000, we always have to think about how confident we want to be that estimates fall within a particular range (level of confidence and risk), and how narrow we want that range to be (level of precision). Unfortunately, the two pull in opposite directions: at small sample sizes, higher levels of confidence require wider ranges (margins of error).

For instance, we can be 95% confident that the true value of a variable estimated from a sample of 400 falls within +/-4.9%. If we want a smaller margin of error with the same sample size, we have to sacrifice certainty: accepting a 90% confidence level narrows the margin of error to +/-4.1%. At the 95% confidence level you are more certain but less precise, since you expand the range to make sure the true value falls within it. At 90%, you are more precise, but less certain.
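These figures can be reproduced with the standard margin-of-error formula for a proportion, MoE = z * sqrt(p(1-p)/n). A minimal sketch, assuming a simple random sample and the conservative p = 0.5 (the z-scores 1.96 and 1.645 correspond to the 95% and 90% confidence levels):

```python
import math

def margin_of_error(n, z, p=0.5):
    """Margin of error for a proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# z = 1.96 for 95% confidence, z = 1.645 for 90% confidence
print(round(margin_of_error(400, 1.96) * 100, 1))   # 4.9 -> +/-4.9% at 95%
print(round(margin_of_error(400, 1.645) * 100, 1))  # 4.1 -> +/-4.1% at 90%
```

Notice that for a fixed sample size the only lever left is z: shrinking the confidence level shrinks the margin of error, exactly the tradeoff described above.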

If you want more precise estimates without sacrificing certainty in the results, then you have to increase the sample size, which in turn increases research costs. As the table below shows, as sample size increases, the differences in margin of error across confidence levels become smaller.

At the end of the day, when it comes to sample size, you need to decide what is more important to you, certainty or precision, and what your tolerance for risk is, especially if your market research budget is small.

I recently got an inquiry from a SurveyGizmo user asking about what response rate he could expect from using this online survey tool. Fortunately for any online survey tool, including SurveyGizmo, response rates to online surveys don’t depend on the survey tool you use.

First, let’s distinguish between response rates, incidence rates, completion rates and non-response. They are related, but not the same, and some clients use these concepts interchangeably, which leads to confusion in sample size and cost estimations.

Response rates are usually calculated based on the number of respondents who attempt to participate in a survey, even if they are disqualified after being screened with certain questions. If we send a survey invitation to a sample of 100 people and only 5 attempt to take the survey, then the response rate is 5%. Response rates have been used for years as indicators of data accuracy; however, recent research has indicated that lower response rates don’t necessarily mean low-quality data.

Response rates are affected by:

Survey topic relevancy: People will not dedicate time to participate in surveys that are perceived as irrelevant.

Incentives: Sometimes an incentive is needed to motivate respondents, but careful consideration needs to be given to this. Incentives are a tricky subject, since we may attract only certain types of respondents and introduce selection bias into the sample.

Survey invitation: Survey invitations should be personalized and provide compelling reasons to participate in the survey. A poorly written invitation can drive respondents away or fail to catch their attention. Use appealing subject lines and make the invitation short, clear and persuasive.

Type of relationship with target survey audience: Depending on the level of relationship respondents have with the brand, organization or company sponsoring the project they will be more or less motivated to participate. For example, customer surveys tend to have higher response rates than those targeted at non-customers. For more on this, check Survey Response Rate Directly Proportional to Strength of Relationship by Jeffrey Henning.

Privacy protection concerns: People are not comfortable sharing information if they don’t know how it is going to be used. Communication about privacy policy and data security should be clear.

Reminders: These may be needed to reach busy people or those not available within a certain time frame when the first invitation is sent out.

Incidence rates are based on the number of respondents who qualify for a study based on certain screening criteria. For example, if we need a sample of females in the general population without any other requirements, the incidence rate is expected to be 50%, since half of the population are women. Incidence rates will vary depending on who we are targeting with the study.

Response rates are often used to indicate the number of completed surveys, but I think it is worth making the distinction between response rates and completion rates, since this has methodological and cost implications (e.g. when we need to purchase sample from online panel providers).

Completion rates indicate how many people who qualified for the study completed the survey. If respondents enter the survey, answer some questions and then abandon it, they are counted as incompletes and are usually excluded from the final data. The number of incompletes increases when:

The survey is too long

Survey flow is confusing

There are skip logic errors that show irrelevant questions to respondents who can’t answer them

Questions are poorly worded and instructions are unclear

Questions are complex and require a lot of mental effort from the respondent

The respondent is not rewarded appropriately for the survey length and amount of effort required

The topic and survey format can’t hold the respondent’s interest

Privacy protection is unclear or lacking

Non-response occurs when we fail to get a response from the total sample, either because respondents refuse to participate in the survey or because they start but never complete it. If non-responses follow a pattern that systematically excludes a particular segment of the sample, they introduce what is called selection bias, which will prevent us from getting a representative sample of opinions in the population of interest. Non-respondents are often different from respondents, so their absence in the final sample can make it difficult to generalize the results to the overall target population.
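To keep the three metrics straight, here is a small sketch computing each from hypothetical fielding counts (the numbers are illustrative, not from a real study):

```python
invited = 1000       # survey invitations sent out
attempted = 150      # clicked through and answered the screener
qualified = 90       # passed the screening criteria
completed = 72       # answered every question to the end

response_rate = attempted / invited      # share of invitees who attempted the survey
incidence_rate = qualified / attempted   # share of attempts that met the screening criteria
completion_rate = completed / qualified  # share of qualifiers who finished

print(f"response {response_rate:.0%}, incidence {incidence_rate:.0%}, "
      f"completion {completion_rate:.0%}")
# prints: response 15%, incidence 60%, completion 80%
```

Keeping the denominators distinct is the whole point: quoting 72 completes against 1,000 invitations mixes all three rates into one misleading number.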

In short, regardless of the survey tool you use, you can improve response rates and completion rates if you avoid most of the problems mentioned above.

I meet many clients who worry about sample size, trying to ensure they get a large enough sample so that statistically significant differences can be found and inferences to a larger population can be made, but they often don’t know that these statistical tests were meant to work within the framework of probability sampling theory.

Since the advent of online panels and the increase of online surveys using panel-provided samples, the issue of testing for significant differences using standard parametric tests has become a moot point in many research studies.

Nowadays many of the surveys conducted online use samples provided by online panels, but these are mostly convenience samples (non-probability). The populations of online panels include respondents who are willing to participate in studies, and exclude those unwilling to join the panel, who may nevertheless be members of the target population we are after.

In probability sampling, each possible respondent from the target population has a known probability of being chosen. Probability sampling helps us avoid some of the selection biases that can make a sample unrepresentative of the target population. For more on this read Does A Large Sample Size Guarantee A Representative Sample?

A single probability sample isn’t guaranteed to be representative of a target population, but probability sampling lets us quantify how often samples will meet some criterion of representativeness. This is the notion behind confidence intervals. The probability sampling procedure guarantees that each unit in the population of interest could appear in the sample.

By taking into account all possible random samples that can be taken from a population, we can estimate how often the true value of an estimate can be expected to fall within a specific range of values. So, when we talk about a 95% confidence interval, this really means that the true value of a particular variable is expected to fall within that interval 95 out of 100 times we repeat the procedure. When an opinion poll indicates that 50% of people are in favor of a political decision with a +/-3% margin of error at a 95% confidence level, it is really saying that we can expect between 47% and 53% of people to be in favor of the decision 95 out of 100 times, if we were to repeat the poll. When we test for significant differences, we are looking to see if the value falls outside that range.
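The "95 out of 100 times" interpretation can be illustrated by simulation: draw many random samples from a population with a known true proportion and count how often the 95% interval around each sample estimate captures it. A hedged sketch (the sample size, number of trials and seed are arbitrary choices):

```python
import math
import random

random.seed(42)
true_p, n, trials = 0.50, 1067, 1000   # true population value, sample size, repetitions
covered = 0
for _ in range(trials):
    # simulate one poll: n independent yes/no answers with true probability true_p
    sample_p = sum(random.random() < true_p for _ in range(n)) / n
    moe = 1.96 * math.sqrt(sample_p * (1 - sample_p) / n)  # 95% margin of error
    if sample_p - moe <= true_p <= sample_p + moe:
        covered += 1
print(covered / trials)  # close to 0.95
```

The point of the simulation is that the probability statement is about the procedure, repeated many times, not about any single interval.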

Unfortunately, taking a probability sample is hard and costly. For most consumer research and social behavior studies, we really don’t know the size of the actual population of consumers behaving in certain ways or consuming certain products, and trying to find out would make the research prohibitively expensive. This is why we often have to settle for convenience samples like the ones offered by online panels. They can still offer valuable insights if designed with care, but again, doing statistical testing on a convenience sample is pointless, since the assumptions of probability sampling are violated.

Online panels are here to stay, and they will continue to be a source of affordable sample for market research. Research using a convenience sample is often better than no research at all if the survey is well designed and screening criteria are used to define the target population.

A more appropriate case for testing statistically significant differences is random samples taken from a customer database, since this is essentially a population frame in which we can count all members and estimate their probability of being chosen.

However, if you don’t have a customer database or are interested in surveying non-customers, then use a convenience sample if that is what your research budget can afford, or if there is no other way to get at the actual population frame (the list to pull the sample from), but don’t fret about testing for significant differences. You may feel more confident if you are able to replicate the results in repeated surveys, but always be cautious about inferences made from convenience samples, since there could be a hidden systematic bias in the data.

Whenever you use convenience samples, it is important to consider the following when analyzing the results:

1. Who is systematically excluded from the sample?

2. What groups are over- or underrepresented in the sample?

3. Have the results been replicated with different samples and data collection methods?

If testing for significant differences gives you peace of mind, even when using convenience samples, do it to confirm the “direction” of the data, but refrain from making inferences to a larger population.

I often get asked “What sample size do I need to get a representative sample?” The problem is that this question is not formulated correctly.

Sample size and representativeness are two related, but different issues. The sheer size of a sample is not a guarantee of its ability to accurately represent a target population. Large unrepresentative samples can perform as badly as small unrepresentative samples.

A survey sample’s ability to represent a population has to do with the sampling frame, that is, the list from which the sample is selected. When some parts of the target population are not included in the sampled population, we are faced with selection bias, which prevents us from claiming that the sample is representative of the target population. Selection bias can occur in different ways:

Convenience sample: This includes respondents who are easier to select or who are most likely to respond. Such a sample will not be representative of harder-to-select individuals. Samples from online panels are a good example of convenience samples. These panels are composed of individuals who have expressed interest in participating in surveys, leaving out individuals who may be part of the target population but are not available for interviewing through the panel.

Undercoverage: This happens when we fail to include all the target population in the sampling frame. Many online panels work hard at avoiding undercoverage bias, but the fact remains that certain demographics are underrepresented. For example, it is difficult to field online studies targeted at the total Hispanic population in the US without using a hybrid data collection approach that allows us to reach unacculturated Hispanics, who are usually underrepresented in most online panels. Coverage bias is also found in phone surveys that use telephone list sampling frames that exclude households without landline access. As more households substitute cell phones for their landlines, obtaining representative samples of certain demographic groups will soon be difficult without including cell phone lists in the sampling frame.

Nonresponse: Selection bias also takes place when we fail to obtain responses from all respondents in the selected sample. Nonrespondents tend to differ from respondents, so their absence in the final sample makes it difficult to generalize the results to the overall target population. This is why the design of a survey is far more important than the absolute sample size to get a representative sample of the target population.

Judgment sample: This is a sample selected based on “representative” criteria drawn from prior knowledge of the topic or target population. An example would be a study looking for a sample of teenagers that tries to intercept them at an intersection near a high school.

Misspecification of target population: This happens when we, intentionally or unintentionally, use screening criteria that leave out important subgroups of the population.

Poor data collection quality: This can introduce selection bias when there are poor quality controls to ensure that we interview the designated members of the sample. An example is allowing whoever is available in the household to take the survey, instead of the member who meets the screening criteria.

So when it comes to getting a representative sample, sample source is more important than sample size. If you want a representative sample of a particular population, you need to ensure that:

The sample source includes the whole target population

The selected data collection method (online, phone, paper, in person) can reach individuals with the characteristics of the population of interest

The screening criteria truly reflect the target population

You can minimize nonresponse bias with good survey design, incentives and the appropriate contact method

There are quality controls in place during the data collection process to guarantee that designated members of the sample are reached.

Determining the sample size is one of the early steps that must be taken in the planning of a survey. Unfortunately, there is no magic formula that will tell us what the perfect sample size is, since there are several factors we need to think about:

ANALYTICAL PLAN: The research objectives and planned analytical approach should be the first factor to consider when deciding on sample size. For instance, some statistical procedures (e.g. regression analysis) require a certain number of observations per variable. Moreover, if comparative analysis between subgroups in the sample is planned, the sample size should be adjusted so that statistically significant differences between the groups can be detected.

POPULATION VARIABILITY: This refers to the target population’s diversity. If the target population exhibits large variability in the behaviors and attitudes being researched, a large sample is needed. If 20% or 80% of the population behaves in a certain way, this indicates less variability than if 50% did so. To be conservative, it is standard practice to use 50% (0.5) as the event probability in sample size calculations, since it represents the highest variability that can be expected in the population.

LEVEL OF CONFIDENCE: This is the level of risk we are willing to tolerate, usually expressed as a percentage (e.g. 95% confidence level). Although survey results are reported as point estimates (e.g. 75% of respondents like this product), the fact is that since we are working with a sample of the target population, we can only be confident that the true value of the estimate in that population falls within a particular range, known as the confidence interval. The level of confidence indicates the probability that the true value of the estimate will in fact fall within the boundaries of the confidence interval. How confident can you be? As confident as your tolerance for risk allows, knowing that the confidence level is inversely related to precision: the more confident you want to be, the wider the confidence interval needed, which leads to lower levels of precision.

MARGIN OF ERROR: Also known as sampling error, this indicates the desired level of precision of the estimate. You have probably seen poll results quoted in the media saying that the margin of error was plus or minus a particular percentage (e.g. +/-3%). This percentage defines the lower and upper bounds of the confidence interval likely to include the parameter estimate, and it is a measure of its reliability. The larger the sample, the smaller the margin of error and the greater the precision of the estimate.
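Solving the margin-of-error formula for n gives the familiar sample-size calculation n = z^2 * p(1-p) / e^2. A sketch, assuming a large population and the conservative p = 0.5:

```python
import math

def sample_size(moe, z=1.96, p=0.5):
    """Sample size needed for margin of error `moe` (as a fraction),
    assuming a large population; z = 1.96 corresponds to 95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

print(sample_size(0.03))  # about 1,068 respondents for +/-3% at 95% confidence
```

Because n grows with the inverse square of the margin of error, halving the margin of error requires roughly quadrupling the sample, which is why precision gets expensive quickly.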

Below is a table illustrating how the margin of error and the level of confidence interact with sample size. To get the same level of precision (e.g. +/-3.2%), larger samples are needed as the confidence level increases. For example, if we want to be confident that in 95 out of 100 times the survey is repeated the estimate will fall within +/-3.2%, we need a sample of 950.

COST: Sample cost is often one of the largest items in the budget of a market research study, especially if the target sample includes low-incidence segments or the response rate is low. Many times our clients have to make a tradeoff between statistical accuracy and research cost. Recently, I received a call from a client who wanted to conduct an online survey with a sample of 1,000 respondents, which would give a statistical accuracy of +/-3.1% at the 95% confidence level, but would cost $8,000 based on certain screening criteria. At the same time, a sample of 400 respondents would give a statistical accuracy of +/-4.9% and cost $3,400. In this case, a 135% increase in sample cost would only yield a 60% gain in statistical accuracy. The client decided to conduct the study on the smaller sample.

POPULATION SIZE: Most of the time, the size of the total target population is unknown and is assumed to be large (>100,000), but in studies where the sample is a large fraction of the population of interest, some adjustments may be needed.
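One common adjustment for that situation is the finite population correction, n_adj = n / (1 + (n - 1)/N). A sketch, using as input the roughly 1,068 respondents that the standard formula gives for +/-3% at 95% confidence in a large population:

```python
import math

def fpc_adjust(n, population):
    """Apply the finite population correction to a large-population sample size n."""
    return math.ceil(n / (1 + (n - 1) / population))

print(fpc_adjust(1068, 5000))     # a population of 5,000 needs noticeably fewer respondents
print(fpc_adjust(1068, 1000000))  # a population of 1,000,000 needs essentially all of them
```

The correction only matters when the sample is a sizable fraction of the population, which is why it is safely ignored for large (>100,000) populations.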

SAMPLE SIZE CALCULATION CHECK LIST

As a summary, to determine the sample size needed in a survey, we need to answer the following questions:

What type of data analysis will be conducted? Will subgroups be compared?

What is the probability of the event occurring? If no previous data exists, use 50% for a conservative sample size estimate.

How much error is tolerable (margin of error)? How much precision do we need?

How confident do we need to be that the true population value falls within the confidence interval?

What is the research budget? Can we afford the desired sample?

What is the population size? Large? Small/finite? If unknown, assume it to be large (>100,000).

So the answer to the question “What is the right sample size for a survey?” is: It depends. I hope I gave you some guidance in choosing sample size, but the final decision is up to you. To calculate sample size and margin of error, use our Sample Size and Margin of Error Calculators.
