Sampling is that part of statistical practice concerned with the selection of individual observations intended to yield some knowledge about a population of concern, especially for the purposes of statistical inference.
Each observation measures one or more properties (weight, location, etc.) of observable entities distinguished as independent objects or individuals. Survey weights often need to be applied to the data to adjust for the sample design. Results from probability theory and statistical theory are employed to guide practice.

Population definition

Successful statistical practice is based on focused problem definition. Typically, we seek to take action on some population, for example when a batch of material from production must be released to the customer or sentenced for scrap or rework.

Alternatively, we seek knowledge about the cause system of which the population is an outcome, for example when a researcher performs an experiment on rats with the intention of gaining insights into biochemistry that can be applied for the benefit of humans. In the latter case, the population of concern can be difficult to specify, as it is when measuring some physical characteristic such as the electrical conductivity of copper.

However, in all cases, time spent making the population of concern precise is usually well spent, because it raises many issues, ambiguities, and questions that would otherwise have been overlooked.

Sampling frame

In the most straightforward case, such as the sentencing of a batch of material from production (acceptance sampling by lots), it is possible to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not possible. There is no way to identify all rats in the set of all rats. There is no way to identify every voter at a forthcoming election (in advance of the election).

These imprecise populations are not amenable to any of the sampling methods below, to which we could apply statistical theory.

As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. For example, in an opinion poll, possible sampling frames include:

The electoral register.

The telephone directory.

Shoppers in Anytown High Street on the Monday afternoon before the election.

The sampling frame must be representative of the population and this is a question outside the scope of statistical theory demanding the judgment of experts in the particular subject matter being studied. All the above frames omit some people who will vote at the next election and contain some people who will not. People not in the frame have no prospect of being sampled. Statistical theory tells us about the uncertainties in extrapolating from a sample to the frame. In extrapolating from frame to population, its role is motivational and suggestive.

There is, however, a strong but largely unnoticed division of views about the acceptability of representative sampling across different domains of study. To the philosopher, the representative sampling procedure has no justification whatsoever, because it is not how truth is pursued in philosophy. "To the scientist, however, representative sampling is the only justified procedure for choosing individual objects for use as the basis of generalization, and is therefore usually the only acceptable basis for ascertaining truth." (Andrew A. Marino) [1]. It is important to understand this difference to steer clear of confusing prescriptions found in many web pages.

In defining the frame, practical, economic, ethical, and technical issues need to be addressed. The need to obtain timely results may prevent extending the frame far into the future.

The difficulties can be extreme when the population and frame are disjoint. This is a particular problem in forecasting, where inferences about the future are made from historical data. In fact, in 1703, when Jacob Bernoulli proposed to Gottfried Leibniz the possibility of using historical mortality data to predict the probability of early death of a living man, Leibniz recognized the problem in his reply:

"Nature has established patterns originating in the return of events but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary."

Having established the frame, there are a number of ways for organizing it to improve efficiency and effectiveness.

It is at this stage that the researcher should decide whether the sample is in fact to be the whole population and would therefore be a census.

Sampling method

Within any of the types of frame identified above, a variety of sampling methods can be employed, individually or in combination.
Sampling methods fall into two categories:

1. Probability sampling

2. Nonprobability sampling

Simple random sampling

In a simple random sample of a given size, all subsets of the frame of that size are given an equal probability of selection. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. However, the method is vulnerable to sampling error: because selection is random, a particular sample may by chance fail to reflect the makeup of the population.
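As an illustration, a simple random sample can be drawn with a pseudo-random number generator; the frame, function name, and sample size below are hypothetical:

```python
import random

def simple_random_sample(frame, n, seed=None):
    # Every size-n subset of the frame is equally likely to be drawn,
    # so every element has the same probability of selection.
    rng = random.Random(seed)
    return rng.sample(frame, n)

frame = [f"unit_{i}" for i in range(100)]  # hypothetical frame of 100 units
sample = simple_random_sample(frame, 10, seed=42)
print(len(sample))  # 10 distinct units
```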

Systematic sampling

Selecting (say) every 10th name from the telephone directory is called an every 10th sample, which is an example of systematic sampling. As long as the starting point is chosen at random, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, but it is especially vulnerable to periodicities in the list: if periodicity is present and the period is a multiple of 10, then bias will result. It is important that the first name chosen is not simply the first in the list, but is chosen to be (say) the 7th, where 7 is a random integer in the range 1 to 10. Every 10th sampling is especially useful for efficient sampling from databases.
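A minimal sketch of every-10th sampling with a randomized starting point (the list of names and the function name are hypothetical):

```python
import random

def systematic_sample(frame, step, seed=None):
    rng = random.Random(seed)
    start = rng.randrange(step)  # random start, so the first name chosen
                                 # is not simply the first in the list
    return frame[start::step]    # then every step-th element after it

names = [f"name_{i}" for i in range(100)]  # hypothetical ordered frame
sample = systematic_sample(names, 10, seed=7)
print(len(sample))  # 10
```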

Stratified sampling

Where the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." A sample is then selected from each "stratum" separately, producing a stratified sample. The two main reasons for using a stratified sampling design are (a) to ensure that particular groups within a population are adequately represented in the sample, and (b) to improve efficiency by gaining greater control over the composition of the sample. In the second case, major gains in efficiency (either lower sample sizes or higher precision) can be achieved by varying the sampling fraction from stratum to stratum. The sample size is usually proportional to the relative size of the strata; however, if variances differ significantly across strata, sample sizes should be made proportional to the stratum standard deviation. Disproportionate stratification of this kind can provide better precision than proportionate stratification. Typically, strata should be chosen so that every element of the frame belongs to exactly one stratum, and so that units within each stratum are as similar as possible while the strata differ from one another.
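The two allocation rules mentioned above can be sketched as follows; the stratum sizes and standard deviations are illustrative, and rounding is simplified (a real design would reconcile rounding so the allocations sum exactly to n):

```python
def proportional_allocation(stratum_sizes, n):
    # Sample size in each stratum proportional to the stratum's share of the frame.
    total = sum(stratum_sizes.values())
    return {h: round(n * size / total) for h, size in stratum_sizes.items()}

def neyman_allocation(stratum_sizes, stratum_sds, n):
    # Sampling fraction varied by stratum: allocation proportional to
    # stratum size times stratum standard deviation.
    products = {h: stratum_sizes[h] * stratum_sds[h] for h in stratum_sizes}
    total = sum(products.values())
    return {h: round(n * p / total) for h, p in products.items()}

sizes = {"urban": 8000, "rural": 2000}      # hypothetical strata
sds = {"urban": 1.0, "rural": 3.0}          # rural stratum is more variable
print(proportional_allocation(sizes, 100))  # {'urban': 80, 'rural': 20}
print(neyman_allocation(sizes, sds, 100))   # {'urban': 57, 'rural': 43}
```

Note how the more variable rural stratum receives a larger share under the second rule, which is the source of the efficiency gain.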

Cluster sampling

Sometimes it is cheaper to 'cluster' the sample in some way e.g. by selecting respondents from certain areas only, or certain time-periods only. (Nearly all samples are in some sense 'clustered' in time - although this is rarely taken into account in the analysis.)

This can reduce travel and other administrative costs. It also means that one does not need a sampling frame for the entire population, but only for the selected clusters.
Cluster sampling generally increases the variability of sample estimates above that of simple random sampling, depending on how much the clusters differ from one another compared with the within-cluster variation.
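A one-stage cluster sample can be sketched as follows: whole clusters (here, hypothetical areas) are selected at random, and every unit within a selected cluster is taken, so no frame is needed for the unselected clusters:

```python
import random

def cluster_sample(frame_by_cluster, n_clusters, seed=None):
    # Select whole clusters at random, then take every unit inside them.
    rng = random.Random(seed)
    chosen = rng.sample(sorted(frame_by_cluster), n_clusters)
    return [unit for c in chosen for unit in frame_by_cluster[c]]

# Hypothetical frame: 10 areas of 20 residents each.
areas = {f"area_{a}": [f"resident_{a}_{i}" for i in range(20)] for a in range(10)}
sample = cluster_sample(areas, 3, seed=1)
print(len(sample))  # 60 (3 clusters x 20 residents)
```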

Matched random sampling

A method of assigning participants to groups in which pairs of participants are first matched on some characteristic and then individually assigned randomly to groups. (Brown, Cozby, Kee, & Worden, 1999, p.371).

Matched random sampling arises in two main contexts:

a) Two samples in which the members are clearly paired, or are matched explicitly by the researcher, for example participants matched on IQ, or pairs of identical twins.

b) Those samples in which the same attribute, or variable, is measured twice on each subject, under different circumstances. Commonly called repeated measures. Examples include the times of a group of athletes for 1500m before and after a week of special training; the milk yields of cows before and after being fed a particular diet.
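The matching-then-randomizing procedure for context (a) can be sketched as follows; the participant names and IQ scores are hypothetical:

```python
import random

def matched_random_assignment(participants, scores, seed=None):
    # Rank by the matching characteristic, pair adjacent participants,
    # then randomly assign one member of each pair to each group.
    rng = random.Random(seed)
    ranked = sorted(participants, key=lambda p: scores[p])
    group_a, group_b = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

people = [f"p{i}" for i in range(8)]
iq = {p: 90 + 3 * i for i, p in enumerate(people)}  # hypothetical IQ scores
group_a, group_b = matched_random_assignment(people, iq, seed=0)
print(len(group_a), len(group_b))  # 4 4
```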

Quota sampling

In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgment is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60.

It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random: for example, interviewers might be tempted to interview those who look most helpful. The problem is that such samples may be biased because not everyone gets a chance of selection. This non-random element is the technique's greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.

Mechanical sampling

Mechanical sampling is typically used for sampling solids, liquids, and gases, using physical sampling devices. Care is needed in ensuring that the sample is representative of the frame. Much work in this area was developed by Pierre Gy.

Convenience sampling

Sometimes called grab or opportunity sampling, this is the method of choosing items arbitrarily and in an unstructured manner from the frame. Though almost impossible to treat rigorously, it is the method most commonly employed in many practical situations. In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample.

Line-intercept sampling

Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a “transect”, intersects the element.

Sample size

Where the frame and population are identical, statistical theory yields exact recommendations on sample size.[1] However, where it is not straightforward to define a frame representative of the population, it is more important to understand the cause system of which the population is an outcome and to ensure that all sources of variation are embraced in the frame. A large number of observations is of no value if major sources of variation are neglected in the study. Bartlett, Kotrlik, and Higgins (2001) published "Organizational Research: Determining Appropriate Sample Size in Survey Research" in the Information Technology, Learning, and Performance Journal, which provides an explanation of Cochran's (1977) formulas. A discussion and illustration of sample size formulas, including the formula for adjusting the sample size for smaller populations, is included. A table is provided that can be used to select the sample size for a research problem based on three alpha levels and a set error rate.
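Cochran's (1977) formulas referred to above can be sketched as follows for estimating a proportion; the confidence level, anticipated proportion, margin of error, and population size are illustrative:

```python
import math

def cochran_n(z, p, e):
    # Cochran's sample size for a proportion: n0 = z^2 * p * (1 - p) / e^2,
    # where z is the critical value for the chosen confidence level,
    # p the anticipated proportion, and e the desired margin of error.
    return math.ceil(z**2 * p * (1 - p) / e**2)

def adjusted_n(n0, population_size):
    # Adjustment for smaller populations: n = n0 / (1 + (n0 - 1) / N).
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))

n0 = cochran_n(z=1.96, p=0.5, e=0.05)  # 95% confidence, 5% margin of error
print(n0)                              # 385
print(adjusted_n(n0, 1000))            # 279 for a population of 1000
```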

Types of data

Categorical and numerical

There are two types of random variables: categorical and numerical. Categorical random variables yield responses such as 'yes' or 'no'. Categorical variables can yield more than two possible responses. For example: 'Which day of the week are you most likely to wash clothes?' Numerical random variables yield numerical responses, such as your height in centimeters.

There are two types of numerical variables: discrete and continuous. Discrete random variables produce numerical responses from a counting process. An example is 'how many times do you visit the cash machine in a typical month?' Continuous random variables produce responses from a measuring process. Height is an example of a continuous variable because the response takes on a value from an interval. Precision of the measurement instrument(s) may lead to tied observations. A tied observation occurs when the measuring device is not sensitive or sophisticated enough to detect incremental differences in the experimental or survey data.

Generally, a continuous random variable requires fewer samples than a discrete random variable; this can be justified by reference to the central limit theorem.

Sampling and data collection

Good data collection involves:

Following the defined sampling process

Keeping the data in time order

Noting comments and other contextual events

Recording non-responses

Most sampling books and papers written by non-statisticians focus only on the data collection aspect, which is just a small part of the sampling process.

Review of sampling process

After sampling, a review should be held of the exact process followed in sampling, rather than that intended, in order to study any effects that any divergences might have on subsequent analysis. A particular problem is that of non-responses.

Non-response

In survey sampling, many of the individuals identified as part of the sample may be unwilling to participate or impossible to contact. In this case, there is a risk of differences, between (say) the willing and unwilling, leading to selection bias in conclusions. This is often addressed by follow-up studies which make a repeated attempt to contact the unresponsive and to characterize their similarities and differences with the rest of the frame. The effects can also be mitigated by weighting the data when population benchmarks are available. Nonresponse is particularly a problem in internet sampling. One of the main reasons for this may be that people hold multiple e-mail addresses, which they no longer use or do not check regularly.

Survey weights

In many situations the sample fraction may be varied by stratum and data will have to be weighted to correctly represent the population. Thus for example, a simple random sample of individuals in the United Kingdom might include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. The rural sample could be under-represented in the sample, but weighted up appropriately in the analysis to compensate.

More generally, data should usually be weighted if the sample design does not give each individual an equal chance of being selected. For instance, when households have equal selection probabilities but one person is interviewed from within each household, this gives people from large households a smaller chance of being interviewed. This can be accounted for using survey weights. Similarly, households with more than one telephone line have a greater chance of being selected in a random digit dialing sample, and weights can adjust for this.
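A weighted estimate of this kind can be sketched as follows; the household incomes and household sizes are hypothetical:

```python
def weighted_mean(values, weights):
    # Each respondent is weighted by the inverse of their selection
    # probability, so under-sampled groups are weighted up.
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# One adult interviewed per household: a respondent from a 3-adult
# household had a 1/3 chance of selection, so their weight is 3.
incomes = [30000, 45000, 52000, 28000]
weights = [1, 2, 3, 2]  # number of adults in each respondent's household
print(weighted_mean(incomes, weights))  # 41500.0
```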

Weights can also serve other purposes, such as helping to correct for non-response.

History

Random sampling by using lots is an old idea, mentioned several times in the Bible. In 1786 Pierre Simon Laplace estimated the population of France by using a sample, along with a ratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modern confidence intervals but as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates used Bayes' theorem with a uniform prior probability and assumed that his sample was random. The theory of small-sample statistics developed by William Sealy Gosset put the subject on a more rigorous basis in the 20th century. However, the importance of random sampling was not universally appreciated, and in the USA the 1936 Literary Digest prediction of a Republican win in the presidential election went badly awry due to severe bias. A sample size of one million was obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans and the resulting sample, though very large, was deeply flawed.