In this Concept, you will learn how to calculate the range, interquartile range, and the standard deviation for a population and a sample. You will learn to distinguish between the variance and the standard deviation, as well as calculate and apply Chebyshev’s Theorem to any set of data.

Guidance

In the previous Concepts, we studied measures of central tendency. Another important feature that can help us understand more about a data set is the manner in which the data are distributed, or spread. Variation and dispersion are words that are also commonly used to describe this feature. There are several commonly used statistical measures of spread that we will investigate in this lesson.

Range

One measure of spread is the range. The range is simply the difference between the largest value (maximum) and the smallest value (minimum) in the data.

Example A

Return to the data set used in the previous lesson, which is shown below:

75, 80, 90, 94, 96

The range of this data set is 96 - 75 = 21. This tells us the distance between the maximum and minimum values in the data set.

The range is useful because it requires very little calculation, and therefore, gives a quick and easy snapshot of how the data are spread. However, it is limited, because it only involves two values in the data set, and it is not resistant to outliers.
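The calculation is short enough to sketch in a line of Python, using the Example A data:

```python
# Range of the Example A data set: maximum minus minimum.
data = [75, 80, 90, 94, 96]
data_range = max(data) - min(data)
print(data_range)  # 21
```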

Interquartile Range

The interquartile range is the difference between the third quartile (Q3) and the first quartile (Q1), and it is abbreviated IQR. Thus, IQR = Q3 - Q1. The IQR gives information about how the middle 50% of the data are spread. Fifty percent of the data values always lie between Q1 and Q3.

In this example, the range tells us that there is a difference of 44 inches of rainfall between the wettest and driest years in Mobile. The IQR shows that there is a difference of 22 inches of rainfall, even in the middle 50% of the data. It appears that Mobile experiences wide fluctuations in yearly rainfall totals, which might be explained by its position near the Gulf of Mexico and its exposure to tropical storms and hurricanes.
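Since the Mobile rainfall data are not reproduced here, the following sketch computes quartiles and the IQR for the Example A data instead, using the median-of-halves convention. Keep in mind that textbooks and software packages use several different quartile conventions, so other tools may report slightly different values.

```python
def median(values):
    """Middle value of a sorted list, or the mean of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

def quartiles(values):
    """Return (Q1, Q3) using the median-of-halves convention:
    Q1 is the median of the lower half, Q3 the median of the upper half,
    excluding the overall median itself when n is odd."""
    s = sorted(values)
    n = len(s)
    half = n // 2
    lower = s[:half]
    upper = s[half + n % 2:]
    return median(lower), median(upper)

data = [75, 80, 90, 94, 96]
q1, q3 = quartiles(data)
iqr = q3 - q1
print(q1, q3, iqr)  # 77.5 95 17.5
```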

Standard Deviation

The standard deviation is an extremely important measure of spread that is based on the mean. Recall that the mean is the numerical balancing point of the data. One way to measure how the data are spread is to look at how far away each of the values is from the mean. The difference between a data value and the mean is called the deviation. Written symbolically, the deviation of a data value x from the mean x̄ is:

deviation = x - x̄

Let’s take the simple data set of three randomly selected individuals’ shoe sizes shown below:

9.5, 11.5, 12

The mean of this data set is 11. The deviations are as follows:

Table of Deviations

Observed Data    Deviation
9.5              9.5 - 11 = -1.5
11.5             11.5 - 11 = 0.5
12               12 - 11 = 1

Notice that if a data value is less than the mean, the deviation of that value is negative. Points that are above the mean have positive deviations.

The standard deviation is a measure of the typical, or average, deviation for all of the data points from the mean. However, the very property that makes the mean so special also makes it tricky to calculate a standard deviation. Because the mean is the balancing point of the data, when you add the deviations, they always sum to 0.

Table of Deviations, Including the Sum

Observed Data        Deviation
9.5                  -1.5
11.5                 0.5
12                   1
Sum of deviations    0

Therefore, we need all the deviations to be positive before we add them up. One way to do this would be to take their absolute values; that is the technique used for a similar measure called the mean absolute deviation. For the standard deviation, though, we square all the deviations. The square of any real number is never negative.

Observed Data    Deviation    Squared Deviation
9.5              -1.5         2.25
11.5             0.5          0.25
12               1            1
                 Sum          3.5

We want to find the average of the squared deviations. Usually, to find an average, you divide by the number of terms in your sum. In finding the standard deviation, however, we divide by n - 1. In this example, since n = 3, we divide by 2. The result, which is called the variance, is 3.5 / 2 = 1.75. The variance of a sample is denoted by s² and is a measure of how closely the data are clustered around the mean. Because we squared the deviations before we added them, the units we were working in were also squared. To return to the original units, we must take the square root of our result: √1.75 ≈ 1.32. This quantity is the sample standard deviation and is denoted by s. The number indicates that in our sample, the typical data value is approximately 1.32 units away from the mean. A small standard deviation means that the data points are clustered close to the mean, while a large standard deviation means that they are spread out from the mean.
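The steps above can be sketched in Python for the shoe-size data, mirroring the hand calculation:

```python
import math

# Step-by-step sample standard deviation for the shoe-size data.
data = [9.5, 11.5, 12]
mean = sum(data) / len(data)                 # 11.0
deviations = [x - mean for x in data]        # [-1.5, 0.5, 1.0]
squared = [d ** 2 for d in deviations]       # [2.25, 0.25, 1.0]
variance = sum(squared) / (len(data) - 1)    # divide by n - 1, giving 1.75
std_dev = math.sqrt(variance)                # back to original units
print(round(variance, 2), round(std_dev, 2))  # 1.75 1.32
```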

Example C

The following are scores for two different students on two quizzes:

Student 1: 100, 0

Student 2: 50, 50

Note that the mean score for each of these students is 50.

Student 1: Deviations: 100 - 50 = 50 and 0 - 50 = -50

Squared deviations: 2500 and 2500

Variance = (2500 + 2500) / (2 - 1) = 5000

Standard Deviation = √5000 ≈ 70.7

Student 2: Deviations: 0 and 0

Squared Deviations: 0 and 0

Variance = 0

Standard Deviation = 0

Student 2 has scores that are tightly clustered around the mean. In fact, the standard deviation of zero indicates that there is no variability; the student is absolutely consistent.

So, while the average of each of these students is the same (50), one of them is consistent in the work he/she does, and the other is not. This raises questions: Why did student 1 get a zero on the second quiz when he/she had a perfect paper on the first quiz? Was the student sick? Did the student forget about the quiz and not study? Or was the second quiz indicative of the work the student can do, and was the first quiz the one that was questionable? Did the student cheat on the first quiz?
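As a quick check of Example C, Python's statistics module computes the same two sample standard deviations, taking Student 1's scores as 100 and 0 and Student 2's as 50 and 50 (both means are 50, consistent with the discussion above):

```python
import statistics

# Sample standard deviations for the two students' quiz scores.
student1 = [100, 0]
student2 = [50, 50]
print(statistics.stdev(student1))  # 70.71... : scores vary wildly
print(statistics.stdev(student2))  # 0.0      : perfectly consistent
```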

There is one more question that we haven't answered regarding standard deviation, and that is, "Why n - 1?" Dividing by n - 1 is only necessary for the calculation of the standard deviation of a sample. When you are calculating the standard deviation of a population, you divide by N, the number of data points in your population. When you have a sample, you are not getting data for the entire population, and there is bound to be random variation due to sampling (remember that this is called sampling error).

When we claim to have the standard deviation, we are making the following statement:

“The typical distance of a point from the mean is ...”

But we might be off by a little from using a sample, so it would be better to overestimate s to represent the standard deviation; dividing by n - 1 instead of n makes s slightly larger, which accounts for this.

Formulas

Sample Standard Deviation:

s = √( Σ (xᵢ - x̄)² / (n - 1) )

where:

xᵢ is the i-th data value.

x̄ is the mean of the sample.

n is the sample size.

Variance of a sample:

s² = Σ (xᵢ - x̄)² / (n - 1)

where:

xᵢ is the i-th data value.

x̄ is the mean of the sample.

n is the sample size.
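These formulas translate directly into code. The sketch below implements them by hand and checks the result against Python's standard library, using the shoe-size data from earlier:

```python
import math
import statistics

def sample_variance(data):
    """Sum of squared deviations from the mean, divided by n - 1."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data) / (len(data) - 1)

def sample_std_dev(data):
    """Square root of the sample variance."""
    return math.sqrt(sample_variance(data))

data = [9.5, 11.5, 12]
print(sample_variance(data))   # 1.75
print(sample_std_dev(data))    # ~1.32
# The standard library's sample versions agree:
print(statistics.variance(data), statistics.stdev(data))
```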

Chebyshev’s Theorem

Pafnuty Chebyshev was a 19th-century Russian mathematician. The theorem named for him gives us information about how many elements of a data set are within a certain number of standard deviations of the mean.

The formal statement of Chebyshev’s Theorem is as follows:

The proportion of data points that lie within k standard deviations of the mean is at least:

1 - 1/k²

Example D

Given a group of data with mean 60 and standard deviation 15, at least what percent of the data will fall between 15 and 105?

15 is 3 standard deviations below the mean of 60, and 105 is 3 standard deviations above the mean of 60. Chebyshev’s Theorem tells us that at least 1 - 1/3² = 8/9 ≈ 88.9% of the data will fall between 15 and 105.
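The bound is a one-line function, shown here as a sketch applied to Example D:

```python
def chebyshev_lower_bound(k):
    """Minimum proportion of data within k standard deviations of the mean."""
    return 1 - 1 / k ** 2

# Example D: 15 and 105 are each 3 standard deviations from the mean of 60.
print(chebyshev_lower_bound(3))  # 0.888... -> at least about 88.9%
```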

On the Web

The following links discuss various issues related to measures of spread, including 1) why the population standard deviation is calculated by dividing by the population size N, while the standard deviation of a sample is calculated by dividing by n - 1; and 2) why the standard deviation is calculated by summing the squares of the differences between the data points and the mean, averaging these squared differences, and taking the square root of the average, rather than simply averaging the nonsquared absolute differences.

Vocabulary

When examining a set of data, we use descriptive statistics to provide information about how the data are spread out.

The range is a measure of the difference between the smallest and largest numbers in a data set.

The interquartile range is the difference between the upper and lower quartiles.

A more informative measure of spread is based on the mean. We can look at how individual points vary from the mean by subtracting the mean from the data value. This is called the deviation. The standard deviation is a measure of the average deviation for the entire data set. Because the deviations always sum to zero, we find the standard deviation by adding the squared deviations. When we have the entire population, the sum of the squared deviations is divided by the population size. This value is called the variance. Taking the square root of the variance gives the standard deviation. For a population, the standard deviation is denoted by σ. Because a sample is prone to random variation (sampling error), we adjust the sample standard deviation to make it a little larger by dividing the sum of the squared deviations by one less than the number of observations. The result of that division is the sample variance, and the square root of the sample variance is the sample standard deviation, usually notated as s.

Chebyshev’s Theorem gives us information about the minimum percentage of data that falls within a certain number of standard deviations of the mean, and it applies to any population or sample, regardless of how that data set is distributed.

Guided Practice

Return to the rainfall data from Mobile. The mean yearly rainfall amount is 69.3 inches, and the sample standard deviation is about 14.4 inches. Use this information to answer the following questions:

a) What percentage of the data is within two standard deviations of the mean?

b) Is this result significant?

c) What is the main advantage of Chebyshev's Theorem?

Solutions:

a) Chebyshev’s Theorem tells us about the proportion of data within k standard deviations of the mean. If we replace k with 2, the result is:

1 - 1/2² = 1 - 1/4 = 3/4

So the theorem predicts that at least 75% of the data is within 2 standard deviations of the mean.

b) Chebyshev’s Theorem states that at least 75% of the data is between 69.3 - 2(14.4) = 40.5 and 69.3 + 2(14.4) = 98.1. This doesn’t seem too significant in this example, because all of the data falls within that range.

c) The advantage of Chebyshev’s Theorem is that it applies to any sample or population, no matter how it is distributed.
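The interval in part b) can be verified with a short sketch using the stated mean and sample standard deviation:

```python
# Guided Practice check: the interval within two standard deviations
# of the mean for the Mobile rainfall data (mean 69.3, s = 14.4).
mean, s, k = 69.3, 14.4, 2
low, high = mean - k * s, mean + k * s
print(round(low, 1), round(high, 1))  # 40.5 98.1
print(1 - 1 / k ** 2)                 # 0.75 -> Chebyshev guarantees at least 75%
```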