The variance is also equivalent to the second cumulant of the probability distribution for $X$. The variance is typically designated as $\operatorname{Var}(X)$, $\sigma_X^2$, or simply $\sigma^2$ (pronounced "sigma squared"). The expression for the variance can be expanded:
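$$\operatorname{Var}(X) = \operatorname{E}\!\left[(X - \operatorname{E}[X])^2\right] = \operatorname{E}\!\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + \operatorname{E}[X]^2 = \operatorname{E}\!\left[X^2\right] - \operatorname{E}[X]^2.$$

In other words, the variance of $X$ is equal to the mean of the square of $X$ minus the square of the mean of $X$.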

If a continuous distribution does not have an expected value, as is the case for the Cauchy distribution, it does not have a variance either. Many other distributions for which the expected value does exist also do not have a finite variance because the integral in the variance definition diverges. An example is the Pareto distribution whose index $k$ satisfies $1 < k \leq 2$.
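To see why the integral can diverge, consider a Pareto distribution with minimum value $x_m$ and index $k$, whose density is $f(x) = k x_m^k / x^{k+1}$ for $x \geq x_m$. Its second moment is

$$\operatorname{E}\!\left[X^2\right] = \int_{x_m}^{\infty} x^2 \, \frac{k x_m^k}{x^{k+1}} \, dx = k x_m^k \int_{x_m}^{\infty} x^{1-k} \, dx,$$

which diverges whenever $k \leq 2$, even though the mean $k x_m / (k-1)$ is finite for all $k > 1$.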

The binomial distribution with $p = 0.5$ describes the probability of getting $k$ heads in $n$ tosses. Thus the expected value of the number of heads is $\frac{n}{2}$, and the variance is $\frac{n}{4}$.
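These values follow from the general binomial formulas $\operatorname{E}[X] = np$ and $\operatorname{Var}(X) = np(1-p)$:

$$\operatorname{E}[X] = n \cdot \tfrac{1}{2} = \tfrac{n}{2}, \qquad \operatorname{Var}(X) = n \cdot \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{n}{4}.$$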

If $\operatorname{Cov}(X, Y) = 0$, the random variables are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables $X_1, \dots, X_n$ are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
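$$\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \operatorname{Var}(X_i).$$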

Since independent random variables are always uncorrelated, the equation above holds in particular when the random variables $X_1, \dots, X_n$ are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.

This statement is called the Bienaymé formula[2] and was discovered in 1853. It is often made with the stronger condition that the variables are independent, but uncorrelatedness suffices. So if all the variables have the same variance $\sigma^2$, then, since division by $n$ is a linear transformation, this formula immediately implies that the variance of their mean is
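$$\operatorname{Var}\!\left(\bar{X}\right) = \operatorname{Var}\!\left(\frac{1}{n} \sum_{i=1}^{n} X_i\right) = \frac{1}{n^2} \sum_{i=1}^{n} \operatorname{Var}(X_i) = \frac{\sigma^2}{n}.$$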

That is, the variance of the mean decreases as $n$ increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
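More generally, for possibly correlated variables the variance of a sum expands into the sum of all pairwise covariances:

$$\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \operatorname{Cov}(X_i, X_j) = \sum_{i=1}^{n} \operatorname{Var}(X_i) + \sum_{i \neq j} \operatorname{Cov}(X_i, X_j).$$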

(Note: The second equality comes from the fact that $\operatorname{Cov}(X_i, X_i) = \operatorname{Var}(X_i)$.)

Here Cov is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. This formula is used in the theory of Cronbach's alpha in classical test theory.

So if the variables have equal variance $\sigma^2$ and the average correlation of distinct variables is $\rho$, then the variance of their mean is
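$$\operatorname{Var}\!\left(\bar{X}\right) = \frac{\sigma^2}{n} + \frac{n-1}{n} \, \rho \, \sigma^2.$$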

This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to $\rho$ if $n$ goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
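$$\lim_{n \to \infty} \operatorname{Var}\!\left(\bar{X}\right) = \rho.$$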

Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.

This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
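Concretely, since $\operatorname{Var}(aX) = a^2 \operatorname{Var}(X)$, taking weights $2$ and $1$ for uncorrelated $X$ and $Y$ gives

$$\operatorname{Var}(2X + Y) = 4\operatorname{Var}(X) + \operatorname{Var}(Y).$$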

The expression above can be extended to a weighted sum of multiple variables:
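$$\operatorname{Var}\!\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \operatorname{Cov}(X_i, X_j) = \sum_{i=1}^{n} a_i^2 \operatorname{Var}(X_i) + \sum_{i \neq j} a_i a_j \operatorname{Cov}(X_i, X_j).$$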

The general formula for variance decomposition or the law of total variance is: if $X$ and $Y$ are two random variables, and the variance of $X$ exists, then
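$$\operatorname{Var}(X) = \operatorname{E}\!\left[\operatorname{Var}(X \mid Y)\right] + \operatorname{Var}\!\left(\operatorname{E}[X \mid Y]\right).$$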

Here, $\operatorname{E}(X \mid Y)$ is the conditional expectation of $X$ given $Y$, and $\operatorname{Var}(X \mid Y)$ is the conditional variance of $X$ given $Y$. (A more intuitive explanation is that, given a particular value of $Y$, $X$ follows a distribution with mean $\operatorname{E}(X \mid Y)$ and variance $\operatorname{Var}(X \mid Y)$. The formula above tells how to find $\operatorname{Var}(X)$ based on the distributions of these two quantities when $Y$ is allowed to vary.) This formula is often applied in analysis of variance, where the corresponding formula partitions the total variability into between-group and within-group components.

Equivalently, the variance is the mean of the square minus the square of the mean, $\operatorname{Var}(X) = \operatorname{E}\!\left[X^2\right] - \operatorname{E}[X]^2$. This will be useful when it is possible to derive formulae for the expected value and for the expected value of the square.

This formula is also sometimes used in connection with the sample variance. While useful for hand calculations, it is not advised for computer calculations as it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude and floating point arithmetic is used. This is discussed in the article Algorithms for calculating variance.
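For example, a minimal Python sketch of the two approaches, contrasting the naive $\operatorname{E}[X^2] - \operatorname{E}[X]^2$ computation with a numerically stable two-pass computation; when the data carry a large constant offset, the two terms of the naive formula nearly cancel and the result becomes unreliable:

```python
import random

def naive_variance(xs):
    # E[X^2] - E[X]^2: the two terms can be huge and nearly equal,
    # so their difference loses most significant digits.
    n = len(xs)
    mean_of_squares = sum(x * x for x in xs) / n
    mean = sum(xs) / n
    return mean_of_squares - mean * mean

def two_pass_variance(xs):
    # Average of squared deviations about the mean: numerically stable.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

random.seed(0)
# Uniform(0, 1) noise (true variance 1/12) on top of a large offset.
data = [1e9 + random.random() for _ in range(10_000)]
print(naive_variance(data))     # unreliable: can even come out negative
print(two_pass_variance(data))  # close to 1/12, about 0.0833
```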

The second moment of a random variable attains the minimum value when taken around the first moment (i.e., the mean) of the random variable, i.e. $\operatorname{argmin}_m \operatorname{E}\!\left((X - m)^2\right) = \operatorname{E}(X)$. Conversely, if a continuous function $\varphi$ satisfies $\operatorname{argmin}_m \operatorname{E}(\varphi(X - m)) = \operatorname{E}(X)$ for all random variables $X$, then it is necessarily of the form $\varphi(x) = a x^2 + b$, where $a > 0$. This also holds in the multidimensional case.[5]
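The first claim follows by expanding the second moment about the mean:

$$\operatorname{E}\!\left[(X - m)^2\right] = \operatorname{Var}(X) + \left(\operatorname{E}[X] - m\right)^2,$$

which is minimized exactly at $m = \operatorname{E}[X]$, where it equals the variance.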

Define $X$ as a column vector of $n$ random variables $X_1, \dots, X_n$, and $c$ as a column vector of $n$ scalars $c_1, \dots, c_n$. Then $c^T X$ is a linear combination of these random variables, where $c^T$ denotes the transpose of the vector $c$. Let $\Sigma$ be the variance-covariance matrix of the vector $X$. The variance of $c^T X$ is given by:[6]
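$$\operatorname{Var}\!\left(c^T X\right) = c^T \Sigma \, c.$$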

Unlike expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in square meters. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is $\sqrt{2.9} \approx 1.7$, slightly larger than the expected absolute deviation of $1.5$.
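For a fair six-sided die these values work out as

$$\operatorname{E}[X] = 3.5, \qquad \operatorname{Var}(X) = \operatorname{E}\!\left[X^2\right] - \operatorname{E}[X]^2 = \frac{91}{6} - 3.5^2 = \frac{35}{12} \approx 2.9,$$

$$\operatorname{E}\,\lvert X - 3.5 \rvert = \frac{2\,(2.5 + 1.5 + 0.5)}{6} = 1.5, \qquad \sqrt{35/12} \approx 1.71.$$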

The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however, the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.

Real-world distributions such as the distribution of yesterday's rain throughout the day are typically not fully known, unlike the behavior of perfect dice or an ideal distribution such as the normal distribution, because it is impractical to account for every raindrop. Instead one estimates the mean and variance of the whole distribution by using an estimator, a function of the sample of $n$ observations drawn suitably randomly from the whole sample space, in this example the set of all measurements of yesterday's rainfall in all available rain gauges. The simplest estimators for the population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance as the variance of the sample is close to optimal in general, but can be improved in two incompatible ways. The sample variance is computed as an average of squared deviations about the (sample) mean, most simply dividing by $n$. However, using values other than $n$ improves the estimator in various ways. Four common values for the denominator are $n$, $n-1$, $n+1$, and $n-1.5$: $n$ is the simplest (the population variance of the sample), $n-1$ eliminates bias, $n+1$ minimizes mean squared error for the normal distribution, and $n-1.5$ mostly eliminates bias in the unbiased estimation of the standard deviation for the normal distribution.
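A minimal Python sketch of these four estimators applied to one sample (the function name is illustrative):

```python
import random

def variance_estimates(xs):
    """Sample variance with the four common denominators."""
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)  # sum of squared deviations
    return {
        "n": ss / n,             # population variance of the sample (biased)
        "n-1": ss / (n - 1),     # Bessel's correction: unbiased for the variance
        "n+1": ss / (n + 1),     # minimizes mean squared error under normality
        "n-1.5": ss / (n - 1.5), # square root is nearly unbiased for sigma under normality
    }

random.seed(1)
sample = [random.gauss(0.0, 2.0) for _ in range(20)]  # true variance is 4
for name, estimate in variance_estimates(sample).items():
    print(f"denominator {name}: {estimate:.3f}")
```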

Firstly, if the mean is unknown (and is computed as the sample mean), then the sample variance is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting by this factor (dividing by n − 1 instead of n) is called Bessel's correction. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the true variance. If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (already known) mean.
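The size of this bias is given exactly by

$$\operatorname{E}\!\left[\frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2\right] = \frac{n-1}{n} \, \sigma^2,$$

so multiplying by $n/(n-1)$, equivalently dividing the sum of squared deviations by $n-1$, removes the bias.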

Secondly, the sample variance does not generally minimize mean squared error, and correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance) and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variance.

In general, the population variance of a finite population of size $N$ with values $x_i$ is

$$\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} \left(x_i - \mu\right)^2, \qquad \text{where} \quad \mu = \frac{1}{N} \sum_{i=1}^{N} x_i$$

is the population mean. The population variance therefore is the variance of the underlying probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.

In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[7] Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.

We take a sample with replacement of $n$ values $y_1, \dots, y_n$ from the population, where $n < N$, and estimate the variance on the basis of this sample.[8] Directly taking the variance of the sample gives the average of the squared deviations:
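$$\sigma_y^2 = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2.$$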

Since the $y_i$ are selected randomly, both $\bar{y}$ and $\sigma_y^2$ are random variables. Their expected values can be evaluated by summing over the ensemble of all possible samples $\{y_i\}$ from the population. For $\sigma_y^2$ this gives:
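$$\operatorname{E}\!\left[\sigma_y^2\right] = \frac{n-1}{n} \, \sigma^2.$$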

Hence $\sigma_y^2$ gives an estimate of the population variance that is biased by a factor of $(n-1)/n$. For this reason, $\sigma_y^2$ is referred to as the biased sample variance. Correcting for this bias yields the unbiased sample variance:
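$$s^2 = \frac{n}{n-1} \, \sigma_y^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2.$$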

Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.

Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[12] Values must lie within the limits $\bar{y} \pm \sigma_y (n-1)^{1/2}$.

If $X$ is a vector-valued random variable, with values in $\mathbb{R}^n$, and thought of as a column vector, then the natural generalization of variance is $\operatorname{E}\!\left((X - \mu)(X - \mu)^T\right)$, where $\mu = \operatorname{E}(X)$ and $X^T$ is the transpose of $X$, and so is a row vector. This variance is a positive semi-definite square matrix, commonly referred to as the covariance matrix.

If $X$ is a scalar complex-valued random variable, with values in $\mathbb{C}$, then its variance is $\operatorname{E}\!\left((X - \mu)\overline{(X - \mu)}\right) = \operatorname{E}\!\left(\lvert X - \mu \rvert^2\right)$, where $\overline{X}$ denotes the complex conjugate of $X$. This variance is a real, non-negative scalar.

Testing for the equality of two or more variances is difficult. The F-test and chi-squared tests are both adversely affected by non-normality and are not recommended for this purpose.

Several nonparametric tests have been proposed: these include the Barton-David-Ansari-Freund-Siegel-Tukey test, the Capon test, the Mood test, the Klotz test, and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon, and Barton-David-Ansari-Freund-Siegel-Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal.

The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations $\sigma_1$ and $\sigma_2$, it is found that the distribution, when both causes act together, has a standard deviation $\sqrt{\sigma_1^2 + \sigma_2^2}$. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...

This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like
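$$\Sigma = \begin{pmatrix} 10 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{pmatrix}$$

(the entries shown are illustrative: a large variance along the $x$ axis and small variances in the $y$ and $z$ directions).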