Pattern Classification (2nd ed.)

Richard O. Duda, Peter E. Hart and David G. Stork

September 3, 1997

NOT FOR GENERAL DISTRIBUTION; for use only by students of designated faculty. This is a pre-publication print of material to appear in Duda, Hart and Stork: Pattern Classification and Scene Analysis: Part I Pattern Classification, to be published in 1998 by John Wiley & Sons, Inc. This is a preliminary version and may contain errors; comments and suggestions are heartily encouraged.

Our goal here is to present the basic results and definitions from linear algebra, probability theory, information theory and computational complexity that serve as the mathematical foundations for the pattern recognition techniques discussed throughout this book. We will try to give intuition whenever appropriate, but we do not attempt to prove these results; systematic expositions can be found in the references.

A.1 Notation

Here are the terms and notation used throughout the book. In addition, there are numerous specialized variables and functions whose definitions and usage should be clear from the text.

variables, symbols and operations

≃  approximately equal to
≈  approximately equal to (in an expansion)
≡  equivalent to (or defined to be)
∝  proportional to
∞  infinity
x → a  x approaches a
t ← t + 1  in an algorithm: assign to variable t the new value t + 1
lim_{x→a} f(x)  the value of f(x) in the limit as x approaches a
arg max_x f(x)  the value of x that leads to the maximum value of f(x)
arg min_x f(x)  the value of x that leads to the minimum value of f(x)
ln(x)  logarithm base e, or natural logarithm of x
log(x)  logarithm base 10 of x
log2(x)  logarithm base 2 of x
exp[x] or e^x  exponential of x
∂f(x)/∂x  partial derivative of f with respect to x
∫_a^b f(x) dx  the integral of f(x) between a and b; if no limits are written, the full space is assumed
Q.E.D.  quod erat demonstrandum ("which was to be proved") — used to signal the end of a proof

sets
A, B, C, D, ...  "Calligraphic" font generally denotes sets or lists, e.g., data set D = {x1, ..., xn}
x ∈ D  x is an element of set D
x ∉ D  x is not an element of set D
A ∪ B  union of two sets, i.e., the set containing all elements of A and B
|D|  the cardinality of set D, i.e., the number of (possibly non-distinct) elements in it
max_x[D]  the x value in set D that is maximum

In other words, the ij th entry of Mt is the jith entry of M.

A square (d × d) matrix is called symmetric if its entries obey mij = mji; it is called skew-symmetric (or anti-symmetric) if mij = −mji. A general matrix is called non-negative if mij ≥ 0 for all i and j. A particularly important matrix is the identity matrix, I — a d × d (square) matrix whose diagonal entries are 1's and all other entries are 0. The Kronecker delta function or Kronecker symbol, defined as

\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise,} \end{cases}    (4)

can function as an identity matrix. A general diagonal matrix (i.e., one having 0 for all off-diagonal entries) is denoted diag(m11, m22, ..., mdd), the entries being the successive elements m11, m22, ..., mdd. Addition of vectors and of matrices is component by component.

Note that if M is not square, the dimensionality of y differs from that of x.

The inner product of two vectors having the same dimensionality will be denoted here as x^t y and yields a scalar:

x^t y = \sum_{i=1}^{d} x_i y_i = y^t x.    (7)

It is sometimes also called the scalar product or dot product and denoted x • y. The Euclidean norm or length of the vector is denoted

\|x\| = \sqrt{x^t x};    (8)

we call a vector "normalized" if ‖x‖ = 1. The angle between two d-dimensional vectors obeys

\cos\theta = \frac{x^t y}{\|x\|\,\|y\|},    (9)

and thus the inner product is a measure of the colinearity of two vectors — a natural indication of their similarity. In particular, if x^t y = 0, then the vectors are orthogonal, and if |x^t y| = ‖x‖ ‖y‖, the vectors are colinear. From Eq. 9, we have immediately the Cauchy-Schwarz inequality, which states

|x^t y| \le \|x\|\,\|y\|.    (10)

We say a set of vectors {x1, x2, ..., xn} is linearly independent if no vector in the set can be written as a linear combination of any of the others. Informally, a set of d linearly independent vectors spans a d-dimensional vector space, i.e., any vector in that space can be written as a linear combination of such spanning vectors.
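As an informal illustration of Eqs. 7–10, the following sketch (assuming Python with NumPy, which is not part of the text) computes the inner product, norms and angle between two vectors and checks the Cauchy-Schwarz inequality:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

inner = x @ y                      # x^t y  (Eq. 7)
norm_x = np.sqrt(x @ x)            # Euclidean norm ||x||  (Eq. 8)
norm_y = np.sqrt(y @ y)

cos_theta = inner / (norm_x * norm_y)          # Eq. 9
theta = np.degrees(np.arccos(cos_theta))

print(f"x.y = {inner}, ||x|| = {norm_x}, ||y|| = {norm_y}")
print(f"angle = {theta:.2f} degrees")
# Cauchy-Schwarz (Eq. 10): |x^t y| <= ||x|| ||y||
assert abs(inner) <= norm_x * norm_y + 1e-12
```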

A.2.2 Outer product

The outer product (sometimes called matrix product) of two column vectors yields a matrix M = x y^t, whose ijth entry is x_i y_j.

If this matrix is square, its determinant (Sect. A.2.4) is called simply the Jacobian. If the entries of M depend upon a scalar parameter θ, we can take the derivative of M component by component, to get another matrix:

\frac{\partial M}{\partial\theta} = \begin{pmatrix} \partial m_{11}/\partial\theta & \partial m_{12}/\partial\theta & \cdots & \partial m_{1d}/\partial\theta \\ \vdots & & \ddots & \vdots \\ \partial m_{d1}/\partial\theta & \partial m_{d2}/\partial\theta & \cdots & \partial m_{dd}/\partial\theta \end{pmatrix}.

A.2.4 Determinant and trace

The determinant of a d × d (square) matrix is a scalar, denoted |M|. If M is itself a scalar (i.e., a 1 × 1 matrix M), then |M| = M. If M is 2 × 2, then |M| = m11 m22 − m21 m12. The determinant of a general square matrix can be computed by a method called expansion by minors, and this leads to a recursive definition. If M is our d × d matrix, we define M_{i|j} to be the (d − 1) × (d − 1) matrix obtained by deleting the ith row and the jth column of M:

|M| = m_{11}|M_{1|1}| - m_{21}|M_{2|1}| + m_{31}|M_{3|1}| - \cdots \pm m_{d1}|M_{d|1}|,    (23)

where the signs alternate. This process can be applied recursively to the successive (smaller) matrices in Eq. 23. For a 3 × 3 matrix, this determinant calculation can be represented by "sweeping" the matrix — taking the sum of the products of matrix terms along a diagonal, where products from upper-left to lower-right are added with a positive sign, and those from lower-left to upper-right with a minus sign.

For two square matrices M and N, we have |MN| = |M| |N|, and furthermore |M| = |M^t|. The determinant of any matrix is a measure of the d-dimensional hypervolume it "subtends." For the particular case of a covariance matrix Σ (Sect. A.4.10), |Σ| is a measure of the hypervolume of the data that yielded Σ. The trace of a d × d (square) matrix, denoted tr[M], is the sum of its diagonal elements:

\mathrm{tr}[M] = \sum_{i=1}^{d} m_{ii}.    (25)

Both the determinant and trace of a matrix are invariant with respect to rotations of the coordinate system.
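A brief numerical check of these properties (a sketch assuming Python with NumPy; the matrices are arbitrary and not part of the text):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 3.0]])
N = np.array([[0.0, 1.0], [4.0, 2.0]])

# |M| = m11*m22 - m21*m12 for the 2 x 2 case
det_M = M[0, 0] * M[1, 1] - M[1, 0] * M[0, 1]
assert np.isclose(det_M, np.linalg.det(M))

# |MN| = |M| |N| and |M| = |M^t|
assert np.isclose(np.linalg.det(M @ N), np.linalg.det(M) * np.linalg.det(N))
assert np.isclose(np.linalg.det(M), np.linalg.det(M.T))

# Determinant and trace are invariant under a rotation of coordinates R^t M R
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M_rot = R.T @ M @ R
assert np.isclose(np.linalg.det(M_rot), np.linalg.det(M))
assert np.isclose(np.trace(M_rot), np.trace(M))
```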

A.2.5 Eigenvectors and eigenvalues

Given a d × d matrix M, a very important class of linear equations is of the form Mx = λx, where λ is a scalar; a non-zero vector x satisfying this equation is called an eigenvector of M, and λ the corresponding eigenvalue.

or equivalently the cofactor of the i, j entry of M. As defined in Eq. 22, M_{i|j} is the (d − 1) × (d − 1) matrix formed by deleting the ith row and jth column of M. The adjoint of M, written Adj[M], is the matrix whose i, j entry is the j, i cofactor of M. Given these definitions, we can write the inverse of a matrix as

A.3 Lagrange optimization

Suppose we seek the position x0 of an extremum of a scalar-valued function f(x), subject to some constraint. For the following method to work, such a constraint must be expressible in the form g(x) = 0. To find the extremum, we first form the Lagrangian function

L(x, \lambda) = f(x) + \lambda\, g(x),

where λ is a scalar called the Lagrange multiplier. We then set the derivative

\frac{\partial L(x, \lambda)}{\partial x} = \frac{\partial f(x)}{\partial x} + \lambda \frac{\partial g(x)}{\partial x} = 0

and solve the resulting equations for λ and x0 — the position of the extremum.
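As a concrete, hypothetical illustration (a sketch assuming Python with SymPy; the text itself prescribes no software and this particular f and g are made up), we find the extremum of f(x) = x1² + x2² subject to g(x) = x1 + x2 − 1 = 0 by solving ∂L/∂x = 0 together with the constraint:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

f = x1**2 + x2**2            # function to be extremized
g = x1 + x2 - 1              # constraint, expressed as g(x) = 0

L = f + lam * g              # Lagrangian function

# Set the partial derivatives with respect to x1, x2 and lam to zero
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
sol = sp.solve(eqs, (x1, x2, lam), dict=True)
print(sol)   # [{x1: 1/2, x2: 1/2, lam: -1}]
```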

A.4 Probability Theory

A.4.1 Discrete random variables

Let x be a random variable that can assume only a finite number m of different values in the set X = {v1, v2, ..., vm}. We denote by pi the probability that x assumes the value vi:

p_i = \Pr\{x = v_i\}, \quad i = 1, \ldots, m.

Sometimes it is more convenient to express the set of probabilities {p1 , p2 , . . . , pm }

in terms of the probability mass function P(x). To distinguish between a random variable and the values that it can assume, it is sometimes convenient to use an upper-case letter for the random variable and the corresponding lower-case letter for the value. The mass function would then be written P_X(x). While this avoids the possible confusion in Eq. 37 and elsewhere (where x denotes a value, not a random variable), it also significantly complicates the notation. Since it is usually clear from context whether one is referring to a random variable or its value, we will use the simpler notation whenever possible. The probability mass function must satisfy the following two conditions:

P(x) \ge 0 \quad\text{and}\quad \sum_{x \in \mathcal{X}} P(x) = 1.    (37)

A.4.2 Expected values

The mean or expected value or average of x is defined by

E[x] = \mu = \sum_{x \in \mathcal{X}} x\,P(x) = \sum_{i=1}^{m} v_i p_i.    (38)

If one thinks of the probability mass function as defining a set of point masses, with pi being the mass concentrated at x = vi , then the expected value µ is just the center of mass. Alternatively, we can interpret µ as the arithmetic average of the values in a large random sample. More generally, if f (x) is any function of x, the expected value of f is defined by

E[f(x)] = \sum_{x \in \mathcal{X}} f(x)\,P(x).    (39)

Note that the process of forming an expected value is linear, in that if α1 and α2 are arbitrary constants,

E[α1 f1 (x) + α2 f2 (x)] = α1 E[f1 (x)] + α2 E[f2 (x)]. (40)

It is sometimes convenient to think of E as an operator — the (linear) expectation operator.

where σ is the standard deviation of x. Physically, if we think of x as a random signal, the second moment is its total average power and the variance is its AC power.

Alternatively, the variance can be viewed as the moment of inertia of the probability mass function. The variance is never negative, and is zero if and only if all of the probability mass is concentrated at one point. The standard deviation is a simple but valuable measure of how far values of x are likely to depart from the mean. Its very name suggests that it is the standard or typical amount one should expect a randomly drawn value for x to deviate or differ from µ. Chebyshev's inequality provides a mathematical relation between the standard deviation and |x − µ|:

\Pr\{|x - \mu| > n\sigma\} \le \frac{1}{n^2}.    (43)

This inequality is not a tight bound (and it is useless for n < 1); a more practical rule of thumb, which strictly speaking is true only for the normal distribution, is that 68% of the values will lie within one, 95% within two, and 99.7% within three standard deviations of the mean (Fig. A.1). Nevertheless, Chebyshev's inequality shows the strong link between the standard deviation and the spread of the distribution. In addition, it suggests that |x − µ|/σ is a meaningful normalized measure of the distance from x to the mean (cf. Sect. A.4.12). By expanding the quadratic in Eq. 42, it is easy to prove the useful formula

\mathrm{Var}[x] = E[x^2] - (E[x])^2.    (44)

Note that, unlike the mean, the variance is not linear. In particular, if y = αx, where α is a constant, then Var[y] = α²Var[x]. Moreover, the variance of the sum of two random variables is usually not the sum of their variances. However, as we shall see below, variances do add when the variables involved are statistically independent. In the simple but important special case in which x is binary valued (say, v1 = 0 and v2 = 1), we can obtain simple formulas for µ and σ. If we let p = Pr{x = 1}, then it is easy to show that

\mu = p \quad\text{and}\quad \sigma = \sqrt{p(1 - p)}.
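A small numerical check of Eq. 44 and of the binary-valued case (a sketch assuming Python with NumPy; the particular value of p is made up for illustration):

```python
import numpy as np

# A binary random variable with Pr{x = 1} = p
p = 0.3
values = np.array([0.0, 1.0])
probs = np.array([1.0 - p, p])

mean = np.sum(values * probs)                    # Eq. 38
second_moment = np.sum(values**2 * probs)
var = second_moment - mean**2                    # Var[x] = E[x^2] - (E[x])^2  (Eq. 44)

print(mean, var)          # 0.3, 0.21
assert np.isclose(mean, p)
assert np.isclose(var, p * (1.0 - p))

# Scaling: if y = alpha * x then Var[y] = alpha^2 Var[x]
alpha = 4.0
var_y = np.sum((alpha * values)**2 * probs) - (alpha * mean)**2
assert np.isclose(var_y, alpha**2 * var)
```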

As mentioned above, although the notation is more precise when we use subscripts as in Eq. 47, it is common to omit them and write simply P(x) and P(y) whenever the context makes it clear that these are in fact two different functions — rather than the same function merely evaluated with different variables.

A.4.4 Statistical independence

Variables x and y are said to be statistically independent if and only if

P (x, y) = Px (x)Py (y). (48)

We can understand such independence as follows. Suppose that pi = Pr{x = vi} is the fraction of the time that x = vi, and qj = Pr{y = wj} is the fraction of the time that y = wj. Consider those situations where x = vi. If it is still true that the fraction of those situations in which y = wj is the same value qj, it follows that knowing the value of x did not give us any additional knowledge about the possible values of y; in that sense y is independent of x. Finally, if x and y are statistically independent, it is clear that the fraction of the time that the specific pair of values (vi, wj) occurs must be the product of the fractions pi qj = Px(vi)Py(wj).

We can summarize Eqs. 51 & 52 using vector notation as:

\mu = E[x] = \sum_{x} x\,P(x)    (53)

\Sigma = E[(x - \mu)(x - \mu)^t],    (54)

where Σ is the covariance matrix (cf., Sect. A.4.9).

The covariance is one measure of the degree of statistical dependence between x and y. If x and y are statistically independent, then σxy = 0. If α is a constant and y = αx, which is a case of strong statistical dependence, it is also easy to show that σxy = ασx². Thus, the covariance is positive if x and y both increase or decrease together, and is negative if y decreases when x increases. If σxy = 0, the variables x and y are said to be uncorrelated. It does not follow that uncorrelated variables must be statistically independent — covariance is just one measure of independence. However, it is a fact that uncorrelated variables are statistically independent if they have a multivariate normal distribution, and in practice statisticians often treat uncorrelated variables as if they were statistically independent. There is an important Cauchy-Schwarz inequality for the variances σx and σy and the covariance σxy. It can be derived by observing that the variance of a random variable is never negative, and thus the variance of λx + y must be non-negative no matter what the value of the scalar λ. This leads to the famous inequality

\sigma_{xy}^2 \le \sigma_x^2 \sigma_y^2.    (55)

The correlation coefficient

\rho = \frac{\sigma_{xy}}{\sigma_x \sigma_y}    (56)

is a normalized covariance, and must always be between −1 and +1. If ρ = +1,

then x and y are maximally positively correlated, while if ρ = −1, they are maximally negatively correlated. If ρ = 0, the variables are uncorrelated. It is common for statisticians to consider variables to be uncorrelated for practical purposes if the magnitude of their correlation coefficient is below some threshold, such as 0.05, although the threshold that makes sense does depend on the actual situation. If x and y are statistically independent, then for any two functions f and g

E[f (x)g(y)] = E[f (x)]E[g(y)], (57)

a result which follows from the definition of statistical independence and expectation. Note that if f(x) = x − µx and g(y) = y − µy, this theorem again shows that σxy = E[(x − µx)(y − µy)] is zero if x and y are statistically independent.

A.4.6 Conditional probability

When two variables are statistically dependent, knowing the value of one of them lets us get a better estimate of the value of the other one. This is expressed by the following definition of the conditional probability of x given y:

P(x|y) = \frac{P(x, y)}{P_y(y)}.    (59)

Note that if x and y are statistically independent, this gives P(x|y) = Px(x). That is, when x and y are independent, knowing the value of y gives you no information about x that you didn't already know from its marginal distribution Px(x). To gain intuition about this definition of conditional probability, consider a simple two-variable binary case where both x and y are either 0 or 1. Suppose that a large number n of pairs of xy-values are randomly produced. Let nij be the number of pairs in which we find x = i and y = j, i.e., we see the (0, 0) pair n00 times, the (0, 1) pair n01 times, and so on, where n00 + n01 + n10 + n11 = n. Suppose we pull out those pairs where y = 1, i.e., the (0, 1) pairs and the (1, 1) pairs. Clearly, the fraction of those cases in which x is also 1 is

\frac{n_{11}}{n_{01} + n_{11}} = \frac{n_{11}/n}{(n_{01} + n_{11})/n}.    (60)

Intuitively, this is what we would like to get for P(x|y) when y = 1 and n is large. And, indeed, this is what we do get, because n11/n is approximately P(x, y) and (n01 + n11)/n is approximately Py(y) for large n.
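The counting argument of Eq. 60 can be checked directly (a sketch assuming Python with NumPy; the joint probabilities below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical joint distribution over (x, y) pairs, both binary
#              (x,y) = (0,0) (0,1) (1,0) (1,1)
joint = np.array([0.3,  0.1,  0.2,  0.4])
pairs = rng.choice(4, size=n, p=joint)
x = pairs // 2
y = pairs % 2

# Empirical P(x = 1 | y = 1) from counts, as in Eq. 60
n11 = np.sum((x == 1) & (y == 1))
n01 = np.sum((x == 0) & (y == 1))
p_x1_given_y1 = n11 / (n01 + n11)

print(p_x1_given_y1)                       # close to 0.4 / (0.1 + 0.4) = 0.8
print(joint[3] / (joint[1] + joint[3]))    # exact conditional probability
```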

A.4.7 The Law of Total Probability and Bayes’ rule

The expression

P_y(y) = \sum_{x \in \mathcal{X}} P(x, y)    (61)

is an instance of the Law of Total Probability. This law says that if an event A can occur in m different ways A1, A2, ..., Am, and if these m subevents are mutually exclusive — that is, cannot occur at the same time — then the probability of A occurring is the sum of the probabilities of the subevents Ai. In particular, the random variable y can assume the value y in m different ways — with x = v1, with x = v2, ..., and with x = vm. Because these possibilities are mutually exclusive, it follows from the Law of Total Probability that Py(y) is the sum of the joint probability P(x, y) over all possible values for x. But from the definition of the conditional probability P(y|x) we have

P (x, y) = P (y|x)Px (x), (62)

and thus, we obtain


P(x|y) = \frac{P(y|x)\,P_x(x)}{\sum_{x \in \mathcal{X}} P(y|x)\,P_x(x)},    (63)

or in words,

posterior = (likelihood × prior) / evidence,

where these terms are discussed more fully in Chapt. ??. Equation 63 is usually called Bayes' rule. Note that the denominator, which is just Py(y), is obtained by summing the numerator over all x values. By writing the denominator in this form we emphasize the fact that everything on the right-hand side of the equation is conditioned on x. If we think of x as the important variable, then we can say that the shape of the distribution P(x|y) depends only on the numerator P(y|x)Px(x); the denominator is just a normalizing factor, sometimes called the evidence, needed to insure that P(x|y) sums to one. The standard interpretation of Bayes' rule is that it "inverts" statistical connections, turning P(y|x) into P(x|y). Suppose that we think of x as a "cause" and y as an "effect" of that cause. That is, we assume that if the cause x is present, it is easy to determine the probability of the effect y being observed, where the conditional probability function P(y|x) — the likelihood — specifies this probability explicitly. If we observe the effect y, it might not be so easy to determine the cause x, because there might be several different causes, each of which could produce the same observed effect. However, Bayes' rule makes it easy to determine P(x|y), provided that we know both P(y|x) and the so-called prior probability Px(x), the probability of x before we make any observations about y. Said slightly differently, Bayes' rule shows how the probability distribution for x changes from the prior distribution Px(x) before anything is observed about y to the posterior P(x|y) once we have observed the value of y.
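A minimal numerical illustration of Eq. 63 (a sketch assuming Python with NumPy; the prior and likelihood values are made up for illustration):

```python
import numpy as np

# Two possible "causes" x in {0, 1} and an observed binary "effect" y
prior = np.array([0.7, 0.3])                 # P_x(x)
likelihood_y1 = np.array([0.1, 0.8])         # P(y = 1 | x) for x = 0, 1

# Bayes' rule (Eq. 63): posterior is proportional to likelihood * prior,
# normalized by the evidence P_y(y = 1)
numerator = likelihood_y1 * prior
evidence = numerator.sum()
posterior = numerator / evidence

print(evidence)    # 0.31
print(posterior)   # [0.2258..., 0.7741...]; sums to one
```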

A.4.8 Vector random variables

To extend these results from two variables x and y to d variables x1, x2, ..., xd, it is convenient to employ vector notation. The joint probability mass function P(x) satisfies P(x) ≥ 0 and Σ P(x) = 1 (Eq. 46), where the sum extends over all possible values for the vector x. Note that P(x) is a function of d variables, x1, x2, ..., xd, and can be a very complicated, multi-dimensional function. However, if the random variables xi are statistically independent, it reduces to the product

P(x) = P_{x_1}(x_1)\,P_{x_2}(x_2) \cdots P_{x_d}(x_d) = \prod_{i=1}^{d} P_{x_i}(x_i).    (64)

Here the separate marginal distributions Pxi(xi) can be obtained by summing the joint distribution over the other variables. In addition to these univariate marginals, other marginal distributions can be obtained by this use of the Law of Total Probability. For example, suppose that we have P(x1, x2, x3, x4, x5) and we want P(x1, x4); we merely calculate

P(x_1, x_4) = \sum_{x_2}\sum_{x_3}\sum_{x_5} P(x_1, x_2, x_3, x_4, x_5).    (65)
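This kind of marginalization amounts to summing a multi-dimensional array over the unwanted axes, and conditionals such as the one in Eq. 66 below follow by division (a sketch assuming Python with NumPy; the joint distribution here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# A joint mass function P(x1, ..., x5) over five ternary variables,
# stored as a 5-dimensional array that sums to one
P = rng.random((3, 3, 3, 3, 3))
P /= P.sum()

# P(x1, x4): sum out x2, x3 and x5 (axes 1, 2 and 4), as in Eq. 65
P_14 = P.sum(axis=(1, 2, 4))

# A conditional distribution such as P(x1, x2 | x3), as in Eq. 66
P_123 = P.sum(axis=(3, 4))               # P(x1, x2, x3)
P_3 = P_123.sum(axis=(0, 1))             # P(x3)
P_12_given_3 = P_123 / P_3               # broadcasting divides along the x3 axis

print(P_14.shape, P_14.sum())            # (3, 3), 1.0
print(P_12_given_3.sum(axis=(0, 1)))     # each conditional sums to one
```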

One can define many different conditional distributions, such as P (x1 , x2 |x3 ) or P (x2 |x1 , x4 , x5 ). For example,

P(x_1, x_2 \mid x_3) = \frac{P(x_1, x_2, x_3)}{P(x_3)},    (66)

where all of the joint distributions can be obtained from P(x) by summing out the unwanted variables. If instead of scalars we have vector variables, then these conditional distributions can also be written as

We can use the vector product (x − µ)(x − µ)t , to write the covariance matrix as

Σ = E[(x − µ)(x − µ)t ]. (74)

Thus, the diagonal elements of Σ are just the variances of the individual elements of x, which can never be negative; the off-diagonal elements are the covariances, which can be positive or negative. If the variables are statistically independent, the covariances are zero, and the covariance matrix is diagonal. The analog to the Cauchy-Schwarz inequality comes from recognizing that if w is any d-dimensional vector, then the variance of w^t x can never be negative. This leads to the requirement that the quadratic form w^t Σ w never be negative. Matrices for which this is true are said to be positive semi-definite; thus, the covariance matrix Σ must be positive semi-definite. It can be shown that this is equivalent to the requirement that none of the eigenvalues of Σ can ever be negative.
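A quick check that an empirical covariance matrix is positive semi-definite, both through the quadratic form w^t Σ w and through its eigenvalues (a sketch assuming Python with NumPy; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))           # 1000 samples of a 3-dimensional x
X[:, 2] = 0.5 * X[:, 0] + X[:, 2]        # introduce some correlation

Sigma = np.cov(X, rowvar=False)          # empirical covariance matrix (Eq. 74)

# The quadratic form w^t Sigma w is non-negative for any w ...
w = rng.normal(size=3)
assert w @ Sigma @ w >= 0.0

# ... equivalently, no eigenvalue of Sigma is negative
eigenvalues = np.linalg.eigvalsh(Sigma)
print(eigenvalues)
assert np.all(eigenvalues >= -1e-12)
```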

A.4.10 Continuous random variables

When the random variable x can take values in the continuum, it no longer makes sense to talk about the probability that x has a particular value, such as 2.5136, because the probability of any particular exact value will almost always be zero. Rather, we talk about the probability that x falls in some interval (a, b); instead of having a probability mass function P(x) we have a probability mass density function p(x). The mass density has the property that

\Pr\{x \in (a, b)\} = \int_a^b p(x)\,dx.    (75)

The name density comes by analogy with material density. If we consider a small interval (a, a + ∆x) over which p(x) is essentially constant, having value p(a), we see that p(a) = Pr{x ∈ (a, a + ∆x)}/∆x. That is, the probability mass density at x = a is the probability mass Pr{x ∈ (a, a + ∆x)} per unit distance. It follows that the probability density function must satisfy

and for the particular d-dimensional functions as above, we have

If the components of x are statistically independent, then the joint probability density function factors as

p(x) = \prod_{i=1}^{d} p_i(x_i)    (81)

and the covariance matrix is diagonal.

Conditional probability density functions are defined just as conditional mass functions. Thus, for example, the density for x given y is given by

p(x|y) = \frac{p(x, y)}{p_y(y)}    (82)

and Bayes' rule for density functions is

p(x|y) = \frac{p(y|x)\,p_x(x)}{\int_{-\infty}^{\infty} p(y|x)\,p_x(x)\,dx},    (83)

and likewise for the vector case.

Occasionally we will need to take the expectation with respect to a subset of the variables, and in that case we must show this as a subscript, for instance

E_{x_1}[f(x_1, x_2)] = \int_{-\infty}^{\infty} f(x_1, x_2)\,p(x_1)\,dx_1.    (84)

A.4.11 Distributions of sums of independent random variables

It frequently happens that we know the distributions for two independent random variables x and y, and we need to know the distribution for their sum z = x + y. It is easy to obtain the mean and the variance of the sum:

\mu_z = E[z] = E[x + y] = E[x] + E[y] = \mu_x + \mu_y,

and

\sigma_z^2 = E[(z - \mu_z)^2] = E[((x - \mu_x) + (y - \mu_y))^2] = \sigma_x^2 + 2E[(x - \mu_x)(y - \mu_y)] + \sigma_y^2 = \sigma_x^2 + \sigma_y^2,

where we have used the fact that the cross-term factors into E[x − µx]E[y − µy] when x and y are independent; in this case the product is manifestly zero, since each of the expectations vanishes. Thus, in words, the mean of the sum of two independent random variables is the sum of their means, and the variance of their sum is the sum of their variances. If the variables are random yet not independent — for instance y = −x, where x is randomly distributed — then the variance is not the sum of the component variances. It is only slightly more difficult to work out the exact probability density function for z = x + y from the separate density functions for x and y. The probability that z is between ζ and ζ + ∆z can be found by integrating the joint density p(x, y) = px(x)py(y) over the thin strip in the xy-plane between the lines x + y = ζ and x + y = ζ + ∆z. It follows that, for small ∆z,

\Pr\{\zeta < z < \zeta + \Delta z\} = \left\{\int_{-\infty}^{\infty} p_x(x)\,p_y(\zeta - x)\,dx\right\}\Delta z,    (85)

As one would expect, these results generalize. It is not hard to show that:

• The mean of the sum of d independent random variables x1, x2, ..., xd is the sum of their means. (In fact the variables need not be independent for this to hold.)

• The variance of the sum is the sum of their variances.

• The probability density function for the sum is the convolution of the separate density functions:

p_z(z) = p_{x_1} \ast p_{x_2} \ast \cdots \ast p_{x_d}.    (87)
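The convolution property can be verified numerically (a sketch assuming Python with NumPy; the two densities below are chosen arbitrarily for illustration):

```python
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)

# Two independent densities: a uniform on (0, 2) and a unit-variance Gaussian
p_x = np.where((x > 0) & (x < 2), 0.5, 0.0)
p_y = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Density of z = x + y is the convolution p_x * p_y  (Eq. 87)
p_z = np.convolve(p_x, p_y, mode='same') * dx

print(np.trapz(p_z, x))                         # integrates to (about) one
mu_z = np.trapz(x * p_z, x)
print(mu_z)                                     # approx mu_x + mu_y = 1 + 0 = 1
var_z = np.trapz((x - mu_z)**2 * p_z, x)
print(var_z)                                    # approx 1/3 + 1 = 1.33
```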

A.4.12 Univariate normal density

One of the most important results of probability theory is the Central Limit Theorem, which states that, under various conditions, the distribution for the sum of d independent random variables approaches a particular limiting form known as the normal distribution. As such, the normal or Gaussian probability density function is very important, both for theoretical and practical reasons. In one dimension, it is defined by

p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2}.    (88)

The normal density is traditionally described as a "bell-shaped curve"; it is completely determined by the numerical values for two parameters, the mean µ and the variance σ². This is often emphasized by writing p(x) ∼ N(µ, σ²), which is read as "x is distributed normally with mean µ and variance σ²." The distribution is symmetrical about the mean, the peak occurring at x = µ, and the width of the "bell" is proportional to the standard deviation σ. The normal density satisfies the following equations:

A natural measure of the distance from x to the mean µ is the distance |x − µ| measured in units of standard deviations:

r = \frac{|x - \mu|}{\sigma},    (91)

the Mahalanobis distance from x to µ. Thus, the probability is 0.95 that the Mahalanobis distance from x to µ will be less than 2. If a random variable x is modified by (a) subtracting its mean and (b) dividing by its standard deviation, it is said to be standardized. Clearly, a standardized normal random variable u = (x − µ)/σ has zero mean and unit standard deviation, that is,

p(u) = \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2},    (92)

which can be written as p(u) ∼ N(0, 1).
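A brief simulation of these facts (a sketch assuming Python with NumPy; the parameter values are arbitrary). It standardizes samples of a normal random variable and checks that the Mahalanobis distance |x − µ|/σ is below 1, 2 and 3 with roughly the probabilities quoted earlier:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 5.0, 2.0

x = rng.normal(mu, sigma, size=100_000)
u = (x - mu) / sigma                     # standardized variable, ~ N(0, 1)

print(u.mean(), u.std())                 # close to 0 and 1
r = np.abs(x - mu) / sigma               # one-dimensional Mahalanobis distance (Eq. 91)
print(np.mean(r < 1), np.mean(r < 2), np.mean(r < 3))   # about .68, .95, .997
```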

A.5 Gaussian derivatives and integrals

Because of the prevalence of Gaussian functions throughout pattern recognition, we often have occasion to integrate and differentiate them. The first three derivatives of a one-dimensional (normalized) Gaussian are

and are shown in Fig. A.2.

Figure A.2: A one-dimensional Gaussian distribution and its first three derivatives, shown for f (x) ∼ N (0, 1).

An important finite integral of the Gaussian is the so-called error function, defined as

\mathrm{erf}(u) = \frac{2}{\sqrt{2\pi}} \int_0^u e^{-x^2/2}\,dx.    (94)

Note especially the pre-factor of 2 and the lower limit of integration. As can be seen from Fig. A.1, erf(0) = 0, erf(1) = .68 and lim_{x→∞} erf(x) = 1. There is no closed analytic form for the error function, and thus we typically use tables, approximations or numerical integration for its evaluation (Fig. A.3).


Figure A.3: The error function corresponds to the area under a standardized Gaussian (Eq. 94) between −u and u, i.e., it is the probability that a sample chosen from the Gaussian satisfies |x| ≤ u. Thus, the complementary probability, 1 − erf(u), is the probability that a sample is chosen with |x| > u for the standardized Gaussian. Chebyshev's inequality states that for an arbitrary distribution having standard deviation 1, this latter probability is bounded by 1/u². As shown, this bound is quite loose for a Gaussian.
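One way to evaluate this function numerically (a sketch assuming Python with SciPy, and assuming the form of Eq. 94 as reconstructed above; note that the common library convention erf(t) = (2/√π)∫₀ᵗ e^{−x²}dx differs from the text's, so the argument must be rescaled by √2):

```python
import numpy as np
from scipy.special import erf as erf_std

def erf_text(u):
    """Error function as defined in Eq. 94: the probability that a
    standardized Gaussian sample falls within u of the mean."""
    return erf_std(u / np.sqrt(2.0))

for u in (0.0, 1.0, 2.0, 3.0):
    print(u, erf_text(u), 1.0 - erf_text(u))   # erf(1) ~ .68, erf(2) ~ .95

# Chebyshev's bound 1/u^2 on Pr{|x| > u} is quite loose for a Gaussian:
print(1.0 - erf_text(2.0), "vs Chebyshev bound", 1.0 / 2.0**2)
```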

In calculating moments of Gaussians, we need the general integral of powers of x

weighted by a Gaussian. Recall first the definition of the gamma function

where again we have used a pre-factor of 2 and a lower integration limit of 0 in order to give non-trivial (i.e., non-vanishing) results for odd n.

A.5.1 Multivariate normal densities

Normal random variables have many desirable theoretical properties. For example, it turns out that the convolution of two Gaussian functions is again a Gaussian function, and thus the distribution for the sum of two independent normal random variables is again normal. In fact, sums of dependent normal random variables also have normal distributions. Suppose that each of the d random variables xi is normally distributed, each with its own mean and variance: p(xi) ∼ N(µi, σi²). If these variables are independent, their joint density has the form

just as one would expect. Multivariate normal data tend to cluster about the mean vector, µ, falling in an ellipsoidally-shaped cloud whose principal axes are the eigenvectors of the covariance matrix. The natural measure of the distance from x to the mean µ is provided by the quantity

r2 = (x − µ)t Σ−1 (x − µ), (104)

which is the square of the Mahalanobis distance from x to µ. It is not as easy to standardize a vector random variable (reduce it to zero mean and unit covariance matrix) as it is in the univariate case. The expression analogous to u = (x − µ)/σ is u = Σ^{−1/2}(x − µ), which involves the "square root" of the inverse of the covariance matrix. The process of obtaining Σ^{−1/2} requires finding the eigenvalues and eigenvectors of Σ, and is just a bit beyond the scope of this Appendix.
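For readers who want to see the computation the text alludes to, here is a sketch (assuming Python with NumPy; the covariance and data are synthetic) that builds Σ^{−1/2} from the eigenvalues and eigenvectors of Σ and uses it to standardize ("whiten") vector data, along with the Mahalanobis distance of Eq. 104:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

x = rng.multivariate_normal(mu, Sigma, size=5000)

# Sigma^(-1/2) via the eigendecomposition Sigma = Phi diag(lambda) Phi^t
lam, Phi = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = Phi @ np.diag(lam**-0.5) @ Phi.T

u = (x - mu) @ Sigma_inv_sqrt.T          # standardized vectors: zero mean, unit covariance
print(np.cov(u, rowvar=False))           # close to the identity matrix

# Squared Mahalanobis distance (Eq. 104) for the first sample
d = x[0] - mu
r2 = d @ np.linalg.inv(Sigma) @ d
print(r2, np.sum(((x[0] - mu) @ Sigma_inv_sqrt.T)**2))   # the two agree
```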

A.5.2 Bivariate normal densities

It is illuminating to look at the so-called bivariate normal density, that is, the case of two Gaussian random variables x1 and x2. In this case, it is convenient to define σ1² = σ11, σ2² = σ22, and to introduce the correlation coefficient ρ defined by

\rho = \frac{\sigma_{12}}{\sigma_1 \sigma_2}.    (105)

With this notation, the covariance matrix becomes

Thus, the general bivariate normal density has the form

As we can see from Fig. A.4, p(x1, x2) is a hill-shaped surface over the x1 x2 plane. The peak of the hill occurs at the point (x1, x2) = (µ1, µ2), i.e., at the mean vector µ. The shape of the hump depends on the two variances σ1² and σ2², and the correlation coefficient ρ. If we slice the surface with horizontal planes parallel to the x1 x2 plane, we obtain the so-called level curves, defined by the locus of points where the quadratic form

\left(\frac{x_1 - \mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x_1 - \mu_1}{\sigma_1}\right)\left(\frac{x_2 - \mu_2}{\sigma_2}\right) + \left(\frac{x_2 - \mu_2}{\sigma_2}\right)^2    (111)

is constant. It is not hard to show that |ρ| ≤ 1, and that this implies that the level curves are ellipses. The x and y extent of these ellipses are determined by the variances σ1² and σ2², and their eccentricity is determined by ρ. More specifically, the principal axes of the ellipse are in the direction of the eigenvectors ei of Σ, and the different widths in these directions are √λi. For instance, if ρ = 0, the principal axes of the ellipses are parallel to the coordinate axes, and the variables are statistically independent. In the special cases where ρ = 1 or ρ = −1, the ellipses collapse to straight lines. Indeed,

Figure A.4: A two-dimensional Gaussian having mean µ and non-diagonal covariance Σ. If the value of one variable is known, for instance x1 = x̂1, the distribution over the other variable is Gaussian with mean µ2|1.

the joint density becomes singular in this situation, because there is really only one independent variable. We shall avoid this degeneracy by assuming that |ρ| < 1. One of the important properties of the multivariate normal density is that all conditional and marginal probabilities are also normal. To find such a density explicitly, which we denote px2|x1(x2|x1), we substitute our formulas for px1x2(x1, x2) and px1(x1) in the defining equation

Thus, we have verified that the conditional density px2|x1(x2|x1) is a normal distribution. Moreover, we have explicit formulas for the conditional mean µ2|1 and the conditional variance σ²_{2|1}:

as illustrated in Fig. A.4.

These formulas provide some insight into the question of how knowledge of the value of x1 helps us to estimate x2. Suppose that we know the value of x1. Then a natural estimate for x2 is the conditional mean, µ2|1. In general, µ2|1 is a linear function of x1; if the correlation coefficient ρ is positive, the larger the value of x1, the larger the value of µ2|1. If it happens that x1 is the mean value µ1, then the best we can do is to guess that x2 is equal to µ2. Also, if there is no correlation between x1 and x2, we ignore the value of x1, whatever it is, and we always estimate x2 by µ2. Note that in that case the variance of x2, given that we know x1, is the same as the variance for the marginal distribution, i.e., σ²_{2|1} = σ2². If there is correlation, knowledge of the value of x1, whatever the value is, reduces the variance. Indeed, with 100% correlation there is no variance left in x2 when the value of x1 is known.
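To make this concrete, the following sketch (assuming Python with NumPy; the parameter values are made up) uses the standard bivariate-normal conditioning formulas µ2|1 = µ2 + ρ(σ2/σ1)(x1 − µ1) and σ²_{2|1} = σ2²(1 − ρ²) and checks them against samples:

```python
import numpy as np

rng = np.random.default_rng(5)
mu1, mu2 = 0.0, 1.0
sigma1, sigma2, rho = 1.0, 2.0, 0.6

Sigma = np.array([[sigma1**2,              rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2,  sigma2**2]])
samples = rng.multivariate_normal([mu1, mu2], Sigma, size=200_000)

x1_hat = 1.5                                               # suppose we observe x1 near 1.5
mu_2_1 = mu2 + rho * (sigma2 / sigma1) * (x1_hat - mu1)    # conditional mean
var_2_1 = sigma2**2 * (1.0 - rho**2)                       # conditional variance

# Compare with the empirical distribution of x2 for samples with x1 ~ x1_hat
near = np.abs(samples[:, 0] - x1_hat) < 0.05
print(mu_2_1, samples[near, 1].mean())      # close
print(var_2_1, samples[near, 1].var())      # close, and smaller than sigma2**2
```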

A.6 Information theory

A.6.1 Entropy and information

Assume we have a discrete set of symbols {v1, v2, ..., vm} with associated probabilities pi. The entropy of the discrete distribution — a measure of the randomness or unpredictability of a sequence of symbols drawn from it — is

H = -\sum_{i=1}^{m} p_i \log_2 p_i,    (114)

where here the logarithm is base 2. In case any of the probabilities vanish, we use the relation 0 log 0 = 0. (For continuous distributions, we often use logarithm base e, denoted ln.) If we recall the expectation operator (cf. Eq. 39), we can write H = E[log 1/P], where we think of P as being a random variable whose possible values are p1, p2, ..., pm. Note that the entropy does not depend on the symbols themselves, but just on their probabilities. The entropy is non-negative and is measured in bits when the base of the logarithm is 2. One bit corresponds to the uncertainty that can be resolved by the answer to a single yes/no question. For a given number of symbols m, the uniform distribution, in which each symbol is equally likely, is the maximum-entropy distribution (and H = log2 m bits) — we have the maximum uncertainty about the identity of each symbol that will be chosen. Conversely, if all the pi are 0 except one, we have the minimum-entropy distribution (H = 0 bits) — we are certain as to the symbol that will appear. For a continuous distribution, the entropy is

H = -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx,    (115)

and again H = E[log 1/p]. It is worth mentioning that among all continuous density functions having a given mean µ and variance σ², it is the Gaussian that has the maximum entropy (H = log2(√(2πe) σ) bits). We can let σ approach zero to find that a probability density in the form of a Dirac delta function, i.e.,

\delta(x - a) = \begin{cases} 0 & \text{if } x \ne a \\ \infty & \text{if } x = a, \end{cases}

with

\int_{-\infty}^{\infty} \delta(x)\,dx = 1,    (116)

has the minimum entropy (H = −∞ bits). For a Dirac delta function, we are sure that the value a will be selected each time. Our use of entropy in continuous functions, such as in Eq. 115, belies some subtle issues which are worth pointing out. If x had units, such as meters, then the probability density p(x) would have to have units of 1/x. There is something fundamentally wrong in taking the logarithm of p(x), since the argument of any nonlinear function has to be dimensionless. What we should really be dealing with is a dimensionless quantity, say p(x)/p0(x), where p0(x) is some reference density function (cf. Sect. A.6.2). One of the key properties of the entropy of a discrete distribution is that it is invariant to "shuffling" the event labels; no such property is evident for continuous variables. The related question with continuous variables concerns what happens when one makes a change of variables. In general, if we make a change of variables, such as y = x³ or even y = 10x, we will get a different value for the integral of q(y) log q(y) dy, where q is the induced density for y. If entropy is supposed to measure the intrinsic disorganization, it doesn't make sense that y would have a different amount of intrinsic disorganization than x. Fortunately, in practice these concerns do not present important stumbling blocks since relative entropy and differences in entropy are more fundamental than H taken by itself. Nevertheless, questions of the foundations of entropy measures for continuous variables are addressed in books listed in the Bibliographical Remarks.
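A short check of the discrete entropy definition of Eq. 114 (a sketch assuming Python with NumPy; the distributions are arbitrary):

```python
import numpy as np

def entropy(p):
    """Entropy in bits (Eq. 114), using the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nonzero = p > 0
    return -np.sum(p[nonzero] * np.log2(p[nonzero]))

m = 8
print(entropy(np.full(m, 1.0 / m)))        # uniform: log2(8) = 3 bits (maximum)
print(entropy([1.0, 0.0, 0.0, 0.0]))       # all mass on one symbol: 0 bits (minimum)
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
```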

A.6.2 Relative entropy

Suppose we have two discrete distributions over the same variable x, p(x) and q(x). The relative entropy or Kullback-Leibler distance is a measure of the "distance" between these distributions:

D_{KL}(p(x), q(x)) = \sum_{x} q(x)\,\ln\frac{q(x)}{p(x)}.    (117)

The continuous version is

D_{KL}(p(x), q(x)) = \int_{-\infty}^{\infty} q(x)\,\ln\frac{q(x)}{p(x)}\,dx.    (118)

Although DKL (p(·), q(·)) ≥ 0 and DKL (p(·), q(·)) = 0 if and only if p(·) = q(·), the relative entropy is not a true metric, since DKL is not necessarily symmetric in the interchange p ↔ q and furthermore the triangle inequality need not be satisfied.
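A quick numerical illustration of Eq. 117 and of the asymmetry under the interchange p ↔ q (a sketch assuming Python with NumPy; the distributions are arbitrary). The argument order follows the text's convention, with the sum weighted by q:

```python
import numpy as np

def kl(p, q):
    """Relative entropy D_KL(p, q) = sum_x q(x) ln(q(x)/p(x)), as in Eq. 117."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(q * np.log(q / p))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

print(kl(p, q))        # >= 0
print(kl(q, p))        # generally a different value: not symmetric
print(kl(p, p))        # 0 when the distributions are identical
```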

A.6.3 Mutual information

Now suppose we have two distributions over possibly different random variables, e.g., p(x) and q(y). The mutual information is the reduction in uncertainty about one variable due to knowledge of the other variable:

I(p; q) = H(p) - H(p|q) = \sum_{x,y} r(x, y)\log\frac{r(x, y)}{p(x)\,q(y)},    (119)

where r(x, y) is the probability of finding the pair of values x and y. Mutual information is simply the relative entropy between the joint distribution r(x, y) and the product distribution p(x)q(y), and as such it measures how much the distributions of the variables differ from statistical independence. Mutual information does not obey all the properties of a metric. In particular, the metric requirement that if p(x) = q(y) then I(x; y) = 0 need not hold, in general. As an example, suppose we have two binary random variables with r(0, 0) = r(1, 1) = 1/2, so r(0, 1) = r(1, 0) = 0. According to Eq. 119, the mutual information between p(x) and q(y) is log 2 = 1. The relationships among the entropy, relative entropy and mutual information are summarized in Fig. A.5. The figure shows, for instance, that the joint entropy H(p, q) is generally larger than the individual entropies H(p) and H(q); that H(p) = H(p|q) + I(p; q), and so on.


Figure A.5: The relationship among the entropy of distributions p and q, the mutual information I(p; q), and the conditional entropies H(p|q) and H(q|p). From this figure one can quickly see relationships among the information functions, for instance that I(p; p) = H(p); that if I(p; q) = 0 then H(q|p) = H(q), and so forth.
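The perfectly correlated binary example in the text can be checked directly from a joint probability table (a sketch assuming Python with NumPy):

```python
import numpy as np

def mutual_information(r):
    """I(p;q) in bits from a joint probability table r(x, y), as in Eq. 119."""
    r = np.asarray(r, float)
    p = r.sum(axis=1, keepdims=True)     # marginal p(x)
    q = r.sum(axis=0, keepdims=True)     # marginal q(y)
    mask = r > 0
    return np.sum(r[mask] * np.log2(r[mask] / (p @ q)[mask]))

# Two binary variables with r(0,0) = r(1,1) = 1/2: one bit of mutual information
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0

# Statistically independent variables: zero mutual information
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```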

A.7 Computational complexity

In analyzing and describing the difficulty of problems and the algorithms designed to solve such problems, we turn now to computational complexity. For instance, calculating the standard deviation of a distribution is somehow "harder" than calculating its mean. Furthermore, one algorithm for computing some function may be faster, or take less memory, than another algorithm. How can we specify such differences, independent of the current computer hardware (which is always changing anyway)? To this end we use the concept of the order of a function and the asymptotic notations "big oh," "big omega," and "big theta." The three asymptotic bounds most often used are:

Consider the asymptotic upper bound. We say that f(x) is "of the big oh order of g(x)" (written f(x) = O(g(x))) if there exist constants c0 and x0 such that f(x) ≤ c0 g(x) for all x > x0. We shall assume that all our functions are positive and dispense with taking absolute values. This means simply that for sufficiently large x, an upper bound on f(x) grows no worse than g(x). For instance, if f(x) = a + bx + cx², then f(x) = O(x²) because for sufficiently large x, the constant, linear and quadratic terms can be "overcome" by proper choice of c0 and x0. The generalization to functions of two or more variables is straightforward. It should be clear that by the definition above, the (big oh) order of a function is not unique. For instance, we can describe our particular f(x) as being O(x²), O(x³), O(x⁴), O(x² ln x), and so forth. We write the tightest asymptotic upper bound f(x) = o(g(x)), read "little oh of g(x)," for the minimum in the class O(g(x)). Thus for instance if f(x) = ax² + bx + c, then f(x) = o(x²). Conversely, we use big omega notation, Ω(·), for lower bounds, and little omega, ω(·), for the tightest lower bound. Of these, the big oh notation has proven to be most useful since we generally want an upper bound on the resources needed to solve a problem; it is frequently too difficult to determine the little oh complexity. Such a rough analysis does not tell us the constants c and x0. For a finite size problem it is possible (though not likely) that a particular O(x³) algorithm is simpler than a particular O(x²) algorithm, and it is occasionally necessary for us to determine these constants to find which of several implementations is the simplest. Nevertheless, for our purposes the big oh notation as just described is generally the best way to describe the computational complexity of an algorithm. Suppose we have a set of n vectors, each of which is d-dimensional, and we want to calculate the mean vector. Clearly, this requires O(nd) multiplications. Sometimes we stress space and time complexities, which are particularly relevant when contemplating parallel hardware implementations. For instance, the d-dimensional sample mean

could be calculated with d separate processors, each adding n sample values. Thus we can describe this implementation as O(d) in space (i.e., the amount of memory or possibly the number of processors) and O(n) in time (i.e., the number of sequential steps). Of course for any particular algorithm there may be a number of time-space tradeoffs.

Bibliographical Remarks

There are several good books on linear systems, such as [13], and on matrix computations [9]. Lagrange optimization and related techniques are covered in the definitive book [2]. While [12] is of historic interest and significance, readers seeking clear presentations of the central ideas in probability should consult [11, 8, 6, 18]. Another book treating the foundations is [3]. A handy reference to terms in probability and statistics is [17]. The definitive collection of papers on information theory is [7], and an excellent textbook, at the level of this one, is [5]; readers seeking a more abstract and formal treatment should consult [10]. The multi-volume [14, 15, 16] contains a description of computational complexity, the big oh and other asymptotic notations. Somewhat more accessible treatments can be found in [4] and [1].