
Learn about mixture models and the EM algorithm

(Caution, this is my own quick-and-dirty tutorial, see the references at the end for presentations by professional statisticians.)

Motivation: a large part of any scientific activity is about measuring things, in other words collecting data, and it is not infrequent to collect heterogeneous data. It seems therefore natural to say that the samples come from a mixture of clusters. The aim is thus to recover from the data, ie. to infer, (i) how many clusters there are, (ii) what the features of these clusters are, and (iii) from which cluster each sample comes.

Data: we have N observations, noted X = (x1,x2,...,xN). For the moment, we suppose that each observation xi is univariate, ie. each corresponds to only one number.

Hypothesis: let's assume that the data are heterogeneous and that they can be partitioned into K clusters (in this document, we suppose that K is known). This means that we expect a subset of the observations to come from cluster k = 1, another subset to come from cluster k = 2, and so on.

Model: technically, we say that the observations were generated according to a density function f. More precisely, this density is itself a mixture of densities, one per cluster. In our case, we will assume that each cluster k corresponds to a Normal distribution, whose density is here noted g, with mean μk and standard deviation σk. Moreover, as we don't know for sure from which cluster a given observation comes, we define the mixture weight wk (also called mixing proportion) to be the probability that any given observation comes from cluster k. As a result, we have the following list of parameters: θ = (w1,...,wK, μ1,...,μK, σ1,...,σK). Finally, for a given observation xi, we can write the model:

<math>f(x_i|\theta) = \sum_{k=1}^{K} w_k \, g(x_i|\mu_k,\sigma_k) = \sum_{k=1}^{K} w_k \frac{1}{\sigma_k \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right)</math>

The constraints are: <math>\forall k, \; 0 < w_k < 1</math> and <math>\sum_{k=1}^{K} w_k = 1</math>.
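To make this generative model concrete, here is a minimal simulation sketch in Python (not part of the original text); the parameter values, the sample size and the variable names are arbitrary assumptions chosen for illustration:

<pre>
# Sketch: simulating N univariate observations from a K=2 Gaussian mixture.
# All numerical values below are assumed, purely for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

w = np.array([0.3, 0.7])      # mixture weights w_k (sum to 1)
mu = np.array([-2.0, 3.0])    # cluster means mu_k
sigma = np.array([1.0, 0.5])  # cluster standard deviations sigma_k
N = 1000

# first draw, for each observation, the cluster it comes from (with probability w_k),
# then draw its value from that cluster's Normal distribution
z = rng.choice(len(w), size=N, p=w)
x = rng.normal(loc=mu[z], scale=sigma[z])
</pre>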

Missing data: it is worth noting that a big piece of information is lacking here. We aim at finding the parameters defining the mixture, but we don't know from which cluster each observation is coming! That's why it is useful to introduce the following N latent variables Z1,...,Zi,...,ZN (also called hidden or allocation variables), one for each observation, such that Zi = k means that observation xi belongs to cluster k (indicators). This is called the "missing data formulation" of the mixture model. Thanks to this, we can reinterpret the mixture weights: wk = P(Zi = k | θ). Moreover, we can now define the membership probabilities, one for each observation:

<math>p(k|i) = P(Z_i = k | x_i, \theta) = \frac{w_k \, g(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^{K} w_l \, g(x_i|\mu_l,\sigma_l)}</math>
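As a quick illustration of these membership probabilities, here is a small Python sketch (toy parameter values and observations are assumed, not taken from the original text) that computes p(k|i) for a handful of observations:

<pre>
# Sketch: membership probabilities p(k|i) = P(Z_i=k | x_i, theta).
# Parameter values and observations are assumed, purely for illustration.
import numpy as np
from scipy.stats import norm

w = np.array([0.3, 0.7])             # assumed mixture weights
mu = np.array([-2.0, 3.0])           # assumed cluster means
sigma = np.array([1.0, 0.5])         # assumed cluster standard deviations
x = np.array([-1.5, 0.2, 2.8, 3.1])  # a few example observations

# numerator: w_k * g(x_i | mu_k, sigma_k), one row per observation, one column per cluster
num = w * norm.pdf(x[:, None], loc=mu, scale=sigma)
# divide by f(x_i | theta), ie. the sum over clusters; each row then sums to 1
memberships = num / num.sum(axis=1, keepdims=True)
print(memberships)
</pre>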

We can now write the complete likelihood, ie. the likelihood of the augmented model, assuming all observations are independent:

<math>L_{comp}(\theta) = P(X,Z|\theta) = \prod_{i=1}^{N} P(x_i, Z_i|\theta) = \prod_{i=1}^{N} w_{Z_i} \, g(x_i|\mu_{Z_i},\sigma_{Z_i})</math>

And also the incomplete (or marginal) likelihood:

<math>L_{incomp}(\theta) = P(X|\theta) = \prod_{i=1}^{N} f(x_i|\theta) = \prod_{i=1}^{N} \sum_{k=1}^{K} w_k \, g(x_i|\mu_k,\sigma_k)</math>

ML estimation: we want to find the values of the parameters that maximize the likelihood. This reduces to (i) differentiating the log-likelihood with respect to each parameter, and then (ii) finding the value at which each partial derivative is zero. Instead of maximizing the likelihood, we maximize its logarithm, noted l(θ). It gives the same solution because the log is monotonically increasing, but it's easier to differentiate the log-likelihood than the likelihood. Here is the whole formula for the (incomplete) log-likelihood:

<math>l(\theta) = \ln L_{incomp}(\theta) = \sum_{i=1}^{N} \ln f(x_i|\theta) = \sum_{i=1}^{N} \ln \left( \sum_{k=1}^{K} w_k \frac{1}{\sigma_k \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right) \right)</math>
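As a sanity check, the incomplete log-likelihood is easy to evaluate numerically; here is a short Python sketch with the same assumed toy values as above (not part of the original text):

<pre>
# Sketch: evaluating the incomplete log-likelihood l(theta) on a toy data set.
import numpy as np
from scipy.stats import norm

w = np.array([0.3, 0.7])             # assumed mixture weights
mu = np.array([-2.0, 3.0])           # assumed cluster means
sigma = np.array([1.0, 0.5])         # assumed cluster standard deviations
x = np.array([-1.5, 0.2, 2.8, 3.1])  # a few example observations

# f(x_i | theta) = sum_k w_k g(x_i | mu_k, sigma_k), one value per observation
f = (w * norm.pdf(x[:, None], loc=mu, scale=sigma)).sum(axis=1)
log_lik = np.log(f).sum()  # l(theta) = sum_i ln f(x_i | theta)
print(log_lik)
</pre>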

MLE analytical formulae: a few important rules of calculus are required to write down the analytical formulae of the MLEs, but nothing beyond high-school level (see here). Let's start by finding the maximum-likelihood estimates of the mean of each cluster:

<math>\frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^{N} \frac{1}{f(x_i|\theta)} \frac{\partial f(x_i|\theta)}{\partial \mu_k}</math>

As we differentiate with respect to μk, all the other means μl with l ≠ k are constant, and thus disappear:

<math>\frac{\partial f(x_i|\theta)}{\partial \mu_k} = w_k \frac{\partial g(x_i|\mu_k,\sigma_k)}{\partial \mu_k}</math>

And finally:

<math>\frac{\partial g(x_i|\mu_k,\sigma_k)}{\partial \mu_k} = \frac{x_i - \mu_k}{\sigma_k^2} \, g(x_i|\mu_k,\sigma_k)</math>

Once we put it all together, we end up with:

<math>\frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^{N} \frac{w_k \, g(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^{K} w_l \, g(x_i|\mu_l,\sigma_l)} \frac{x_i - \mu_k}{\sigma_k^2} = \sum_{i=1}^{N} p(k|i) \frac{x_i - \mu_k}{\sigma_k^2}</math>
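One way to convince oneself that this formula is right is to compare it with a numerical derivative; the following Python sketch does that on assumed toy values (an illustrative check only, not part of the original derivation):

<pre>
# Sketch: checking the analytical gradient of l(theta) w.r.t. mu_k (here k=0)
# against a central finite difference, on assumed toy values.
import numpy as np
from scipy.stats import norm

w = np.array([0.3, 0.7])
mu = np.array([-2.0, 3.0])
sigma = np.array([1.0, 0.5])
x = np.array([-1.5, 0.2, 2.8, 3.1])
k = 0

def loglik(means):
    f = (w * norm.pdf(x[:, None], loc=means, scale=sigma)).sum(axis=1)
    return np.log(f).sum()

# analytical gradient: sum_i p(k|i) (x_i - mu_k) / sigma_k^2
num = w * norm.pdf(x[:, None], loc=mu, scale=sigma)
p = num / num.sum(axis=1, keepdims=True)
grad_analytical = (p[:, k] * (x - mu[k])).sum() / sigma[k] ** 2

# numerical gradient by central finite differences
eps = 1e-6
mu_plus = mu.copy()
mu_minus = mu.copy()
mu_plus[k] += eps
mu_minus[k] -= eps
grad_numerical = (loglik(mu_plus) - loglik(mu_minus)) / (2 * eps)

print(grad_analytical, grad_numerical)  # the two values should agree closely
</pre>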

By convention, we note <math>\hat{\mu}_k</math> the maximum-likelihood estimate of μk, ie. the value of μk at which the partial derivative of the log-likelihood is zero:

<math>\left. \frac{\partial l(\theta)}{\partial \mu_k} \right|_{\mu_k = \hat{\mu}_k} = 0</math>

Therefore, we finally obtain:

<math>\hat{\mu}_k = \frac{\sum_{i=1}^{N} p(k|i) \, x_i}{\sum_{i=1}^{N} p(k|i)}</math>
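In code, this estimate is simply a membership-weighted mean; here is a minimal Python sketch with assumed toy values (illustrative only):

<pre>
# Sketch: mu_hat_k as a membership-weighted mean of the observations.
import numpy as np
from scipy.stats import norm

w = np.array([0.3, 0.7])             # assumed mixture weights
mu = np.array([-2.0, 3.0])           # assumed cluster means
sigma = np.array([1.0, 0.5])         # assumed cluster standard deviations
x = np.array([-1.5, 0.2, 2.8, 3.1])  # a few example observations

# membership probabilities p(k|i)
num = w * norm.pdf(x[:, None], loc=mu, scale=sigma)
p = num / num.sum(axis=1, keepdims=True)

# mu_hat_k = sum_i p(k|i) x_i / sum_i p(k|i), computed for all k at once
mu_hat = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
print(mu_hat)
</pre>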

By doing the same kind of algebra, we differentiate the log-likelihood w.r.t. σk:

<math>\frac{\partial l(\theta)}{\partial \sigma_k} = \sum_{i=1}^{N} p(k|i) \left( \frac{(x_i - \mu_k)^2}{\sigma_k^3} - \frac{1}{\sigma_k} \right)</math>

And then we obtain the ML estimates for the standard deviation of each cluster:

<math>\hat{\sigma}_k = \sqrt{\frac{\sum_{i=1}^{N} p(k|i) \, (x_i - \hat{\mu}_k)^2}{\sum_{i=1}^{N} p(k|i)}}</math>
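Likewise, this estimate is a membership-weighted standard deviation around the estimated mean; here is a matching Python sketch with the same assumed toy values (illustrative only):

<pre>
# Sketch: sigma_hat_k as a membership-weighted standard deviation around mu_hat_k.
import numpy as np
from scipy.stats import norm

w = np.array([0.3, 0.7])             # assumed mixture weights
mu = np.array([-2.0, 3.0])           # assumed cluster means
sigma = np.array([1.0, 0.5])         # assumed cluster standard deviations
x = np.array([-1.5, 0.2, 2.8, 3.1])  # a few example observations

num = w * norm.pdf(x[:, None], loc=mu, scale=sigma)
p = num / num.sum(axis=1, keepdims=True)               # membership probabilities p(k|i)
mu_hat = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)  # weighted means mu_hat_k

# sigma_hat_k = sqrt( sum_i p(k|i) (x_i - mu_hat_k)^2 / sum_i p(k|i) )
sigma_hat = np.sqrt((p * (x[:, None] - mu_hat) ** 2).sum(axis=0) / p.sum(axis=0))
print(sigma_hat)
</pre>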