Category: Statistics

Deep learning reigns as the de facto method for image classification. However, at least until the beginning of 2016, it was not quite clear whether deep learning methods were undisputedly better than more traditional approaches based on handcrafted features for the specific task of video classification. While most research interest is certainly directed towards deep learning, we thought it would be fun to play on the handcrafted-features side for a while, and we actually made some pretty interesting discoveries along the way.

Disclaimer: this post is directed towards beginners in computer vision and may contain very loose descriptions or simplifications of some topics in order to spark interest in the field among inexperienced readers – for a succinct and peer-reviewed discussion of the topic, the reader is strongly advised to refer to https://arxiv.org/abs/1608.07138 instead.

Three (non-exhaustive) approaches for classification

In the following paragraphs, we will briefly explore three different ways of performing video classification: the task of determining to which of a finite set of labels a video belongs.

Handcrafted features: designing image features yourself

Initial video classification systems were based on handcrafted features, or, in other words, based on techniques for extracting useful information from a video that have been discovered or invented based on the expertise of the researcher.

For example, a very basic approach would be to consider that whatever could be considered a “corner” in an image (see link for an example) would be quite relevant for determining what is inside that image. We could then present this collection of points (which would be much smaller than the image itself) to a classifier and hope it would be able to determine the image’s contents from this simplified representation of the data.

Deep learning: letting networks figure out features for you

Well, some might think that using corners or other features to guess what is in an image or video is not straightforward at all – why not simply present the image or video itself, as a whole, to the classifier and see what it comes up with? The truth is that this had been tried, but until a few years ago it simply wouldn’t work except for simple problems. For some time we didn’t know how to make classifiers that could handle extremely large amounts of data and still extract anything useful from them.

This started to change in 2006, when interest in training large neural networks began to rise. In 2012, a deep convolutional network with 8 layers managed to win the ImageNet challenge, far outperforming other approaches based on Fisher Vector encodings of handcrafted features. Since then, many works have shown how deep nets can perform far better than other methods in image classification and other domains.

However, while deep nets have certainly been shown to be the undisputed winners in image classification, the same could not yet be said about video classification, at least not at the beginning of 2016. Moreover, training deep neural networks for video can also be extremely costly, both in terms of the computational power needed and the number and sheer size of the examples needed to train those networks.

Hybrid models: getting the best of both worlds 😉

So, could we design classification architectures that are both powerful and easy to train – architectures that wouldn’t need huge amounts of labelled data just to learn the most basic, pixel-level aspects of a video, but could instead leverage that knowledge from techniques already known to work fairly well?

Fisher and a deep net – although not quite the same variety found in video classification.

Well, indeed, the answer seems to be yes – as long as you pay attention to some details that can actually make a huge difference.

This is shown in the paper “Sympathy for Details: Hybrid Classification Architectures for Action Recognition.” The paper is mostly centered on two things: a) showing that it is possible to make standard methods perform as well as deep nets for video; and b) showing that combining Fisher Vectors of traditional handcrafted video features with a multi-layer architecture can actually perform better than most deep nets for video, achieving state-of-the-art results at the date of submission.

The paper, co-written with colleagues and advisors from Xerox Research Centre and the Computer Vision Center of the Autonomous University of Barcelona, has been accepted for publication in the 14th European Conference on Computer Vision (ECCV’16), to be held in Amsterdam, The Netherlands, this October, and can be found here.

The Accord.NET Framework is not only an image processing and computer vision framework, but also a machine learning framework for .NET. One of its features is to encompass the exact same algorithms that can be found in other libraries, such as LIBLINEAR, but to offer them in .NET within a common interface, ready to be incorporated into your application.

Hypothesis testing

What does hypothesis testing mean in statistics (and what should it also mean everywhere else, for that matter)? You may recall from Karl Popper’s theory of falsifiability that good theories can rarely be accurately proven, but you may gain considerable confidence in them by constantly challenging and failing to refute them.

This comes from the fact that it is often easier to falsify something than to prove it. Consider for instance the white-swan/black-swan example: let’s say a theory states that all swans are white. This is a very strong claim; it does not apply to one or a few particular observations of a swan, but to all of them. It would be rather difficult to verify that all of the swans on Earth are indeed white. It is thus almost impossible to prove this theory directly.

However, the catch is that it takes only a single contrary example to refute it. If we find a single swan that is black, the entire theory should be rejected, so alternative theories can be raised. It should be fairly easy to attempt to prove a theory wrong. If a theory continuously survives those attempts, it becomes stronger. This does not necessarily mean it is correct, only that it is very unlikely to be wrong.

This is pretty much how the scientific method works; it also provides a solution to the demarcation problem originally posed by Kant: the problem of separating the sciences from the pseudo-sciences (e.g. astronomy from astrology). A “good” theory should be easy to attack so we can try to refute it; and by constantly challenging it and failing to prove it wrong, we gain further confidence that this theory may, indeed, be right. In short:

Often the most interesting theories can’t be proven right, they can only be proven wrong. By continuously refuting alternatives, a theory becomes stronger (but most likely never reaching the ‘truth’).

Answering the question posed at the start of this section, hypothesis testing means verifying whether a theory holds even when confronted with alternative theories. In statistical hypothesis testing, this often means checking whether a hypothesis holds even when confronted with the possibility that the observed result may have happened by pure chance or plain luck.

Statistical hypothesis testing

Fisher (1925) also noted that we can’t always prove a theory, but we can attempt to refute it. Therefore, statistical hypothesis testing involves stating a hypothesis – the hypothesis we are trying to invalidate – and checking whether we can confidently reject it by confronting it with data from a test or experiment. This hypothesis to be challenged is often called the null hypothesis (commonly denoted H0). It receives this name as it is usually the hypothesis of no change: there is no difference, nothing changed after the experiment, there is no effect.

The hypotheses verified by statistical hypothesis tests are often theories about whether or not a random sample from a population comes from a given probability distribution. This may seem weird, but several problems can be cast in this way. Suppose, for example, that we would like to determine whether students from one classroom have significantly different grades than students from another. Any difference could possibly be attributed to chance, as some students may perform better on an exam simply because of luck.

An exam was applied to both classrooms. The exam results (by each student) are written below:

Results for classroom A are not significantly different from the results from classroom B. Any difference in means could have been explained due to chance alone.

Results are indeed different. The apparent differences are very unlikely to have occurred by chance.

Since we have fewer than 30 samples, we will be using a Two-Sample T-Test to test the hypothesis that the population means of the two samples are not equal. Besides, we will not be assuming equal variances. So let’s create our test object:

```csharp
using Accord.Statistics;
using Accord.Statistics.Testing;
using Accord.Statistics.Testing.Power;

var test = new TwoSampleTTest(A, B,
    hypothesis: TwoSampleHypothesis.ValuesAreNotEqual);
```

And now we can query it:

```csharp
// Check whether the test was significant (returns true)
Console.WriteLine("Significant: " + test.Significant);
```

Which reveals the test is indeed significant. And now we have at least two problems to address…
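For readers who would like to check the numbers outside .NET, here is a rough equivalent in Python with SciPy. The grade values below are invented purely for illustration, since the original exam table is not reproduced in this post:

```python
# Sketch of the same Welch two-sample t-test using SciPy.
from scipy import stats

A = [7.5, 8.2, 6.9, 7.8, 8.0, 7.2, 7.6, 8.1]  # classroom A grades (hypothetical)
B = [6.0, 5.8, 6.5, 5.5, 6.2, 5.9, 6.4, 6.1]  # classroom B grades (hypothetical)

# equal_var=False gives Welch's t-test (no equal-variance assumption),
# matching the choice made in the C# example above.
t_stat, p_value = stats.ttest_ind(A, B, equal_var=False)
print(p_value < 0.05)  # True: significant at the usual 0.05 level
```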

Problems

So the test was significant. But does this mean the difference itself is significant? Does it mean there is any serious problem with the school’s teaching method?

No – but it doesn’t mean the contrary either. It is impossible to tell just by looking at the p-level.

The test only said there was a difference, but it cannot tell the importance of this difference. Even though the two classes performed differently enough to trigger statistical significance, we don’t know whether this difference has any practical significance. A statistical test being significant is not a proof; it is just a piece of evidence to be balanced together with other pieces of information in order to draw a conclusion.

Suppose a large clinical trial is carried out to compare a new medical treatment with a standard one. The statistical analysis shows a statistically significant difference in lifespan when using the new treatment compared to the old one. But the increase in lifespan is at most three days, with average increase less than 24 hours, and with poor quality of life during the period of extended life. Most people would not consider the improvement practically significant.

In our classroom example, the difference in means is about 0.46 points. If principals believe a difference of less than 0.50 on a scale from 0.00 to 10.00 is not that critical, there may be no need to force students from the room with lower grades to start taking extra lessons after school. In other words, statistical hypothesis testing does not lead to automatic decision making. A statistically significant test is just another piece of evidence which should be balanced with other clues in order to make a decision or draw a conclusion.

Problem 2: Powerless tests can be misleading

The p-level reported by the significance test is the probability of observing data as extreme as ours given that the null hypothesis is correct. But this, by itself, is not enough. We must also know the probability that the test will reject the null hypothesis when the null hypothesis is false. To do so, we must compute the power of our test – and, better yet, we should have used this information to determine how many samples we would need to achieve a more informative test before we conducted our experiment. The power is the probability of detecting a difference if this difference indeed exists (Smith, 2011). So let’s see:

```csharp
// Check the actual power of the test...
Console.WriteLine("Power: " + test.Analysis.Power); // gives about 0.50
```

The test we performed had astonishingly small power: if the null hypothesis is false (and there actually is a difference between the classrooms), we have only about a 50% chance of correctly rejecting it. Therefore, this also means a 50% chance of producing a false negative – incorrectly saying there is no difference when actually there is. The table below summarizes the different errors we can make by rejecting or failing to reject the null hypothesis.

|                                    | Null hypothesis is true              | Null hypothesis is false              |
|------------------------------------|--------------------------------------|---------------------------------------|
| Fail to reject the null hypothesis | Correct (true negative)              | Type II error (beta): false negative  |
| Reject the null hypothesis         | Type I error (alpha): false positive | Correct outcome (true positive)       |

Tests with little statistical power are often inconsistent in the literature. Suppose, for example, that the first student from classroom B had earned a 7.52 instead of a 7.50. Due to the low power of the test, this little change would already be sufficient to render the test nonsignificant, and we would no longer be able to reject the null hypothesis that the two population means are not different. Due to the low power of the test, we can’t distinguish between a correct true negative and a Type II error. This is why powerless tests can be misleading and should never be relied upon for decision making (Smith, 2011b).

The power of a test increases with the sample size. To obtain a power of at least 80%, let’s see how many samples should have been collected:

```csharp
// Create a post-hoc analysis to determine sample size
var analysis = new TwoSampleTTestPowerAnalysis(test)
{
    Power = 0.80
};

analysis.ComputeSamples();

Console.WriteLine("Samples in each group: "
    + Math.Ceiling(analysis.Samples1)); // gives 57
```

So, we would actually need 57 students in each classroom to confidently affirm whether there was a difference or not. This is rather disappointing, since in the real world we wouldn’t be able to enroll more students and wait years for another exam just to adjust the sample size. In those situations, the power of the test can be increased by raising the significance threshold (Thomas, Juanes, 1996), although this clearly comes at the cost of a higher false-positive (Type I error) rate.
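To see where numbers like these come from, here is a back-of-the-envelope sample-size calculation in Python using the usual normal approximation for a two-sample t-test: n per group ≈ 2·(z₁₋α/₂ + z_power)²/d², where d is the standardized effect size (Cohen’s d). The effect size of 0.5 used below is an illustrative value, not the one from the classroom data:

```python
# Approximate per-group sample size for a two-sample t-test
# (normal approximation; exact t-based answers are slightly larger).
import math
from scipy.stats import norm

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_power = norm.ppf(power)          # desired power
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

print(samples_per_group(0.5))  # 63
```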

My test turned out nonsignificant. Should I accept the null hypothesis?

The short answer is ‘only if you have enough power’. Otherwise, definitely not.

If you have reason to believe the test you performed had enough power to detect a difference within the given Type II error rate, and it didn’t detect one, then accepting the null would most likely be acceptable. The acceptance should also be accompanied by an analysis of confidence intervals or effect sizes. Consider, for example, that some actual scientific discoveries were indeed made by accepting the null hypothesis rather than by contradicting it, one notable example being the discovery of the X-ray (Yu, 2012).

Further criticism

Much of the criticism associated with statistical hypothesis testing is often related not to the use of statistical hypothesis testing per se, but to how the significance outcomes from such tests are interpreted. Often it boils down to incorrectly believing a p-value is the probability of the null hypothesis being true when, in fact, it is only the probability of obtaining a test statistic as extreme as the one calculated from the data, always under the assumptions of the test in question.

Moreover, another problem may arise when we choose a null hypothesis which is obviously false. There is no point in hypothesis testing when the null hypothesis couldn’t possibly be true. For instance, it is very difficult to believe that a parameter of a continuous distribution is exactly equal to a hypothesized value, such as zero. Given enough samples, it will always be possible to find a difference, however small, and the test will invariably turn significant. Under those specific circumstances, statistical testing can be useless, as it bears no relationship to practical significance. That is why analyzing the effect size is important for determining the practical significance of a hypothesis test. Useful hypotheses also need to be probable, plausible and falsifiable (Beaulieu-Prévost, 2005).

The following links also summarize much of the criticism of statistical hypothesis testing. The last one includes very interesting (if not enlightening) comments (in its comments section) on common criticisms of the hypothesis testing method.

The Accord.NET Framework

Now that we have presented the statistical hypothesis testing framework, and now that the reader is aware of its drawbacks, we can start talking about performing those tests with the aid of a computer. The Accord.NET Framework is a framework for scientific computing with wide support for statistical tests and power analyses, without entering into the merits of whether they are valid or not. In short, it provides some scissors; feel free to run with them.

Tests available in the framework (at the time of this writing)

As you may have noticed, the sample code included in the previous section was C# code using the framework. In the aforementioned example, we created a T-Test for comparing the population means of two samples drawn from Normal distributions. The framework, nevertheless, includes many other tests, some with support for power analysis. Those include:

Framework usage should be rather simple. In order to illustrate it, the next section brings some example problems and their solution using the hypothesis testing module.

Example problems and solutions

Problem 1. Clairvoyant card game.

This is the second example from Wikipedia’s page on hypothesis testing. In this example, a person is tested for clairvoyance (the ability to gain information about something through extrasensory perception; detecting something without using the known human senses).

```csharp
// A person is shown the reverse of a playing card 25 times and is
// asked which of the four suits the card belongs to. Every time
// the person correctly guesses the suit of the card, we count this
// result as a correct answer. Let's suppose the person obtained 13
// correct answers out of the 25 cards.

// Since each suit appears 1/4 of the time in the card deck, we
// would assume the probability of producing a correct answer by
// chance alone would be 1/4.

// And finally, we must consider we are interested in whether the
// subject performs better than what would be expected by chance.
// In other words, that the person's probability of predicting
// a card is higher than the hypothesized chance value of 1/4.

BinomialTest test = new BinomialTest(
    successes: 13, trials: 25,
    hypothesizedProbability: 1.0 / 4.0,
    alternate: OneSampleHypothesis.ValueIsGreaterThanHypothesis);

Console.WriteLine("Test p-Value: " + test.PValue);     // ~ 0.003
Console.WriteLine("Significant? " + test.Significant); // True
```
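The same one-sided binomial p-value can be sketched in Python with SciPy: it is simply the probability of seeing 13 or more correct guesses out of 25 when each guess succeeds with probability 1/4.

```python
# P(X >= 13) for X ~ Binomial(n=25, p=0.25), via the survival function.
from scipy.stats import binom

p_value = binom.sf(12, 25, 0.25)  # P(X > 12) = P(X >= 13)
print(round(p_value, 3))          # 0.003
```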

Problem 2. Worried insurance company

This is a common example with variations given by many sources. Some of them can be found here and here.

```csharp
// An insurance company is reviewing its current policy rates. When the
// company initially set its rates, they believed the average claim amount
// was about $1,800. Now that the company is already up and running, the
// executive directors want to know whether the mean is greater than $1,800.
double hypothesizedMean = 1800;

// Now we have two hypotheses. The null hypothesis (H0) is that there is no
// difference between the initially set value of $1,800 and the average claim
// amount of the population. The alternate hypothesis is that the average
```
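The code above is cut short in this post, but the idea it sets up is a one-sample t-test of H0: mean = $1,800 against H1: mean > $1,800. Here is a sketch of that test in Python with SciPy (1.6+ for the `alternative` parameter); the claim amounts below are invented for illustration only:

```python
# One-sided, one-sample t-test: is the population mean claim above $1,800?
from scipy import stats

claims = [1920, 1870, 2010, 1905, 1830, 1990,
          1760, 1950, 1880, 2040, 1915, 1985]  # hypothetical claim amounts

t_stat, p_value = stats.ttest_1samp(claims, popmean=1800, alternative='greater')
print(p_value < 0.05)  # True: the sample mean (~1921) is significantly above 1800
```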

Problem 3. Differences among multiple groups (ANOVA)

This example comes from Wikipedia’s page on the F-test. Suppose we would like to study the effect of three different levels of a factor on a response (such as, for example, three levels of a fertilizer on plant growth). We have made 6 observations for each of the three levels a1, a2 and a3, and have written the results in the table below.

```csharp
double[][] outcomes = new double[,]
{
    // a1  a2  a3
    {   6,  8, 13 },
    {   8, 12,  9 },
    {   4,  9, 11 },
    {   5, 11,  8 },
    {   3,  6,  7 },
    {   4,  8, 12 },
}
.Transpose().ToArray();

// Now we can create an ANOVA for the outcomes
OneWayAnova anova = new OneWayAnova(outcomes);

// Retrieve the F-test
FTest test = anova.FTest;

Console.WriteLine("Test p-value: " + test.PValue);     // ~0.002
Console.WriteLine("Significant? " + test.Significant); // true

// Show the ANOVA table
DataGridBox.Show(anova.Table);
```

The last line in the example shows the ANOVA table using the framework’s DataGridBox object. The DataGridBox is a convenience class for displaying DataGridViews just as one would display a message using MessageBox.
The table is shown below:
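For comparison, the same one-way ANOVA can be run in Python: SciPy’s `f_oneway` takes one sample per factor level and returns the F statistic and its p-value.

```python
# One-way ANOVA on the same three factor levels as the C# example.
from scipy.stats import f_oneway

a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]

F, p = f_oneway(a1, a2, a3)
print(round(F, 1), round(p, 3))  # 9.3 0.002
```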

Problem 4. Biased bees

This example comes from the stats page of the College of Saint Benedict and Saint John’s University (Kirkman, 1996). It is a very interesting example, as it shows a case in which a t-test fails to see a difference between the samples because of the non-normality of the samples’ distributions. The Kolmogorov-Smirnov nonparametric test, on the other hand, succeeds.

The example deals with the preference of bees between two nearby blooming trees in an empty field. The experimenter has collected data measuring how much time a bee spends near a particular tree. The time starts to be measured when a bee first touches the tree, and is stopped when the bee moves more than 1 meter away from it. The samples below represent the measured times, in seconds, of the observed bees for each of the trees.

```csharp
double[] redwell =
{
    23.4, 30.9, 18.8, 23.0, 21.4, 1, 24.6, 23.8, 24.1, 18.7, 16.3, 20.3,
    14.9, 35.4, 21.6, 21.2, 21.0, 15.0, 15.6, 24.0, 34.6, 40.9, 30.7,
    24.5, 16.6, 1, 21.7, 1, 23.6, 1, 25.7, 19.3, 46.9, 23.3, 21.8, 33.3,
    24.9, 24.4, 1, 19.8, 17.2, 21.5, 25.5, 23.3, 18.6, 22.0, 29.8, 33.3,
    1, 21.3, 18.6, 26.8, 19.4, 21.1, 21.2, 20.5, 19.8, 26.3, 39.3, 21.4,
    22.6, 1, 35.3, 7.0, 19.3, 21.3, 10.1, 20.2, 1, 36.2, 16.7, 21.1, 39.1,
    19.9, 32.1, 23.1, 21.8, 30.4, 19.62, 15.5
};

double[] whitney =
{
    16.5, 1, 22.6, 25.3, 23.7, 1, 23.3, 23.9, 16.2, 23.0, 21.6, 10.8, 12.2,
    23.6, 10.1, 24.4, 16.4, 11.7, 17.7, 34.3, 24.3, 18.7, 27.5, 25.8, 22.5,
    14.2, 21.7, 1, 31.2, 13.8, 29.7, 23.1, 26.1, 25.1, 23.4, 21.7, 24.4, 13.2,
    22.1, 26.7, 22.7, 1, 18.2, 28.7, 29.1, 27.4, 22.3, 13.2, 22.5, 25.0, 1,
    6.6, 23.7, 23.5, 17.3, 24.6, 27.8, 29.7, 25.3, 19.9, 18.2, 26.2, 20.4,
    23.3, 26.7, 26.0, 1, 25.1, 33.1, 35.0, 25.3, 23.6, 23.2, 20.2, 24.7, 22.6,
    39.1, 26.5, 22.7
};

// Create a t-test as a first attempt.
var t = new TwoSampleTTest(redwell, whitney);

Console.WriteLine("T-Test");
Console.WriteLine("Test p-value: " + t.PValue);     // ~0.837
Console.WriteLine("Significant? " + t.Significant); // false

// Create a non-parametric Kolmogorov-Smirnov test
var ks = new TwoSampleKolmogorovSmirnovTest(redwell, whitney);

Console.WriteLine("KS-Test");
Console.WriteLine("Test p-value: " + ks.PValue);     // ~0.038
Console.WriteLine("Significant? " + ks.Significant); // true
```
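The same contrast can be reproduced in Python with SciPy, using the bee data listed above: the t-test sees no difference in means, while the two-sample Kolmogorov-Smirnov test flags the difference in distributions.

```python
# t-test vs. two-sample KS test on the bee-preference data above.
from scipy import stats

redwell = [
    23.4, 30.9, 18.8, 23.0, 21.4, 1, 24.6, 23.8, 24.1, 18.7, 16.3, 20.3,
    14.9, 35.4, 21.6, 21.2, 21.0, 15.0, 15.6, 24.0, 34.6, 40.9, 30.7,
    24.5, 16.6, 1, 21.7, 1, 23.6, 1, 25.7, 19.3, 46.9, 23.3, 21.8, 33.3,
    24.9, 24.4, 1, 19.8, 17.2, 21.5, 25.5, 23.3, 18.6, 22.0, 29.8, 33.3,
    1, 21.3, 18.6, 26.8, 19.4, 21.1, 21.2, 20.5, 19.8, 26.3, 39.3, 21.4,
    22.6, 1, 35.3, 7.0, 19.3, 21.3, 10.1, 20.2, 1, 36.2, 16.7, 21.1, 39.1,
    19.9, 32.1, 23.1, 21.8, 30.4, 19.62, 15.5]

whitney = [
    16.5, 1, 22.6, 25.3, 23.7, 1, 23.3, 23.9, 16.2, 23.0, 21.6, 10.8, 12.2,
    23.6, 10.1, 24.4, 16.4, 11.7, 17.7, 34.3, 24.3, 18.7, 27.5, 25.8, 22.5,
    14.2, 21.7, 1, 31.2, 13.8, 29.7, 23.1, 26.1, 25.1, 23.4, 21.7, 24.4, 13.2,
    22.1, 26.7, 22.7, 1, 18.2, 28.7, 29.1, 27.4, 22.3, 13.2, 22.5, 25.0, 1,
    6.6, 23.7, 23.5, 17.3, 24.6, 27.8, 29.7, 25.3, 19.9, 18.2, 26.2, 20.4,
    23.3, 26.7, 26.0, 1, 25.1, 33.1, 35.0, 25.3, 23.6, 23.2, 20.2, 24.7, 22.6,
    39.1, 26.5, 22.7]

t_stat, p_t = stats.ttest_ind(redwell, whitney)   # pooled-variance t-test, p ~ 0.84
ks_stat, p_ks = stats.ks_2samp(redwell, whitney)  # two-sample KS test, p ~ 0.04

print(p_t > 0.05)   # no significant difference in means
print(p_ks < 0.05)  # but the distributions do differ
```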

Problem 5. Comparing classifier performances

The last example comes from (E. Ientilucci, 2006) and deals with comparing the performance of two different raters (classifiers) to see whether their performances are significantly different.

Suppose an experimenter has two classification systems, both trained to classify observations into one of 4 mutually exclusive categories. In order to measure the performance of each classifier, the experimenter confronted their classification labels with the ground truth for a testing dataset, writing the respective results in the form of contingency tables.

The hypothesis to be tested is that the performance of the two classifiers are the same.

```csharp
// Create the confusion matrix for the first system.
var a = new GeneralConfusionMatrix(new int[,]
{
    { 317,  23,  0,  0 },
    {  61, 120,  0,  0 },
    {   2,   4, 60,  0 },
    {  35,  29,  0,  8 },
});

// Create the confusion matrix for the second system.
var b = new GeneralConfusionMatrix(new int[,]
{
    { 377, 79,  0, 0 },
    {   2, 72,  0, 0 },
    {  33,  5, 60, 0 },
    {   3, 20,  0, 8 },
});

var test = new TwoMatrixKappaTest(a, b);

Console.WriteLine("Test p-value: " + test.PValue);     // ~0.628
Console.WriteLine("Significant? " + test.Significant); // false
```

In this case, the test didn’t show enough evidence to confidently reject the null hypothesis. Therefore, one should refrain from asserting anything about differences between the two systems unless the power of the test is known.

Unfortunately, I could not find a clear indication in the literature about the power of a two-matrix Kappa test. However, since the test statistic is asymptotically normal, one could try checking the power of this test by analyzing the power of the underlying Z-test. If there is enough power, one could possibly accept the null hypothesis that there are no large differences between the two systems.

Suggestions

As always, I expect the above discussion and examples could be useful for interested readers and users. However, if you believe you have found a flaw or would like to discuss any portion of this post, please feel free to do so by posting on the comments section.

PS: The classroom example uses a T-test to test for differences in population means. The T-test assumes a normal distribution. The data, however, is not exactly normal, since it is clipped between 0 and 10. Suggestions for a better example would also be appreciated!

Jordan, one of the readers of the blog, wrote to point out some cool references for machine learning, mathematics and artificial intelligence.

Thanks again for all your contributions. I’m a .NET programmer coming from a background of studying politics and Latin American studies so when the moment came that I realized I was interested in A.I., I didn’t have many resources to turn to.

I have a collection of books and links that I found very, very helpful when I was trying to learn about these concepts. I was thinking they could be useful to some of your readers who were in my shoes two years ago. So without further ado:

Handbook of Normal Frames and Coordinates

“This can be a heavy text, but if you understand the concept, this can take you to the next level – I would categorize this as Topology but has applications in quantum physics, theoretical mathematics, theoretical physics etc…- in context of your blogs – granted this is a dedicated read – I think readers will be able to appreciate and truly understand what a Hilbert Space is after reading this book.”

Linear Algebra

“I liked this one because it was free and is still highly rated – I have also heard great reviews about David Poole’s book on linear algebra, but I couldn’t get myself to shell out the money when this was free.”

Bio-Inspired Artificial Intelligence

“Excellent reference – fairly in depth for the number of topics it covers, lots of illustrations (illustrations are always a good thing 🙂 and I’ve also found it to be a useful source for inspiration. I bought this book while I was looking into fuzzy swarm intelligence – it doesn’t have all the answers, but I am simply in love with this book.”

Video lectures on Machine Learning

“A collection of video lectures featuring very brilliant people – let’s face it… if you’re doing or are interested in mathematics this complex… you probably don’t know too many people who you can talk to / learn from on the subject unless you’re in a University studying this kind of thing – these are some great lectures on machine learning – I just recently found this site but wish I had found it sooner – it’s great if you’re just exploring machine learning or are very well versed in it – however, I fall somewhere in the middle of that distribution so take it for what it’s worth!”

Zero: The Biography of a Dangerous Idea

This one is more historical and conceptual than technical but it’s a captivating read and will help get you through those hard times when you want to put down that book on K-Dimensional Manifolds, but still need to satisfy your mathematical mind (some say it’s never at rest once you catch that "learning bug").

Like K-Means, Gaussian Mixture Models (GMMs) can be regarded as a type of unsupervised learning or clustering method. They are among the most statistically mature methods for clustering. But unlike K-Means, GMMs are able to build soft clustering boundaries, i.e., points in space can belong to any class with a given probability.

The code presented here is part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications in .NET. To use the framework in your projects, install it by typing Install-Package Accord.MachineLearning in your IDE’s NuGet package manager.

Contents

Introduction

In statistics, a mixture model is a probabilistic model which assumes the underlying data belong to a mixture distribution. In a mixture distribution, the density function is just a convex combination (a linear combination in which all coefficients or weights sum to one) of other probability density functions:

p(x) = w1 p1(x) + w2 p2(x) + … + wn pn(x),   with w1 + w2 + … + wn = 1.

The individual pi(x) density functions that are combined to make the mixture density p(x) are called the mixture components, and the weights w1, w2, …, wn associated with each component are called the mixture weights or mixture coefficients.

The most common mixture distribution is the Gaussian (Normal) density function, in which each of the mixture components are Gaussian distributions, each with their own mean and variance parameters.
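The convex-combination definition above can be illustrated numerically. The sketch below (with arbitrary example weights and component parameters) builds a two-component Gaussian mixture density and checks that the result is itself a valid density:

```python
# A two-component Gaussian mixture density as a weighted sum of pdfs.
import numpy as np
from scipy.stats import norm

w = [0.3, 0.7]                         # mixture weights, sum to 1
components = [norm(0, 1), norm(4, 2)]  # N(0, 1) and N(4, 2)

def mixture_pdf(x):
    # p(x) = w1*p1(x) + w2*p2(x)
    return sum(wi * ci.pdf(x) for wi, ci in zip(w, components))

# Numerically integrating the mixture over a wide range gives ~1,
# confirming the convex combination is again a probability density.
xs = np.linspace(-10, 15, 2001)
dx = xs[1] - xs[0]
total = float(np.sum(mixture_pdf(xs)) * dx)
print(abs(total - 1.0) < 1e-3)  # True
```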

To give a more solid and concrete understanding of the Gaussian mixture models, we will be jumping directly on how to represent those abstractions in source code through the use of class diagrams. Hopefully class diagrams may give a better and direct overview on the subject than a lengthy theoretical discussion.

Source code

First we will discuss the characteristics of any probability distribution. The IDistribution interface (depicted below) says any probability distribution may have a probability function and a distribution function. It also says that probability distributions can be fitted to sets of observations through a parameter estimation method.

Since distribution functions can be either univariate or multivariate, the methods above accept any number of scalar values as input through the use of the params keyword. The Fit method for parameter estimation also accepts a general System.Array because we could be given either an array of univariate observations in the form of a double[] or an array of multivariate observations in the form of a jagged double[][] array. Since each observation may have a different weight in the estimation process, the weights parameter can be used to inform those weights to the fitting method.

Probability distributions may also have some associated measures, mainly the Expectation and the Variance of the modeled random variable. Depending if the distribution is univariate or multivariate, the expected values and variances can be either a single scalar or a vector of scalars. For multivariate distributions the variance can be also represented by a symmetric, positive-definite matrix known as the variance-covariance matrix.

Moreover, probability distributions can also be continuous or discrete. In continuous distributions the probability function is known as the Probability Density Function (pdf), and in discrete distributions the probability function is known as the Probability Mass Function (pmf). However, since our distribution of interest, the Gaussian, is a continuous distribution, we will be focusing only on continuous multivariate distributions:

The Normal (Gaussian) distribution and the mixture distributions fall under the multivariate continuous distributions category and are implemented as such.

Normal distributions

The Normal (or Gaussian) multivariate distribution is a multivariate distribution whose parameters are the mean vector μ and a variance-covariance matrix Σ. Its probability density function is given by:

p(x) = (2π)^(−k/2) |Σ|^(−1/2) exp( −½ (x − μ)ᵀ Σ⁻¹ (x − μ) )

where k is the dimension of the vector x.

The more detailed class diagram for the normal distribution shows the multivariate characteristics of most of its methods, in particular of the Fit method for estimating Gaussian parameters from a multivariate set of data. Each sample, given in the form of a double[], may also have different associated weights. In this case, the weight associated with each of the samples can be passed through the weights parameter. The source code for the Gaussian distribution can be found in the NormalDistribution.cs class.
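Although the framework's implementation is in C# (and handles the multivariate case), what the weighted Fit method computes can be sketched in a few lines. The snippet below is an illustrative Python sketch for the univariate case only; `normal_pdf` and `fit_normal` are hypothetical names, not part of Accord.NET:

```python
import math

def normal_pdf(x, mean, var):
    """Density of a univariate Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_normal(samples, weights=None):
    """Weighted maximum-likelihood estimates of the mean and variance."""
    if weights is None:
        weights = [1.0] * len(samples)
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, samples)) / total
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, samples)) / total
    return mean, var
```

With uniform weights this reduces to the ordinary sample mean and (biased) sample variance, which is exactly what the weights parameter generalizes.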

Mixture distributions

Mixture distributions are just convex combinations of other probability distributions. That said, a mixture distribution has an array of component distributions and an array of coefficients containing the weights of each of the component distributions.
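To make the idea of a convex combination concrete, here is a minimal sketch (Python, hypothetical names, univariate Gaussian components only): the mixture density is just the coefficient-weighted sum of the component densities:

```python
import math

def normal_pdf(x, mean, var):
    """Density of a univariate Gaussian component."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mixture_pdf(x, coefficients, components):
    """Density of a mixture: the convex combination of its component
    densities. coefficients are non-negative and sum to one; components
    is a list of (mean, variance) pairs."""
    return sum(c * normal_pdf(x, m, v)
               for c, (m, v) in zip(coefficients, components))
```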

In the generic implementation above, T is the type of the distributions used in the mixture. The most common mixture distribution, the Gaussian Mixture Distribution, could then be created by instantiating a Mixture object passing the initial Normal distributions as constructor parameters.

Alternatively, the parameters of the mixture distributions could be estimated from a set of observations by calling the Fit method. To estimate the parameters of a mixture distribution we will be using a common technique known as the Expectation-Maximization algorithm.

Initialization

Initialize the distribution parameters, such as the means, covariances and mixing coefficients, and evaluate the initial value of the log-likelihood (the goodness of fit of the current distribution against the observation dataset);

Expectation

Evaluate the responsibilities (i.e. weight factors of each sample) using the current parameter values;

Maximization

Re-estimate the parameters using the responsibilities found in the previous step;

Repeat

Re-evaluate the log-likelihood and check if it has changed; if it has changed less than a given threshold, the algorithm has converged.
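The four steps above can be sketched as follows. This is an illustrative Python translation for the univariate case, with hypothetical names, not the framework's C# implementation:

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_fit(data, means, variances, coeffs, threshold=1e-8, max_iter=1000):
    """Expectation-Maximization for a univariate Gaussian mixture (sketch)."""
    k = len(means)
    log_likelihood = None
    for _ in range(max_iter):
        # Expectation: evaluate the responsibilities (weight factors of
        # each sample) using the current parameter values
        resp = []
        for x in data:
            weighted = [coeffs[j] * normal_pdf(x, means[j], variances[j])
                        for j in range(k)]
            total = sum(weighted)
            resp.append([w / total for w in weighted])
        # Maximization: re-estimate the parameters using the responsibilities
        for j in range(k):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, data)) / nj
            coeffs[j] = nj / len(data)
        # Repeat: re-evaluate the log-likelihood and stop once it has
        # changed by less than the given threshold
        new_ll = sum(math.log(sum(coeffs[j] * normal_pdf(x, means[j], variances[j])
                                  for j in range(k)))
                     for x in data)
        if log_likelihood is not None and abs(new_ll - log_likelihood) < threshold:
            break
        log_likelihood = new_ll
    return means, variances, coeffs
```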

Below is the actual realization of the algorithm as implemented in the Fit method of the Mixture<T> class.

C#

/// <summary>
///   Fits the underlying distribution to a given set of observations.
/// </summary>
///
/// <param name="observations">The array of observations to fit the model against. The array
///   elements can be either of type double (for univariate data) or
///   type double[] (for multivariate data).</param>
/// <param name="weights">The weight vector containing the weight for each of the samples.</param>
/// <param name="options">Optional arguments which may be used during fitting, such

Gaussian Mixture Models

Gaussian mixture models are among the most commonly used examples of mixture distributions. The GaussianMixtureModel class encompasses a Mixture<NormalDistribution> object and provides methods to learn from data and to perform actual classification through a simplified interface.

Moreover, a common problem which arises when fitting mixture models through E-M is the proper initialization of the mixture parameters. One popular approach is to initialize the mixture parameters using the centroids detected by the K-Means algorithm. The code below shows how the mixture parameters are computed by first creating a KMeans object and then passing the clusters to the mixture model for further processing.
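As a rough sketch of that initialization step (Python, univariate data, hypothetical helper name, not the framework's code): each cluster found by K-Means yields an initial mean, variance and mixing coefficient for E-M:

```python
def initial_mixture_parameters(data, labels, k):
    """Initial means, variances and mixing coefficients for E-M, derived
    from a K-Means clustering of univariate data (hypothetical helper)."""
    means, variances, coeffs = [], [], []
    for j in range(k):
        cluster = [x for x, l in zip(data, labels) if l == j]
        m = sum(cluster) / len(cluster)
        means.append(m)
        variances.append(sum((x - m) ** 2 for x in cluster) / len(cluster))
        coeffs.append(len(cluster) / len(data))
    return means, variances, coeffs
```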

Using the code

Code usage is rather simple. First, instantiate a new GaussianMixtureModel object passing the desired number of components in the Gaussian mixture. Then, call the Compute(double[][], double) method passing the learning samples and a desired convergence threshold.

C#


// Create a new Gaussian Mixture Model with 2 components
GaussianMixtureModel gmm = new GaussianMixtureModel(2);

// Compute the model (estimate)
gmm.Compute(samples, 0.0001);

// Classify a single sample
int c = gmm.Gaussians.Nearest(sample);

After training has been completed, a new sample can be classified by calling the Gaussians.Nearest(double[]) method, as shown above.
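Conceptually, classifying a sample means picking the mixture component with the highest posterior weight for it. A minimal sketch (Python, univariate, hypothetical names, not the framework's API):

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def classify(x, coeffs, components):
    """Index of the mixture component with the highest posterior weight.
    components is a list of (mean, variance) pairs."""
    scores = [c * normal_pdf(x, m, v)
              for c, (m, v) in zip(coeffs, components)]
    return scores.index(max(scores))
```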

Sample application

The accompanying sample application demonstrates how Gaussian Mixture Models can be used to cluster normally-distributed samples of data. By clicking the “Create random data” button, a given number of Normal distributions will be created in the two-dimensional space using random mean and covariance matrices.

After the data has been created, you can use the “Fit a Gaussian Mixture Model” button to fit a mixture of Gaussians to the data. Each of the Gaussians will receive a random color and the samples which have the greatest probability of belonging to any of the Gaussians will be colored accordingly.

Left: a random dataset containing three Gaussian clusters. Right: the clusters as identified by the Gaussian Mixture Model.

Conclusion

In this article we saw how Gaussian Mixture Models can be successfully used to create soft clustering boundaries around data. Those soft boundaries are possible because, in a mixture model, each sample belongs to a cluster only with a certain probability.

A main drawback of GMMs is that the number of Gaussian mixture components, as in K-Means, is assumed to be known a priori, so they cannot be considered a fully unsupervised clustering method. To aid in this situation, one could use additional criteria such as the Minimum Message Length (MML) criterion.

Another problem arises with the correct initialization of the mixture components. A poor choice of initial parameters will invariably lead to poor results, as the E-M algorithm converges only to a local optimum. A commonly used solution is initialization by randomly sampling from the mixture data. Another common solution, covered by the implementation presented here, is to use K-Means to perform initialization. However, K-Means itself may also converge to a poor solution and negatively impact the mixture model fitting.

In statistics and machine learning, k-means clustering is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean [4]. In its most common form, the algorithm is an iterative greedy algorithm which converges to a local optimum after a certain number of iterations.

Illustration of the K-Means algorithm.

The algorithm works by first selecting k locations at random to be the initial centroids for the clusters. Each observation is then assigned to the cluster which has the nearest centroid, and the centroids are recalculated using the mean value of the assigned observations. The algorithm then repeats this process until the cluster centroids do not change anymore, or until the change is less than a given threshold.
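The procedure above can be sketched in a few lines. The snippet below is an illustrative Python version of Lloyd's algorithm for one-dimensional data (hypothetical function name), not the Accord.NET implementation:

```python
import random

def kmeans(data, k, threshold=1e-9, seed=0):
    """Lloyd's algorithm for one-dimensional data (illustrative sketch)."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)  # pick k observations as initial centroids
    while True:
        # Assign each observation to the cluster with the nearest centroid
        labels = [min(range(k), key=lambda j: (x - centroids[j]) ** 2)
                  for x in data]
        # Recompute each centroid as the mean of its assigned observations
        new_centroids = []
        for j in range(k):
            members = [x for x, l in zip(data, labels) if l == j]
            new_centroids.append(sum(members) / len(members) if members
                                 else centroids[j])
        # Stop once the centroids change by less than the threshold
        if all(abs(a - b) < threshold
               for a, b in zip(centroids, new_centroids)):
            return new_centroids, labels
        centroids = new_centroids
```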

There are other refinements and extensions of the algorithm. The version depicted above is its most common form, also referred to as Lloyd’s algorithm.

The source code has been implemented using Accord.NET and is now part of the framework. In the current version (2.1.2), the following classes related to K-Means are contained inside the Accord.MachineLearning namespace. The source code available on this page contains only the parts of the framework actually needed to support the algorithm.

Class diagram for the K-Means algorithm.

The KMeans class is the main class representing the K-Means algorithm. The algorithm itself is implemented in the Compute(double[][] data, double threshold) method, which accepts a set of observations and a convergence threshold to determine when the method should stop.

// The algorithm stops when there is no further change in
// the centroids (difference is less than the threshold).
if (centroids.IsEqual(newCentroids, threshold))
    break;

// go to next generation
centroids = newCentroids;

}
while (true);

// Compute cluster information (optional)
for (int i = 0; i < k; i++)
{
    // Extract the data for the current cluster
    double[][] sub = data.Submatrix(labels.Find(x => x == i));

    // Compute the current cluster variance
    covariances[i] = Statistics.Tools.Covariance(sub, centroids[i]);

    // Compute the proportion of samples in the cluster
    proportions[i] = (double)sub.Length / data.Length;
}

// Return the classification result
return labels;
}

The implementation is quite straightforward and does not use additional techniques to avoid convergence problems. More refined techniques may be added to the implementation in the future, so please make sure to download the latest version of Accord.NET Framework for the most up-to-date revision of the code.

To use the code, create a new instance of the KMeans class passing the desired number of clusters to its constructor. Additionally, you may also pass a distance function to be used as a distance metric during clustering. The default is to use the squared Euclidean distance.

The k-means clustering algorithm is commonly used in computer vision as a form of image segmentation. The results of the segmentation are often used to aid border detection and object recognition. The sample application performs image segmentation using the standard squared Euclidean distance over RGB pixel color space. There are, however, better distance metrics to be used for image segmentation, such as weighted distances and other color spaces, which will not be addressed in this example.

To perform image segmentation, we will first translate our image into an array of pixel values. The single image will be read, pixel by pixel, into a jagged array where each element is a double array of length 3. Each element of those double arrays will contain one of the three RGB values scaled to the interval [−1, 1].

After we perform clustering on this array of pixel values, each pixel will have an associated cluster label. Each of these values will then be swapped by its corresponding cluster centroid. The source code below is called when one clicks the Run button in the application.
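As a rough sketch of those two steps (Python, hypothetical helper names, not the application's code): scaling the 0-255 pixel values into [−1, 1], and swapping each label for its cluster centroid:

```python
def to_features(rgb_pixels):
    """Scale 0-255 RGB triplets to the interval [-1, 1]."""
    return [[c / 255.0 * 2.0 - 1.0 for c in p] for p in rgb_pixels]

def recolor(labels, centroids):
    """Swap each pixel's cluster label for the corresponding centroid."""
    return [centroids[l] for l in labels]
```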

K-Means is a very simple and popular approach to clustering. The implementation presented here is the same implementation used in Accord.NET. As can be seen, the method can be easily extended with custom distance functions through delegates or lambda expressions, and can be used in different contexts, such as image segmentation, without further modifications. As a suggestion for improvement, the method can be further sped up by using the triangle inequality as suggested in the paper "Using the triangle inequality to accelerate k-means", by Charles Elkan.

In the next article, we will see how we can use K-Means to initialize the Expectation-Maximization algorithm for estimating Gaussian Mixture densities in Gaussian Mixture Models. Those articles will form the basis for Continuous density Hidden Markov Models.

The code uses the SVM library in Accord.NET Framework. The framework, which runs on .NET and is written mostly in C#, supports standard or multiclass support vector machines for either classification or regression, having more than 20 kernel functions available to choose from.

In the article, the same data set and the same amount of training and testing samples have been used as in the previous Kernel Discriminant Analysis article for comparison purposes. The experimental results show that SVMs can outperform KDA both in terms of efficiency and accuracy, requiring much less processing time and memory while still producing robust results.

RANSAC is an iterative method to build robust estimates for parameters of a mathematical model from a set of observed data which is known to contain outliers. The RANSAC algorithm is often used in computer vision, e.g., to simultaneously solve the correspondence problem and estimate the fundamental matrix related to a pair of stereo cameras.

The code presented here is part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications in .NET. To use the framework in your projects, install it by typing Install-Package Accord.MachineLearning in your IDE’s NuGet package manager.

Introduction

RANSAC is an abbreviation for “RANdom SAmple Consensus”. It is an iterative method to estimate parameters of a mathematical model from a set of observed data which may contain outliers. It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles in 1981.

The basic assumption is that the data consists of “inliers”, i.e., data whose distribution can be explained by some mathematical model, and “outliers” which are data that do not fit the model. Outliers could be considered points which come from noise, erroneous measurements or simply incorrect data. RANSAC also assumes that, given a set of inliers, there exists a procedure which can estimate the parameters of a model that optimally explains or fits this data.
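A useful consequence of these assumptions is the classical estimate of how many iterations are needed: if a fraction w of the data are inliers and each trial samples s points, then the number of trials N needed to pick, with probability p, at least one all-inlier sample is N = log(1 − p) / log(1 − w^s). A minimal sketch (Python, hypothetical name):

```python
import math

def ransac_trials(p, w, s):
    """Trials needed to draw, with probability p, at least one sample of
    size s made only of inliers, when a fraction w of the data are inliers."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** s))
```

This is the same update performed inside the RANSAC loop each time a better model is found, with w re-estimated from the current inlier count.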

Example: Fitting a simple linear regression

We can use RANSAC to robustly fit a linear regression model to noisy data. Consider the example below, in which we have a cloud of points that seems to belong to a line. These are the inliers of the data. The other points, which can be seen as measurement errors or extreme noise values, are points expected to be considered outliers.

Linear structure contained in noisy data.

RANSAC is able to automatically distinguish the inliers from the outliers through the evaluation of the linear regression model. To do so, it randomly selects subsets of the data and attempts to fit linear regression models using them. The model which best explains most of the data will then be returned as the most likely correct model fit.

The image below shows the result of fitting a linear regression directly (as shown by the red line) and using RANSAC (as shown by the blue line). We can see that the red line poorly represents the structure of the data because it considers all points when fitting the regression model. The blue line seems to be a much better representation of the linear relationship hidden inside the overall noisy data.

Hidden linear structure inside noisy data. The red line shows the fitting of a linear regression model directly considering all data points. The blue line shows the same result using RANSAC.

Source code

The code below implements RANSAC using a generic approach. Models are considered to be of the reference type TModel and the type of data manipulated by this model is considered to be of the type TData. This approach allows for the creation of a general purpose RANSAC algorithm which can be used in very different contexts, be it the fitting of linear regression models or the estimation of homography matrices from pairs of points in different images.

This code is available in the class RANSAC of the Accord.NET Framework (source).

C#

/// <summary>
///   Computes the model using the RANSAC algorithm.
/// </summary>
public TModel Compute(TData[] points, out int[] inliers)
{
    // We are going to find the best model (the one which fits
    // the maximum possible number of inlier points).
    TModel bestModel = null;
    int[] bestInliers = null;
    int maxInliers = 0;

    // For this we are going to search for random samples
    // of the original points which contain no outliers.
    int count = 0;             // Total number of trials performed
    double N = maxEvaluations; // Estimate of the number of trials needed.

    // While the number of trials is less than our estimate,
    // and we have not surpassed the maximum number of trials
    while (count < N && count < maxEvaluations)
    {
        int[] idx;
        TModel model = null;
        int samplings = 0;

        // While the number of samples attempted is less
        // than the maximum limit of attempts
        while (samplings < maxSamplings)
        {
            // Select at random s data points to form a trial model.
            idx = Statistics.Tools.Random(points.Length, s);
            TData[] sample = points.Submatrix(idx);

            // If the sampled points are not in a degenerate configuration,
            if (!degenerate(sample))
            {
                // Fit a model using the random selection of points
                model = fitting(sample);
                break; // Exit the while loop.
            }

            samplings++; // Increase the samplings counter
        }

        // Now, evaluate the distances between all points and the model, returning the
        // indices of the points that are inliers (according to a distance threshold t).
        idx = distances(model, points, t);

        // Check if this model has the highest number of inliers so far:
        if (idx.Length > maxInliers)
        {
            // Yes, this model has the highest number of inliers.
            maxInliers = idx.Length; // Set the new maximum,
            bestModel = model;       // this is the best model found so far,
            bestInliers = idx;       // store the indices of the current inliers.

            // Update the estimate of N, the number of trials to ensure we pick,
            // with probability p, a data set with no outliers.
            double pInlier = (double)idx.Length / (double)points.Length;
            double pNoOutliers = 1.0 - System.Math.Pow(pInlier, s);

            N = System.Math.Log(1.0 - probability) / System.Math.Log(pNoOutliers);
        }

        count++; // Increase the trial counter.
    }

    inliers = bestInliers;
    return bestModel;
}

Besides the generic parameters, the class utilizes three delegate functions during execution.

The Fitting function, which should accept a subset of the data and use it to fit a model of the chosen type, which should be returned by the function;

The Degenerate function, which should check if a subset of the training data is already known to result in a poor model, to avoid unnecessary computations; and

The Distance function, which should accept a model and a subset of the training data to compute the distance between the model prediction and the expected value for a given point. It should return the indices of only those points whose predicted and expected values are within a given tolerance threshold of each other.

Using the code

In the following example, we will fit a simple linear regression of the form x→y using RANSAC. The first step is to create a RANSAC algorithm passing the generic type parameters of the model to be built, i.e. SimpleLinearRegression, and of the data to be fitted, i.e. a double array.

In this case we will be using a double array because the first position will hold the values for the input variable x, and the second position will hold the values for the output variable y. If you are already using .NET 4, it is possible to use the Tuple type instead.

After the creation of the RANSAC algorithm, we should set the delegate functions which will tell RANSAC how to fit a model, how to tell if a set of samples is degenerate and how to check for inliers in data.

C#

// Set the RANSAC functions to evaluate and test the model

ransac.Fitting = // Define a fitting function
    delegate(double[][] sample)
    {
        // Retrieve the training data
        double[] inputs = sample.GetColumn(0);
        double[] outputs = sample.GetColumn(1);

        // Build a Simple Linear Regression model
        var r = new SimpleLinearRegression();
        r.Regress(inputs, outputs);
        return r;
    };

ransac.Degenerate = // Define a check for degenerate samples
    delegate(double[][] sample)
    {
        // In this case, we will not be performing such checks.
        return false;
    };

ransac.Distances = // Define an inlier detector function
    delegate(SimpleLinearRegression r, double[][] sample, double threshold)
    {
        List<int> inliers = new List<int>();
        for (int i = 0; i < sample.Length; i++)
        {
            // Compute the error for each point
            double input = sample[i][0];
            double output = sample[i][1];
            double error = r.Compute(input) - output;

            // If the squared error is below the given threshold,
            // the point is considered to be an inlier.
            if (error * error < threshold)
                inliers.Add(i);
        }
        return inliers.ToArray();
    };

Finally, all we have to do is call the Compute method passing the data. The best model found will be returned by the function, while the given set of inliers indices for this model will be returned as an out parameter.

C#

// Finally, try to fit the regression model using RANSAC
int[] inlierIndices; // indices of the inlier points in the data set
SimpleLinearRegression rlr = ransac.Compute(data, out inlierIndices);

Sample application

The accompanying source application demonstrates the fitting of the simple linear regression model with and without using RANSAC. The application accepts Excel worksheets containing the independent values in the first column and the dependent variables in the second column.

Today I have just released the first version of Accord.NET. The Accord.NET Framework is a C# framework I have developed over the years while working on several areas of artificial intelligence, machine learning and statistics.

The Accord.NET Framework extends the excellent AForge.NET Framework with new tools and libraries. In order to use Accord.NET, you must have AForge.NET already installed. The major version number of Accord.NET will be used to indicate compatibility with AForge.NET versions, so this first release starts at version 2.0.0.

The framework comprises a set of libraries and sample applications which demonstrate their features:

Accord.Statistics – library with statistical analysis and other tools;

Accord.Imaging – extension to the AForge.NET Imaging library with new filters and routines;

Accord.Neuro – extension to the AForge.NET Neuro library with other learning algorithms;

The Framework has just been released, so expect bugs and unpolished/unfinished areas. The code is released under the LGPL license. For additional help, issues, and discussions, please refer to the recently created forums.
