Classifiers usually return some sort of instance score with their classifications. These scores can be used as probabilities in various calculations, but first they need to be calibrated. Naive Bayes, for example, is a very useful classifier, but the scores it produces are usually "bunched" around 0 and 1, making them poor probability estimates. Support vector machines have a similar problem. Both classifier types should be calibrated before their scores are used as probability estimates.

This module calibrates classifier scores using a method called the Pool Adjacent Violators (PAV) algorithm. After you train a classifier, you take a (usually separate) set of test instances and run them through the classifier, collecting the scores assigned to each. You then supply this set of instances to the calibrate function defined here, and it will return a set of ranges mapping from a score range to a probability estimate.

For example, assume you have a set of instance results from your classifier, each of the form [ASSIGNED_SCORE, TRUE_CLASS].
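As a purely hypothetical illustration (the scores and class labels below are invented, not taken from any real classifier), such a result set might look like this:

```perl
use strict;
use warnings;

# Hypothetical classifier output: each element is an
# [ASSIGNED_SCORE, TRUE_CLASS] pair, with TRUE_CLASS 1 for a
# positive instance and 0 for a negative one.
my $results = [
    [ 0.95, 1 ],
    [ 0.87, 1 ],
    [ 0.81, 0 ],
    [ 0.72, 1 ],
    [ 0.44, 0 ],
    [ 0.30, 0 ],
];
```

You would then hand a structure like this to calibrate(), along with the $sorted flag described below.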

$sorted is a boolean flag (0 by default) indicating whether the data are already sorted by score. Unless it is set to 1, calibrate() will sort the data itself.

Calibrate returns a reference to an ordered list of references:

[ [score, prob], [score, prob], [score, prob] ... ]

Scores will be in descending numerical order. See the DESCRIPTION section for how this structure is interpreted. You can pass this structure to the score_prob function, along with a new score, to get a probability.
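One plausible reading of this structure (a sketch only — the module's own score_prob may differ, and lookup_prob here is a hypothetical stand-in for it) is that each [score, prob] pair marks the top of a range, so a new score maps to the probability of the first entry it meets or exceeds:

```perl
use strict;
use warnings;

# A hypothetical calibrated map of the form calibrate() returns:
# ordered [score, prob] pairs, scores in descending order.
my $map = [ [ 0.90, 0.85 ], [ 0.60, 0.50 ], [ 0.20, 0.10 ] ];

# Sketch of a score-to-probability lookup: return the probability
# of the first entry whose score the new score meets or exceeds,
# falling back to the last entry's probability for very low scores.
sub lookup_prob {
    my ( $map, $score ) = @_;
    for my $pair (@$map) {
        return $pair->[1] if $score >= $pair->[0];
    }
    return $map->[-1][1];
}

my $p = lookup_prob( $map, 0.75 );    # lands in the 0.60 range
```

With the map above, a score of 0.75 fails the 0.90 test but passes the 0.60 test, so it is assigned a probability of 0.50.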

The PAV algorithm is conceptually straightforward. Given a set of training cases ordered by the scores assigned by the classifier, it first assigns a probability of one to each positive instance and a probability of zero to each negative instance, and puts each instance in its own group. It then looks, at each iteration, for adjacent violators: adjacent groups whose probabilities locally increase rather than decrease. When it finds such groups, it pools them and replaces their probability estimates with the average of the group's values. It continues this process of averaging and replacement until the entire sequence is monotonically decreasing. The result is a sequence of instances, each of which has a score and an associated probability estimate, which can then be used to map scores into probability estimates.
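The pooling-and-averaging loop described above can be sketched as follows. This is an illustrative quadratic-time version, not this module's actual implementation; it assumes input pairs [score, true_class] already sorted in descending score order, and keeps each pooled group's highest score as its representative:

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Sketch of Pool Adjacent Violators for a descending sequence.
# $results: reference to [score, true_class] pairs, scores descending.
sub pav {
    my ($results) = @_;

    # One group per instance; its initial estimate is its class (0 or 1).
    my @groups = map { { score => $_->[0], probs => [ $_->[1] ] } } @$results;

    my $changed = 1;
    while ($changed) {
        $changed = 0;
        for my $i ( 0 .. $#groups - 1 ) {
            my $left  = sum( @{ $groups[$i]{probs} } )     / @{ $groups[$i]{probs} };
            my $right = sum( @{ $groups[ $i + 1 ]{probs} } ) / @{ $groups[ $i + 1 ]{probs} };
            if ( $right > $left ) {    # adjacent violators: pool the two groups
                push @{ $groups[$i]{probs} }, @{ $groups[ $i + 1 ]{probs} };
                splice @groups, $i + 1, 1;
                $changed = 1;
                last;                  # rescan from the start after each pool
            }
        }
    }

    # Emit [score, prob] pairs; averages are now monotonically decreasing.
    return [ map { [ $_->{score}, sum( @{ $_->{probs} } ) / @{ $_->{probs} } ] } @groups ];
}
```

For instance, the sequence of classes (1, 0, 1, 0) pools its middle pair — the adjacent violators — into a single group with estimate 0.5, leaving a monotonically decreasing sequence 1, 0.5, 0.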

For further information on the PAV algorithm, you can read the section in my paper referenced below.

None known. This implementation is straightforward but inefficient: its running time is O(n^2) in the length of the data series. A linear-time algorithm is known, and I'll probably implement it in a later version of this module.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available.