Fusion of Supervised Classifiers using Theory of Evidence

In pattern recognition, and more specifically in supervised, feature-vector-based classification, many classification methods exist, but none is flawless for every data source. Each classifier behaves differently, with its own strengths and weaknesses; some are more effective than others in particular situations. The performance of these individual classifiers can be improved by combining them into a single multiple classifier. To make more reliable decisions, a multiple classifier may use measures generated by each classifier together with a priori knowledge such as probability distributions, reliability rates and confusion matrices. The individual classifiers studied in this project are the Bayes, k-nearest neighbors and neural network classifiers. They are combined using Dempster-Shafer theory. The problem reduces to finding weights that best represent the evidence of each individual classifier. We suggest basic probability assignments (BPAs) based on measures that precede the decision step. After reviewing some classical multiple classifiers from the literature, we compare them with our approach, which splits into two distinct multiple classifiers. Tests are performed on three different databases: infrared images of ships, handwritten digits and satellite images of terrain. One of the suggested multiple classifiers gives better results than all the other classical multiple classifiers tested in this work.
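As a concrete illustration of the fusion step described above, the following is a minimal sketch of Dempster's rule of combination, which merges the BPAs produced by two classifiers over the same frame of discernment. The frame and the mass values below are hypothetical examples, not the BPAs actually used in this work.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with
    Dempster's rule. Each BPA maps frozenset hypotheses to masses
    summing to 1 over a common frame of discernment."""
    combined = {}
    conflict = 0.0
    for (b, w1), (c, w2) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: BPAs cannot be combined")
    # Normalize by 1 - K, where K is the total conflicting mass
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Hypothetical evidence from two classifiers over the frame {a, b}:
theta = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.6, theta: 0.4}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, theta: 0.2}
m12 = dempster_combine(m1, m2)
```

After combination, the fused BPA concentrates more mass on the hypothesis both classifiers support, and a decision can then be taken from the resulting belief or plausibility values.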