The More you Learn, the Less you Store: Memory-Controlled Incremental SVM

This paper presents a novel SVM-based algorithm for visual object recognition, capable of learning model representations incrementally. We combine an incremental extension of SVMs with a method that reduces the number of support vectors needed to build the decision function, without any loss in performance. The resulting algorithm is guaranteed to achieve the same recognition performance as the original incremental method while reducing memory requirements. We benchmarked the novel technique against the batch method and the original version of incremental SVM. Experiments were performed in two domains: material categorization and indoor place recognition. In both applications, results show that the two incremental methods preserve the performance of the batch algorithm, but only our new technique consistently achieves a statistically significant reduction in memory requirements. We then propose an extension to the part of the algorithm controlling the number of support vectors to be stored. We introduce a parameter that permits a user-set trade-off between performance and memory reduction. This property is potentially useful in applications like indoor place recognition for multi-sensory topological mapping, where the memory size of the visual models must be kept under control.
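To illustrate the general idea, the following is a minimal sketch of an incremental SVM loop that bounds memory by retaining only support vectors between increments. It is an assumption-laden simplification, not the paper's exact algorithm: the class name `IncrementalSVM`, the use of scikit-learn's `SVC`, and the keep-only-support-vectors rule are all illustrative choices, and the paper's actual reduction method (which guarantees no performance loss) is more refined than simply discarding non-support vectors.

```python
# Hypothetical sketch: incremental SVM training where, after each
# increment, only the support vectors are retained in memory.
# This is an illustrative simplification, not the paper's algorithm.
import numpy as np
from sklearn.svm import SVC


class IncrementalSVM:
    def __init__(self, C=1.0, gamma="scale"):
        self.clf = SVC(C=C, gamma=gamma, kernel="rbf")
        self.X_mem = None  # retained support vectors
        self.y_mem = None  # their labels

    def partial_fit(self, X_new, y_new):
        # Retrain on retained support vectors plus the new batch.
        if self.X_mem is None:
            X, y = X_new, y_new
        else:
            X = np.vstack([self.X_mem, X_new])
            y = np.concatenate([self.y_mem, y_new])
        self.clf.fit(X, y)
        # Memory-reduction step: keep only the support vectors.
        idx = self.clf.support_
        self.X_mem, self.y_mem = X[idx], y[idx]
        return self

    def predict(self, X):
        return self.clf.predict(X)
```

A user-set performance/memory trade-off, as proposed in the paper's extension, could be emulated here by thresholding which support vectors to retain (e.g., by the magnitude of their dual coefficients), though the specific criterion used by the authors is not reproduced in this sketch.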