Approximate Maximum Margin Algorithms with Rules Controlled by the Number of Mistakes

Description/Abstract

We present a family of Perceptron-like algorithms with margin in which both the "effective" learning rate, defined as the ratio of the learning rate to the length of the weight vector, and the misclassification condition are independent of the length of the weight vector and are instead entirely controlled by rules involving (powers of) the number of mistakes. We examine the convergence of such algorithms in a finite number of steps and show that, under some rather mild assumptions, there exists a limit of the parameters involved in which convergence leads to classification with maximum margin. We also present very encouraging experimental results obtained with algorithms belonging to this family.
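To illustrate the flavor of such mistake-controlled rules, the following is a minimal sketch (not the paper's exact algorithm): a Perceptron-like learner whose effective learning rate and margin threshold both decay as powers of the mistake count `t`. The parameters `eta0`, `delta`, `epsilon`, and `beta`, and the specific power-law forms, are illustrative assumptions, not the rules proposed in the paper.

```python
import numpy as np

def mistake_controlled_perceptron(X, y, epochs=100,
                                  eta0=1.0, delta=0.5,
                                  beta=1.0, epsilon=0.5):
    """Illustrative Perceptron variant with margin.

    Both the learning rate and the misclassification (margin)
    condition depend only on the mistake counter t, never on the
    length of the weight vector. All parameter choices here are
    hypothetical examples of mistake-controlled rules.
    """
    w = np.zeros(X.shape[1])
    t = 1  # mistake counter (starts at 1 to avoid division by zero)
    for _ in range(epochs):
        mistakes_this_epoch = 0
        for xi, yi in zip(X, y):
            # Margin condition controlled by a power of t:
            # a point counts as a mistake if its functional margin
            # does not exceed beta / t**epsilon.
            if yi * np.dot(w, xi) <= beta / t ** epsilon:
                # Learning rate also decays as a power of t.
                w += (eta0 / t ** delta) * yi * xi
                t += 1
                mistakes_this_epoch += 1
        if mistakes_this_epoch == 0:
            break  # all points satisfy the current margin condition
    return w

# Toy usage on a linearly separable 2-D problem:
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = mistake_controlled_perceptron(X, y)
```

After training on separable data, the returned `w` classifies every point with a positive functional margin; as `t` grows, the threshold `beta / t**epsilon` shrinks, so the condition relaxes toward the plain Perceptron criterion.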