Three adaptive versions of the Ho-Kashyap perceptron training algorithm are derived using gradient descent strategies. These adaptive Ho-Kashyap (AHK) training rules are comparable in complexity to the LMS and perceptron training rules; they adaptively form linear discriminant surfaces that separate linearly separable training sets and position those surfaces for maximal classification robustness. In particular, a derived version called AHK II can adaptively identify critical input vectors lying close to class boundaries in linearly separable problems. We extend this algorithm as AHK III, which adds fast convergence to linear discriminant surfaces that are 'good' approximations for nonlinearly separable problems.
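To make the flavor of such rules concrete, the following is a minimal Python sketch of an incremental Ho-Kashyap-style update, not the exact AHK I/II/III rules of the paper. It assumes the standard setup in which each pattern is augmented with a bias component and patterns of the second class are negated, so that separability means Y w > 0; the names ahk_sketch, rho_b, rho_w, and the per-pattern margin vector b are illustrative, not the paper's notation.

```python
import numpy as np

def ahk_sketch(Y, rho_b=0.1, rho_w=0.05, epochs=100):
    """Incremental Ho-Kashyap-style training (illustrative sketch).

    Y : (n, d) array of augmented patterns, with class-2 patterns
        multiplied by -1, so a separating w satisfies Y @ w > 0.
    Returns the weight vector w and the per-pattern margins b.
    """
    n, d = Y.shape
    w = np.zeros(d)
    b = np.ones(n)                  # per-pattern margins, kept positive
    for _ in range(epochs):
        for k in range(n):
            e = Y[k] @ w - b[k]     # signed error against pattern k's margin
            # Ho-Kashyap margin step: raise b[k] only when e > 0,
            # so the margin never drops below its positive start.
            b[k] += rho_b * (e + abs(e))
            # LMS-like gradient step on (Y[k] @ w - b[k])**2 w.r.t. w.
            w += rho_w * (b[k] - Y[k] @ w) * Y[k]
    return w, b
```

Under these assumptions, patterns that finish training with the smallest margins b[k] are those lying closest to the decision surface, which suggests how an AHK-style rule can flag critical input vectors near class boundaries.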