Abstract

In this paper, a condition for the boundedness of the weight coefficients of the perceptron, with arbitrary initial weights and an arbitrary set of bounded training feature vectors, is stated and proved. Based on this condition, conditions for the global convergence of the perceptron output to limit cycles for a set of nonlinearly separable bounded training feature vectors are stated and proved, and the maximum number of weight updates before the output reaches a limit cycle is derived. Finally, a perceptron with periodically time-varying weight coefficients is investigated, and an optimization approach is proposed for its design. Numerical simulation results show that the perceptron with periodically time-varying weight coefficients can achieve better recognition performance than a perceptron with a single set of weight coefficients.
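The limit-cycle behaviour summarized above can be illustrated with a minimal sketch (not the paper's method): the classical perceptron update rule applied to the XOR data set, which is nonlinearly separable. With integer-valued features and weights, the weight vector can visit only finitely many states once it is bounded, so training must eventually revisit a state and the updates repeat periodically. All names and parameters below are illustrative assumptions.

```python
import numpy as np

# XOR feature vectors with a leading bias term, and +/-1 labels.
# XOR is the standard example of a nonlinearly separable data set.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
y = np.array([-1, 1, 1, -1])

w = np.zeros(3, dtype=int)           # arbitrary (here zero) initial weights
seen = {}                            # training state -> step at which it occurred
cycle = None

for step in range(1000):
    # A training state is the weight vector plus the position in the epoch;
    # revisiting a state means the update sequence repeats: a limit cycle.
    state = (step % 4, tuple(w))
    if state in seen:
        cycle = (seen[state], step)
        break
    seen[state] = step
    i = step % 4                     # sweep cyclically through the training set
    if y[i] * (w @ X[i]) <= 0:       # misclassified: classical perceptron update
        w = w + y[i] * X[i]

print("limit cycle detected between steps", cycle)
```

Running this sketch, the weights return to their initial state after one epoch, so a cycle is detected almost immediately; on other nonseparable data sets the cycle typically appears only after the transient phase whose length the paper bounds.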

Item Type:

Conference or Workshop Item (Paper)
