Abstract

A feedforward network composed of units that are themselves teams of parameterized learning automata is considered as a model of a reinforcement learning system. The internal state vector of each learning automaton is updated using an algorithm consisting of a gradient-following term and a random perturbation term. It is shown that the algorithm converges weakly to the solution of the Langevin equation, implying that the algorithm globally maximizes an appropriate function. The algorithm is decentralized, and the units exchange no information during updating. Simulation results on common payoff games and pattern recognition problems show that reasonable rates of convergence can be obtained.
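As a rough illustration of the update the abstract describes, the sketch below shows one Langevin-type step for a single automaton's internal state: a gradient-following term plus an independent Gaussian perturbation. This is a minimal sketch under stated assumptions, not the paper's algorithm; the names (eta, sigma, grad_estimate) and the Euler-Maruyama discretization are illustrative choices, not the authors' notation.

import numpy as np

rng = np.random.default_rng(0)

def langevin_step(u, grad_estimate, eta=0.01, sigma=0.1):
    # One update of an automaton's internal state vector u.
    # Combines an estimated gradient-ascent term with Gaussian noise,
    # so the discrete iteration tracks the Langevin SDE
    #     du = grad f(u) dt + sigma dW,
    # whose stationary distribution concentrates on global maxima of f.
    noise = rng.standard_normal(u.shape)
    return u + eta * grad_estimate + sigma * np.sqrt(eta) * noise

Because each unit perturbs and updates only its own state from its own reinforcement signal, a step of this form can be applied per automaton with no information exchange between units, consistent with the decentralization claim above.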

Item Type:

Journal Article

Additional Information:

Copyright 1995 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.