Abstract

After outlining the drawbacks of classical approaches to robot path finding, we present a prototypical system that overcomes some of these drawbacks and demonstrates the feasibility of reinforcement connectionist learning for the problem. Simulations show that finding feasible paths is less computationally expensive than is usually assumed for a reinforcement learning system. A mechanism is incorporated into the system to "stabilise" learning once an acceptable path has been found.