The mechanisms by which humans and animals use visually acquired landmarks to find their way around have long fascinated researchers. Considerable evidence suggests that animals navigate not only on the basis of the overall geometry of a space but also on the basis of a configural representation of its cues. In contrast to earlier linear models of elemental feature representation, configural representation requires that each stimulus be represented in the context of the other stimuli, and it is typified by non-linear learning tasks such as the transverse patterning problem.
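To make the distinction concrete, the following sketch contrasts an elemental learner, which attaches a value to each stimulus alone, with a configural learner, which attaches a value to each stimulus-in-pair conjunction, on the transverse patterning problem (A is rewarded over B, B over C, and C over A). This is an illustrative toy only: the stimulus names and the simple tabular value update are our assumptions, not a model from the paper.

```python
import random

random.seed(0)

# Transverse patterning: in each pair the first stimulus is rewarded,
# so A beats B, B beats C, and C beats A -- a cycle that no single
# ranking of individual stimulus values can satisfy.
PAIRS = [("A", "B"), ("B", "C"), ("C", "A")]

def key(stim, pair, configural):
    # Elemental: value attaches to the stimulus alone.
    # Configural: value attaches to the stimulus in the context of its pair.
    return (stim, frozenset(pair)) if configural else stim

def train_and_test(configural, epochs=500, lr=0.1, eps=0.1):
    w = {}
    for _ in range(epochs):
        for pair in PAIRS:
            # epsilon-greedy choice between the two displayed stimuli
            if random.random() < eps:
                choice = random.choice(pair)
            else:
                choice = max(pair, key=lambda s: w.get(key(s, pair, configural), 0.0))
            k = key(choice, pair, configural)
            reward = 1.0 if choice == pair[0] else 0.0
            w[k] = w.get(k, 0.0) + lr * (reward - w.get(k, 0.0))
    # a pair counts as solved only if the rewarded stimulus' learned value
    # is strictly higher than its rival's
    solved = sum(
        w.get(key(p[0], p, configural), 0.0) > w.get(key(p[1], p, configural), 0.0)
        for p in PAIRS
    )
    return solved / len(PAIRS)

print("elemental fraction solved: ", train_and_test(configural=False))
print("configural fraction solved:", train_and_test(configural=True))
```

The elemental learner can satisfy at most two of the three strict preferences, since the third would close an intransitive cycle, whereas the configural learner solves all three independently because each stimulus-pair conjunction has its own value.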

This paper explores the suitability of configural representation for automatic scene recognition in robot navigation through experiments designed to infer the semantic identity of a scene from different configurations of its stimuli. The main contribution of this work is a methodology for automatic landmark-based scene identification built on a reinforcement-learning-based software package, the working memory toolkit (WMtk), which learns reward associations between a target location and conjunctive representations of the scene's stimuli. Experimental results obtained with two different target locations are presented and compared with those of two other classification mechanisms: a support vector machine and the perceptron, a simple linear two-class classifier.
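As a small illustration of why a conjunctive (configural) code matters for a linear two-class classifier such as the perceptron, consider scenes whose target/non-target label depends on the configuration of two landmark indicators rather than on either landmark alone. The toy scenes and the augmented feature below are our own assumptions for exposition, not the paper's data or the WMtk API.

```python
# Perceptron on elemental vs. conjunctive (configural) features.
# Toy scenes: two landmark-presence indicators; the target is defined by
# the landmarks' configuration (an XNOR-like pattern), which no linear
# separator over the raw indicators can capture.

# (landmark1 present, landmark2 present) -> target (+1) or not (-1)
DATA = [((0, 0), 1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]

def conjunctive(x):
    # augment the elemental features with their pairwise conjunction
    return x + (x[0] * x[1],)

def train_perceptron(data, features, epochs=200, lr=1.0):
    n = len(features(data[0][0]))
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in data:
            f = features(x)
            s = sum(wi * fi for wi, fi in zip(w, f)) + b
            if y * s <= 0:          # misclassified: perceptron update
                errors += 1
                w = [wi + lr * y * fi for wi, fi in zip(w, f)]
                b += lr * y
        if errors == 0:
            return w, b, True       # converged: data are separable
    return w, b, False              # never converged: not separable

_, _, elemental_ok = train_perceptron(DATA, lambda x: x)
_, _, configural_ok = train_perceptron(DATA, conjunctive)
print("elemental features separable: ", elemental_ok)
print("configural features separable:", configural_ok)
```

With the raw indicators the perceptron never converges, while the single added conjunction makes the same pattern linearly separable, mirroring the contrast between elemental and configural representations discussed above.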