Learning with incomplete information - and the mathematical structure behind it

Learning and the ability to learn are important factors in development and evolutionary processes [1]. The complexity of learning can vary strongly depending on the level. While associative learning can explain simple learning behaviour [1,2], much more sophisticated strategies seem to be involved in complex learning tasks. This is particularly evident in machine learning theory [3] (reinforcement learning [4], statistical learning [5]), but it shows up equally in attempts to model natural learning behaviour [2].

A general setting for modelling learning processes in which statistical aspects are relevant is provided by the neural network (NN) paradigm. This is of particular interest for natural, learning-by-experience situations. NN learning models can incorporate elementary learning mechanisms based on neuro-physiological analogies, such as the Hebb rule, and lead to quantitative results concerning the dynamics of the learning process [6]. The Hebb rule, however, cannot be applied directly in all cases; in particular for realistic problems, such as "delayed reinforcement" [4,6], the sophistication of the required algorithms increases rapidly.

We present here a model which can cope with such nontrivial tasks while still remaining elementary, based only on procedures one may regard as natural, without any appeal to higher strategies [7]. We can show that this model provides good learning in many, very different settings [7,8,9]. It may therefore help in understanding some basic features of learning.
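To make the elementary mechanism mentioned above concrete, the following is a minimal sketch of the plain Hebb rule in its simplest associative form: each weight grows in proportion to the product of pre-synaptic activity and post-synaptic activity. The learning rate, patterns, and targets below are illustrative choices, not taken from the text.

```python
import numpy as np

eta = 0.5                                   # learning rate (arbitrary)
# Two input patterns, each associated with a desired output sign.
patterns = [(np.array([1.0, 0.0, 1.0]), 1.0),
            (np.array([0.0, 1.0, 0.0]), -1.0)]

w = np.zeros(3)                             # initial synaptic weights
for x, y in patterns:
    w += eta * y * x                        # Hebb rule: dw_i = eta * y * x_i

# After one pass, the sign of the response w @ x recovers the
# associated output for each stored pattern.
for x, y in patterns:
    assert np.sign(w @ x) == np.sign(y)
```

Even this simplest form makes the limitation noted above visible: the update needs the post-synaptic (target) activity at the moment the input is present, which is exactly what is missing in delayed-reinforcement settings.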