Abstract

Motor learning can be framed theoretically as a problem of optimizing a movement policy in a potentially uncertain or changing environment. This is precisely the general problem studied in the field of reinforcement learning, whose theory proposes two distinct approaches to its solution: model-based approaches first identify the dynamics of the task or environment and then use this knowledge to compute the optimal movement policy; model-free approaches, by contrast, identify successful policies directly through trial and error. Here, we review the existing literature on motor control in the light of this distinction. Motor learning research in the last decade has been dominated by studies that elicit learning through adaptation paradigms and find the results to be consistent with a model-based framework. Studying the behavior of patients in such adaptation paradigms has implicated the cerebellum as a prime candidate for the neural substrate of the internal models that subserve model-based control. A growing body of experimental results, however, demonstrates that not all motor learning in conventional paradigms can be explained within a model-based framework; it can instead be understood in terms of an additional component of learning driven by model-free reinforcement of successful actions. We conclude that the brain maintains distinct model-based and model-free learning systems, with distinct neural substrates, which act in competitive balance to direct behavior.
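The model-based versus model-free distinction drawn above can be made concrete with a minimal sketch on a hypothetical two-action task (the task, reward probabilities, and function names here are illustrative assumptions, not taken from the review): the model-free learner reinforces the values of successful actions directly, while the model-based learner first estimates the task dynamics and then computes its policy from that learned model.

```python
import random

# Hypothetical toy task: two candidate movements; action 1 is rewarded
# more often. These probabilities are an illustrative assumption.
REWARD_PROB = {0: 0.2, 1: 0.8}

def model_free(n_trials=5000, alpha=0.1, seed=0):
    """Model-free learner: update action values directly from reward,
    with no representation of the task dynamics."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(n_trials):
        # Mostly greedy action selection, with occasional exploration.
        if rng.random() < 0.1:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: q[i])
        r = 1.0 if rng.random() < REWARD_PROB[a] else 0.0
        q[a] += alpha * (r - q[a])  # reinforce successful actions
    return q

def model_based(n_trials=5000, seed=1):
    """Model-based learner: first identify the task (estimate reward
    probabilities), then compute the policy from that model."""
    rng = random.Random(seed)
    counts = [[0, 0], [0, 0]]  # [action][outcome: no reward, reward]
    for _ in range(n_trials):
        a = rng.randrange(2)  # sample both actions to identify the model
        r = rng.random() < REWARD_PROB[a]
        counts[a][int(r)] += 1
    est = [counts[a][1] / sum(counts[a]) for a in range(2)]  # learned model
    policy = max(range(2), key=lambda a: est[a])  # plan using the model
    return est, policy
```

Both learners come to prefer the better-rewarded action, but by different routes: the model-free learner's knowledge lives only in its action values, whereas the model-based learner's policy can be recomputed immediately if its estimate of the task changes, which is the behavioral signature the two accounts are distinguished by.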