Local linearizations are ubiquitous in the control of robotic systems. Analytical methods, if available, can be used to obtain the linearization, but in complex robotic systems, where the dynamics and kinematics are often not faithfully obtainable, empirical linearization may be preferable. In this case, it is important to only use data for the local linearization that lies within a ``reasonable'' linear regime of the system, which can be defined from the Hessian at the point of the linearization -- a quantity that is not available without an analytical model. We introduce a Bayesian approach to determine statistically what constitutes a ``reasonable'' local regime. We approach this problem in the context of local linear regression. In contrast to previous locally linear methods, we avoid cross-validation or complex statistical hypothesis testing techniques to find the appropriate local regime. Instead, we treat the parameters of the local regime probabilistically and use approximate Bayesian inference for their estimation. This approach results in an analytical set of iterative update equations that are easily implemented on real robotic systems for real-time applications. As in other locally weighted regressions, our algorithm also lends itself to complete nonlinear function approximation for learning empirical internal models. We sketch the derivation of our Bayesian method and provide evaluations on synthetic data and actual robot data where the analytical linearization was known.
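For orientation, a minimal sketch of the locally weighted linear regression setting this abstract builds on; here the kernel width h defining the local regime is fixed by hand, which is precisely the quantity the Bayesian treatment infers from data instead (function and variable names are illustrative assumptions):

```python
import numpy as np

def local_linear_fit(X, y, x_query, h):
    """Weighted least-squares fit of a local linear model around x_query.

    X: (N, d) inputs, y: (N,) targets, h: kernel width controlling the
    size of the local regime (fixed here; the paper's Bayesian method
    estimates the local regime instead of hand-tuning it).
    """
    diff = X - x_query                                  # center data at the query
    w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)   # Gaussian locality weights
    Xb = np.hstack([diff, np.ones((len(X), 1))])        # affine model: slope + offset
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y) # weighted normal equations
    return beta[:-1], beta[-1]                          # local gradient, local value

# usage: the returned slope approximates the Jacobian at x_query
X = np.random.randn(200, 1)
y = np.sin(X[:, 0]) + 0.01 * np.random.randn(200)
grad, val = local_linear_fit(X, y, np.array([0.5]), h=0.3)
```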

How humans plan and execute arm movements remains unresolved. An ongoing controversy is whether we plan a movement in kinematic coordinates and convert these coordinates with an inverse internal model into motor commands (like muscle activations), or whether we combine a few muscle synergies or equilibrium points to move the hand, e.g., between two targets. The first hypothesis implies that a planner produces a desired end-effector position for all time points; the second relies on the dynamics of the muscular-skeletal system, for a given control command, to produce a continuous end-effector trajectory. To distinguish between these two possibilities, we use a visuomotor adaptation experiment.
Subjects moved a pen on a graphics tablet and observed the pen's position mapped onto a screen (subjects quickly adapted to this mapping). The task was to move a cursor between two points in a given time window. In the adaptation test, we manipulated the velocity profile of the cursor feedback such that the shape of the trajectories remained unchanged (for straight paths). If humans used a kinematic plan and mapped the desired end-effector position onto control commands at each point in time, subjects should adapt to the above manipulation. In a similar experiment, Wolpert et al. (1995) showed adaptation to changes in the curvature of trajectories. This result, however, cannot rule out a shift of an equilibrium point or an additional synergy activation between start and end point of a movement.
In our experiment, subjects performed two sessions: a control session without and a session with the velocity-profile manipulation. To skew the velocity profile of the cursor trajectory, we added to the current velocity, v, the function 0.8*v*cos(pi + pi*x), where x is the projection of the cursor position onto the start-goal line divided by the start-goal distance (x=0 at the start point).
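For concreteness, a minimal sketch of this feedback manipulation in Python (the skew term and the normalization of x follow the description above; the function name is illustrative):

```python
import numpy as np

def skewed_cursor_velocity(v, x):
    """Perturbed cursor speed: v plus the skew term 0.8*v*cos(pi + pi*x).

    v: current hand speed; x: normalized progress along the start-goal
    line (0 at the start, 1 at the goal). The added term runs from
    -0.8*v at the start through 0 at the midpoint to +0.8*v at the goal,
    so the cursor lags early and catches up late while the path shape
    of straight movements is unchanged.
    """
    return v + 0.8 * v * np.cos(np.pi + np.pi * x)
```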
As a result, subjects did not adapt to this manipulation: for all subjects, the true hand motion was not significantly modified in a direction consistent with adaptation, even though the visually presented motion differed significantly from the control motion.
One may still argue that this difference in motion was insufficient to be processed visually. Thus, as a control experiment, we replayed control and modified motions to the subjects and asked which of the two motions appeared 'more natural'. Subjects chose the unperturbed motion as more natural significantly more often than chance.
In summary, for a visuomotor transformation task, the hypothesis of a planned continuous end-effector trajectory predicts adaptation to a modified velocity profile. The current experiment found no adaptation under such a transformation.

Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. Early approaches to this goal during the heydays of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning, which promised fully adaptive control algorithms that learn both by observation and trial-and-error. However, to date, learning techniques have yet to fulfill this promise, as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer towards human-like performance. To do so, we study two major components for such an approach, i.e., firstly, a theoretically well-founded general approach to representing the required control structures for task representation and execution and, secondly, appropriate learning algorithms which can be applied in this setting.

In Proceedings of the 2007 European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 25-27, 2007, clmc (inproceedings)

Abstract

In this paper, we investigate motor primitive learning with the Natural Actor-Critic approach. The Natural Actor-Critic consists of actor updates which are achieved using natural stochastic policy gradients, while the critic obtains the natural policy gradient by linear regression. We show that this architecture can be used to learn the "building blocks of movement generation", called motor primitives. Motor primitives are parameterized control policies such as splines or nonlinear differential equations with desired attractor properties. We show that our most modern algorithm, the Episodic Natural Actor-Critic, outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
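As a rough illustration of the critic-by-regression idea, here is a minimal Episodic Natural Actor-Critic sketch (a simplified reading of the abstract, not the paper's exact algorithm; all function and variable names are assumptions):

```python
import numpy as np

def enac_update(theta, rollouts, policy_grad_log, alpha=0.1):
    """One Episodic Natural Actor-Critic step (sketch).

    rollouts: list of (states, actions, rewards) per episode.
    policy_grad_log(theta, s, a): score function grad_theta log pi(a|s).
    The critic regresses episode returns on the summed score functions;
    the regression weights w are the natural gradient estimate, which
    the actor follows directly.
    """
    Psi, R = [], []
    for states, actions, rewards in rollouts:
        psi = sum(policy_grad_log(theta, s, a) for s, a in zip(states, actions))
        Psi.append(np.append(psi, 1.0))      # last column absorbs the baseline
        R.append(sum(rewards))
    Psi, R = np.asarray(Psi), np.asarray(R)
    w = np.linalg.lstsq(Psi, R, rcond=None)[0][:-1]  # natural gradient estimate
    return theta + alpha * w                 # actor update along the natural gradient
```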

HFSP Journal Frontiers of Interdisciplinary Research in the Life Sciences, 1(2):115-126, 2007, clmc (article)

Abstract

Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

In Abstracts of the 37th Meeting of the Society of Neuroscience, San Diego, CA, Nov. 3-7, 2007, clmc (inproceedings)

Abstract

A popular computational model suggests that smooth reaching movements are generated in humans by minimizing a difference vector between hand and target in visual coordinates (Shadmehr and Wise, 2005). To achieve such a task, the optimal joint accelerations may be pre-computed. However, this pre-planning is inflexible towards perturbations of the limb, and there is strong evidence that reaching movements can be modified on-line at any moment during the movement. Thus, next-state planning models (Bullock and Grossberg, 1988) have been suggested that compute the current control command from a function of the goal state such that the overall movement smoothly converges to the goal (see Shadmehr and Wise (2005) for an overview). So far, these models have been restricted to simple point-to-point reaching movements with (approximately) straight trajectories. Here, we present a computational model for learning and executing arbitrary trajectories that combines ideas from pattern generation with dynamic systems and the observation of convergent force fields, which control a frog leg after spinal stimulation (Giszter et al., 1993).
In our model, we incorporate the following two observations: first, the orientation of vectors in a force field is invariant over time, but their amplitude is modulated by a time-varying function, and second, two force fields add up when stimulated simultaneously (Giszter et al., 1993). This addition of convergent force fields varying over time results in a virtual trajectory (a moving equilibrium point) that correlates with the actual leg movement (Giszter et al., 1993).
Our next-state planner is a set of differential equations that provide the desired end-effector or joint accelerations using feedback of the current state of the limb. These accelerations can be interpreted as resulting from a damped spring that links the current limb position with a virtual trajectory. This virtual trajectory can be learned to realize any desired limb trajectory and velocity profile, and learning is efficient since the time-modulated sum of convergent force fields equals a sum of weighted basis functions (Gaussian time pulses). Thus, linear algebra is sufficient to compute these weights, which correspond to points on the virtual trajectory. During movement execution, the differential equation automatically corrects for perturbations and smoothly brings the limb back toward the goal. Virtual trajectories can be rescaled and added, making it possible to build a set of movement primitives that describe movements more complex than those previously learned. We demonstrate the potential of the suggested model by learning and generating a wide variety of movements.
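A minimal sketch of the two computations described above: fitting virtual-trajectory points by linear least squares over Gaussian time pulses, and the damped-spring planner that tracks them (gains, basis count, and names are illustrative assumptions, not the paper's values):

```python
import numpy as np

def fit_virtual_trajectory(t, y_des, n_basis=20):
    """Fit virtual-trajectory points as weights of Gaussian time pulses.

    Solves the linear least-squares problem Phi @ w ~= y_des, so plain
    linear algebra suffices, as stated in the abstract.
    """
    centers = np.linspace(t[0], t[-1], n_basis)
    width = centers[1] - centers[0]
    Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)            # normalized time pulses
    w = np.linalg.lstsq(Phi, y_des, rcond=None)[0]
    return centers, width, w

def planner_acceleration(y, yd, v_t, kp=100.0, kd=20.0):
    """Damped spring linking the current limb state (y, yd) to the
    virtual-trajectory point v_t; returns the desired acceleration."""
    return kp * (v_t - y) - kd * yd
```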

In Abstracts of the 37th Meeting of the Society of Neuroscience, San Diego, CA, Nov 3-7, 2007, clmc (inproceedings)

Abstract

Several researchers have used a double-step target-displacement protocol to investigate how an unexpected new target modifies an ongoing discrete movement. Observations of interest are the initial direction of the movement, the spatial path of the movement to the second target, and the amplification of the speed in the second movement. Experimental data show that the above properties are influenced by the movement reaction time and the interstimulus interval between the onset of the first and second target.
Hypotheses in the literature concerning the interpretation of the observed data include: (a) the second movement is superimposed on the first movement (Henis and Flash, 1995); (b) the first movement is aborted and the second movement is planned to smoothly connect the current state of the arm with the new target (Hoff and Arbib, 1992); (c) the second movement is initiated by a new control signal that replaces the first movement's control signal, but does not take the state of the system into account (Flanagan et al., 1993); and (d) the second movement is initiated by a new goal command, but the control structure stays unchanged, and feedback from the current state is taken into account (Hoff and Arbib, 1993).
We investigate target switching from the viewpoint of Dynamic Movement Primitives (DMPs). DMPs are trajectory planning units that are formalized as stable nonlinear attractor systems (Ijspeert et al., 2002). They are a useful framework for biological motor control as they are highly flexible in creating complex rhythmic and discrete behaviors that can quickly adapt to the inevitable perturbations of dynamically changing, stochastic environments. In this model, target switching is accomplished simply by updating the target input to the discrete movement primitive for reaching. The reaching trajectory in this model can be straight or take any other route; in contrast, the Hoff and Arbib (1993) model is restricted to straight reaching movement plans.
In the present study, we use DMPs to reproduce in simulation a large number of target-switching experimental data from the literature and to show that online correction and the observed target switching phenomena can be accomplished by changing the goal state of an on-going DMP, without the need to switch to different movement primitives or to re-plan the movement.
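To make the goal-switching mechanism concrete, a toy sketch of a point-attractor trajectory unit (a deliberately simplified stand-in for a discrete DMP that omits the nonlinear forcing term; names and gains are assumptions):

```python
import numpy as np

def simulate_goal_switch(g1, g2, t_switch, dt=0.001, T=1.0, kp=1000.0):
    """Point-attractor trajectory unit with a mid-movement goal update.

    Target switching is simply an update of the goal input g at
    t_switch; no re-planning is needed, and the trajectory smoothly
    redirects from whatever state the system is in at that moment.
    """
    kd = 2.0 * np.sqrt(kp)                       # critical damping
    y, yd = 0.0, 0.0
    traj = []
    for step in range(int(T / dt)):
        g = g1 if step * dt < t_switch else g2   # the only intervention
        ydd = kp * (g - y) - kd * yd
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)
```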

In Abstracts of the 37th Meeting of the Society of Neuroscience, San Diego, CA, Nov 3-7, 2007, clmc (inproceedings)

Abstract

Everyday motor behavior consists of a plethora of challenging motor skills, from discrete movements such as reaching and throwing to rhythmic movements such as walking, drumming, and running. How this plethora of motor skills can be learned remains an open question. In particular, is there any unifying computational framework that could model the learning process of this variety of motor behaviors and at the same time be biologically plausible? In this work, we aim to answer these questions by providing a computational framework that unifies the learning mechanisms of both rhythmic and discrete movements under optimization criteria, i.e., in a non-supervised trial-and-error fashion.
Our suggested framework is based on Reinforcement Learning, which is often considered too costly to be a plausible mechanism for learning complex limb movements. However, recent work on reinforcement learning with policy gradients combined with parameterized movement primitives allows novel and more efficient algorithms. By using the representational power of such motor primitives, we show how rhythmic motor behaviors such as walking, squashing, and drumming, as well as discrete behaviors like reaching and grasping, can be learned with biologically plausible algorithms. Using extensive simulations and different reward functions, we provide results that support the hypothesis that Reinforcement Learning could be a viable candidate for motor learning of human motor behavior when other learning methods like supervised learning are not feasible.

In this paper, we address the issues of compliant control of a robot under contact constraints with the goal of using joint-space-based pattern generators as movement primitives, as often considered in studies of legged locomotion and biological motor control. For this purpose, we explore inverse dynamics control of constrained dynamical systems. When the system is overconstrained, it is not straightforward to formulate an inverse dynamics control law, since the problem becomes ill-posed: infinitely many combinations of joint torques can achieve the desired joint accelerations. The goal of this paper is to develop a general and computationally efficient inverse dynamics algorithm for a robot with a free-floating base and constraints. We suggest an approximate way of computing inverse dynamics by treating constraint forces, computed with a Lagrange multiplier method, simply as external forces, based on Featherstone's floating-base formulation of inverse dynamics. We show how all the necessary quantities to compute our controller can be efficiently extracted from Featherstone's spatial notation of robot dynamics. We evaluate the effectiveness of the suggested approach on a simulated biped robot model.
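In standard constrained rigid-body notation (assumed here; not necessarily the paper's symbols, with S selecting the actuated joints), the quantities involved are:

```latex
% Floating-base dynamics with contact constraints:
M(q)\,\ddot{q} + h(q,\dot{q}) = S^{T}\tau + J_c^{T}\lambda,
\qquad J_c\,\ddot{q} + \dot{J}_c\,\dot{q} = 0,
% Lagrange-multiplier solution for the constraint forces:
\lambda = -\bigl(J_c M^{-1} J_c^{T}\bigr)^{-1}
          \bigl(J_c M^{-1}(S^{T}\tau - h) + \dot{J}_c\,\dot{q}\bigr).
```

The constraint forces \(J_c^{T}\lambda\) obtained this way can then be handed to a recursive inverse dynamics routine as if they were external forces, which is the approximation the abstract describes.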

Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can accomplish a multitude of different tasks, triggered by environmental context or higher-level instruction. Early approaches to this goal during the heydays of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning and human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning, which promised fully adaptive control algorithms that learn both by observation and trial-and-error.
However, to date, learning techniques have yet to fulfill this promise, as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains.
In this thesis, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer towards human-like performance. To do so, we study two major components for such an approach, i.e., firstly, a theoretically well-founded general approach to representing the required control structures for task representation and execution and, secondly, appropriate learning algorithms which can be applied in this setting.
As a theoretical foundation, we first study a general framework to generate control laws for real robots with a particular focus on skills represented as dynamical systems in differential constraint form. We present a point-wise optimal control framework resulting from a generalization of Gauss' principle and show how various well-known robot control laws can be derived by modifying the metric of the employed cost function. The framework has been successfully applied to task space tracking control for holonomic systems for several different metrics on the anthropomorphic SARCOS Master Arm.
In order to overcome the limiting requirement of accurate robot models, we first employ learning methods to find learning controllers for task space control.
However, when learning to execute a redundant control problem, we face the general problem of the non-convexity of the solution space, which can force the robot to steer into physically impossible configurations if supervised learning methods are employed without further consideration. This problem can be resolved using two major insights: the learning problem can be treated as locally convex, and the cost function of the analytical framework can be used to ensure global consistency. Thus, we derive an immediate reinforcement learning algorithm from the expectation-maximization point of view, which leads to a reward-weighted regression technique. This method can be used both for operational space control and for general immediate-reward reinforcement learning problems. We demonstrate the feasibility of the resulting framework on the problem of redundant end-effector tracking, both for a simulated three-degrees-of-freedom robot arm and for a simulated anthropomorphic SARCOS Master Arm.
While learning to execute tasks in task space is an essential component of a general framework for motor skill learning, learning the actual task is of even higher importance, particularly as this issue is more frequently beyond the abilities of analytical approaches than execution. We focus on the learning of elemental tasks which can serve as the "building blocks of movement generation", called motor primitives. Motor primitives are parameterized task representations based on splines or nonlinear differential equations with desired attractor properties. While imitation learning of parameterized motor primitives is a relatively well-understood problem, self-improvement through interaction of the system with the environment remains a challenging problem, tackled in the fourth chapter of this thesis.
For pursuing this goal, we highlight the difficulties with current reinforcement learning methods, and outline both established and novel algorithms for the gradient-based improvement of parameterized policies. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic, outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
In conclusion, in this thesis, we have contributed a general framework for analytically computing robot control laws which can be used for deriving various previous control approaches and serves as foundation as well as inspiration for our learning algorithms. We have introduced two classes of novel reinforcement learning methods, i.e., the Natural Actor-Critic and the Reward-Weighted Regression algorithm. These algorithms have been used in order to replace the analytical components of the theoretical framework by learned representations. Evaluations have been performed on both simulated and real robot arms.

Many robot control problems of practical importance, including operational space control, can be reformulated as immediate-reward reinforcement learning problems. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which is infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications to complex high-degree-of-freedom robots.
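A minimal sketch of a reward-weighted regression update for a linear-in-features policy (the exponential reward transformation with a fixed scale beta is an assumption for illustration; the paper adapts the transformation):

```python
import numpy as np

def reward_weighted_regression(Phi, U, r, beta=5.0):
    """One reward-weighted regression step (EM-style update, sketch).

    Phi: (N, d) state features, U: (N,) executed motor commands,
    r: (N,) immediate rewards. Transformed rewards act as weights of
    an ordinary weighted linear regression for the policy parameters.
    """
    w = np.exp(beta * (r - r.max()))      # reward transformation as weights
    W = np.diag(w)
    theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ U)
    return theta                           # new policy mean: u = Phi @ theta
```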

In the past, computational motor control has been approached from at least two major frameworks: the dynamic systems approach and the viewpoint of optimal control. The dynamic systems approach emphasizes motor control as a process of self-organization between an animal and its environment. Nonlinear differential equations that can model entrainment and synchronization behavior are among the favored tools of dynamic systems modelers. In contrast, optimal control approaches view motor control as the evolutionary or developmental result of a nervous system that tries to optimize rather general organizational principles, e.g., energy consumption or accurate task achievement. Optimal control theory is usually employed to develop appropriate theories. Interestingly, there is rather little interaction between dynamic systems and optimal control modelers, as the two approaches follow rather different philosophies and are often viewed as diametrically opposed. In this paper, we develop a computational approach to motor control that offers a unifying modeling framework for both dynamic systems and optimal control approaches. In discussions of several behavioral experiments and some theoretical and robotics studies, we demonstrate how our computational ideas allow both the representation of self-organizing processes and the optimization of movement based on reward criteria. Our modeling framework is rather simple and general, and opens opportunities to revisit many previous modeling results from this novel unifying view.

We present a Bayesian formulation of locally weighted learning (LWL) using the novel concept of a randomly varying coefficient model. Based on this, we propose a mechanism for multivariate nonlinear regression using spatially localized linear models that learn completely independently of each other, use only local information, and adapt the local model complexity in a data-driven fashion. We derive online updates for the model parameters based on variational Bayesian EM. The evaluation of the proposed algorithm against other state-of-the-art methods reveals excellent, robust generalization performance alongside surprisingly efficient time and space complexity properties. This paper, for the first time, brings together the computational efficiency and the adaptability of 'non-competitive' locally weighted learning schemes and the modeling guarantees of the Bayesian formulation.

We present an in-depth survey of policy gradient methods as they are used in the machine learning community for optimizing parameterized, stochastic control policies in Markovian systems with respect to the expected reward. Despite having been developed separately in the reinforcement learning literature, policy gradient methods employ likelihood-ratio gradient estimators as also suggested in the stochastic simulation optimization community. It is well known that this approach to policy gradient estimation traditionally suffers from three drawbacks: large variance, a strong dependence on baseline functions, and an inefficient gradient descent. In this talk, we will present a series of recent results which tackle each of these problems. The variance of the gradient estimation can be reduced significantly through recently introduced techniques such as optimal baselines, compatible function approximations, and all-action gradients. However, as even the analytically obtainable policy gradients converge unnaturally slowly, it required the step from 'vanilla' policy gradient methods towards natural policy gradients in order to overcome the inefficiency of the gradient descent. This development resulted in the Natural Actor-Critic architecture, which can be shown to be very efficient in application to motor primitive learning for robotics.
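For reference, the likelihood-ratio ('vanilla') gradient estimator and the component-wise optimal baseline take the following standard form (conventional notation, assumed rather than quoted from the talk):

```latex
\nabla_\theta J(\theta)
  = \mathbb{E}\!\left[\Bigl(\textstyle\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\Bigr)\,(R - b)\right],
\qquad
b^{*}_{k} = \frac{\mathbb{E}\!\left[\bigl(\sum_t \partial_{\theta_k} \log \pi_\theta\bigr)^{2} R\right]}
                 {\mathbb{E}\!\left[\bigl(\sum_t \partial_{\theta_k} \log \pi_\theta\bigr)^{2}\right]} .
```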

Policy learning which allows autonomous robots to adapt to novel situations has been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. However, to date, learning techniques have yet to fulfill this promise, as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to policy learning, with the goal of applying it to motor skill refinement in order to get one step closer towards human-like performance. To do so, we study two major components for such an approach: firstly, policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, a theoretically well-founded general approach to representing the required control structures for task representation and execution.

While operational space control is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting supervised learning problem is ill-defined, as it requires learning an inverse mapping of a usually redundant system, which is well known to suffer from the non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. The important insight that many operational space control algorithms can be reformulated as optimal control problems, however, allows addressing this inverse learning problem in the framework of reinforcement learning. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which is infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications to complex high-degree-of-freedom robots.

Legged locomotion excels when terrains become too rough for wheeled systems or open-loop walking pattern generators to succeed, i.e., when accurate foot placement is of primary importance in successfully reaching the task goal. In this paper, we address the scenario where the rough terrain is traversed with a static walking gait, and where the location of every foot placement is selected irregularly by a planning algorithm. Our goal is to adjust a smooth walking pattern generator with the selection of every foot placement such that the COG of the robot follows a stable trajectory, characterized by a stability margin relative to the current support triangle. We propose a novel parameterization of the COG trajectory based on the current position, velocity, and acceleration of the four legs of the robot. This COG trajectory has guaranteed continuous velocity and acceleration profiles, which leads to continuous velocity and acceleration profiles of the leg movement, ideally suited for advanced model-based controllers. Pitch, yaw, and ground clearance of the robot are easily adjusted automatically under any terrain situation. We evaluate our gait generation technique on the LittleDog quadruped robot when traversing complex rocky and sloped terrains.
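One generic way to obtain boundary-matched, continuously differentiable trajectory segments of this kind is a quintic polynomial per segment; a sketch follows (this illustrates the continuity requirement only, not the paper's leg-state-based COG parameterization):

```python
import numpy as np

def quintic_segment(x0, v0, a0, x1, v1, a1, T):
    """Fifth-order polynomial from (x0, v0, a0) at t=0 to (x1, v1, a1)
    at t=T. Matching position, velocity, and acceleration at segment
    boundaries yields continuous velocity and acceleration profiles
    across consecutive segments. Returns coefficients of
    p(t) = c0*t^5 + c1*t^4 + c2*t^3 + c3*t^2 + c4*t + c5.
    """
    A = np.array([
        [0, 0, 0, 0, 0, 1],                       # p(0)   = x0
        [0, 0, 0, 0, 1, 0],                       # p'(0)  = v0
        [0, 0, 0, 2, 0, 0],                       # p''(0) = a0
        [T**5, T**4, T**3, T**2, T, 1],           # p(T)   = x1
        [5*T**4, 4*T**3, 3*T**2, 2*T, 1, 0],      # p'(T)  = v1
        [20*T**3, 12*T**2, 6*T, 2, 0, 0],         # p''(T) = a1
    ])
    return np.linalg.solve(A, np.array([x0, v0, a0, x1, v1, a1]))

coeffs = quintic_segment(0.0, 0.0, 0.0, 0.3, 0.1, 0.0, T=1.0)  # one COG segment
```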

This technical report describes a cute idea of how to create new policy search approaches. It directly relates to the Natural Actor-Critic methods but allows the derivation of one shot solutions. Future work may include the application to interesting problems.

We introduce a modified Kalman filter that performs robust, real-time outlier detection without the need for manual parameter tuning by the user. Systems that rely on high-quality sensory data (for instance, robotic systems) can be sensitive to data containing outliers. The standard Kalman filter is not robust to outliers, and other variations of the Kalman filter have been proposed to overcome this issue. However, these methods may require manual parameter tuning, the use of heuristics, or complicated parameter estimation procedures. Our Kalman filter uses a weighted least-squares-like approach by introducing weights for each data sample. A data sample with a smaller weight has a weaker contribution when estimating the current time step's state. Using an incremental variational Expectation-Maximization framework, we learn the weights and system dynamics. We evaluate our Kalman filter algorithm on data from a robotic dog.
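A minimal sketch of how a per-sample weight can enter a Kalman update (here the weight simply inflates the effective measurement noise; the paper learns the weights with variational EM, whereas w is given as an input in this sketch):

```python
import numpy as np

def weighted_kalman_step(x, P, z, A, C, Q, R, w):
    """One Kalman filter step where the measurement z carries a weight
    w in (0, 1]: downweighting scales the measurement covariance to
    R / w, so suspected outliers barely move the state estimate.
    """
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # weighted update
    S = C @ P_pred @ C.T + R / w              # small w -> large S -> small gain
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```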

This paper addresses locomotion with active balancing via task space control with prioritization. The center of gravity (COG) and the foot of the swing leg are treated as task space control points. Floating-base inverse kinematics with constraints is employed, thereby allowing for a mobile platform suitable for locomotion. Different techniques of task prioritization are discussed, and we clarify differences and similarities with previously suggested work. Varying levels of prioritization for control are examined, with emphasis on singularity robustness and the negative effects of constraint switching. A novel controller for task space control of balance and locomotion is developed which attempts to address singularity robustness while minimizing discontinuities created by constraint switching. Controllers are evaluated using a quadruped robot simulator engaging in a locomotion task.
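The classic two-task prioritized inverse kinematics solution takes the following form (standard notation, assumed here; not necessarily the paper's specific formulation):

```latex
\dot{q} \;=\; J_1^{+}\dot{x}_1
        \;+\; \bigl(J_2 N_1\bigr)^{+}\!\bigl(\dot{x}_2 - J_2 J_1^{+}\dot{x}_1\bigr),
\qquad N_1 = I - J_1^{+}J_1 .
```

Here the higher-priority task (e.g., the COG) is satisfied exactly, while the lower-priority task (e.g., the swing foot) acts only in its nullspace; for singularity robustness, the pseudoinverses are commonly replaced by damped pseudoinverses.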

In this paper, we evaluate different versions of the three main kinds of model-free policy gradient methods, i.e., finite-difference gradients, 'vanilla' policy gradients, and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart-pole regulator benchmark, we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both plant and algorithms; thus, the results in this paper can be reevaluated and reused, and new algorithms can be inserted with ease.
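For reference, a minimal sketch of the simplest of the three families, the finite-difference gradient estimator (the plant interface rollout_return and all names are assumptions; the paper's C++ benchmark is not reproduced here):

```python
import numpy as np

def finite_difference_gradient(theta, rollout_return, n_perturb=20, eps=0.05):
    """Finite-difference policy gradient estimate.

    rollout_return(theta) runs the plant (e.g., the cart-pole) with
    policy parameters theta and returns the episode return.
    """
    J0 = rollout_return(theta)
    dTheta = eps * np.random.randn(n_perturb, len(theta))   # parameter perturbations
    dJ = np.array([rollout_return(theta + d) - J0 for d in dTheta])
    # least-squares fit dJ ~= dTheta @ g yields the gradient estimate g
    g = np.linalg.lstsq(dTheta, dJ, rcond=None)[0]
    return g
```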

While operational space control is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting learning problem is ill-defined, as it requires learning an inverse mapping of a usually redundant system, which is well known to suffer from the non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. A first important insight for this paper is that, nevertheless, a physically correct solution to the inverse problem does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is based on the recent insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the viewpoint of machine learning, the learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward and employs an expectation-maximization policy search algorithm. Evaluations on a three-degrees-of-freedom robot arm illustrate the feasibility of our suggested approach.

One of the major challenges in both action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation", called motor primitives. Motor primitives, as used in this paper, are parameterized control policies such as splines or nonlinear differential equations with desired attractor properties. While a lot of progress has been made in teaching parameterized motor primitives using supervised or imitation learning, the self-improvement by interaction of the system with the environment remains a challenging problem. In this paper, we evaluate different reinforcement learning approaches for improving the performance of parameterized motor primitives. For pursuing this goal, we highlight the difficulties with current reinforcement learning methods, and outline both established and novel algorithms for the gradient-based improvement of parameterized policies. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.

The acquisition and improvement of motor skills and control policies for robotics from trial and error is of essential importance if robots are ever to leave precisely pre-structured environments. However, to date, only few existing reinforcement learning methods have been scaled into the domains of high-dimensional robots such as manipulator, legged, or humanoid robots. Policy gradient methods remain one of the few exceptions and have found a variety of applications. Nevertheless, the application of such methods is not without peril if done in an uninformed manner. In this paper, we give an overview of learning with policy gradient methods for robotics with a strong focus on recent advances in the field. We outline previous applications to robotics and show how the most recently developed methods can significantly improve learning performance. Finally, we evaluate our most promising algorithm in the application of hitting a baseball with an anthropomorphic arm.

While the predictive nature of the primate smooth pursuit system has been evident through several behavioural and neurophysiological experiments, few models have attempted to explain these results comprehensively. The model we propose in this paper is in line with previous models employing optimal control theory; however, we hypothesize two new issues: (1) the medial superior temporal (MST) area in the cerebral cortex implements a recurrent neural network (RNN) in order to predict the current or future target velocity, and (2) a forward model of the target motion is acquired by on-line learning. We use simulation studies to demonstrate how our new model supports these hypotheses.

Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.