One of the hallmarks of the performance, versatility, and robustness
of biological motor control is the ability to adapt the impedance of
the overall biomechanical system to different task requirements and
stochastic disturbances. A transfer of this principle to robotics is
desirable, for instance to enable robots to work robustly and safely
in everyday human environments. It is, however, not trivial to derive
variable impedance controllers for practical high degree-of-freedom
(DOF) robotic tasks.
In this contribution, we accomplish such variable impedance control
with the reinforcement learning (RL) algorithm PI2 (Policy
Improvement with Path Integrals). PI2 is a
model-free, sampling-based learning method derived from first
principles of stochastic optimal control. The PI2 algorithm requires no tuning
of algorithmic parameters besides the exploration noise. The designer
can thus fully focus on cost function design to specify the task. From
the viewpoint of robotics, a particularly useful property of PI2 is
that it can scale to problems of many DOFs, so that reinforcement learning on real robotic
systems becomes feasible.
We sketch the PI2 algorithm and its theoretical properties, and how
it is applied to gain scheduling for variable impedance control.
We evaluate our approach by presenting results on several simulated and real robots.
We consider tasks involving accurate tracking through via-points, and manipulation tasks requiring physical contact with the environment.
In these tasks, the optimal strategy requires tuning both a reference trajectory and the impedance of the end-effector.
The results show that we can use path integral based reinforcement learning not only for
planning but also to derive variable gain feedback controllers in
realistic scenarios. Thus, the power of variable impedance control
is made available to a wide variety of robotic systems and practical
applications.
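
To make the core of the PI2 update concrete, here is a minimal Python sketch, our own illustration rather than the authors' implementation: exploration noise is added to the policy parameters, rollouts are scored by their cost, and the noise is averaged with weights given by exponentiated, normalized negative costs. The rollout and cost functions are hypothetical placeholders.

```python
import numpy as np

def pi2_update(theta, rollout_cost, n_rollouts=20, noise_std=0.1, h=10.0):
    """One PI2-style update step (illustrative sketch)."""
    # Perturb parameters with exploration noise, one sample per rollout.
    eps = noise_std * np.random.randn(n_rollouts, theta.size)
    costs = np.array([rollout_cost(theta + e) for e in eps])
    # Normalize costs to [0, 1] and exponentiate: low cost -> high weight.
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-10)
    w = np.exp(-h * s)
    w /= w.sum()
    # The parameter update is the probability-weighted average of the noise.
    return theta + w @ eps

# Hypothetical usage: drive gain parameters toward a low-cost setting.
theta = np.zeros(5)
quadratic_cost = lambda th: float(np.sum((th - 1.0) ** 2))
for _ in range(200):
    theta = pi2_update(theta, quadratic_cost)
```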

Segmenting complex movements into a sequence of primitives remains a difficult problem with many applications in the robotics and vision communities. In this work, we show how the movement segmentation problem can be reduced to a sequential movement recognition problem. To this end, we reformulate the original Dynamic Movement Primitive (DMP) formulation as a linear dynamical system with control inputs. Based on this new formulation, we develop an Expectation-Maximization algorithm to estimate the duration and goal position of a partially observed trajectory. With the help of this algorithm and the assumption that a library of movement primitives is present, we present a movement segmentation framework. We illustrate the usefulness of the new DMP formulation on the two applications of online movement recognition and movement segmentation.
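
For reference, the original DMP transformation system that the paper rewrites is commonly given in the following standard form (the paper's specific linear-system reformulation with control inputs is not reproduced here):

\[
\tau \dot{z} = \alpha_z\bigl(\beta_z (g - y) - z\bigr) + f(x), \qquad
\tau \dot{y} = z, \qquad
\tau \dot{x} = -\alpha_x x,
\]

where \(y\) is the position, \(z\) a scaled velocity, \(g\) the goal, \(\tau\) the movement duration, \(x\) a phase variable, and \(f(x)\) a learned forcing term. Estimating the duration and goal of a partially observed trajectory then amounts to inferring \(\tau\) and \(g\) from data generated by this system.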

Developing robots capable of fine manipulation skills is of major importance in order to build truly assistive robots. These robots need to be compliant in their actuation and control in order to operate safely in human environments. Manipulation tasks imply complex contact interactions with the external world, and involve reasoning about the forces and torques to be applied. Planning under contact conditions is usually impractical due to computational complexity and a lack of precise dynamics models of the environment. We present an approach to acquiring manipulation skills on compliant robots through reinforcement learning. The position control policy for manipulation is initialized through kinesthetic demonstration. We augment this policy with a force/torque profile to be controlled in combination with the position trajectories. We use the Policy Improvement with Path Integrals (PI2) algorithm to learn these force/torque profiles by optimizing a cost function that measures task success. We demonstrate our approach on the Barrett WAM robot arm equipped with a 6-DOF force/torque sensor on two different manipulation tasks: opening a door with a lever door handle, and picking up a pen off the table. We show that the learnt force control policies allow successful, robust execution of the tasks.

The development of agile and safe humanoid robots requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically, the control of contact interaction is of crucial importance for robots that will actively interact with their environment. Model-based controllers such as inverse dynamics or operational space control are very appealing as they offer both high tracking performance and compliance. However, while widely used for fully actuated systems such as manipulators, they are not yet standard controllers for legged robots such as humanoids. Indeed, such robots are fundamentally different from manipulators as they are underactuated due to their floating base and subject to switching contact constraints. In this paper, we present an inverse dynamics controller for legged robots that uses torque redundancy to create an optimal distribution of contact constraints. The resulting controller is able to minimize, given a desired motion, any quadratic cost of the contact constraints at each instant of time. In particular, we show how this can be used to minimize tangential forces during locomotion, therefore significantly improving the locomotion of legged robots on difficult terrains. In addition to the theoretical result, we present simulations of a humanoid and a quadruped robot, as well as experiments on a real quadruped robot that demonstrate the advantages of the controller.
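
As background for the torque-redundancy argument, the floating-base rigid-body dynamics with contact constraints can be written in standard notation (ours, not quoted from the paper) as

\[
M(q)\,\ddot{q} + h(q, \dot{q}) = S^{\top}\tau + J_c^{\top}\lambda, \qquad J_c\,\dot{q} = 0,
\]

where \(S\) selects the actuated joints and \(\lambda\) are the contact forces. Because many torque vectors \(\tau\) realize the same admissible motion, the controller can pick, at each instant, the one minimizing a quadratic cost over \(\lambda\), e.g. penalizing tangential contact forces.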

Applying model-free reinforcement learning to manipulation remains challenging for several reasons. First, manipulation involves physical contact, which causes discontinuous cost functions. Second, in manipulation, the end-point of the movement must be chosen carefully, as it represents a grasp which must be adapted to the pose and shape of the object. Finally, there is uncertainty in the object pose, and even the most carefully planned movement may fail if the object is not at the expected position. To address these challenges we 1) present a simplified, computationally more efficient version of our model-free reinforcement learning algorithm PI2; 2) extend PI2 so that it simultaneously learns shape parameters and goal parameters of motion primitives; 3) use shape and goal learning to acquire motion primitives that are robust to object pose uncertainty. We evaluate these contributions on a manipulation platform consisting of a 7-DOF arm with a 4-DOF hand.

Inverse dynamics controllers and operational space controllers have proved to be very efficient for compliant control of fully actuated robots such as fixed base manipulators. However, legged robots such as humanoids are inherently different as they are underactuated and subject to switching external contact constraints. Recently, several methods have been proposed to create inverse dynamics controllers and operational space controllers for these robots. In an attempt to compare these different approaches, we develop a general framework for inverse dynamics control and show that these methods lead to very similar controllers. We are then able to greatly simplify recent whole-body controllers based on operational space approaches using kinematic projections, bringing them closer to efficient practical implementations. We also generalize these controllers such that they can be optimal under an arbitrary quadratic cost in the commands.

Personal robots can only become widespread if they are capable of safely operating among humans. In uncertain and highly dynamic environments such as human households, robots need to be able to instantly adapt their behavior to unforeseen events. In this paper, we propose a general framework to achieve very contact-reactive motions for robotic grasping and manipulation. Associating stereotypical movements to particular tasks enables our system to use previous sensor experiences as a predictive model for subsequent task executions. We use dynamical systems, named Dynamic Movement Primitives (DMPs), to learn goal-directed behaviors from demonstration. We exploit their dynamic properties by coupling them with the measured and predicted sensor traces. This feedback loop allows for online adaptation of the movement plan. Our system can create a rich set of possible motions that account for external perturbations and perception uncertainty to generate truly robust behaviors. As an example, we present an application to grasping with the WAM robot arm.
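
One common way to realize such coupling, sketched here in our own notation rather than the paper's exact formulation, is to add a feedback term to the DMP transformation system that is driven by the discrepancy between the predicted and the measured sensor trace:

\[
\tau \dot{z} = \alpha_z\bigl(\beta_z (g - y) - z\bigr) + f(x) + \alpha_e\bigl(\bar{s}(x) - s\bigr),
\]

where \(\bar{s}(x)\) is the sensor trace predicted from previous executions at phase \(x\), \(s\) the current measurement, and \(\alpha_e\) a coupling gain; an unexpected contact then deflects the movement plan online.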

We present an approach that enables robots to
learn motion primitives that are robust towards state estimation
uncertainties. During reaching and preshaping, the robot learns
to use fine manipulation strategies to maneuver the object into
a pose at which closing the hand to perform the grasp is more
likely to succeed. In contrast, common assumptions in grasp
planning and motion planning for reaching are that these tasks
can be performed independently, and that the robot has perfect
knowledge of the pose of the objects in the environment.
We implement our approach using Dynamic Movement
Primitives and the probabilistic model-free reinforcement learning
algorithm Policy Improvement with Path Integrals (PI2).
The cost function that PI2 optimizes is a simple Boolean cost that
penalizes failed grasps. The key to acquiring robust motion
primitives is to sample the actual pose of the object from a
distribution that represents the state estimation uncertainty.
During learning, the robot will thus optimize the chance of
grasping an object from this distribution, rather than at one
specific pose.
In our empirical evaluation, we demonstrate how the motion
primitives become more robust when grasping simple cylindrical
objects, as well as more complex, non-convex objects. We
also investigate how well the learned motion primitives generalize
to new object positions and other state estimation
uncertainty distributions.
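
The pose-sampling mechanism described above can be sketched in a few lines of Python (illustrative only; execute_grasp and the uncertainty parameters are hypothetical placeholders standing in for the robot-side execution):

```python
import numpy as np

def rollout_cost(theta, pose_mean, pose_cov, execute_grasp):
    """Evaluate one learning rollout under state-estimation uncertainty."""
    # Sample the *actual* object pose from the uncertainty distribution,
    # rather than placing the object at the estimated pose.
    true_pose = np.random.multivariate_normal(pose_mean, pose_cov)
    # Run the motion primitive with parameters theta and attempt the grasp.
    success = execute_grasp(theta, true_pose)
    # Boolean cost: PI2 then minimizes the failure rate over the distribution.
    return 0.0 if success else 1.0
```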

Applying reinforcement learning to humanoid robots is challenging because humanoids have a large number of degrees of freedom and state and action spaces are continuous. Thus, most reinforcement learning algorithms would become computationally infeasible and require a prohibitive amount of trials to explore such high-dimensional spaces. In this paper, we present a probabilistic reinforcement learning approach, which is derived from the framework of stochastic optimal control and path integrals. The algorithm, called Policy Improvement with Path Integrals (PI2), has a surprisingly simple form, has no open tuning parameters besides the exploration noise, is model-free, and performs numerically robustly in high dimensional learning problems. We demonstrate how PI2 is able to learn full-body motor skills on a 34-DOF humanoid robot. To demonstrate the generality of our approach, we also apply PI2 in the context of variable impedance control, where both planned trajectories and gain schedules for each joint are optimized simultaneously.

In IROS’10 Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics, October 2010 (inproceedings)

Object grasping and manipulation pose major challenges for perception and control and require rich interaction between these two fields. In this paper, we concentrate on the plethora of perceptual problems that have to be solved before a robot can be moved in a controlled way to pick up an object. A vision system is presented that integrates a number of different computational processes, e.g. attention, segmentation, recognition or reconstruction to incrementally build up a representation of the scene suitable for grasping and manipulation of objects. Our vision system is equipped with an active robotic head and a robot arm. This embodiment enables the robot to perform a number of different actions like saccading, fixating, and grasping. By applying these actions, the robot can incrementally build a scene representation and use it for interaction. We demonstrate our system in a scenario for picking up known objects from a table top. We also show the system’s extendibility towards grasping of unknown and familiar objects.

We propose a method for multi-modal scene exploration where initial object hypotheses formed by active visual segmentation are confirmed and augmented through haptic exploration with a robotic arm. We update the current belief about the state of the map with the detection results and predict yet unknown parts of the map with a Gaussian Process. We show that through the integration of different sensor modalities, we achieve a more complete scene model. We also show that the prediction of the scene structure leads to a valid scene representation even if the map is not fully traversed. Furthermore, we propose different exploration strategies and evaluate them both in simulation and on our robotic platform.

In this paper, we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical model-based energy minimization, and these methods are compared over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.

Policy search is a successful approach to reinforcement
learning. However, policy improvements often result
in the loss of information. Hence, it has been marred
by premature convergence and implausible solutions.
As first suggested in the context of covariant policy
gradients (Bagnell and Schneider 2003), many of these
problems may be addressed by constraining the information
loss. In this paper, we continue this path of reasoning
and suggest the Relative Entropy Policy Search
(REPS) method. The resulting method differs significantly
from previous policy gradient approaches and
yields an exact update step. It works well on typical
reinforcement learning benchmark problems.
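
The constraint at the heart of REPS can be stated compactly (standard form, not quoted from the paper):

\[
\max_{\pi}\; \mathbb{E}_{\pi}[R] \quad \text{s.t.} \quad D_{\mathrm{KL}}\bigl(\pi \,\Vert\, q\bigr) \le \epsilon,
\]

where \(q\) is the previously observed distribution and \(\epsilon\) bounds the information loss per policy update; solving the dual of this problem yields the exact, exponential-reweighting update step mentioned above.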

Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.

Model-based control methods can be used to enable fast, dexterous, and compliant motion of robots without sacrificing control accuracy. However, implementing such techniques on floating base robots, e.g., humanoids and legged systems, is non-trivial due to under-actuation, dynamically changing constraints from the environment, and potentially closed loop kinematics. In this paper, we show how to compute the analytically correct inverse dynamics torques for model-based control of sufficiently constrained floating base rigid-body systems, such as humanoid robots with one or two feet in contact with the environment. While our previous inverse dynamics approach relied on an estimation of contact forces to compute an approximate inverse dynamics solution, here we present an analytically correct solution by using an orthogonal decomposition to project the robot dynamics onto a reduced dimensional space, independent of contact forces. We demonstrate the feasibility and robustness of our approach on a simulated floating base bipedal humanoid robot and an actual robot dog locomoting over rough terrain.
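
The orthogonal decomposition can be illustrated as follows (a standard construction consistent with the paper's description; notation is ours). With floating-base dynamics \(M\ddot{q} + h = S^{\top}\tau + J_c^{\top}\lambda\), a QR decomposition of the contact Jacobian

\[
J_c^{\top} = Q \begin{bmatrix} R \\ 0 \end{bmatrix}
\]

lets one premultiply the dynamics by the rows of \(Q^{\top}\) that annihilate \(J_c^{\top}\), i.e. \(S_u Q^{\top}\) with \(S_u\) selecting the bottom rows, giving the contact-force-free equations

\[
S_u Q^{\top}\bigl(M\ddot{q} + h\bigr) = S_u Q^{\top} S^{\top}\tau,
\]

from which inverse dynamics torques can be computed without estimating \(\lambda\).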

We present a control architecture for fast quadruped locomotion over rough terrain. We approach the problem by decomposing it into many sub-systems, in which we apply state-of-the-art learning, planning, optimization and control techniques to achieve robust, fast locomotion. Unique features of our control strategy include: (1) a system that learns optimal foothold choices from expert demonstration using terrain templates, (2) a body trajectory optimizer based on the Zero-Moment Point (ZMP) stability criterion, and (3) a floating-base inverse dynamics controller that, in conjunction with force control, allows for robust, compliant locomotion over unperceived obstacles. We evaluate the performance of our controller by testing it on the LittleDog quadruped robot, over a wide variety of rough terrain of varying difficulty levels. We demonstrate the generalization ability of this controller by presenting test results from an independent external test team on terrains that have never been shown to us.

This paper presents work on vision based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.

Robot learning methods which allow autonomous robots to adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. However, to date, learning techniques have yet to fulfill this promise as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics. If possible, scaling was usually only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to policy learning with the goal of an application to motor skill refinement in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, we study policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, we study a theoretically well-founded general approach to representing the required control structures for task representation and execution.

For complex robots such as humanoids, model-based control is highly beneficial for accurate tracking
while keeping negative feedback gains low for compliance. However, in such multi degree-of-freedom
lightweight systems, conventional identification of rigid body dynamics models using CAD data and
actuator models is inaccurate due to unknown nonlinear robot dynamic effects. An alternative method
is data-driven parameter estimation, but significant noise in measured and inferred variables affects it
adversely. Moreover, standard estimation procedures may give physically inconsistent results due to
unmodeled nonlinearities or insufficiently rich data. This paper addresses these problems, proposing
a Bayesian system identification technique for linear or piecewise linear systems. Inspired by Factor
Analysis regression, we develop a computationally efficient variational Bayesian regression algorithm
that is robust to ill-conditioned data, automatically detects relevant features, and identifies input and
output noise. We evaluate our approach on rigid body parameter estimation for various robotic systems,
achieving errors up to three times lower than those of other state-of-the-art machine learning methods.

In this work we present the first constrained stochastic optimal
feedback controller applied to a fully nonlinear, tendon-driven
index finger model. Our model also takes into account an
extensor mechanism, and muscle force-length and force-velocity
properties. We show this feedback controller is robust to noise
and perturbations to the dynamics, while successfully handling
the nonlinearities and high dimensionality of the system. By
extending prior methods, we are able to approximate physiological
realism by ensuring positivity of neural commands and tendon
tensions at all times, and can thus, for the first time, use the
optimal control framework to predict biologically plausible tendon
tensions for a nonlinear neuromuscular finger model.

Whether human reaching movements are planned and optimized in kinematic
(task space) or dynamic (joint or muscle space) coordinates is still an
issue of debate. The first hypothesis implies that a planner produces a
desired end-effector position at each point in time during the reaching
movement, whereas the latter hypothesis includes the dynamics of the
muscular-skeletal control system to produce a continuous end-effector
trajectory.
Previous work by Wolpert et al. (1995) showed that when subjects were
led to believe that their straight reaching paths corresponded to curved
paths as shown on a computer screen, participants adapted the true path
of their hand such that they would visually perceive a straight line
in visual space, even though they actually produced a curved path.
These results were interpreted as supporting the stance that reaching
trajectories are planned in kinematic coordinates. However, this
experiment could only demonstrate that adaptation to altered paths,
i.e., the position of the end-effector, did occur, but not that the
precise timing of the end-effector position, i.e., the trajectory, was equally planned.
Our current experiment aims at filling this gap by explicitly testing whether
position over time, i.e. velocity, is a property of reaching movements
that is planned in kinematic coordinates.
In the current experiment, the velocity profiles of cursor movements
corresponding to the participant's hand motions were skewed either to
the left or to the right; the path itself was left unaltered.
We developed an adaptation paradigm, where the skew of the velocity profile was
introduced gradually and participants reported no awareness of any
manipulation.
Preliminary results indicate that the true hand motion of participants
did not change, i.e., there was no adaptation to counterbalance the
introduced skew. However, for some participants, peak hand velocities
were lowered for higher skews, which suggests that participants
interpreted the manipulation as mere noise due to variance in their own
movement.
In summary, for a visuomotor transformation task, the hypothesis of a
planned continuous end-effector trajectory predicts adaptation to a
modified velocity profile. The current experiment found no systematic
adaptation under such a transformation, but did demonstrate an effect
more consistent with the interpretation that subjects could not perceive
the manipulation and instead attributed it to an increase in noise.

In 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2010, clmc (inproceedings)

We provide an overview of optimal control methods for nonlinear neuromuscular systems and discuss their limitations. Moreover, we extend current optimal control methods to their application to neuromuscular models with realistically numerous musculotendons, as most prior work is limited to torque-driven systems. Recent work on computational motor control has explored the use of control theory and estimation as a conceptual tool to understand the underlying computational principles of neuromuscular systems. After all, successful biological systems regularly meet conditions for stability, robustness, and performance for multiple classes of complex tasks. Among a variety of control theory frameworks proposed to explain this, stochastic optimal control has become a dominant framework, to the point of being a standard computational technique to reproduce kinematic trajectories of reaching movements (see [12]).
In particular, we demonstrate the application of optimal control to a neuromuscular model of the index finger with all seven musculotendons producing a tapping task. Our simulations include 1) a muscle model that includes force-length and force-velocity characteristics; 2) an anatomically plausible biomechanical model of the index finger that includes a tendinous network for the extensor mechanism; and 3) a contact model that is based on a nonlinear spring-damper attached at the end effector of the index finger. We demonstrate that it is feasible to apply optimal control to systems with realistically large state vectors and conclude that, while optimal control is an adequate formalism to create computational models of neuromusculoskeletal systems, there remain important challenges and limitations that need to be considered and overcome, such as contact transitions, the curse of dimensionality, and constraints on states and controls.

We present a novel algorithm for efficient learning and feature selection in high-
dimensional regression problems. We arrive at this model through a modification of
the standard regression model, enabling us to derive a probabilistic version of the
well-known statistical regression technique of backfitting. Using the Expectation-
Maximization algorithm, along with variational approximation methods to overcome
intractability, we extend our algorithm to include automatic relevance detection
of the input features. This Variational Bayesian Least Squares (VBLS) approach
retains its simplicity as a linear model, but offers a novel, statistically robust "black-box"
approach to generalized linear regression with high-dimensional inputs. It can
be easily extended to nonlinear regression and classification problems. In particular,
we derive the framework of sparse Bayesian learning, e.g., the Relevance Vector
Machine, with VBLS at its core, offering significant computational and robustness
advantages for this class of methods. We evaluate our algorithm on synthetic and
neurophysiological data sets, as well as on standard regression and classification
benchmark data sets, comparing it with other competitive statistical approaches
and demonstrating its suitability as a drop-in replacement for other generalized
linear regression techniques.
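
For context, the classical backfitting iteration that VBLS makes probabilistic cycles over the additive components of the model (textbook form, not VBLS itself):

\[
f_m \leftarrow \mathcal{S}_m\Bigl( y - \sum_{k \neq m} f_k \Bigr), \qquad m = 1, \dots, M,
\]

i.e., each component \(f_m\) is refit to the residual left by all other components, with \(\mathcal{S}_m\) a univariate smoother or regression; the probabilistic version replaces these deterministic updates with EM updates under an explicit noise model.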

In Proceedings of the American Control Conference (ACC 2010), 2010, clmc (article)

We present a generalization of the classic Differential Dynamic Programming algorithm. We assume the existence of state- and control-dependent process noise, and proceed to derive the second-order expansion of the cost-to-go. Despite having quartic and cubic terms in the initial expression, we show that these vanish, leaving us with the same quadratic structure as standard DDP.
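
For orientation, the quadratic structure referred to is that of the standard DDP/iLQG backward pass (second-order dynamics terms omitted for brevity; the notation is the conventional one, not quoted from the paper):

\[
Q_x = \ell_x + f_x^{\top} V_x', \quad
Q_u = \ell_u + f_u^{\top} V_x', \quad
Q_{xx} = \ell_{xx} + f_x^{\top} V_{xx}' f_x, \quad
Q_{uu} = \ell_{uu} + f_u^{\top} V_{xx}' f_u, \quad
Q_{ux} = \ell_{ux} + f_u^{\top} V_{xx}' f_x,
\]

with control update \(\delta u = -Q_{uu}^{-1}(Q_u + Q_{ux}\,\delta x)\). The paper's claim is that state- and control-dependent process noise adds quartic and cubic terms that ultimately vanish, leaving this same structure.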

In International Conference on Artificial Intelligence and Statistics (AISTATS 2010), 2010, clmc (inproceedings)

With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parametrized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model-free, depending on how the learning problem is structured. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.

Investigating principles of human motor control in the framework of optimal control has had a long tradition in neural control of movement, and has recently experienced a new surge of investigations. Ideally, optimal control problems are addressed as reinforcement learning (RL) problems, which would make it possible to investigate both the process of acquiring an optimal control solution and the solution itself. Unfortunately, the applicability of RL to complex neural and biomechanical systems has been largely impossible so far due to the computational difficulties that arise in high-dimensional continuous state-action spaces. As a way out, research has focused on computing optimal control solutions based on iterative optimal control methods that rely on linear and quadratic approximations of dynamical models and cost functions. These methods require perfect knowledge of the dynamics and cost functions, and they are based on gradient and Newton optimization schemes. Their applicability is also restricted to low-dimensional problems due to problematic convergence in high dimensions. Moreover, the process of computing the optimal solution is removed from the learning process that might be plausible in biology. In this work, we present a new reinforcement learning method for learning optimal control solutions for motor control. This method, based on the framework of stochastic optimal control with path integrals, has a very solid theoretical foundation, while resulting in surprisingly simple learning algorithms. It is also possible to apply this approach without knowledge of the system model, and to use a wide variety of complex nonlinear cost functions for optimization. We illustrate the theoretical properties of this approach and its applicability to learning motor control tasks for reaching movements and locomotion studies. We discuss its applicability to learning desired trajectories, variable stiffness control (co-contraction), and parameterized control policies. We also investigate its applicability to control systems with signal-dependent noise. We believe that the suggested method offers one of the easiest-to-use approaches to learning optimal control suggested in the literature so far, which makes it ideally suited for computational investigations of biological motor control.

In a not too distant future, robots will be a natural part of
daily life in human society, providing assistance in many
areas ranging from clinical applications, education and care
giving, to normal household environments [1]. It is hard to
imagine that all possible tasks can be preprogrammed in such
robots. Robots need to be able to learn, either by themselves
or with the help of human supervision. Additionally, wear and
tear on robots in daily use needs to be automatically compensated
for, which requires a form of continuous self-calibration,
another form of learning. Finally, robots need to react to stochastic
and dynamic environments, i.e., they need to learn
how to optimally adapt to uncertainty and unforeseen
changes. Robot learning is going to be a key ingredient for the
future of autonomous robots.
While robot learning covers a rather large field, from learning
to perceive, to plan, to make decisions, etc., we will focus
this review on topics of learning control, in particular, as it is
concerned with learning control in simulated or actual physical
robots. In general, learning control refers to the process of
acquiring a control strategy for a particular control system and
a particular task by trial and error. Learning control is usually
distinguished from adaptive control [2] in that the learning system
can have rather general optimization objectives, not just,
e.g., minimal tracking error, and is permitted to fail during
the process of learning, while adaptive control emphasizes fast
convergence without failure. Thus, learning control resembles
the way that humans and animals acquire new movement
strategies, while adaptive control is a special case of learning
control that fulfills stringent performance constraints, e.g., as
needed in life-critical systems like airplanes.
Learning control has been an active topic of research for at
least three decades. However, given the lack of working robots
that actually use learning components, more work needs to be
done before robot learning will make it beyond the laboratory
environment. This article will survey some ongoing and past
activities in robot learning to assess where the field stands and
where it is going. We will largely focus on nonwheeled robots
and less on topics of state estimation, as typically explored in
wheeled robots [3]-[6], and we emphasize learning in continuous
state-action spaces rather than discrete state-action spaces [7], [8].
We will illustrate the different topics of robot learning with
examples from our own research with anthropomorphic and
humanoid robots.

We present a control architecture for fast quadruped locomotion over rough terrain. We approach the problem by decomposing
it into many sub-systems, in which we apply state-of-the-art learning, planning, optimization, and control techniques
to achieve robust, fast locomotion. Unique features of our control strategy include: (1) a system that learns optimal
foothold choices from expert demonstration using terrain templates, (2) a body trajectory optimizer based on the Zero-
Moment Point (ZMP) stability criterion, and (3) a floating-base inverse dynamics controller that, in conjunction with force
control, allows for robust, compliant locomotion over unperceived obstacles. We evaluate the performance of our controller
by testing it on the LittleDog quadruped robot, over a wide variety of rough terrains of varying difficulty levels. The
terrain that the robot was tested on includes rocks, logs, steps, barriers, and gaps, with obstacle sizes up to the leg length
of the robot. We demonstrate the generalization ability of this controller by presenting results from testing performed by
an independent external test team on terrain that has never been shown to us.

Energy-shaping control methods have produced strong theoretical results for asymptotically stable 3D bipedal dynamic walking in the literature. In particular, geometric controlled reduction exploits robot symmetries to control momentum conservation laws that decouple the sagittal-plane dynamics, which are easier to stabilize. However, the associated control laws require high-dimensional matrix inverses multiplied with complicated energy-shaping terms, often making these control theories difficult to apply to highly redundant humanoid robots. This paper presents a first step towards the application of energy-shaping methods on real robots by casting controlled reduction into a framework of constrained accelerations for inverse dynamics control. By representing momentum conservation laws as constraints in acceleration space, we construct a general expression for desired joint accelerations that render the constraint surface invariant. By appropriately choosing an orthogonal projection, we show that the unconstrained (reduced) dynamics are decoupled from the constrained dynamics. Any acceleration-based controller can then be used to stabilize this planar subsystem, including passivity-based methods. The resulting control law is surprisingly simple and represents a practical way to employ control theoretic stability results in robotic platforms. Simulated walking of a 3D compass-gait biped shows correspondence between the new and original controllers, and simulated motions of a 16-DOF humanoid demonstrate the applicability of this method.

One of the hallmarks of the performance, versatility,
and robustness of biological motor control is the ability to adapt
the impedance of the overall biomechanical system to different
task requirements and stochastic disturbances. A transfer of this
principle to robotics is desirable, for instance to enable robots
to work robustly and safely in everyday human environments. It
is, however, not trivial to derive variable impedance controllers
for practical high-DOF robotic tasks. In this contribution, we accomplish
such gain scheduling with the reinforcement learning
algorithm PI2 (Policy Improvement with Path Integrals).
PI2 is a model-free, sampling based learning method derived from
first principles of optimal control. The PI2 algorithm requires no
tuning of algorithmic parameters besides the exploration noise.
The designer can thus fully focus on cost function design to
specify the task. From the viewpoint of robotics, a particularly
useful property of PI2 is that it can scale to problems of many
DOFs, so that RL on real robotic systems becomes feasible. We
sketch the PI2 algorithm and its theoretical properties, and how
it is applied to gain scheduling. We evaluate our approach by
presenting results on two different simulated robotic systems, a
3-DOF Phantom Premium Robot and a 6-DOF Kuka Lightweight
Robot. We investigate tasks where the optimal strategy requires
both tuning of the impedance of the end-effector, and tuning
of a reference trajectory. The results show that we can use
path integral based RL not only for planning but also to derive
variable gain feedback controllers in realistic scenarios. Thus,
the power of variable impedance control is made available to a
wide variety of robotic systems and practical applications.

In Proceedings of the 13th International Conference on Climbing and Walking Robots (CLAWAR), pages: 580-587, Nagoya, Japan, sep 2010 (inproceedings)

Contact interaction with the environment is crucial in the design of locomotion controllers for legged robots, for example to prevent slipping. Therefore, it is of great importance to be able to control the effects of the robot's movements on the contact reaction forces. In this contribution, we extend a recent inverse dynamics algorithm for floating base robots to optimize the distribution of contact forces while achieving precise trajectory tracking. The resulting controller is algorithmically simple as compared to other approaches. Numerical simulations show that this result significantly increases the range of possible movements of a humanoid robot as compared to the previous inverse dynamics algorithm. We also present a simplification of the result where no inversion of the inertia matrix is needed, which is particularly relevant for practical use on a real robot. Such an algorithm becomes interesting for agile locomotion of robots on difficult terrains where the contacts with the environment are critical, such as walking over rough or slippery terrain.

2008

One of the most general frameworks for phrasing control problems for
complex, redundant robots is operational space control. However, while
this framework is of essential importance for robotics and well-understood
from an analytical point of view, it can be prohibitively hard to achieve
accurate control in the face of modeling errors, which are inevitable in com-
plex robots, e.g., humanoid robots. In this paper, we suggest a learning
approach for operational space control as a direct inverse model learning
problem. A first important insight for this paper is that a physically cor-
rect solution to the inverse problem with redundant degrees-of-freedom
does exist when learning of the inverse map is performed in a suitable
piecewise linear way. The second crucial component for our work is based
on the insight that many operational space controllers can be understood
in terms of a constrained optimal control problem. The cost function as-
sociated with this optimal control problem allows us to formulate a learn-
ing algorithm that automatically synthesizes a globally consistent desired
resolution of redundancy while learning the operational space controller.
From the machine learning point of view, this learning problem corre-
sponds to a reinforcement learning problem that maximizes an immediate
reward. We employ an expectation-maximization policy search algorithm
in order to solve this problem. Evaluations on a three degrees of freedom
robot arm are used to illustrate the suggested approach. The applica-
tion to a physically realistic simulator of the anthropomorphic SARCOS
Master arm demonstrates feasibility for complex high degree-of-freedom
robots. We also show that the proposed method works in the setting of
learning resolved motion rate control on a real, physical Mitsubishi PA-10
medical robotics arm.

Dexterous manipulation with a highly redundant movement system is one of the hallmarks of hu-
man motor skills. From numerous behavioral studies, there is strong evidence that humans employ
compliant task space control, i.e., they focus control only on task variables while keeping redundant
degrees-of-freedom as compliant as possible. This strategy is robust towards unknown disturbances
and simultaneously safe for the operator and the environment. The theory of operational space con-
trol in robotics aims to achieve similar performance properties. However, despite various compelling
theoretical lines of research, advanced operational space control is hardly found in actual robotics imple-
mentations, in particular in new kinds of robots like humanoids and service robots, which would strongly
profit from compliant dexterous manipulation. To analyze the pros and cons of different approaches
to operational space control, this paper focuses on a theoretical and empirical evaluation of different
methods that have been suggested in the literature, but also some new variants of operational space
controllers. We address formulations at the velocity, acceleration and force levels. First, we formulate
all controllers in a common notational framework, including quaternion-based orientation control, and
discuss some of their theoretical properties. Second, we present experimental comparisons of these
approaches on a seven-degree-of-freedom anthropomorphic robot arm with several benchmark tasks.
As an aside, we also introduce a novel parameter estimation algorithm for rigid body dynamics, which
ensures physical consistency, as this issue was crucial for our successful robot implementations. Our
extensive empirical results demonstrate that one of the simplified acceleration-based approaches can
be advantageous in terms of task performance, ease of parameter tuning, and general robustness and
compliance in the face of inevitable modeling errors.
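
As a reference point for the force-level formulations compared in the paper, the classical operational space controller can be written (standard notation, not quoted from the paper) as

\[
\tau = J^{\top} F + \bigl(I - J^{\top}\bar{J}^{\top}\bigr)\tau_0, \qquad
F = \Lambda(q)\,\ddot{x}_{\mathrm{ref}} + \mu(q,\dot{q}) + p(q),
\]

where \(\Lambda = (J M^{-1} J^{\top})^{-1}\) is the task-space inertia matrix, \(\bar{J} = M^{-1} J^{\top} \Lambda\) the dynamically consistent generalized inverse, \(\mu\) and \(p\) the task-space Coriolis/centrifugal and gravity terms, and \(\tau_0\) an arbitrary torque projected into the task's null space.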

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008, clmc (inproceedings)

Force field experiments have been a successful paradigm for studying the principles of planning, execution, and learning in human arm movements. Subjects have been shown to cope with the disturbances generated by force fields by learning internal models of the underlying dynamics to predict disturbance effects or by increasing arm impedance (via co-contraction) if a predictive approach becomes infeasible.
Several studies have addressed the issue of uncertainty in force field learning. Scheidt et al. demonstrated that subjects exposed to a viscous force field of fixed structure but varying strength (randomly changing from trial to trial) learn to adapt to the mean disturbance, regardless of the statistical distribution. Takahashi et al. additionally show a decrease in strength of after-effects after learning in the randomly varying environment. Thus, they suggest that the nervous system adopts a dual strategy: learning an internal model of the mean of the random environment, while simultaneously increasing arm impedance to minimize the consequence of errors.
In this study, we examine what role variance plays in the learning of uncertain force fields. We use a 7 degree-of-freedom exoskeleton robot as a manipulandum (Sarcos Master Arm, Sarcos, Inc.), and apply a 3D viscous force field of fixed structure and strength randomly selected from trial to trial. Additionally, in separate blocks of trials, we alter the variance of the randomly selected strength multiplier (while keeping a constant mean). In each block, after sufficient learning has occurred, we apply catch trials with no force field and measure the strength of after-effects.
As expected, results show increasingly smaller levels of after-effects as the variance is increased, implying that subjects chose the robust strategy of increasing arm impedance to cope with higher levels of uncertainty. Interestingly, however, subjects show an increase in after-effect strength with a small amount of variance as compared to the deterministic (zero variance) case. This result implies that a small amount of variability aids in internal model formation, presumably a consequence of the additional amount of exploration conducted in the workspace of the task.

Real-time control of the endeffector of a humanoid robot in external coordinates requires
computationally efficient solutions of the inverse kinematics problem. In this context, this
paper investigates methods of resolved motion rate control (RMRC) that employ optimization
criteria to resolve kinematic redundancies. In particular we focus on two established techniques,
the pseudo inverse with explicit optimization and the extended Jacobian method. We prove that
the extended Jacobian method includes pseudo-inverse methods as a special solution. In terms of
computational complexity, however, pseudo-inverse and extended Jacobian differ significantly in
favor of pseudo-inverse methods. Employing numerical estimation techniques, we introduce a
computationally efficient version of the extended Jacobian with performance comparable to the
original version. Our results are illustrated in simulation studies with a multiple degree-of-freedom
robot, and were evaluated on an actual 30 degree-of-freedom full-body humanoid robot.
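
The two techniques compared above can be summarized as follows (standard textbook forms; the paper's numerically efficient variant is not reproduced here). Pseudo-inverse RMRC with explicit null-space optimization is

\[
\dot{q} = J^{\#}\dot{x} + \alpha\bigl(I - J^{\#} J\bigr)\nabla_{q} H(q),
\]

which descends a redundancy-resolution cost \(H(q)\) in the null space of the task Jacobian \(J\), while the extended Jacobian method augments the task with constraints \(\Phi(q) = 0\) chosen so that the resulting square, stacked Jacobian can be inverted directly.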

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008, clmc (inproceedings)

Reinforcement learning (RL) - learning solely based on reward or cost
feedback - is widespread in robotics control and has also been suggested
as a computational model for human motor control. In human motor control,
however, hardly any experiments have studied reinforcement learning. Here, we
studied learning based on visual cost feedback in a reaching task and performed
three experiments: (1) to establish a simple enough experiment for RL,
(2) to study spatial localization of RL, and (3) to study the dependence
of RL on the cost function.
In experiment (1), subjects sit in front of a drawing tablet and look at
a screen onto which the drawing pen's position is projected. Beginning
from a start point, their task is to move with the pen through a target
point presented on screen. Visual feedback about the pen's position is
given only before movement onset. At the end of a movement, subjects get
visual feedback only about the cost of this trial. We chose as cost the
squared distance between target and virtual pen position at the target
line; above a threshold value, the cost was fixed at that value. In the
mapping of the pen's position onto the screen, we added a bias (unknown
to the subject) and Gaussian noise. As a result, subjects could learn the
bias and thus showed reinforcement learning.
In experiment (2), we randomly altered the target position between three
different locations (three different directions from the start point: -45,
0, and 45 degrees). For each direction, we chose a different bias. As a result,
subjects learned all three bias values simultaneously. Thus, RL can be
spatially localized.
In experiment (3), we varied the sensitivity of the cost function by
multiplying the squared distance with a constant value C, while keeping
the same cut-off threshold. As in experiment (2), we had three target
locations. We assigned to each location a different C value (this
assignment was randomized between subjects). Since subjects learned the
three locations simultaneously, we could directly compare the effect of
the different cost functions. As a result, we found an optimal C value:
if C was too small (insensitive cost), learning was slow; if C was too
large (narrow cost valley), exploration took longer and learning was
delayed. Thus, reinforcement learning in human motor control appears to
be sensitive to the shape of the cost function.
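
Compactly, the cost reported to subjects across experiments (1)-(3) can be restated (our summary of the description above) as

\[
\text{cost} = \min\bigl(C\,d^{2},\; c_{\max}\bigr),
\]

where \(d\) is the distance between target and virtual pen position at the target line, \(C\) the sensitivity constant varied in experiment (3), and \(c_{\max}\) the fixed cut-off threshold.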
