Central pattern generators (CPGs) are becoming a popular model for the control of locomotion in legged robots. Biological CPGs are neural networks responsible for the generation of rhythmic movements, especially locomotion. In robotics, a systematic way of designing such CPGs as artificial neural networks or as systems of coupled oscillators with sensory feedback is still missing. In this contribution, we present a way of designing CPGs with coupled oscillators in which we can independently control the ascending and descending phases of the oscillations (i.e., the swing and stance phases of the limbs). Using insights from dynamical systems theory, we construct generic networks of oscillators able to generate several gaits under simple parameter changes. We then introduce a systematic way of adding sensory feedback from touch sensors to the CPG, such that the controller is strongly coupled with the mechanical system it controls. Finally, we control three different simulated robots (iCub, Aibo, and Ghostdog) using the same controller to show the effectiveness of the approach. Our simulations demonstrate the importance of independent control of swing and stance duration. The strong mutual coupling between the CPG and the robot allows for more robust locomotion, even with imprecise parameters and on non-flat terrain.
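The design described above can be sketched with phase oscillators whose intrinsic frequency switches between the two half-cycles, giving independent swing and stance durations. This is only a minimal illustration: all gains, frequencies, and the trot offsets below are assumptions for the sketch, not the paper's actual parameters.

```python
import numpy as np

def step_cpg(phases, dt, w_swing, w_stance, K, phi_des):
    """One Euler step of a network of coupled phase oscillators.

    Independent swing/stance control: each oscillator uses a different
    intrinsic frequency in its ascending and descending half-cycles.
    phi_des[i, j] is the desired offset phases[j] - phases[i] (the gait).
    """
    n = len(phases)
    dphi = np.empty(n)
    for i in range(n):
        # frequency depends on which half of the cycle oscillator i is in
        w = w_swing if np.sin(phases[i]) >= 0.0 else w_stance
        coupling = sum(K * np.sin(phases[j] - phases[i] - phi_des[i, j])
                       for j in range(n))
        dphi[i] = w + coupling
    return (phases + dt * dphi) % (2.0 * np.pi)

# Illustrative trot gait for a quadruped: diagonal legs in phase,
# the two diagonals in anti-phase (these offsets are an assumption).
gait = np.array([0.0, np.pi, np.pi, 0.0])
phi_des = gait[None, :] - gait[:, None]      # desired pairwise offsets
rng = np.random.default_rng(1)
state = (gait + rng.uniform(-0.5, 0.5, 4)) % (2.0 * np.pi)
for _ in range(20000):                       # 20 s at dt = 1 ms
    state = step_cpg(state, 1e-3, 2*np.pi*1.2, 2*np.pi*0.8, 8.0, phi_des)
```

A simple parameter change (the swing/stance frequencies or the offset matrix) then alters the gait or the swing/stance durations without redesigning the network.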

In this contribution we present a CPG (central pattern generator) controller based on coupled Rössler systems, able to generate both limit cycle and chaotic behaviors through bifurcation. We develop an experimental test bench to measure quantitatively the performance of different controllers on unknown terrains of increasing difficulty. First, we show that on flat terrain, open-loop limit cycle systems are the most efficient (in terms of locomotion speed), but quite sensitive to environmental changes. Second, we show that sensory feedback is a crucial addition for unknown terrains. Third, we show that the chaotic controller with sensory feedback outperforms the other controllers on very difficult terrains and actually promotes the emergence of short synchronized movement patterns. All this is achieved within a unified framework for the generation of limit cycle and chaotic behaviors, where a simple parameter change can switch from one behavior to the other through a bifurcation. Such flexibility would allow the automatic adaptation of the robot's locomotion strategy to the uncertainty of the terrain.
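The limit-cycle/chaos switch through a single parameter can be made concrete with the standard Rössler equations: with a = b = 0.2, the parameter c takes the system through a period-doubling route to chaos (c ≈ 2.5 gives a limit cycle, c ≈ 5.7 the classic chaotic attractor). The integrator below is a generic sketch of a single uncoupled Rössler system, not the coupled controller used in the paper.

```python
import numpy as np

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    """Rossler system: varying the single parameter c moves it through
    bifurcations from a limit cycle to chaos."""
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def integrate(s0, c, dt=0.01, steps=20000):
    """Fixed-step RK4 integration of the Rossler system."""
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        k1 = rossler_deriv(s, c=c)
        k2 = rossler_deriv(s + 0.5 * dt * k1, c=c)
        k3 = rossler_deriv(s + 0.5 * dt * k2, c=c)
        k4 = rossler_deriv(s + dt * k3, c=c)
        s = s + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return s

s_cycle = integrate([1.0, 1.0, 0.0], c=2.5)   # limit cycle regime
s_chaos = integrate([1.0, 1.0, 0.0], c=5.7)   # chaotic regime
```

In both regimes the trajectory stays bounded on its attractor; only its qualitative structure (periodic vs. chaotic) changes with c.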

Dexterous manipulation with a highly redundant movement system is one of the hallmarks of human motor skills. From numerous behavioral studies, there is strong evidence that humans employ compliant task space control, i.e., they focus control only on task variables while keeping redundant degrees-of-freedom as compliant as possible. This strategy is robust towards unknown disturbances and simultaneously safe for the operator and the environment. The theory of operational space control in robotics aims to achieve similar performance properties. However, despite various compelling theoretical lines of research, advanced operational space control is hardly found in actual robotics implementations, in particular in new kinds of robots like humanoids and service robots, which would strongly profit from compliant dexterous manipulation. To analyze the pros and cons of different approaches to operational space control, this paper focuses on a theoretical and empirical evaluation of different methods that have been suggested in the literature, as well as some new variants of operational space controllers. We address formulations at the velocity, acceleration, and force levels. First, we formulate all controllers in a common notational framework, including quaternion-based orientation control, and discuss some of their theoretical properties. Second, we present experimental comparisons of these approaches on a seven-degree-of-freedom anthropomorphic robot arm with several benchmark tasks. As an aside, we also introduce a novel parameter estimation algorithm for rigid body dynamics, which ensures physical consistency, as this issue was crucial for our successful robot implementations. Our extensive empirical results demonstrate that one of the simplified acceleration-based approaches can be advantageous in terms of task performance, ease of parameter tuning, and general robustness and compliance in the face of inevitable modeling errors.
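A simplified acceleration-based formulation of the kind discussed above can be sketched as a task-space PD reference acceleration resolved through the Jacobian pseudoinverse, followed by inverse dynamics. The gains and the null-space damping term below are illustrative assumptions; the paper's actual variants differ in their details.

```python
import numpy as np

def opspace_accel_torque(M, h, J, dJdq, qdot, x_err, xdot_err, xddot_des,
                         Kp=100.0, Kd=20.0, kd_null=5.0):
    """Simplified acceleration-based operational space law (illustrative sketch).

    M: joint-space inertia matrix, h: Coriolis/centrifugal/gravity forces,
    J: task Jacobian, dJdq: the product Jdot @ qdot,
    x_err / xdot_err: task-space position and velocity errors.
    """
    a_ref = xddot_des + Kp * x_err + Kd * xdot_err        # task-space PD
    J_pinv = np.linalg.pinv(J)
    N = np.eye(M.shape[0]) - J_pinv @ J                   # null-space projector
    # resolved acceleration + lightly damped (compliant) null space
    qddot = J_pinv @ (a_ref - dJdq) - N @ (kd_null * qdot)
    return M @ qddot + h                                  # inverse dynamics
```

For a redundant arm, the projector N damps only the redundant directions, leaving the task directions governed by the PD law, which mirrors the "compliant in the null space, stiff in the task space" idea.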

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008 (inproceedings)

Abstract

Force field experiments have been a successful paradigm for studying the principles of planning, execution, and learning in human arm movements. Subjects have been shown to cope with the disturbances generated by force fields by learning internal models of the underlying dynamics to predict disturbance effects or by increasing arm impedance (via co-contraction) if a predictive approach becomes infeasible.
Several studies have addressed the issue of uncertainty in force field learning. Scheidt et al. demonstrated that subjects exposed to a viscous force field of fixed structure but varying strength (randomly changing from trial to trial) learn to adapt to the mean disturbance, regardless of the statistical distribution. Takahashi et al. additionally showed a decrease in the strength of after-effects after learning in the randomly varying environment. They thus suggest that the nervous system adopts a dual strategy: learning an internal model of the mean of the random environment, while simultaneously increasing arm impedance to minimize the consequences of errors.
In this study, we examine what role variance plays in the learning of uncertain force fields. We use a 7 degree-of-freedom exoskeleton robot as a manipulandum (Sarcos Master Arm, Sarcos, Inc.), and apply a 3D viscous force field of fixed structure and strength randomly selected from trial to trial. Additionally, in separate blocks of trials, we alter the variance of the randomly selected strength multiplier (while keeping a constant mean). In each block, after sufficient learning has occurred, we apply catch trials with no force field and measure the strength of after-effects.
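The trial schedule described above can be sketched as follows; the viscous structure matrix B and all numeric values are hypothetical placeholders, since the abstract does not specify them.

```python
import numpy as np

def viscous_force(v, B, gain):
    """Force applied on one trial: fixed viscous structure B scaled by a
    trial-specific strength multiplier."""
    return gain * (B @ v)

def block_gains(n_trials, mean, std, rng):
    """Strength multipliers for one block of trials:
    constant mean, block-specific variance."""
    return rng.normal(mean, std, size=n_trials)

rng = np.random.default_rng(0)
B = np.array([[0.0, 13.0, 0.0],      # hypothetical 3D viscous structure
              [-13.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
low_var = block_gains(100, 1.0, 0.1, rng)    # low-variance block
high_var = block_gains(100, 1.0, 0.5, rng)   # high-variance block
```

Catch trials would then simply set the gain to zero while the measured hand trajectory reveals the after-effect of the learned model.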
As expected, the results show increasingly smaller after-effects as the variance is increased, implying that subjects choose the robust strategy of increasing arm impedance to cope with higher levels of uncertainty. Interestingly, however, subjects show an increase in after-effect strength with a small amount of variance compared to the deterministic (zero-variance) case. This result implies that a small amount of variability aids internal model formation, presumably as a consequence of the additional exploration conducted in the workspace of the task.

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.