2015

An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor-actuator-agents observe a dynamic process and sporadically exchange their measurements and inputs over a bus network. Based on these data, each agent estimates the full state of the dynamic system, which may exhibit arbitrary inter-agent couplings. Local event-based protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. This event-based scheme is shown to mimic a centralized Luenberger observer design up to guaranteed bounds, and stability is proven in the sense of bounded estimation errors for bounded disturbances. The stability result extends to the distributed control system that results when the local state estimates are used for distributed feedback control. Simulation results highlight the benefit of the event-based approach over classical periodic ones in reducing communication requirements.
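The core idea — transmit only when needed to meet a desired estimation accuracy — can be illustrated with a minimal send-on-delta trigger. This is a hedged sketch, not the paper's observer design: the threshold `delta`, the hold-last-value remote estimate, and the drifting test signal are all illustrative assumptions.

```python
import numpy as np

def event_trigger(y, y_pred, delta):
    """Fire an event only if the measurement deviates from the
    remote side's prediction by more than the accuracy bound delta."""
    return np.linalg.norm(y - y_pred) > delta

def run(measurements, delta):
    """Hold-last-value remote estimator: communication happens only
    when the trigger fires; otherwise the old estimate is kept."""
    x_hat = measurements[0]
    sent = 0
    for y in measurements[1:]:
        if event_trigger(y, x_hat, delta):
            x_hat = y          # transmit; remote estimate is reset
            sent += 1
    return sent

# A slowly drifting signal needs far fewer transmissions than samples.
signal = [np.array([k / 100]) for k in range(100)]
print(run(signal, delta=0.1), "of", len(signal) - 1, "samples sent")
```

With periodic communication every sample would be sent; the trigger reduces this to roughly one transmission per `delta` of accumulated drift.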

Machine Learning in Planning and Control of Robot Motion Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2015 (conference)

Abstract

This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. The framework is therefore expected to yield improved controllers with fewer experimental evaluations than alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Preliminary results of a low-dimensional tuning problem highlight the method’s potential for automatic controller tuning on robotic platforms.
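Entropy Search itself is involved, but the surrounding tuning loop can be sketched. In this hedged toy, a small Gaussian-process surrogate with a lower-confidence-bound acquisition stands in for the information-gain criterion, and a synthetic quadratic cost stands in for the experimental performance objective; all names, kernels, and constants are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and variance at the test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 0.0, None)
    return mu, var

def cost(gain):
    """Stand-in for the experimental cost of running the controller."""
    return (gain - 0.6) ** 2

grid = np.linspace(0.0, 1.0, 101)          # candidate controller gains
X = np.array([0.1, 0.9])                   # initial evaluations
y = cost(X)
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    nxt = grid[np.argmin(mu - 2.0 * np.sqrt(var))]   # LCB acquisition
    X, y = np.append(X, nxt), np.append(y, cost(nxt))
best = X[np.argmin(y)]
```

Each iteration spends one (here simulated) experiment where the surrogate is most promising, which is why such methods need far fewer evaluations than grid or random search.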

Inverse Optimal Control (IOC) has strongly impacted the systems engineering process, enabling automated planner tuning through straightforward and intuitive demonstration. The most successful and established applications, though, have been in lower-dimensional problems such as navigation planning, where exact optimal planning or control is feasible. In higher-dimensional systems, such as humanoid robots, research has made substantial progress toward generalizing the ideas to model-free or locally optimal settings, but these systems are complicated to the point where demonstration itself can be difficult. Typically, real-world applications are restricted to noisy, partial, or incomplete demonstrations that prove cumbersome in existing frameworks. This work derives a very flexible method of IOC based on a form of Structured Prediction known as Direct Loss Minimization. The resulting algorithm is essentially Policy Search on a reward function that measures similarity to demonstrated behavior (using Covariance Matrix Adaptation (CMA) in our experiments). Our framework blurs the distinction between IOC, other forms of Imitation Learning, and Reinforcement Learning, enabling us to derive simple, versatile, and practical algorithms that blend imitation and reinforcement signals into a unified framework. Our experiments analyze various aspects of its performance and demonstrate its efficacy in conveying preferences for motion shaping and combined reach-and-grasp quality optimization.
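The "policy search on an imitation reward" view can be sketched with a toy one-parameter planner. A simplified population-based evolution strategy stands in for CMA here, and the rollout model and reward are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, T=20):
    """Toy parameterized behaviour: a trajectory fully determined by
    the single parameter theta (a stand-in for planner cost weights)."""
    return theta * np.arange(1, T + 1)

demo = rollout(0.35)                        # demonstrated behaviour

def imitation_reward(theta):
    """Reward similarity to the demonstration (negative tracking loss)."""
    return -np.mean((rollout(theta) - demo) ** 2)

# Simplified (mu, lambda) evolution strategy standing in for CMA-ES:
mean, sigma = 0.0, 0.5
for _ in range(30):
    cand = mean + sigma * rng.standard_normal(8)         # sample population
    rewards = np.array([imitation_reward(c) for c in cand])
    elite = cand[np.argsort(rewards)[-4:]]               # keep best half
    mean, sigma = elite.mean(), max(elite.std(), 0.02)   # adapt distribution
print(round(mean, 2))
```

The search recovers the demonstrated parameter without ever differentiating through the planner, which is what lets the same machinery blend imitation rewards with ordinary reinforcement signals.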

In Proceedings of the American Control Conference, July 2015 (inproceedings)

Abstract

This paper presents an LMI-based synthesis procedure for distributed event-based state estimation. Multiple agents observe and control a dynamic process by sporadically exchanging data over a broadcast network according to an event-based protocol. In previous work [1], the synthesis of event-based state estimators is based on a centralized design. In that case three different types of communication are required: event-based communication of measurements, periodic reset of all estimates to their joint average, and communication of inputs. The proposed synthesis problem eliminates the communication of inputs as well as the periodic resets (under favorable circumstances) by accounting explicitly for the distributed structure of the control system.

Detecting and identifying the different objects in an image fast and reliably is an important skill for interacting with one's environment. The main problem is that, in theory, all parts of an image have to be searched for objects on many different scales to make sure that no object instance is missed. It takes considerable time and effort, however, to actually classify the content of a given image region, and both the time and the computational capacity that an agent can spend on classification are limited. Humans use a process called visual attention to quickly decide which locations of an image need to be processed in detail and which can be ignored. This allows us to deal with the huge amount of visual information and to employ the capacities of our visual system efficiently.

For computer vision, researchers have to deal with exactly the same problems, so learning from the behaviour of humans provides a promising way to improve existing algorithms. In the presented master's thesis, a model is trained with eye-tracking data recorded from 15 participants who were asked to search images for objects from three different categories. It uses a deep convolutional neural network to extract features from the input image that are then combined to form a saliency map. This map provides information about which image regions are interesting when searching for the given target object and can thus be used to reduce the parts of the image that have to be processed in detail. The method is based on a recent publication by Kümmerer et al., but in contrast to the original method, which computes general, task-independent saliency, the presented model is supposed to respond differently when searching for different target categories.
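A minimal sketch of the task-dependent readout described above: deep feature maps are combined linearly into a saliency map, with a separate weight vector per target category. The feature maps and weights below are random stand-ins for the trained network, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 32, 32, 8
features = rng.random((H, W, C))            # stand-in deep feature maps

def saliency(features, w):
    """Linear combination of feature channels, normalized with a
    softmax over locations into a fixation-probability map."""
    s = features @ w                        # (H, W) combined response
    e = np.exp(s - s.max())
    return e / e.sum()

w_car = rng.standard_normal(C)              # category-specific readouts
w_dog = rng.standard_normal(C)
map_car = saliency(features, w_car)
map_dog = saliency(features, w_dog)
```

The same features yield a different map per category, which is exactly the task dependence that distinguishes the model from a generic, task-independent saliency predictor.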

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract

We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. The results show that the metric based on physics simulation is a more consistent predictor for grasp success than the standard ε-metric. The results also support the hypothesis that human labels are not required for good ground truth grasp data. Instead the physics-metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics-metric are less noisy than those from the ε-metric and therefore lead to a better classification performance.

For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial for successfully controlling its motion. Often, pose estimates can be acquired from encoders inside the arm, but these can be significantly inaccurate, which makes additional techniques necessary.

In this master's thesis, a novel approach to robot arm pose estimation is presented that works on single depth images without the need for prior foreground segmentation or other preprocessing steps. A random regression forest is used, which is trained only on synthetically generated data. The approach improves on former work by Bohg et al. by considerably reducing the computational effort at both training and test time. The forest in the new method directly estimates the desired joint angles, while in the former approach the forest casts 3D position votes for the joints, which then have to be clustered and fed into an iterative inverse kinematics process to finally obtain the joint angles.

To improve the estimation accuracy, the standard objective of the forest training is replaced by a specialized function that makes use of a model-dependent distance metric called DISP. Experimental results show that the specialized objective indeed improves pose estimation, and that the method, despite being trained on synthetic data only, provides reasonable estimates for real data at test time.

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract

An event-based communication framework for remote operation of a robot via a bandwidth-limited network is proposed. The robot sends state and environment estimation data to the operator, and the operator transmits updated control commands or policies to the robot. Event-based communication protocols are designed to ensure that data is transmitted only when required: the robot sends new estimation data only if this yields a significant information gain at the operator, and the operator transmits an updated control policy only if this comes with a significant improvement in control performance. The developed framework is modular and can be used with any standard estimation and control algorithms. Simulation results of a robotic arm highlight its potential for an efficient use of limited communication resources, for example, in disaster response scenarios such as the DARPA Robotics Challenge.
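The "significant information gain" trigger can be illustrated for Gaussian beliefs: the robot transmits its estimate only if the operator's outdated belief has drifted far from its own, measured by KL divergence. The threshold value and the 1-D Gaussian beliefs are illustrative assumptions, not the paper's protocol.

```python
import math

def gauss_kl(mu0, var0, mu1, var1):
    """KL divergence KL(N(mu0, var0) || N(mu1, var1)) for 1-D Gaussians."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1
                  - 1.0 + math.log(var1 / var0))

def should_send(robot_belief, operator_belief, threshold=0.1):
    """Transmit only if the update would carry significant information."""
    return gauss_kl(*robot_belief, *operator_belief) > threshold

print(should_send((0.0, 1.0), (0.0, 1.0)))   # identical beliefs: no event
print(should_send((1.5, 1.0), (0.0, 1.0)))   # large drift: transmit
```

The same pattern applies in the other direction for control commands: the operator sends an updated policy only if its predicted performance improvement exceeds a threshold.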

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract

Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form. For nonparametric filters, such as the Particle Filter, the converse holds: such methods are able to approximate any posterior, but the computational requirements scale exponentially with the number of dimensions of the state space. In this paper, we present the Coordinate Particle Filter, which alleviates this problem. We propose to compute the particle weights recursively, dimension by dimension. This allows us to explore one dimension at a time and resample after each dimension if necessary. Experimental results on simulated as well as real data confirm that the proposed method has a substantial performance advantage over the Particle Filter in high-dimensional systems where not all dimensions are highly correlated. We demonstrate the benefits of the proposed method for the problem of multi-object and robotic-manipulator tracking.
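A hedged toy of the dimension-by-dimension idea: weights are computed for one state dimension at a time, with a resampling step after each dimension. It assumes, for illustration only, a likelihood that factorizes over dimensions; the actual filter also uses per-dimension proposals rather than a fixed particle set.

```python
import numpy as np

rng = np.random.default_rng(0)

def coordinate_update(particles, obs, noise=0.3):
    """Weight and resample the particle set one dimension at a time."""
    for d in range(particles.shape[1]):
        # likelihood of dimension d only
        w = np.exp(-0.5 * ((particles[:, d] - obs[d]) / noise) ** 2)
        w /= w.sum()
        # resample before moving on to the next dimension
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]
    return particles

particles = rng.uniform(-3.0, 3.0, size=(500, 2))
posterior = coordinate_update(particles, np.array([1.0, -2.0]))
print(np.round(posterior.mean(axis=0), 1))
```

Because each resampling step only has to concentrate the particles along one axis, the number of particles needed grows far more slowly with dimension than for a joint weight over all axes at once.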

The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS have not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergies and muscle synergies communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture-control experiments involving lateral disturbances with 9 healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing a prosthesis's sensory system to keep the controller simple.
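The dimensionality-reduction step described above can be sketched with PCA: muscle-length changes are projected onto a small number of leading components, which play the role of sensory synergies. The synthetic data with a planted two-dimensional structure below stands in for the SIMM-estimated muscle lengths.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic muscle-length changes driven by a 2-D latent signal.
latent = rng.standard_normal((200, 2))            # low-dimensional drive
mixing = rng.standard_normal((2, 10))             # 10 muscles
lengths = latent @ mixing + 0.01 * rng.standard_normal((200, 10))

# PCA via SVD: the leading components are the candidate synergies.
X = lengths - lengths.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
synergies = vt[:2]                                # two sensory synergies
print(round(float(explained[:2].sum()), 3))
```

When two components explain nearly all the variance, as in the posture-control data, the high-dimensional proprioceptive input can be summarized by two signals before reaching the CNS.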

For robots to manipulate in unknown and unstructured environments, the robot should be capable of operating under partial observability of the environment. Object occlusions and unmodeled environments are some of the factors that result in partial observability. A common scenario where this is encountered is manipulation in clutter. If the robot needs to locate an object of interest and manipulate it, it must perform a series of decluttering actions to accurately detect the object of interest. To perform such a series of actions, the robot also needs to account for the dynamics of objects in the environment and how they react to contact. This is a nontrivial problem, since one needs to reason not only about robot-object interactions but also about object-object interactions in the presence of contact. In the example scenario of manipulation in clutter, the state vector would have to account for the pose of the object of interest and the structure of the surrounding environment. The process model would have to account for all the aforementioned robot-object and object-object interactions. The complexity of the process model grows exponentially as the number of objects in the scene increases, which is commonly the case in unstructured environments. Hence it is not reasonable to attempt to model all object-object and robot-object interactions explicitly. Under this setting, we propose a hypothesis-based action-selection algorithm: we construct a hypothesis set of the possible poses of an object of interest given the current evidence in the scene and select actions based on the current set of hypotheses. This hypothesis set represents the belief about the structure of the environment and the poses the object of interest can take. The agent's only stopping criterion is that the uncertainty regarding the pose of the object is fully resolved.
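A hedged toy of the hypothesis-based action-selection idea: maintain a set of candidate poses for the object of interest and greedily pick the sensing/decluttering action expected to rule out the most hypotheses. The grid cells, action set, and true pose below are illustrative stand-ins, not the paper's formulation.

```python
# Candidate object poses (cells of a toy 1-D grid) and the ground
# truth, which is unknown to the agent.
hypotheses = set(range(10))
true_pose = 7
actions = [set(range(0, 5)), set(range(5, 10)),
           set(range(3, 8))] + [{i} for i in range(10)]

def expected_elimination(hyps, action):
    """Expected number of hypotheses ruled out by checking `action`,
    averaging over whether the object is found there or not."""
    inside = len(hyps & action)
    p = inside / len(hyps)                 # chance the object is inside
    return p * (len(hyps) - inside) + (1 - p) * inside

steps = 0
while len(hypotheses) > 1:                 # stop once the pose is resolved
    a = max(actions, key=lambda s: expected_elimination(hypotheses, s))
    seen = true_pose in a                  # simulated observation
    hypotheses = hypotheses & a if seen else hypotheses - a
    steps += 1
print(hypotheses, steps)
```

Either outcome of an action shrinks the hypothesis set, so the loop terminates at the true pose without ever modeling the object-object dynamics explicitly.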

The Gaussian Filter (GF) is one of the most widely used filtering algorithms; instances are the Extended Kalman Filter, the Unscented Kalman Filter and the Divided Difference Filter. GFs represent the belief of the current state by a Gaussian with the mean being an affine function of the measurement. We show that this representation can be too restrictive to accurately capture the dependencies in systems with nonlinear observation models, and we investigate how the GF can be generalized to alleviate this problem. To this end we view the GF from a variational-inference perspective, and analyze how restrictions on the form of the belief can be relaxed while maintaining simplicity and efficiency. This analysis provides a basis for generalizations of the GF. We propose one such generalization which coincides with a GF using a virtual measurement, obtained by applying a nonlinear function to the actual measurement. Numerical experiments show that the proposed Feature Gaussian Filter (FGF) can have a substantial performance advantage over the standard GF for systems with nonlinear observation models.
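A hedged sketch of the virtual-measurement idea: the same moment-matched affine update is applied either to the raw measurement or to a nonlinear feature of it. The toy cubic observation model, the cube-root feature, and the sample-based moment estimates are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_update(x_samples, y_samples, y_obs):
    """Moment-matched affine update, the core of a Gaussian Filter:
    E[x | y] ~= mu_x + C_xy / C_yy * (y_obs - mu_y)."""
    mu_x, mu_y = x_samples.mean(), y_samples.mean()
    c_xy = np.mean((x_samples - mu_x) * (y_samples - mu_y))
    return mu_x + c_xy / np.var(y_samples) * (y_obs - mu_y)

x = rng.normal(1.0, 0.5, 10_000)             # samples from the prior
y = x ** 3 + rng.normal(0.0, 0.01, 10_000)   # nonlinear observation model

x_true = 1.4
y_obs = x_true ** 3

plain = gaussian_update(x, y, y_obs)                      # standard GF
feature = gaussian_update(x, np.cbrt(y), np.cbrt(y_obs))  # virtual measurement
print(round(float(plain), 2), round(float(feature), 2))
```

Because the cube root of the measurement is nearly affine in the state here, the update on the virtual measurement recovers the true state far more accurately than the update on the raw measurement.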

In Emergent Trends in Robotics and Intelligent Systems: Where is the Role of Intelligent Technologies in the Next Generation of Robots?, pages: 31-38, Springer International Publishing, Cham, 2015 (inbook)

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.