Motion Generation

Motion Generation is concerned with the planning and execution of motion tasks while avoiding collisions. We are especially interested in motion generation for mobile manipulators operating in the real world.

Mobile manipulation takes place in unstructured environments, which still pose a challenge for motion generation methods. Detailed and reliable models of unstructured environments are usually not available. The robot has to perceive the world with its on-board sensors to obtain the information needed for collision-free motion and successful execution of manipulation tasks. To do so efficiently, it needs to 1) exploit rich feedback from diverse sensor modalities, 2) reason efficiently about the uncertainty present in the environment, and 3) balance exploration with exploitation.

Multi-modal Control for Manipulation

We want to find methods for motion generation that leverage the full perceptual capabilities of mobile manipulators. These methods should integrate visual and force perception to perceive the world while moving in it. Current motion planning methods do not incorporate the high-dimensional information of currently available sensors and thus are limited in robustness. To overcome this, we shift the boundary between planning and control. Giving more responsibility to control allows us to leverage the inherent capability of controllers: dealing with dynamic situations.

Visual servo control defines tight perception-action loops on task-relevant visual features. Force control provides complex manipulation skills using force feedback. We want to find these loops and use them for motion generation. We believe that using recursive estimation, a robot can learn multi-modal servo controllers while manipulating the world. These controllers adapt to the available information and the task, and thus provide more general and robust motion skills.
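As a concrete illustration of such a tight perception-action loop, the classical image-based visual servoing law drives a feature error to zero through the pseudoinverse of the interaction matrix. This is a textbook sketch, not our learned multi-modal controllers; the features, gain, and interaction matrix below are illustrative placeholders:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical image-based visual servoing: v = -gain * L^+ (s - s*).

    s      -- current visual feature vector
    s_star -- desired feature vector
    L      -- interaction matrix (image Jacobian) relating camera
              velocity to feature velocity
    """
    # The pseudoinverse handles non-square interaction matrices.
    return -gain * np.linalg.pinv(L) @ (s - s_star)

# Toy example: a single point feature with an identity interaction
# matrix, so the commanded velocity points straight at the feature goal.
v = ibvs_velocity(np.array([0.2, -0.1]), np.zeros(2), np.eye(2))
```

Running the controller in a loop, with features re-measured at every step, yields exactly the kind of continuous feedback behavior the text describes.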

Motion Generation under Uncertainty

A mobile manipulator in unstructured environments needs to account for uncertainty due to noisy sensors with limited range and partially unknown, dynamic environments. General algorithms that plan under uncertainty are infeasible to use in real applications due to their high complexity. Feedback controllers provide many desirable qualities for planning under uncertainty. Controllers continuously integrate new sensor data using task-relevant modalities. These local controllers implicitly reduce uncertainty about the robot's state in the environment, so much so that we do not need to consider it during planning. A source of uncertainty we cannot ignore is the environment itself. We propose a time-dependent probabilistic model for the uncertain state of the world. In the case of an environment composed of static and moving obstacles, we can solve planning problems in polynomial time by solving an Expected Shortest Path problem. The solution to this problem is a motion policy for the robot that incorporates goal-directed sensing actions and task-consistent motion.
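To give a flavor of the Expected Shortest Path idea, the sketch below reduces each uncertain edge to its expected traversal cost and runs Dijkstra's algorithm on the result. This is a deliberate simplification of the actual formulation, which produces a policy with sensing actions rather than a fixed path; the graph and probabilities here are invented for illustration:

```python
import heapq

def expected_shortest_path(graph, start, goal):
    """Dijkstra on expected edge costs.

    graph maps node -> list of (neighbor, [(cost, prob), ...]), where
    each edge carries a discrete distribution over traversal costs,
    e.g. a passage that is blocked with some probability.
    """
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry
        for nbr, outcomes in graph[node]:
            # Collapse the distribution to its expectation.
            exp_cost = sum(c * p for c, p in outcomes)
            nd = d + exp_cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float('inf')

# A door that is blocked with probability 0.5 (expected cost 5.5)
# competes with a longer but certain route through the hall (cost 5).
g = {
    'start': [('door', [(1.0, 0.5), (10.0, 0.5)]), ('hall', [(4.0, 1.0)])],
    'door':  [('goal', [(1.0, 1.0)])],
    'hall':  [('goal', [(1.0, 1.0)])],
    'goal':  [],
}
```

In this toy instance the planner prefers the certain detour over the risky shortcut, which is the qualitative behavior the expected-cost criterion buys.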

Motion Planning

Motion Planning is the task of finding a way to move a robot from one position to another while avoiding obstacles. To do this, we need to find a trajectory in the space of all possible robot configurations. For a robot with several joints, this search space is high-dimensional and very complex. Even for simple two-dimensional spaces, general motion planning is NP-hard.
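Sampling-based planners sidestep this complexity by probing the configuration space with random samples. A minimal rapidly-exploring random tree (RRT) for a 2D point robot, shown here as a generic baseline rather than one of our planners, looks like this:

```python
import math
import random

def rrt(start, goal, is_free, max_iters=2000, step=0.05, goal_bias=0.1):
    """Minimal RRT for a point robot in the unit square.

    is_free(q) reports whether configuration q is collision-free;
    a real planner would query a proper collision checker here.
    """
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        if random.random() < goal_bias:
            target = goal
        else:
            target = (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        d = math.dist(nodes[i], target)
        if d == 0.0:
            continue
        # Step a bounded distance from the nearest node toward the target.
        t = min(1.0, step / d)
        new = (nodes[i][0] + t * (target[0] - nodes[i][0]),
               nodes[i][1] + t * (target[1] - nodes[i][1]))
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:
            path, j = [], len(nodes) - 1
            while j is not None:  # walk back up the tree
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# Unit square with one circular obstacle in the middle.
free = lambda q: math.dist(q, (0.5, 0.5)) > 0.2
path = rrt((0.1, 0.1), (0.9, 0.9), free)
```

The uniform random targets are pure exploration; the occasional goal-biased target is a small dose of exploitation, which motivates the balance discussed next.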

We think that a successful and fast motion planning algorithm has to balance between two general planning strategies: Exploration and exploitation. Exploration tries to obtain information about the connectivity of the space without considering any specific goal. Exploitation tries to find a valid path to a specific goal, using the available information.
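One simple way to realize this balance, shown here as an illustrative schedule rather than the mechanism our planners actually use, is to raise the probability of exploring as goal-directed extensions keep failing:

```python
import random

def explore_probability(failure_streak, base=0.1, growth=0.2):
    """Probability of exploring instead of exploiting.

    Rises with consecutive failures of goal-directed extension;
    the coefficients are illustrative, not tuned values.
    """
    return min(1.0, base + growth * failure_streak)

def choose_strategy(failure_streak, rng=random.random):
    """Pick 'explore' or 'exploit' for the next planning step."""
    if rng() < explore_probability(failure_streak):
        return 'explore'
    return 'exploit'

# After five consecutive failures the planner explores with certainty.
```

The planner thus exploits while progress is cheap and falls back to exploration exactly where the problem turns out to be hard.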

In our motion planners, we use several sources of information to balance exploration and exploitation. Utility-guided sampling applies guided exploration by choosing samples with maximal expected utility. This approach gives us a good understanding of connectivity using far fewer samples than other approaches. Disassembly planning uses the 3D-workspace connectivity to identify the regions of the configuration space where a detailed search is needed. This connectivity can be obtained by expanding the workspace with a tree of bubbles. The same workspace information is used in our Exploring/Exploiting Tree. In easy regions we use the workspace information as a navigation function to drag the robot along the spheres. If this approach fails in hard regions, we gradually shift to exploration.
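To sketch the flavor of utility-guided sampling, the snippet below scores candidate samples by their distance to all existing samples. This distance heuristic is a crude stand-in for the actual expected-utility model, which estimates how much a sample would improve the planner's picture of free-space connectivity:

```python
import math
import random

def utility_guided_sample(samples, n_candidates=20):
    """Among random candidates in the unit square, pick the one
    farthest from every existing sample -- a simple proxy for
    expected information gain about connectivity.
    """
    candidates = [(random.random(), random.random())
                  for _ in range(n_candidates)]

    def utility(c):
        if not samples:
            return 1.0
        return min(math.dist(c, s) for s in samples)

    return max(candidates, key=utility)

# Each new sample tends to land in the least-covered region so far.
samples = [(0.5, 0.5)]
for _ in range(5):
    samples.append(utility_guided_sample(samples))
```

Even this crude utility spreads samples over the space quickly, which is why a well-chosen utility model can cover connectivity with so few samples.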