Learning motion primitive goals for robust manipulation

2011

Conference Paper


Applying model-free reinforcement learning to manipulation remains challenging for several reasons. First, manipulation involves physical contact, which causes discontinuous cost functions. Second, in manipulation, the end-point of the movement must be chosen carefully, as it represents a grasp which must be adapted to the pose and shape of the object. Finally, there is uncertainty in the object pose, and even the most carefully planned movement may fail if the object is not at the expected position.
To address these challenges we 1) present a simplified, computationally more efficient version of our model-free reinforcement learning algorithm PI2; 2) extend PI2 so that it simultaneously learns shape parameters and goal parameters of motion primitives; 3) use shape and goal learning to acquire motion primitives that are robust to object pose uncertainty. We evaluate these contributions on a manipulation platform consisting of a 7-DOF arm with a 4-DOF hand.
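The core idea of a PI2-style update, simultaneously perturbing shape and goal parameters and combining rollouts by cost-weighted averaging, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the noise model, and all constants are illustrative assumptions, and `rollout_cost` stands in for executing the motion primitive and measuring task cost:

```python
import numpy as np

def pi2_update(theta, goal, rollout_cost, n_rollouts=10,
               noise_std=0.05, temperature=10.0):
    """One black-box PI2-style iteration over the concatenated shape
    parameters (theta) and goal parameters (goal) of a motion primitive.

    rollout_cost: user-supplied function (theta, goal) -> scalar cost.
    All names and constants here are illustrative, not the paper's code.
    """
    params = np.concatenate([theta, goal])
    # Sample exploration noise for each rollout over ALL parameters,
    # so shape and goal are perturbed (and hence learned) jointly.
    eps = np.random.randn(n_rollouts, params.size) * noise_std
    costs = np.array([
        rollout_cost(params[:theta.size] + e[:theta.size],
                     params[theta.size:] + e[theta.size:])
        for e in eps
    ])
    # Cost-weighted averaging: softmax over normalized negative costs,
    # so low-cost rollouts dominate the parameter update.
    c = (costs - costs.min()) / (costs.max() - costs.min() + 1e-10)
    w = np.exp(-temperature * c)
    w /= w.sum()
    new_params = params + w @ eps
    return new_params[:theta.size], new_params[theta.size:]
```

Iterating this update drives the parameters toward low-cost regions without needing gradients of the (possibly discontinuous) cost, which is why such probability-weighted averaging is attractive for contact-rich manipulation.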
