The Curious Robot

Many approaches to robot learning observe the human in order to emulate or otherwise learn from human behavior. However, humans often do not know how to interact with the robot, and visual analysis of human behavior is hard. In the Curious Robot scenario, we have therefore turned the usual paradigm around: the robot asks the human questions about objects, its environment, and so on.

Multi-Modal Grasp-Learning

The latest iteration of the Curious Robot scenario focuses on two issues: 1) more intuitive grasp teaching and 2) continuous feedback on the interaction state. Both aspects featured prominently in our previous interaction studies, and adding them has been much anticipated. Preliminary tests have been quite successful, and we are currently preparing more in-depth studies.

For grasp teaching, we use a CyberGlove II hand-posture sensor, which lets people demonstrate a grasp naturally -- by simply performing it. Grasps are currently categorized into two types: power grasp and precision grip.
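To make the two grasp categories concrete, here is a minimal, illustrative sketch (not the project's actual code) of how a hand posture could be labeled as a power grasp or a precision grip. It assumes the glove's 22 joint-angle sensors have already been reduced to one normalized flexion value per finger in [0, 1]; the function name and threshold are hypothetical.

```python
def classify_grasp(flexion, threshold=0.5):
    """Label a hand posture from per-finger flexion values.

    flexion: dict mapping finger name ("thumb", "index", "middle",
    "ring", "little") to a normalized flexion value in [0, 1].
    """
    # Thumb and index flexed is the minimal condition for any grasp.
    thumb_index_flexed = (flexion["thumb"] > threshold
                          and flexion["index"] > threshold)
    # In a power grasp the remaining fingers wrap around the object too.
    others_flexed = all(flexion[f] > threshold
                        for f in ("middle", "ring", "little"))
    if thumb_index_flexed and others_flexed:
        return "power grasp"      # whole hand encloses the object
    if thumb_index_flexed:
        return "precision grip"   # object pinched between thumb and index
    return "no grasp"

# Example postures:
print(classify_grasp({"thumb": 0.8, "index": 0.9, "middle": 0.85,
                      "ring": 0.8, "little": 0.7}))   # power grasp
print(classify_grasp({"thumb": 0.7, "index": 0.8, "middle": 0.2,
                      "ring": 0.1, "little": 0.1}))   # precision grip
```

In practice such a decision would be learned from demonstration data rather than thresholded by hand, but the sketch shows what the two categories mean in terms of finger posture.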


Mixed-Initiative in Object Learning

The video below shows the experimental setup, with the large PA-10 arm and the Shadow hand in the foreground and the humanoid torso BARTHOC in the background.
We are addressing several different questions with this scenario. The first, published at ICRA 2009, concerns how to give the human appropriate guidance [1], which is not at all obvious.

Other questions under investigation include system architecture, behavior modeling, and vision for interaction.

[Video: experimental setup with the PA-10 arm, the Shadow hand, and BARTHOC]