Back-to-School With the PR2

Submitted by admin on Mon, 09/19/2011 - 11:31

Given how pervasive the PR2 has become in academic institutions around the world, we thought it might be worth checking in with our PR2 community to see what research plans they have in place for the coming year.
As always, we're inspired by the innovative research under way, but we were frankly surprised by the breadth and ambition of these initiatives. Given our goal of catalyzing the personal robotics industry, the more R&D underway, the better.
The following brief descriptions provide some insight into personal robot applications in the not-too-distant future. These include household tasks such as laundry and clean-up; robot-to-robot cooperation; navigation within human environments; and even dancing and pet-sitting.

In the next year, the team at Freiburg will continue the TidyUp project and begin the integration process. Currently, they are working on clearing items from tables and returning them to where they belong. The robots will wipe the tables, and perhaps other furniture such as shelves. They also plan to have the robot learn from human table settings so it can set the table for a selected number of people attending a meal.

During the upcoming year, Bosch plans to continue pursuing both hardware and software developments. They plan to continue development of their proximity sensor for safe teleoperation in dynamic environments, and to create Web interfaces that can be used for multiple tasks, as well as for different robots, without additional coding. As part of their efforts on shared autonomy, Bosch plans to conduct a user study comparing different manipulation assistance interfaces, and to release additional packages for shared autonomy task planning. Together with TUM, Bosch will release a pipeline for autonomous semantic mapping.

Along with newly-recruited faculty Gabe Sibley, Professor Evan Drumwright will be co-teaching a class this Fall on Autonomous Robots using the PR2 as their platform of focus. Students will propose and carry out projects in the class related to a theme. The theme of the projects this semester will be getting the robot to perform tasks that aid in dog-sitting. Pets are important human companions: we don't like leaving them in a kennel while we are away, and it is hard to find someone you trust to watch your pet at your home. Also, it's a damn hard thing for a robot to do!

During this second year of the beta program, MIT's goal is integration. The key objective is to be able to find objects that are out of sight, which includes moving other objects out of the way and opening doors.

This will require the team to integrate their hierarchical task-level planner, which plans in belief space, with their state estimation algorithm, visibility modeling, RRT* motion planner and object localization system to demonstrate planning involving information gathering.

In the upcoming academic year, Stanford will use the PR2 to research methods to increase the productivity of robot teleoperators. They will investigate interaction modalities and user interfaces that combine autonomous execution of high-performing subsystems (e.g., robotic navigation) with human supervision of subsystems with lower success rates (e.g., correcting automatically-generated "garbage" or "not garbage" labels of point cloud clusters in a clean-up task). They anticipate that such interfaces will allow temporal, as well as spatial, separation between the teleoperator and the robot, with potential to dramatically increase teleoperator productivity on tasks currently too difficult to fully automate.
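The division of labor Stanford describes can be sketched very simply: act autonomously on high-confidence labels and queue the low-confidence ones for the human supervisor. The function and threshold below are illustrative assumptions, not Stanford's actual code.

```python
# Hypothetical sketch of the shared-autonomy idea: the robot proceeds on
# high-confidence "garbage"/"not garbage" labels, while low-confidence
# point-cloud clusters go into a queue for a teleoperator to correct later.

def triage_clusters(clusters, confidence_threshold=0.9):
    """Split labeled clusters into autonomous actions and a human review queue."""
    autonomous, needs_review = [], []
    for cluster in clusters:
        if cluster["confidence"] >= confidence_threshold:
            autonomous.append(cluster)      # robot acts on its own
        else:
            needs_review.append(cluster)    # waits for human correction
    return autonomous, needs_review

# Example: three clusters with automatically generated labels.
clusters = [
    {"id": 1, "label": "garbage", "confidence": 0.97},
    {"id": 2, "label": "not garbage", "confidence": 0.55},
    {"id": 3, "label": "garbage", "confidence": 0.92},
]
auto, review = triage_clusters(clusters)
print([c["id"] for c in auto])    # handled autonomously
print([c["id"] for c in review])  # routed to the teleoperator
```

Because the review queue can be worked through at any time, the teleoperator need not watch the robot live, which is exactly the temporal separation the Stanford team anticipates.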

With very robust results in place for folding towels and sorting socks, and promising results for folding t-shirts, pants and sweaters, UC Berkeley will continue to focus on enabling the PR2 to perform the entire laundry task: from a basket of dirty laundry, through washing, drying, and folding or hanging, to putting the articles away. UC Berkeley will also continue to work on (rigid) object instance detection, and investigate push-grasps under uncertainty.

Researchers at the GRASP Lab at Penn recently added two microphone "ears" to their PR2 and posted their methods on the hardware mods list. They are now working on various ways to use audio input to enable Graspy to do interesting things. One thrust is to adapt work they have been doing for the DARPA ARM-S project to work on the PR2. The team at Penn has written ROAR, the ROS Opensource Audio Recognizer. ROAR enables the user to easily train a one-class SVM to recognize an important sound that might arise, intentionally or unintentionally, during the execution of an action, such as a handheld drill turning on or an object being knocked over.
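The core idea behind ROAR can be sketched in a few lines: summarize audio frames as feature vectors, learn from positive examples only what the target sound "looks like", and flag new frames that match. ROAR itself trains a one-class SVM; this toy stand-in substitutes a nearest-centroid detector and synthetic waveforms, so every function name and number here is illustrative, not ROAR's actual API.

```python
import math

def features(frame):
    """Two crude audio features: RMS energy and zero-crossing rate."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return (rms, zcr)

def train(frames, margin=1.5):
    """Fit a centroid plus an acceptance radius from positive examples only."""
    feats = [features(f) for f in frames]
    centroid = tuple(sum(v) / len(v) for v in zip(*feats))
    radius = max(math.dist(f, centroid) for f in feats) * margin
    return centroid, radius

def detect(frame, centroid, radius):
    """True if the frame's features fall inside the learned region."""
    return math.dist(features(frame), centroid) <= radius

def tone(freq, n=1000, amp=1.0):
    """Synthetic sine-wave 'sound' with freq cycles per n samples."""
    return [amp * math.sin(2 * math.pi * freq * i / n) for i in range(n)]

# "Train" on a few loud high-frequency recordings (a stand-in for a drill
# turning on), then test a similar sound and a quiet low-frequency one.
model = train([tone(180, amp=0.9), tone(200, amp=1.0), tone(190, amp=0.95)])
print(detect(tone(195, amp=0.97), *model))  # similar sound: accepted
print(detect(tone(10, amp=0.1), *model))    # different sound: rejected
```

The appeal of the one-class formulation, which this sketch shares, is that you only need examples of the sound you care about, not of every other noise in the lab.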
Penn is also currently working on a demo that will make the PR2 move in interesting ways ("dance") when you play various musical instruments. They are also doing work on physical human-robot interaction, building on the PR2-props code, which enables the PR2 to give high-fives and fist bumps. Other researchers at Penn are working on new methods for teleoperating mobile manipulator robots. They have code for providing quality vibrotactile feedback from the accelerometer in the robot's gripper, and are looking at various methods of measuring human arm movement and mapping it naturally to the robot.
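A key step in that kind of vibrotactile feedback is separating contact vibrations from everything else the accelerometer reports. One plausible sketch, with made-up signals and filter constants rather than Penn's actual pipeline, is a one-pole high-pass filter that strips the steady gravity offset and slow arm motion, leaving the high-frequency "buzz" of contact to drive the tactile display.

```python
# Illustrative sketch: high-pass filter a gripper accelerometer trace so
# only fast changes (contact vibrations) survive as the drive signal for
# a vibrotactile actuator. Signals and alpha are invented for the example.

def high_pass(samples, alpha=0.9):
    """One-pole high-pass: y[i] = alpha * (y[i-1] + x[i] - x[i-1])."""
    out, prev_x, prev_y = [], samples[0], 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant 9.81 m/s^2 gravity offset plus a brief contact "buzz":
signal = [9.81] * 5 + [9.81 + v for v in (0.5, -0.5, 0.5, -0.5)] + [9.81] * 5
drive = high_pass(signal)
print(max(abs(v) for v in drive[:5]))   # steady gravity is filtered out
print(max(abs(v) for v in drive[5:9]))  # the contact buzz passes through
```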

The JSK lab at the University of Tokyo has been using the PR2 robot to buy sandwiches at a local restaurant and deliver documents across offices. The technical issues they have been tackling include inter-floor navigation, on-site action learning, high-level task planning and compiling, iPad interfaces, and knowledge database integration. These efforts bring JSK one step closer to a real robot service application that can be used every day.
They have already been teaching a class on ROS, OpenRTM, OpenHRP, and OpenRAVE, which raised a lot of awareness of the PR2 Beta Program throughout the University of Tokyo. In the second semester, the JSK lab will tackle the difficulties of getting the PR2 and a humanoid robot to cooperate on a household task.

For the coming year, the Cognitive Robotics Group at Ulster plans mainly to support research related to the IM-CLeVeR European FP7 project. The acronym stands for Intrinsically Motivated Cumulative Learning Versatile Robots.

More specifically, the IM-CLeVeR project aims to design robots that cumulatively learn new, efficient skills through autonomous development based on intrinsic motivations, and that reuse those skills to accomplish multiple, complex, externally assigned tasks. In the attached image, the robot was engaged in a task of cumulatively learning the appearance of objects placed on a table. In the next term, they plan to move forward in the direction of skill building, by having the PR2 solve complex problems using either skills it is provided with or new skills that it learns "on demand".