Research Spotlight: Oberlin GS teaches robots how to lift

Computer science research aims to make robots more efficient at picking up objects

Through practice and play, John Oberlin GS is working to help robots learn how to pick up any number of household objects.

Oberlin, a PhD candidate in computer science, is collaborating with his adviser, Assistant Professor of Computer Science Stefanie Tellex. The duo aims to have their robots pick up one million objects.

The robots in their research are Baxter models built by Rethink Robotics, Inc., a robotics company based in Boston, Massachusetts. The lab’s two robots are named in honor of Brown’s mascot, Bruno the bear. They are dubbed Iorek — after the armored bear in Philip Pullman’s novel “The Golden Compass” — and Ursula, which comes from the Latin word for “little bear,” Oberlin said.

It is “tedious to train the robot how to pick up each object individually” because of the extensive human data entry involved, Oberlin said. Instead, the researchers want the robot to figure out on its own how to identify and pick up objects in general, eliminating the need for such data entry, he said.

To pick up an object, the robot must work out four things: that the objects in front of it are separate, what each object is, where it is and how it can be grasped, Oberlin said. In computer science terms, these problems are known as segmentation, classification, pose estimation and grasp planning, respectively, he said.
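The four stages form a pipeline: each one feeds the next. The following sketch shows how such a pipeline might be wired together; the function names, data structures and toy scene are illustrative assumptions, not the lab’s actual code.

```python
# Hypothetical sketch of the four-stage pickup pipeline:
# segmentation -> classification -> pose estimation -> grasp planning.
# All names and data structures here are illustrative.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # classification: what the object is
    position: tuple  # pose estimation: where it is (x, y on the table)
    grasp: str       # grasp planning: how to pick it up

def segment(scene):
    """Split the scene into separate object blobs (segmentation)."""
    return scene["objects"]  # assume the camera already isolates blobs

def classify(blob):
    """Identify the object from stored training images (classification)."""
    return blob["label"]

def estimate_pose(blob):
    """Locate the object on the table (pose estimation)."""
    return blob["position"]

def plan_grasp(label, position):
    """Choose a gripper approach for this object and pose (grasp planning)."""
    return f"top-down grasp of {label} at {position}"

def perceive(scene):
    """Run all four stages on a scene and return one Detection per object."""
    detections = []
    for blob in segment(scene):
        label = classify(blob)
        position = estimate_pose(blob)
        detections.append(Detection(label, position, plan_grasp(label, position)))
    return detections

scene = {"objects": [{"label": "mug", "position": (0.4, 0.2)}]}
print(perceive(scene)[0].grasp)  # top-down grasp of mug at (0.4, 0.2)
```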

During the learning process, the robot takes pictures of the object in order to recognize it again in the future, Oberlin said. The robot then practices different ways of picking up the object and remembers which strategies lead to the highest probability of success.
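Practicing grasps and remembering which ones succeed most often resembles a classic multi-armed bandit problem. The sketch below assumes such a formulation — each candidate grasp is an arm, and Thompson sampling steers practice toward the most promising strategies; the grasp names and success rates are invented for illustration, and this is not necessarily the lab’s method.

```python
# Hypothetical sketch: treat each candidate grasp as a bandit arm and keep
# a Beta(successes + 1, failures + 1) posterior per grasp. Thompson sampling
# picks which grasp to practice next, so attempts concentrate on strategies
# with the highest estimated probability of success.
import random

random.seed(0)

true_success = {"top": 0.9, "side": 0.4, "handle": 0.6}  # unknown to the robot
counts = {g: [1, 1] for g in true_success}  # [successes + 1, failures + 1]

for _ in range(500):
    # Draw a success rate from each grasp's posterior; try the best draw.
    grasp = max(counts, key=lambda g: random.betavariate(*counts[g]))
    if random.random() < true_success[grasp]:  # simulate the pickup attempt
        counts[grasp][0] += 1
    else:
        counts[grasp][1] += 1

# The robot "remembers" the strategy with the best observed success rate.
best = max(counts, key=lambda g: counts[g][0] / sum(counts[g]))
print(best)
```

After a few hundred practice attempts the posterior for the most reliable grasp dominates, so the robot spends most of its tries refining that strategy rather than repeating known failures.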

The robot also plays with the object to figure out the different “gravitationally stable configurations,” Oberlin said. For example, a mug may have three stable configurations: upright, upside-down and on its side. The robot attempts to determine the best way of picking up the object in each of these configurations.
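Once the robot has learned a grasp for each stable configuration, picking up the object reduces to recognizing which configuration it is in and looking up the matching strategy. A minimal sketch of that lookup, using the mug example above (the configuration names and grasp descriptions are illustrative assumptions):

```python
# Hypothetical sketch: map each gravitationally stable configuration of a
# mug to the best grasp learned for it. Names here are illustrative.
mug_grasps = {
    "upright": "grasp rim from above",
    "upside-down": "grasp base from above",
    "on-side": "grasp body from the side",
}

def pick_strategy(object_grasps, detected_configuration):
    """Return the grasp learned for the configuration the object is in."""
    if detected_configuration not in object_grasps:
        # An unfamiliar configuration means the robot needs more "play".
        raise ValueError("configuration not yet learned")
    return object_grasps[detected_configuration]

print(pick_strategy(mug_grasps, "upright"))  # grasp rim from above
```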

At this point, the robot switches between possible configurations by dropping the object. But in the future, the goal is for the robot to “intentionally transition the object between different states,” Oberlin said. As it takes time for the robot to learn about new objects and their possible configurations, Oberlin and Tellex aim to create a more “robust” robot that can deal with an object in a variety of possible configurations.

“Lighting is the most difficult thing to deal with in vision for robots,” Oberlin said.

Reflective objects pose a unique challenge for the robot: their appearance shifts so much with lighting that the robot may need to learn a separate visual configuration for each possible reflection, he said. As a result, a reflective object can require multiple stored configurations even while sitting in a single position.