Robot affordance learning with human interaction

Abstract

Many of the tasks current robots perform are trivial and pre-programmed. Enabling a robot to adapt its learning dynamically, through both human interaction and independent exploration, could help many groups of people, such as elders needing assistance, students receiving an education, and doctors needing medical assistants. The goal of my research was to compare two ways of learning, independent and human-guided, when a robot learns object affordances. This research can give insight into how robots can take advantage of human help to improve their learning. My main goal was to have the robot Simon learn affordances of different objects in its environment independently, and then to see how that learning benefited when the robot had human guidance. I first had Simon perform several actions on objects, recording the state of each object before the action, the action Simon actually performed, and the state of the object afterwards. The robot then used these data to predict the affordances of future objects. Once the robot was able to learn about its environment independently, I added human guidance in a second condition by presenting the objects as a human naturally would, based on previous work. The benefit of this human guidance was that the examples were much more balanced between positive and negative cases, leading to a more effective classifier. To test this, an experiment was conducted in which Simon performed slide and grasp actions on five different objects. There were two conditions for each action: systematic, in which Simon tried all possible configurations, and human-guided, in which the examples were more balanced. Results showed that the human-guided condition produced slightly more accurate predictions than the independent, systematic condition.
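The learning loop described above (record an object's pre-action state, the action performed, and the resulting effect, then predict the affordances of new objects from those records) can be sketched as a minimal toy. This is not the thesis code: the feature representation, the example objects, and the use of a nearest-neighbour predictor are all illustrative assumptions.

```python
# Illustrative sketch of affordance learning from (pre-state, action, effect)
# records. Hypothetical features and objects; 1-nearest-neighbour prediction
# stands in for whatever classifier the actual system used.
from collections import namedtuple

# effect: did the action succeed on this object? (the affordance label)
Trial = namedtuple("Trial", ["pre_state", "action", "effect"])

def distance(a, b):
    """Euclidean distance between two pre-state feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_affordance(trials, pre_state, action):
    """Predict whether `action` will succeed on an object in `pre_state`,
    using the nearest recorded trial of that same action."""
    candidates = [t for t in trials if t.action == action]
    if not candidates:
        return None  # no experience with this action yet
    nearest = min(candidates, key=lambda t: distance(t.pre_state, pre_state))
    return nearest.effect

# Hypothetical recorded trials: pre_state = (width_cm, height_cm)
trials = [
    Trial((5.0, 5.0), "grasp", True),    # small cube: graspable
    Trial((20.0, 4.0), "grasp", False),  # wide tray: too big to grasp
    Trial((5.0, 5.0), "slide", True),
    Trial((20.0, 4.0), "slide", True),   # flat objects still slide
]

print(predict_affordance(trials, (6.0, 5.0), "grasp"))   # True
print(predict_affordance(trials, (18.0, 4.0), "grasp"))  # False
```

Under this framing, the balance issue the abstract raises is visible directly: a systematic sweep of configurations can flood the trial set with failures (or successes) for one action, while human-guided presentation yields a more even mix of positive and negative examples for the predictor to draw on.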