Autonomous affordance learning typically requires a robot to learn the types of changes it can induce and detect in its environment. In sufficiently complex environments, however, the exact nature and number of environmental outcomes that the robot can produce through its behaviors cannot be known in advance. Moreover, the changes the robot can detect are often high-dimensional, which makes standard machine learning algorithms difficult to apply. This work addresses these challenges by proposing a framework in which the robot learns a taxonomy of the types of perceivable changes produced by its own behaviors. The proposed method also allows the robot to incrementally update the taxonomy and to conceptualize new types of observed outcomes. Furthermore, the robot solves a hierarchical classification task by learning a model that predicts the future outcomes of its behaviors relative to the learned taxonomy. The result is an affordance ontology consisting of an outcome-class taxonomy and a predictive model, both grounded in the robot's perceptual and behavioral repertoire.
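The core idea of incrementally conceptualizing new outcome classes can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it is a hypothetical distance-threshold scheme in which an observed outcome vector either updates the prototype of its nearest existing class or, if no class is close enough, founds a new class. All names and the threshold value are illustrative assumptions.

```python
import math

class OutcomeTaxonomy:
    """Hypothetical incremental taxonomy of outcome classes (illustrative only)."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold   # max distance to match an existing class
        self.prototypes = []         # one mean outcome vector per class
        self.counts = []             # observations absorbed by each class

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, outcome):
        """Assign an outcome to its nearest class, or conceptualize a new one."""
        if self.prototypes:
            d, k = min((self._dist(outcome, p), k)
                       for k, p in enumerate(self.prototypes))
            if d <= self.threshold:
                # incremental mean update of the matched prototype
                n = self.counts[k]
                self.prototypes[k] = [(p * n + o) / (n + 1)
                                      for p, o in zip(self.prototypes[k], outcome)]
                self.counts[k] = n + 1
                return k
        # outcome is unlike any known class: create a new class for it
        self.prototypes.append(list(outcome))
        self.counts.append(1)
        return len(self.prototypes) - 1

taxonomy = OutcomeTaxonomy(threshold=0.5)
a = taxonomy.observe([0.0, 0.0])   # first observation founds class 0
b = taxonomy.observe([0.1, 0.0])   # near class 0, so it is reused
c = taxonomy.observe([5.0, 5.0])   # far from class 0, founds class 1
print(a, b, c)                     # 0 0 1
```

A predictive model in the spirit of the abstract would then map (behavior, initial percept) pairs to these discovered class indices, so that the robot can anticipate which outcome class a behavior will produce before executing it.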

Subjects: 17. Robotics; 19.1 Perception

Submitted: Apr 7, 2008
