This paper considers ways in which a person can cue and constrain an artificial agent's attention to salient features. In one experiment, a person uses gestures to direct an otherwise autonomous robot hand through a known task; each gesture instantiates the key spatial and intentional features for the task at that moment in time. In a second experiment, which is work in progress, a person will use speech and gesture to assist an "intelligent room" in learning to recognize the objects in its environment. In this case, the robot (the room) will take both direction and correction signals from the person and use them to tune its feature saliency map and limit its search space.
