Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Robot assistants need to interact with people in a natural way in order to be
accepted into people’s day-to-day lives. We have been researching robot assistants
with capabilities that include visually tracking humans in the environment,
identifying the context in which humans carry out their activities, understanding
spoken language (with a fixed vocabulary), participating in spoken dialogs to resolve
ambiguities, and learning task procedures. In this paper, we describe a robot
task learning algorithm in which the human explicitly and interactively teaches
the robot a series of task steps through spoken language. The training algorithm
fuses the robot’s perception of the human with the understood speech data, maps
the spoken language to robotic actions, and follows the human to gather state
information about when each action is applicable. The robot represents the acquired
task as a conditional procedure and engages the human in a spoken-language dialog
to fill in any information that the human may have omitted.
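The task representation outlined above might be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names (`Step`, `TaskProcedure`), the dictionary-based lexicon, and the state encoding are all assumptions introduced for exposition.

```python
# Hypothetical sketch of a taught task as a conditional procedure.
# All names and structures here are illustrative assumptions, not the
# system described in the paper.
from dataclasses import dataclass, field

@dataclass
class Step:
    """One instructed action plus the state in which it was observed to apply."""
    action: str                                        # robot action mapped from speech
    preconditions: dict = field(default_factory=dict)  # applicability state

@dataclass
class TaskProcedure:
    name: str
    steps: list = field(default_factory=list)

    def add_step(self, utterance, perceived_state, lexicon):
        """Map a fixed-vocabulary utterance to an action and record state."""
        action = lexicon.get(utterance)
        if action is None:
            # The paper's system would resolve this through spoken dialog;
            # here we simply flag the unrecognized utterance.
            raise ValueError(f"no action for utterance: {utterance!r}")
        self.steps.append(Step(action, dict(perceived_state)))

    def missing_info(self):
        """Steps with no recorded applicability state -- the gaps a
        follow-up dialog with the human would need to fill in."""
        return [s.action for s in self.steps if not s.preconditions]

# Toy fixed-vocabulary lexicon and a two-step demonstration
lexicon = {"go to the door": "NAVIGATE(door)",
           "pick up the box": "GRASP(box)"}
task = TaskProcedure("deliver-box")
task.add_step("go to the door", {"at": "hallway"}, lexicon)
task.add_step("pick up the box", {}, lexicon)
print(task.missing_info())
```

The point of the sketch is the division of labor: speech understanding supplies the action for each step, perception supplies the applicability state, and any step left without state information becomes the subject of a clarification dialog.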