Abstract

The aim of the 'Talking AIBO' project is to build a system that enables AIBO, Sony's autonomous four-legged robot, to learn how to interact with humans using real words. In this article, we present an experiment in which the robot builds a vocabulary for the objects it perceives visually. We discuss the results of this first prototype and the difficulties we encountered.