Abstract

Smart avatars are virtual human representations controlled by real people. Given instructions interactively, smart avatars can act as autonomous or reactive agents. During a real-time simulation, a user should be able to dynamically refine his or her avatar's behavior in reaction to simulated stimuli without undertaking a lengthy off-line programming session. In this paper, we introduce an architecture that allows users to input immediate or persistent instructions using natural language and to see the agents' resulting behavioral changes in the graphical output of the simulation.