Much of artificial intelligence research focuses on devising optimal solutions to challenging and well-defined but highly constrained problems. However, as we begin creating autonomous agents to operate in the rich environments of modern videogames and computer simulations, it becomes important to devise agent behaviors that display the visible attributes of intelligence, rather than simply performing optimally. Such visibly intelligent behavior is difficult to specify with rules or to characterize in terms of quantifiable objective functions, but human intuition can be used to guide a learning system directly toward the desired sorts of behavior. Policy induction from human-generated examples is a promising approach to training such agents. In this paper, such a method is developed and tested using Lamarckian neuroevolution. Artificial neural networks are evolved to control autonomous agents in a strategy game. The evolution is guided by human-generated examples of play, and the system effectively learns the policies that the player used to generate the examples; that is, the agents learn visibly intelligent behavior. In the future, such methods are likely to play a central role in creating autonomous agents for complex environments, making it possible to generate rich behaviors derived from nothing more formal than the intuitively generated examples of designers, players, or subject-matter experts.
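The abstract does not give implementation details, but the core idea of Lamarckian neuroevolution can be sketched in miniature: genomes encode network weights, each genome is fine-tuned by supervised learning on human-generated examples between evaluations, and the *trained* weights are written back into the genome before selection and mutation. The sketch below is a hypothetical toy version (a single linear "network" imitating an assumed demonstrated policy `action = 2*s0 - s1`), not the paper's actual system.

```python
import random

random.seed(0)

# Toy human-generated examples: state -> demonstrated action.
# The policy being imitated (2*s0 - s1) is an assumption standing in
# for a player's recorded play, not anything from the paper.
EXAMPLES = [((s0, s1), 2 * s0 - s1)
            for s0 in (-1.0, 0.0, 1.0) for s1 in (-1.0, 0.0, 1.0)]

def predict(weights, state):
    # A minimal linear "network": one output, no hidden layer.
    return weights[0] * state[0] + weights[1] * state[1]

def mse(weights):
    # Fitness is imitation error on the human examples (lower is better).
    return sum((predict(weights, s) - a) ** 2 for s, a in EXAMPLES) / len(EXAMPLES)

def learn(weights, lr=0.1, steps=20):
    # Supervised fine-tuning: gradient descent on MSE over the examples.
    w = list(weights)
    for _ in range(steps):
        g = [0.0, 0.0]
        for s, a in EXAMPLES:
            err = predict(w, s) - a
            g[0] += 2 * err * s[0] / len(EXAMPLES)
            g[1] += 2 * err * s[1] / len(EXAMPLES)
        w = [w[0] - lr * g[0], w[1] - lr * g[1]]
    return w

def evolve(pop_size=10, generations=15):
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Lamarckian step: the learned weights replace the genome's weights,
        # so acquired improvements are inherited by offspring.
        pop = [learn(w) for w in pop]
        pop.sort(key=mse)
        # Truncation selection: keep the best half, refill with mutants.
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    pop.sort(key=mse)
    return pop[0]

best = evolve()
```

In a Darwinian variant the `learn` step would influence fitness but the original genome would be passed on unchanged; writing the trained weights back is what makes this Lamarckian, and it is what lets the population converge quickly on the demonstrated policy.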

Subjects: 1.8 Game Playing;
14. Neural Networks

Submitted: Apr 24, 2007
