Cognitive Soccer Robots

The effective development of a system of autonomous heterogeneous robots that must perform complex tasks over long periods of time, operating in a partially known and partially observable environment, requires addressing a wide range of issues. These issues span many areas of Artificial Intelligence, from computer vision to deductive systems.

Since 1998 we have been developing autonomous robotic systems that have been used within the RoboCup competitions. RoboCup is an organization that promotes research in the field of autonomous robotics by organizing annual world competitions of soccer-playing robots. The competitions are organized into different leagues, depending on the type and size of the robots. We have created two robot teams, SPQR-Wheeled and SPQR-Legged, which compete in different leagues; although the robots composing the teams are quite different (various kinds of wheeled robots sized about 50x50 cm for the Wheeled team, and Sony AIBO four-legged robots for the Legged one), they share the same cognitive system.

The SPQR team of soccer robots.

The main features required for the design and implementation of a team of cognitive robots are: (i) a framework for defining the robots' knowledge of the environment and of their own capabilities; (ii) a reasoning system able to derive plans (i.e., programs) that must be executed to achieve a given goal; (iii) a plan execution mechanism; (iv) a coordination module that is in charge of assigning sub-tasks to each robot in the team in order to achieve a global team goal.

Architecture
The cognitive robotic system, which we have implemented on several robotic platforms, is based on a hybrid layered architecture with two levels: the Operative Level, in which information is handled numerically, and the Deliberative Level, which relies on a symbolic representation of the robot's knowledge. In particular, at the Deliberative Level the environment is described by a knowledge base containing both axioms and static facts (i.e., background knowledge) provided by the designer, as well as dynamic facts (i.e., symbolic information) corresponding to the data acquired through the robot's sensors during its mission.
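The distinction between designer-provided background knowledge and sensed dynamic facts can be sketched as a simple data structure. The class and fact names below are illustrative assumptions, not the actual SPQR implementation:

```python
# Minimal sketch of a Deliberative Level knowledge base: axioms and static
# facts come from the designer; dynamic facts are asserted from sensor data.
class KnowledgeBase:
    def __init__(self, axioms, static_facts):
        self.axioms = list(axioms)              # designer-provided rules
        self.static_facts = set(static_facts)   # background knowledge
        self.dynamic_facts = set()              # symbolic facts from sensing

    def assert_fact(self, fact):
        """Record a symbolic fact derived from the robot's sensors."""
        self.dynamic_facts.add(fact)

    def holds(self, fact):
        """A fact holds if it is part of the static or dynamic knowledge."""
        return fact in self.static_facts or fact in self.dynamic_facts

kb = KnowledgeBase(axioms=["near(X) if dist(X) < 1.0"],
                   static_facts=["goal_at(north)"])
kb.assert_fact("ball_visible")
print(kb.holds("ball_visible"))  # → True
```

In a full system, `holds` would also apply the axioms deductively; here it only looks up stored facts, to keep the sketch short.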

Domain and Plan Representation
Our planning and reasoning framework relies on an epistemic representation of the world, which explicitly accounts for the robot's knowledge rather than the true state of the world. The domain description includes the specification of the basic actions the robot can perform, which are defined in terms of preconditions (conditions that must hold for an action to be executable) and effects (conditions that hold after the execution of the action). Our framework makes it possible to define two kinds of actions: ordinary actions, which effect changes in the world, and sensing actions, which effect changes in the robot's knowledge (i.e., in its mental state) and allow the robot to sense the value of a property that is not known.
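The two kinds of actions can be represented as shown below. This is a hedged sketch with invented action names, not the framework's actual syntax:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrdinaryAction:
    """An action that changes the world."""
    name: str
    preconditions: frozenset  # facts that must hold to execute the action
    effects: frozenset        # facts that hold after execution

@dataclass(frozen=True)
class SensingAction:
    """An action that changes the robot's knowledge (its mental state)."""
    name: str
    preconditions: frozenset
    senses: str               # property whose truth value becomes known

def applicable(action, state):
    """An action is applicable when its preconditions hold in the state."""
    return action.preconditions <= state

kick = OrdinaryAction("kick", frozenset({"near_ball"}), frozenset({"ball_moving"}))
look = SensingAction("look_for_ball", frozenset(), "ball_visible")
print(applicable(kick, {"near_ball"}))  # → True
print(applicable(kick, set()))         # → False
```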

Plans are represented as transition graphs, whose nodes represent epistemic states and whose edges represent actions causing state transitions. The graph representing a plan has an initial node from which the execution starts and a set of final states in which the goal is satisfied. The execution of an ordinary action causes a deterministic state transition, while the execution of a sensing action causes a state transition that depends on the sensing outcome. Moreover, since in the real world the execution of an action is not guaranteed to succeed, a set of recovery states is associated with each action.
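Such a transition graph can be encoded as a plain data structure. The encoding below is hypothetical (the state and action names are invented for illustration): a sensing action branches on its outcome, an ordinary action has a single "ok" successor, and the "fail" key points to the action's recovery state.

```python
# Hypothetical plan graph: each node (epistemic state) maps to the action
# executed there and to its outcome-dependent successors.
plan = {
    "initial": "s0",
    "final": {"goal"},
    "edges": {
        # state: (action, {outcome: next_state})
        "s0": ("sense_ball", {"seen": "s1", "not_seen": "s0", "fail": "rec"}),
        "s1": ("go_to_ball", {"ok": "s2", "fail": "rec"}),
        "s2": ("kick", {"ok": "goal", "fail": "rec"}),
        "rec": ("search_ball", {"ok": "s0"}),  # recovery state with an exit edge
    },
}
print(plan["edges"]["s0"][0])  # → sense_ball
```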

Plan Generation and Execution
We have defined a logic formalism and developed a plan generator that, given: (i) the domain specification as a set of axioms; (ii) the action specifications, in terms of preconditions and effects; (iii) an initial state; and (iv) a goal, generates a plan for reaching the goal from the initial state. Through the use of sensing actions, the generated plan can contain both if and goto constructs. It is also possible to edit plans manually using a graphical tool.
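The core idea of generating a plan from an initial state and a goal can be illustrated with a minimal forward search over states. This sketch covers ordinary actions only (sensing actions, and hence the if and goto constructs, are omitted for brevity) and is not the SPQR generator:

```python
from collections import deque

def plan_search(initial, goal, actions):
    """Breadth-first search from the initial state to a state satisfying
    the goal. States are frozensets of facts; each action is a tuple
    (name, preconditions, add effects, delete effects)."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:           # goal facts all hold: plan found
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:        # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                     # goal unreachable

actions = [
    ("go_to_ball", frozenset({"ball_visible"}), frozenset({"near_ball"}), frozenset()),
    ("kick", frozenset({"near_ball"}), frozenset({"ball_moving"}), frozenset()),
]
print(plan_search({"ball_visible"}, frozenset({"ball_moving"}), actions))
# → ['go_to_ball', 'kick']
```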

A plan is executed by navigating the plan graph, starting from the initial node and terminating when a goal node is reached or when the plan fails. A plan failure can be caused by the violation of one of the action's execution conditions; it triggers either plan recovery or the selection of a new plan, depending on whether any edges exit from the recovery state.
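The execution loop above can be sketched as follows, using a hypothetical plan encoding (invented state and action names) in which each state maps to an action and its outcome-dependent successors; an outcome with no matching edge means the plan fails and a new plan must be selected:

```python
plan = {
    "initial": "s0",
    "final": {"goal"},
    "edges": {
        # state: (action, {outcome: next_state}); "fail" maps to a recovery state
        "s0": ("sense_ball", {"seen": "s1", "not_seen": "s0", "fail": "s0"}),
        "s1": ("go_to_ball", {"ok": "s2", "fail": "s0"}),
        "s2": ("kick", {"ok": "goal"}),
    },
}

def execute(plan, perform):
    """Navigate the plan graph until a final node is reached (success) or an
    outcome has no outgoing edge (failure). The run is bounded to avoid
    looping forever on repeated recoveries."""
    state = plan["initial"]
    for _ in range(100):
        if state in plan["final"]:
            return True
        action, transitions = plan["edges"][state]
        outcome = perform(state, action)   # "ok", a sensing outcome, or "fail"
        if outcome not in transitions:
            return False                   # no recovery edge: select a new plan
        state = transitions[outcome]
    return False

# Simulated run: the first sensing misses the ball, the second succeeds.
outcomes = iter(["not_seen", "seen", "ok", "ok"])
print(execute(plan, lambda state, action: next(outcomes)))  # → True
```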

Coordination
Coordination among the robots is achieved by a coordination module that selects which sub-task must be accomplished by each robot. In our cognitive systems, each task is related to a goal to be achieved by a robot, and thus to a plan to be executed.
Tasks are assigned during robot operation by means of a distributed protocol that allows information about the current state of each robot in the team to be shared, and that provides the necessary autonomy in case of network failures. The state and capabilities of each robot are evaluated in order to decide to which robot a given sub-task should be assigned, and a broadcast agreement among all the robots is performed to effectively assign the sub-tasks while avoiding interference among the robots.
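One common way to realize such an agreement, shown here purely as an illustrative assumption (the actual SPQR protocol may differ), is for every robot to broadcast a utility estimate for each sub-task; each sub-task then goes to the free robot with the highest estimate, so no two robots take the same role:

```python
def assign_tasks(utilities):
    """Greedy assignment from broadcast utilities.
    utilities: {robot: {task: score}}  ->  {task: robot}, one task per robot."""
    assignment = {}
    taken = set()
    tasks = {t for scores in utilities.values() for t in scores}
    for task in sorted(tasks):
        best = max((r for r in utilities if r not in taken),
                   key=lambda r: utilities[r].get(task, float("-inf")),
                   default=None)
        if best is not None:
            assignment[task] = best
            taken.add(best)
    return assignment

# Each robot evaluates its own fitness for each sub-task and broadcasts it.
utilities = {
    "robot1": {"attack": 0.9, "defend": 0.3},
    "robot2": {"attack": 0.4, "defend": 0.8},
}
print(assign_tasks(utilities))  # → {'attack': 'robot1', 'defend': 'robot2'}
```

Because every robot runs the same deterministic procedure on the same broadcast data, all team members reach the same assignment without a central coordinator, which is what provides autonomy under network failures.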