The first talking robot we present plays chess against the user (Fig. 3). It moves chess pieces on the board by means of a magnetic arm, which it raises and lowers to grab and release a piece. To place the arm under a given square, the robot drives back and forth on wheels and moves the arm left and right along a gear rod.
The dialogue between the human player and the robot is centred on the chess game: the human speaks the move he wants to make, and the robot confirms the intended move and announces check and checkmate. To perform the robot's moves, the dialogue manager connects to a specialised client which encapsulates the GNU Chess system.5 Besides computing the moves that the robot will perform, the chess programme is also used to disambiguate elliptical player inputs.
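One way the engine can serve disambiguation is by filtering the current legal moves against whatever slots the player actually uttered: if exactly one legal move matches, the ellipsis is resolved; otherwise the dialogue manager can ask back. The following is a minimal sketch of that idea; the function name `disambiguate`, the move-dictionary shape, and the slot keys are illustrative assumptions, not taken from the actual system.

```python
def disambiguate(legal_moves, **slots):
    """Return the unique legal move matching a partial player command.

    legal_moves: list of dicts such as {"piece": "pawn", "from": "e2",
    "to": "e4"}, assumed to be enumerated by the chess client.
    slots: whatever the recogniser filled in (e.g. piece="pawn").
    Returns None when the input remains ambiguous or matches nothing,
    so the dialogue manager can request clarification.
    """
    matches = [m for m in legal_moves
               if all(m.get(key) == value for key, value in slots.items())]
    return matches[0] if len(matches) == 1 else None
```

For instance, with two knights able to move, the bare command "move the knight" would yield no unique match and trigger a clarification question, while "move the pawn" would succeed if only one pawn move is legal.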

Figure 4 shows the part of the chess dialogue model that accepts a move as a spoken command from the player. The Input node near the top waits for the speech recognition client to report that it understood a player utterance as a command. An excerpt from the recogniser grammar is shown in Fig. 5: The grammar is a context-free grammar in JSGF format, whose production rules are annotated with tags (in curly brackets) representing a very shallow semantics. The tags for all production rules used in a parse tree are collected into a table.
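The tag-collection step described above could be sketched as follows. The paper only states that the tags of all production rules used in a parse tree are gathered into a table, so the tree representation and the name `collect_tags` are assumptions made for illustration.

```python
def collect_tags(node, table=None):
    """Depth-first walk over a parse tree, collecting JSGF-style tags
    such as 'piece="pawn"' from every rule node into one flat table.
    Each node is assumed to be a dict with "tags" and "children" lists.
    """
    if table is None:
        table = {}
    for tag in node.get("tags", []):
        key, _, value = tag.partition("=")
        table[key.strip()] = value.strip().strip('"')
    for child in node.get("children", []):
        collect_tags(child, table)
    return table
```

Applied to the parse of "move the pawn to e4", such a walk would yield the flat slot table that the dialogue manager branches on.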

The dialogue manager then branches depending on the type of the command given by the user. If the command specifies the piece and target square, e.g. “move the pawn to e4”, the recogniser will return a representation like {piece="pawn" colTo="e" rowTo="4"}, and the dialogue will continue in the centre branch. The user can also specify the source and target square.