Teaching

In this project, we address the question of how to enable a robot to engage in fine-grained interactional coordination. To do so, the robot has to actively structure its interactional environment in a manageable way – based entirely on continuous monitoring of the human's multimodal conduct using the robot's own internal perception capabilities. The project proposal addresses two focus areas: (1) actively orienting visitors to an exhibit and (2) distinguishing the nature and function of user utterances (e.g. signaling trouble) as well as their intended addressees. The robot's control architecture will be extended with mechanisms that enable it to autonomously engage in fine-grained interaction with human users. Using only its internal capabilities, the system will monitor the user's behavior and estimate their level of interest. These estimates will serve as a basis for repair strategies, enabling an appropriate reaction to confused users.
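The monitor–estimate–repair loop described above could be sketched as follows. This is a minimal illustrative example, not the project's actual architecture: the cue names, weights, and thresholds are all hypothetical placeholders for whatever the robot's perception system actually provides.

```python
from dataclasses import dataclass


@dataclass
class UserObservation:
    """Hypothetical multimodal cues from the robot's internal perception."""
    gaze_on_exhibit: float      # fraction of recent frames gazing at the exhibit (0..1)
    gaze_on_robot: float        # fraction of recent frames gazing at the robot (0..1)
    spoke_trouble_marker: bool  # e.g. "huh?" detected in the user's utterance


def estimate_interest(obs: UserObservation) -> float:
    """Combine cues into a scalar interest level in [0, 1] (illustrative weights)."""
    score = 0.7 * obs.gaze_on_exhibit + 0.3 * obs.gaze_on_robot
    if obs.spoke_trouble_marker:
        score *= 0.5  # a trouble signal suggests confusion rather than engagement
    return max(0.0, min(1.0, score))


def choose_repair(obs: UserObservation) -> str:
    """Select a repair strategy when the user appears confused or disengaged."""
    if obs.spoke_trouble_marker:
        return "rephrase_explanation"   # react to signaled trouble
    if estimate_interest(obs) < 0.3:
        return "reorient_to_exhibit"    # re-engage a disinterested visitor
    return "continue"                   # no repair needed
```

In a running system, `choose_repair` would be called on every perception update, so the robot continuously adapts its conduct to the user's displayed level of interest.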