Learning through Human-Robot Dialogues


Today, robots are present in our lives, and they will become even more ubiquitous in the future. This advance of robotic technologies raises the question of how humans can guide a robot's behavior and, in particular, how non-specialists can teach robots new tasks. Several research areas address this topic, including but not limited to learning from demonstration, observational learning, interactive task learning, and grounded language learning. We are interested in investigating multimodal techniques across these areas, with a particular focus on eye tracking in combination with speech-based interaction.

Our Prototype at HRI 2017

Gaze is known to be a dominant modality for conveying spatial information, and it has been used for grounding in human-robot dialogues. In this work, we present the prototype of a gaze-supported multimodal dialogue system that enhances two core tasks in human-robot collaboration: 1) the robot learns new objects and their locations from user instructions involving gaze, and 2) it instructs the user to move objects and passively tracks this movement by interpreting the user's gaze.
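To illustrate the kind of gaze-based grounding such a system relies on, here is a minimal sketch (not the prototype's actual implementation) of one standard approach: when the user speaks a deictic word such as "this", the system looks for the gaze fixation closest in time to that word and checks which scene object it lands on. All names, data structures, and the time window are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    t: float       # fixation timestamp in seconds
    x: float       # gaze point projected onto the workspace plane
    y: float

@dataclass
class SceneObject:
    name: str
    x: float       # object center on the workspace plane
    y: float
    radius: float  # rough spatial extent of the object

def ground_reference(fixations, objects, word_time, window=0.5):
    """Resolve a deictic reference ('this', 'there') spoken at word_time.

    Returns the object whose region was fixated closest in time to the
    spoken word, considering only fixations within +/- window seconds.
    Returns None if no fixation in the window lands on any object.
    """
    best_obj, best_dt = None, None
    for fix in fixations:
        dt = abs(fix.t - word_time)
        if dt > window:
            continue  # fixation too far from the word to be relevant
        for obj in objects:
            # Does the gaze point fall inside the object's region?
            if (fix.x - obj.x) ** 2 + (fix.y - obj.y) ** 2 <= obj.radius ** 2:
                if best_dt is None or dt < best_dt:
                    best_obj, best_dt = obj, dt
    return best_obj

# Hypothetical example: the user says "put this away", with "this"
# spoken at t = 1.3 s, while fixating the mug shortly before.
fixations = [Fixation(0.9, 0.1, 0.1), Fixation(1.25, 1.0, 1.0)]
objects = [SceneObject("mug", 1.0, 1.0, 0.2), SceneObject("box", 0.1, 0.1, 0.2)]
referred = ground_reference(fixations, objects, word_time=1.3)
print(referred.name)  # → mug
```

In practice, such a temporal-alignment heuristic would be one component of a larger fusion pipeline that also weighs dialogue context and object salience; the sketch only shows the core idea of linking speech timing to gaze.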
