We develop new technology aimed at making robots more autonomous, robust, and safe.

This is exemplified by our current project, which focuses specifically on body representations: “Robot self-calibration and safe physical human-robot interaction inspired by body representations in primate brains” (Czech Science Foundation, GA17-15697Y).

For more details about our research, see the corresponding section below.

2017-09-14: Zdenek Straka and Matej Hoffmann were awarded the ENNS Best Paper Award at the 26th International Conference on Artificial Neural Networks (ICANN17) for their paper Learning a Peripersonal Space Representation as a Visual-Tactile Prediction Task.

2017-04-18: A 4-page cover story about humanoid robotics in Respekt 16/2017 (a major Czech weekly journal; a larger section is available for free at ihned.cz). Matej Hoffmann and Zdenek Straka were interviewed and photographed with robots. A video, plus coverage of the rescue robots, appeared in the online edition of the story.

Student topics

We offer interesting topics for student theses and projects as well as paid internships. Have a look HERE and feel free to contact Matej Hoffmann for more information.

List of currently open topics at the Department of Cybernetics (mostly in Czech): LINK.

Other topics can be defined upon request – simply drop by KN-E211 or write an email to matej.hoffmann [guess-what] fel.cvut.cz

Research

For our publications, please see the Google Scholar profiles of individual group members.

Models of body representations

How do babies learn about their bodies? Newborns probably do not have a holistic perception of their body; instead, they start by picking up correlations in the streams of individual sensory modalities (in particular visual, tactile, and proprioceptive). The structure in these streams allows them to learn the first models of their bodies. The mechanisms behind these processes are largely unclear. In collaboration with developmental and cognitive psychologists, we want to shed more light on this topic by developing robotic models.

Automatic robot self-calibration

Standard robot calibration procedures require prior knowledge of a number of quantities from the robot’s environment, and these conditions have to be recreated whenever recalibration is needed. This has motivated alternative solutions to the self-calibration problem that are more “self-contained” and can be performed automatically by the robot. These typically rely on self-observation of specific points on the robot using the robot’s own camera(s). The advent of robotic skin technologies opens up the possibility of completely new approaches. In particular, the kinematic chain can be closed and the necessary redundant information obtained through self-touch, broadening sample collection from the end-effector to the whole body surface. Furthermore, this opens up the possibility of truly multimodal calibration, combining visual, proprioceptive, tactile, and inertial information.
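The core idea can be illustrated with a toy example of our own making (the 2-link planar arm, its link lengths, and the self-touch configurations below are all hypothetical, not any of our actual robots). Because the forward kinematics of a planar arm is linear in the link lengths, a handful of self-touch events – joint angles recorded when the fingertip contacts a known skin location – yields a closed-form least-squares estimate of the unknown kinematic parameters:

```python
import math

# Ground-truth link lengths of a hypothetical 2-link planar arm
# (unknown to the "robot"; used only to simulate self-touch data).
TRUE_L1, TRUE_L2 = 0.30, 0.25

def unit_vectors(q1, q2):
    """Direction of each link for joint angles (q1, q2)."""
    return ((math.cos(q1), math.sin(q1)),
            (math.cos(q1 + q2), math.sin(q1 + q2)))

# Simulated self-touch events: joint angles plus the skin (taxel)
# position the fingertip touched, here computed from the true lengths.
configs = [(0.1, 0.4), (0.7, -0.3), (1.2, 0.8), (-0.5, 1.1)]
events = []
for q1, q2 in configs:
    u1, u2 = unit_vectors(q1, q2)
    p = (TRUE_L1 * u1[0] + TRUE_L2 * u2[0],
         TRUE_L1 * u1[1] + TRUE_L2 * u2[1])
    events.append((q1, q2, p))

# Each touch closes the kinematic chain: l1*u1 + l2*u2 = p.
# Stack one 2D constraint per event and solve the 2x2 normal
# equations A^T A l = A^T b for the lengths l = (l1, l2).
a11 = a12 = a22 = b1 = b2 = 0.0
for q1, q2, p in events:
    u1, u2 = unit_vectors(q1, q2)
    for k in range(2):  # x and y components
        a11 += u1[k] * u1[k]
        a12 += u1[k] * u2[k]
        a22 += u2[k] * u2[k]
        b1 += u1[k] * p[k]
        b2 += u2[k] * p[k]
det = a11 * a22 - a12 * a12
l1_est = (a22 * b1 - a12 * b2) / det
l2_est = (a11 * b2 - a12 * b1) / det
print(l1_est, l2_est)  # recovers the true link lengths
```

Real robots have many more parameters and nonlinear kinematics, so in practice the analogous problem is solved with iterative nonlinear least squares, but the principle – redundant constraints from closing the chain through self-touch – is the same.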

Safe physical human-robot interaction

Robots are leaving the factory, entering domains that are far less structured, and starting to share living spaces with humans. As a consequence, they need to dynamically adapt to unpredictable interactions with people and guarantee safety at every moment. “Body awareness” acquired through artificial skin can be used not only to improve reactions to collisions; when coupled with vision, it can be extended to the space surrounding the body (so-called peripersonal space), facilitating collision avoidance and contact anticipation, and eventually leading to safer and more natural interaction of the robot with objects, including humans.
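A minimal sketch of how a peripersonal-space margin could modulate behavior (this is our own simplification for illustration; the function names, the time-to-contact heuristic, and all parameter values are assumptions, not our published model): each skin taxel keeps an activation that rises as a visually tracked object approaches, and the controller scales down the commanded velocity according to the most alarmed taxel.

```python
import math

def ppspace_activation(distance, approach_speed, reach=0.4, tau=0.15):
    """Activation in [0, 1]: high when an object is close and approaching.

    distance: current distance of the object from the taxel (m)
    approach_speed: speed of approach toward the taxel (m/s, <= 0 means receding)
    reach: radius of the peripersonal-space margin around the skin (m)
    tau: time-to-contact scale controlling how early the taxel reacts (s)
    """
    if distance >= reach or approach_speed <= 0:
        return 0.0
    time_to_contact = distance / approach_speed
    return math.exp(-time_to_contact / tau) * (1.0 - distance / reach)

def safe_velocity_scale(activations):
    """Scale the commanded velocity by the most activated taxel."""
    return 1.0 - max(activations, default=0.0)

# A far, slowly approaching object barely affects the robot;
# a near, fast one strongly slows it down before any contact occurs.
far = ppspace_activation(distance=0.35, approach_speed=0.05)
near = ppspace_activation(distance=0.05, approach_speed=0.5)
print(safe_velocity_scale([far]), safe_velocity_scale([near]))
```

The point of the sketch is the anticipatory character of the margin: the robot reacts to objects entering the space around its body, before any force is sensed on the skin itself.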