Vincent Berenz

Vincent Berenz has been a scientific software engineer in the Autonomous Motion Department at the Max Planck Institute for Intelligent Systems (Tübingen, Germany) since January 2015. He oversees the development of software for the department's robotic architecture and research projects.

Vincent received a Chemical Engineering degree from Ecole Superieure de Chimie Organique et Minerale (2001, France) and a Master of Business Engineering in Bio-informatics from Ecole de Biologie Industrielle (2002, France). He worked as a software engineer at CEREP (USA and France) from 2002 to 2006 and at Pharmadesign (Japan) from 2006 to 2007. In 2012, he received his Ph.D. in Intelligent Interaction Technologies from the University of Tsukuba (Japan) under the supervision of Kenji Suzuki. In 2013 and 2014, he was a research scientist at the RIKEN Brain Science Institute/Toyota Collaborative Center (Japan), and he remains a guest researcher of the RIKEN Intelligent Behavior Control Unit.

We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models, and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system in which real-time object and robot tracking, as well as ambient world modeling, provide the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials from which the controllers and motion optimizers can compute movement policies online at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
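The attractive/repulsive potentials mentioned above can be illustrated with a minimal numeric sketch. This is not the paper's actual controller: the quadratic attractive potential, the distance-bounded repulsive potential, and all gains, names, and dimensions below are illustrative assumptions.

```python
import numpy as np

def attractive_grad(x, goal, k_att=1.0):
    # Gradient of a quadratic attractive potential 0.5 * k * ||x - goal||^2.
    return k_att * (x - goal)

def repulsive_grad(x, obstacle, k_rep=0.2, d0=1.0):
    # Gradient of a repulsive potential that is active only within distance d0
    # of the obstacle and grows as the distance d shrinks.
    d = np.linalg.norm(x - obstacle)
    if d >= d0 or d == 0.0:
        return np.zeros_like(x)
    return -k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (x - obstacle) / d

def step(x, goal, obstacle, dt=0.05):
    # One reactive update: descend the combined potential field.
    grad = attractive_grad(x, goal) + repulsive_grad(x, obstacle)
    return x - dt * grad

# Toy 2D example: move toward the goal while deflecting around an obstacle.
x = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacle = np.array([1.0, 0.4])
for _ in range(500):
    x = step(x, goal, obstacle)
```

Because each update only evaluates local gradients, such a policy can be recomputed at every control cycle from the latest tracking estimates, which is what makes the reactive integration possible.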

2015

In Emergent Trends in Robotics and Intelligent Systems: Where is the Role of Intelligent Technologies in the Next Generation of Robots?, pages 31–38, Springer International Publishing, Cham, 2015 (book chapter)

The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS have not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergies and muscle synergies communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbances with nine healthy participants. Proprioceptive information, represented by changes in muscle length, was estimated using the musculoskeletal modeling software SIMM. These changes in muscle length were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing a prosthesis's sensory system to keep the controller simple.
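The reduction of many muscle-length signals to a few synergy signals can be sketched with principal component analysis. This is only a hedged illustration: the study's actual decomposition method is not given here, and all variable names and data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_muscles, n_synergies = 200, 10, 2

# Synthetic muscle-length changes driven by two latent (synergy-like) signals
# plus a small amount of measurement noise.
latent = rng.normal(size=(n_samples, n_synergies))
mixing = rng.normal(size=(n_synergies, n_muscles))
lengths = latent @ mixing + 0.01 * rng.normal(size=(n_samples, n_muscles))

# PCA via SVD of the mean-centered data matrix.
centered = lengths - lengths.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Project onto the first two components: the low-dimensional signals that,
# in the sensory-synergy hypothesis, would be passed on to the CNS.
low_dim = centered @ Vt[:n_synergies].T
```

With signals generated from two latent sources, the first two principal components capture almost all of the variance, mirroring the abstract's finding that the environmental inputs reduce to two-dimensional signals.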

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.