
@INPROCEEDINGS{2017:RC-Ommer, author = {Nicolai Ommer and Alexander Stumpf and Oskar von Stryk}, title = {Real-Time Online Adaptive Feedforward Velocity Control for Unmanned Ground Vehicles}, year = {2017}, pages = {to appear}, publisher = {Springer}, booktitle = {RoboCup Symposium 2017}, pdf = {2017_Ommer_adaptive_controller_rc_symposium.pdf}, abstract = {Online adaptation of motion models enables autonomous robots to move more accurately in the presence of unknown disturbances. This paper proposes a new adaptive compensation feedforward controller capable of learning a compensation motion model online, without any prior knowledge, to counteract non-modeled disturbances such as slippage or hardware malfunctions. The controller is able to prevent motion errors a priori and is well suited for real hardware due to its high adaptation rate. It can be used in conjunction with any motion model, as only motion errors are compensated. A simple interface enables quick deployment on other robot systems, as demonstrated in the RoboCup Small Size and Rescue Robot leagues.},}
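A minimal sketch of the core idea behind this entry: learn an additive compensation term for each velocity command online, so that systematic motion errors (e.g. from slippage) are counteracted before they occur. The bin discretization, learning rate, and class interface below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an adaptive compensation feedforward controller: the learned
# correction is added to the command (feedforward), and adapted from the
# observed tracking error after each motion (online learning).

class CompensationFeedforward:
    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        self.correction = {}  # command bin -> learned velocity correction

    def _bin(self, cmd_vel, resolution=0.1):
        # Discretize the command space; the paper's model structure differs.
        return round(cmd_vel / resolution)

    def command(self, cmd_vel):
        """Return the compensated command sent to the underlying motion model."""
        return cmd_vel + self.correction.get(self._bin(cmd_vel), 0.0)

    def update(self, cmd_vel, measured_vel):
        """Adapt the correction toward canceling the observed tracking error."""
        error = cmd_vel - measured_vel
        b = self._bin(cmd_vel)
        self.correction[b] = self.correction.get(b, 0.0) + self.learning_rate * error
```

For a plant that only achieves 80% of the commanded velocity, the correction converges to 0.25 for a command of 1.0, so the compensated command 1.25 yields the desired measured velocity of 1.0.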


@ARTICLE{2016:JFR-Romay-etal, author = {A. Romay and S. Maniatopoulos and S. Kohlbrecher and P. Schillinger and A. Stumpf and H. Kress-Gazit and O. von Stryk and D. Conner}, title = {Collaborative autonomy between high-level behaviors and human supervisors for remote manipulation tasks using different humanoid robots}, journal = {Journal of Field Robotics}, year = {2017}, volume = {34}, number = {2}, pages = {333-358}, month = {March}, note = {First published: 8 September 2016}, doi = {10.1002/rob.21671}, url = {http://onlinelibrary.wiley.com/doi/10.1002/rob.21671/full}, pdf = {2016_RomayEtAl_JFR.pdf}, abstract = {Team ViGIR and Team Hector participated in the DARPA Robotics Challenge (DRC) Finals, held June 2015 in Pomona, California, along with 21 other teams from around the world. Both teams competed using the same high-level software, in conjunction with independently developed low-level software specific to their humanoid robots. Based on previous work on operator-centric manipulation control at the level of affordances, we developed an approach that allows one or more human operators to share control authority with a high-level behavior controller. This collaborative autonomy decreases the completion time of manipulation tasks, increases the reliability of the human-robot team, and allows the operators to adjust the robotic system’s autonomy on-the-fly. This article discusses the technical challenges we faced and overcame during our efforts to allow the human operators to interact with the robotic system at a higher level of abstraction and share control authority with it. We introduce and evaluate the proposed approach in the context of our two teams’ participation in the DRC Finals. We also present additional, systematic experiments conducted in the lab afterwards. Finally, we present a discussion about the lessons learned while transitioning between operator-centered manipulation control and behavior-centered manipulation control during competition.},}


@INPROCEEDINGS{2016:Humanoids-Stumpf, author = {Alexander Stumpf and Stefan Kohlbrecher and Oskar von Stryk and David C. Conner}, title = {Open Source Integrated 3D Footstep Planning Framework for Humanoid Robots}, year = {2016}, pages = {938-945}, month = {Nov 15-17}, address = {Cancún, Mexico}, booktitle = {Proc. IEEE-RAS Intl. Conf. Humanoid Robots}, doi = {10.1109/HUMANOIDS.2016.7803385}, pdf = {2016_Stumpf_footstep_planning_framework_Humanoids.pdf}, abstract = {Humanoid robots benefit from their anthropomorphic shape when operating in human-made environments. In order to achieve human-like capabilities, robots must be able to perceive, understand and interact with the surrounding world. Humanoid locomotion in uneven terrain is a challenging task, as it requires sophisticated world model generation, motion planning and control algorithms as well as their integration. In recent years, much progress in world modeling and motion control has been achieved. This paper presents one of the very first open source frameworks for full 3D footstep planning available for ROS, which integrates the perception and locomotion systems of humanoid bipedal robots. The framework is designed to be used for different types of humanoid robots with different perception and locomotion capabilities with minimal implementation effort. In order to integrate with almost any humanoid walking controller, the system can easily be extended with additional functionality that may be needed by low-level motion algorithms. It also provides sophisticated human-robot interaction that enables the operator to direct the planner toward improved solutions, monitoring data for the operator, and debugging feedback for developers. The provided software package consists of three major blocks that address world generation, planning and interfacing low-level motion algorithms. The framework has been successfully applied to four different full-size humanoid robots.},}


@ARTICLE{2016:FRAI-Kohlbrecher-etal, author = {Stefan Kohlbrecher and Alexander Stumpf and Alberto Romay and Philipp Schillinger and Oskar von Stryk and David C. Conner}, title = {A comprehensive software framework for complex locomotion and manipulation tasks applicable to different types of humanoid robots}, journal = {Frontiers in Robotics and AI}, year = {2016}, pages = {online}, doi = {10.3389/frobt.2016.00031}, url = {http://journal.frontiersin.org/article/10.3389/frobt.2016.00031}, abstract = {While recent advances in approaches for control of humanoid robot systems show promising results, consideration of fully integrated humanoid systems for solving complex tasks, such as disaster response, has only recently gained focus. In this paper, a software framework for humanoid disaster response robots is introduced. It provides newcomers as well as experienced researchers in humanoid robotics a comprehensive system comprising open source packages for locomotion, manipulation, perception, world modeling, behavior control, and operator interaction. The system uses the Robot Operating System (ROS) as a middleware, which has emerged as a de facto standard in robotics research in recent years. The described architecture and components allow for flexible interaction between operator(s) and robot from teleoperation to remotely supervised autonomous operation while considering bandwidth constraints. The components are self-contained and can be used either in combination with others or standalone. They have been developed and evaluated during participation in the DARPA Robotics Challenge, and their use for different tasks and parts of this competition are described.},}

2015


@TECHREPORT{2015:ViGIR-Final-Report, author = {David Conner and Stefan Kohlbrecher and Alberto Romay and Alexander Stumpf and Spyros Maniatopoulos and Moritz Schappler and Benjamin Waxler}, title = {Team ViGIR: DARPA Robotics Challenge}, year = {2015}, institution = {TORC Robotics, Technische Universität Darmstadt, Cornell University, Leibniz University Hannover}, url = {http://www.dtic.mil/dtic/tr/fulltext/u2/a623035.pdf}, abstract = {This report documents Team ViGIR’s efforts in the DARPA Robotics Challenge (DRC) between October 2012 and August 2015. Team ViGIR, a multinational collaborative research and development effort that spanned nine time zones, began as a Track B participant in the simulation-based Virtual Robotics Challenge; after placing in the top six, we began working with the Atlas humanoid robotic system developed by Boston Dynamics. Team ViGIR competed in both the DRC Trials and DRC Finals. This report documents our performance and the lessons learned along the way, and describes the novel contributions of our team. Specific focus areas include template-based manipulation, footstep planning, and autonomous behavior specification and execution. The software used in the competition and described in this report is being open sourced at http://github.com/team-vigir as part of our commitment to improving the capabilities of humanitarian rescue robotics.},}


@INPROCEEDINGS{2015:HUM-Romay-Driving, author = {A. Romay and A. Stein and M. Oehler and A. Stumpf and S. Kohlbrecher and D.C. Conner and O. von Stryk}, title = {Open source driving controller concept for humanoid robots: Teams Hector and ViGIR at 2015 DARPA Robotics Challenge Finals}, year = {2015}, pages = {video}, month = {Nov. 3-5}, address = {Seoul, Korea}, booktitle = {IEEE-RAS Intl. Conf. on Humanoid Robots}, pdf = {2015_RomayEtAl_Humanoids_DrivingVideo.mp4}, abstract = {Among the eight tasks of the DARPA Robotics Challenge (DRC), the driving task was one of the most challenging. Obstacles in the course prevented straight driving, and restricted communications limited the situation awareness of the operator. In this video we show how Team Hector and Team ViGIR successfully completed the driving task with different robot platforms, THOR-Mang and Atlas respectively, using the same software and compliant steering adapter. Our driving user interface presents the operator with image views from cameras and driving aids such as wheel positioning and the turn-radius path of the wheels. The operator uses a standard computer game joystick to command steering wheel angles and gas pedal pressure. Steering wheel angle positions are generated off-line and interpolated on-line on the robot's onboard computer. The compliant steering adapter accommodates end-effector positioning errors. Gas pedal pressure is generated by a binary joint position of the robot's leg. Commands are generated in the operator control station and sent as target positions to the robot. The driving user interface also provides feedback on the current steering wheel position. Video footage with descriptions from the driving interface, the robot's camera and LIDAR perception, and external task monitoring is presented.},}


@INPROCEEDINGS{2014_rc_towards_highly_reliable, author = {Stefan Kohlbrecher and Florian Kunz and Dorothea Koert and Christian Rose and Paul Manns and Kevin Daun and Johannes Schubert and Alexander Stumpf and Oskar von Stryk}, title = {Towards Highly Reliable Autonomy for Urban Search and Rescue Robots}, year = {2015}, volume = {8992}, pages = {118-129}, publisher = {Springer}, editor = {R.A.C. Bianchi and H.L. Akin and S. Ramamoorthy and K. Sugiura}, series = {Lecture Notes in Artificial Intelligence (LNAI)}, booktitle = {RoboCup 2014: Robot World Cup XVIII}, url = {http://www.springer.com/br/book/9783319186146}, abstract = {This paper describes the approach used by Team Hector Darmstadt for participation in the 2015 RoboCup Rescue League competition. Participating in the RoboCup Rescue competition since 2009, the members of Team Hector Darmstadt focus on the exploration of disaster sites using autonomous Unmanned Ground Vehicles (UGVs). The team has been established as part of a PhD program funded by the German Research Foundation at TU Darmstadt and combines expertise from Computer Science and Mechanical Engineering. We give an overview of the complete system used to solve the problem of reliably finding victims in harsh USAR environments. This includes hardware as well as software solutions and covers diverse topics such as locomotion, SLAM, pose estimation, human-robot interaction and victim detection. In 2015, the team focuses on improving the rough-terrain motion capabilities of the used platforms as well as manipulation capabilities. As a contribution to the RoboCup Rescue community, major parts of the used software have been released and documented as open source software for ROS.},}


@ARTICLE{2014:JFR-ViGIR-DRC-Trials, author = {S. Kohlbrecher and A. Romay and A. Stumpf and A. Gupta and O. von Stryk and F. Bacim and D.A. Bowman and A. Goins and R. Balasubramanian and D.C. Conner}, title = {Human-Robot Teaming for Rescue Missions: Team ViGIR's Approach to the 2013 DARPA Robotics Challenge Trials}, journal = {Journal of Field Robotics}, year = {2015}, volume = {32}, number = {3}, pages = {352-377}, note = {First published online 4 Dec 2014}, url = {http://onlinelibrary.wiley.com/doi/10.1002/rob.21558/full}, pdf = {2014_vigir_jfr_main.pdf}, abstract = {Team ViGIR entered the 2013 DARPA Robotics Challenge (DRC) with a focus on developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster response scenarios. The overarching philosophy was to make our operators full team members and not just simple supervisors. We designed our operator control station (OCS) to allow multiple operators to request and share information as needed to maintain situational awareness under bandwidth constraints, while directing the robot to perform tasks with most planning and control taking place onboard the robot. Given the limited development time we leveraged a number of open source libraries in both our onboard software and our OCS design; this included significant use of the Robot Operating System (ROS) libraries and toolchain. This paper describes the high level approach, including the OCS design and major onboard components, and describes our DRC Trials results. The paper concludes with a number of lessons learned that are being applied to the final phase of the competition and are useful for related projects as well.},}


@INPROCEEDINGS{2014:Humanoids-Stumpf, author = {A. Stumpf and S. Kohlbrecher and D.C. Conner and O. von Stryk}, title = {Supervised Footstep Planning for Humanoid Robots in Rough Terrain Tasks using a Black Box Walking Controller}, year = {2014}, pages = {287-294}, month = {Nov 18-20}, address = {Madrid, Spain}, booktitle = {Proc. IEEE-RAS Intl. Conf. Humanoid Robots}, pdf = {2014_Stumpf_footstep_planning_Humanoids.pdf}, abstract = {In recent years, the number of life-size humanoid robots as well as their mobility capabilities have steadily grown. Stable walking motion and control for humanoid robots are already well-investigated research topics. This raises the question of how navigation problems in complex and unstructured environments can be solved using a given black-box walking controller, provided proper perception and modeling of the environment. In this paper we present a complete system for supervised footstep planning, including perception, world modeling, a 3D planner and an operator interface, that enables a humanoid robot to perform sequences of steps to traverse uneven terrain. A proper height map and surface normal estimation are obtained directly from point cloud data. A search-based planning approach (ARA*) is extended to sequences of footsteps in full 3D space (6 DoF). The planner utilizes a black-box walking controller without knowledge of its implementation details. Results are presented for an Atlas humanoid robot during Team ViGIR's participation in the 2013 DARPA Robotics Challenge Trials.},}
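To illustrate the search structure named in this entry: footstep planning can be cast as heuristic graph search over discrete foot placements, where each expansion applies a relative step and feet alternate. The sketch below is a deliberately simplified 2D A* version under assumed step sets and obstacles; the actual planner runs ARA* over full 6-DoF footstep poses with a black-box walking controller.

```python
# Hedged sketch of search-based footstep planning: states are (x, y, foot)
# tuples, transitions are relative steps, blocked cells model untraversable
# terrain, and A* finds a shortest alternating-foot step sequence to the goal.

import heapq

def plan_footsteps(start, goal_xy, blocked,
                   steps=((1, 0), (1, 1), (1, -1), (0, 1), (0, -1))):
    """A* over footstep states. start = (x, y, foot); foot alternates 0/1."""
    def h(s):  # admissible heuristic: Chebyshev distance to the goal cell
        return max(abs(s[0] - goal_xy[0]), abs(s[1] - goal_xy[1]))

    open_list = [(h(start), 0, start, [start])]
    best = {}  # state -> best known cost-so-far
    while open_list:
        f, g, s, path = heapq.heappop(open_list)
        if (s[0], s[1]) == goal_xy:
            return path
        if best.get(s, float("inf")) <= g:
            continue  # already expanded with an equal or better cost
        best[s] = g
        for dx, dy in steps:
            nxt = (s[0] + dx, s[1] + dy, 1 - s[2])  # next step uses the other foot
            if (nxt[0], nxt[1]) in blocked:
                continue
            heapq.heappush(open_list, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no step sequence reaches the goal
```

ARA* extends this scheme by inflating the heuristic for a fast first solution and then tightening the inflation to improve it within the planning-time budget.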


@INPROCEEDINGS{2014:Humanoids-Romay, author = {A. Romay and S. Kohlbrecher and D.C. Conner and A. Stumpf and O. von Stryk}, title = {Template-Based Manipulation in Unstructured Environments for Supervised Semi-Autonomous Humanoid Robots}, year = {2014}, pages = {979-986}, month = {Nov 18-20}, address = {Madrid, Spain}, booktitle = {Proc. IEEE-RAS Intl. Conf. Humanoid Robots}, pdf = {2014_RomayEtAl_Humanoids.pdf}, abstract = {Humanoid robotic manipulation in unstructured environments is a challenging problem. Limited perception, communications and environmental constraints present challenges that prevent fully autonomous or purely teleoperated robots from reliably interacting with their environment. In order to achieve higher reliability in manipulation, we present an approach involving remote human supervision. The strengths of both human operator and humanoid robot are leveraged through a user interface that allows the operator to perceive the remote environment through an aggregated world model based on onboard sensing, while the robot can efficiently receive perceptual and semantic information from the operator. A template-based manipulation approach has been successfully applied to the Atlas humanoid robot; we show real-world footage of the results obtained in the 2013 DARPA Robotics Challenge Trials.},}

With advances in the walking abilities of autonomous soccer-playing humanoid robots, the world modeling and state estimation problem moves into focus, as only sufficiently accurate and robust modeling makes it possible to leverage improved locomotion capabilities. This paper presents a novel approach to dense grid-based obstacle mapping in dynamic environments, with an additional application to automatic gaze control. It is applicable to soccer-playing humanoid robots whose external sensing is limited to human-like vision and whose onboard computing abilities are strongly limited. The proposed approach allows fusing information from different sources and efficiently provides a single consistent and robust world state estimate despite strong robot hardware limitations.

2010


@BACHELORTHESIS{2010:ba_stumpf, author = {Alexander Stumpf}, title = {Grid-Based Obstacle Mapping and Gaze Control for Soccer Playing Humanoid Robots}, year = {2010}, school = {Technische Universität Darmstadt, Department of Computer Science (SIM)}, pdf = {2010-Stumpf-Bachelorarbeit.pdf}, abstract = {An important research topic at the Simulation, Systems Optimization and Robotics group at TU Darmstadt is the research and development of autonomous robot systems. One of its research projects is the Darmstadt Dribblers, which develops autonomous soccer-playing humanoid robots and participates in the annual RoboCup competition. In this work a new approach for modeling the environment of the robot is introduced. For this purpose an occupancy grid mapping system is developed which provides a more detailed model of obstacles in the dynamic environment than previously used approaches. Furthermore, an entropy-based active gaze control system generating gaze commands to maximize perceptual input is introduced. This system accounts for the reliability estimates of existing modules like ball modeling and is used to automatically control the camera gaze of the robot. Evaluation shows that the system performs well on real hardware.},}
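The two ingredients named in this thesis abstract compose naturally: a log-odds occupancy grid whose per-cell probabilities yield an entropy map, and a gaze controller that points the camera at the view where the map is most uncertain. The sketch below shows that composition under assumed sensor-model constants and a hypothetical view partition; it is not the thesis implementation.

```python
# Hedged sketch: log-odds occupancy grid + entropy-based gaze selection.
# Unobserved cells have p = 0.5 and entropy 1 bit; repeatedly observed cells
# approach p = 0 or 1 and entropy 0, so the gaze is drawn to unexplored views.

import math

class OccupancyGrid:
    def __init__(self, w, h, l_occ=0.85, l_free=-0.4):
        self.w, self.h = w, h
        self.logodds = [[0.0] * w for _ in range(h)]  # 0.0 == unknown
        self.l_occ, self.l_free = l_occ, l_free       # assumed sensor model

    def update(self, x, y, occupied):
        # Standard additive log-odds update from one observation of the cell.
        self.logodds[y][x] += self.l_occ if occupied else self.l_free

    def p(self, x, y):
        # Occupancy probability via the logistic function.
        return 1.0 / (1.0 + math.exp(-self.logodds[y][x]))

    def entropy(self, x, y):
        # Binary Shannon entropy of the cell, in bits.
        p = self.p(x, y)
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_gaze(grid, views):
    """Pick the view (a list of visible cells) with the highest summed entropy."""
    return max(views, key=lambda cells: sum(grid.entropy(x, y) for x, y in cells))
```

In practice the view partition would be derived from the camera's field of view at each candidate head pose, and the thesis additionally weighs in reliability estimates from other modules such as ball modeling.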