
@ARTICLE{2016:JFR-Romay-etal, author = {A. Romay and S. Maniatopoulos and S. Kohlbrecher and P. Schillinger and A. Stumpf and H. Kress-Gazit and O. von Stryk and D. Conner}, title = {Collaborative autonomy between high-level behaviors and human supervisors for remote manipulation tasks using different humanoid robots}, journal = {Journal of Field Robotics}, year = {2017}, volume = {34}, number = {2}, pages = {333-358}, month = {March}, note = {First published: 8 September 2016}, doi = {10.1002/rob.21671}, url = {http://onlinelibrary.wiley.com/doi/10.1002/rob.21671/full}, pdf = {2016_RomayEtAl_JFR.pdf}, abstract = {Team ViGIR and Team Hector participated in the DARPA Robotics Challenge (DRC) Finals, held June 2015 in Pomona, California, along with 21 other teams from around the world. Both teams competed using the same high-level software, in conjunction with independently developed low-level software specific to their humanoid robots. Based on previous work on operator-centric manipulation control at the level of affordances, we developed an approach that allows one or more human operators to share control authority with a high-level behavior controller. This collaborative autonomy decreases the completion time of manipulation tasks, increases the reliability of the human-robot team, and allows the operators to adjust the robotic system’s autonomy on-the-fly. This article discusses the technical challenges we faced and overcame during our efforts to allow the human operators to interact with the robotic system at a higher level of abstraction and share control authority with it. We introduce and evaluate the proposed approach in the context of our two teams’ participation in the DRC Finals. We also present additional, systematic experiments conducted in the lab afterwards. Finally, we present a discussion about the lessons learned while transitioning between operator-centered manipulation control and behavior-centered manipulation control during competition.},}


@INPROCEEDINGS{2016:Humanoids-Stumpf, author = {Alexander Stumpf and Stefan Kohlbrecher and Oskar von Stryk and David C. Conner}, title = {Open Source Integrated 3D Footstep Planning Framework for Humanoid Robots}, year = {2016}, pages = {938-945}, month = {Nov 15-17}, address = {Cancún, Mexico}, booktitle = {Proc. IEEE-RAS Intl. Conf. Humanoid Robots}, doi = {10.1109/HUMANOIDS.2016.7803385}, pdf = {2016_Stumpf_footstep_planning_framework_Humanoids.pdf}, abstract = {Humanoid robots benefit from their anthropomorphic shape when operating in human-made environments. In order to achieve human-like capabilities, robots must be able to perceive, understand and interact with the surrounding world. Humanoid locomotion in uneven terrain is a challenging task as it requires sophisticated world model generation, motion planning and control algorithms and their integration. In recent years, much progress in world modeling and motion control has been achieved. This paper presents one of the very first open source frameworks for full 3D footstep planning available for ROS which integrates perception and locomotion systems of humanoid bipedal robots. The framework is designed to be used for different types of humanoid robots having different perception and locomotion capabilities with minimal implementation effort. In order to integrate with almost any humanoid walking controller, the system can easily be extended with additional functionality that may be needed by low-level motion algorithms. It also supports sophisticated human-robot interaction that enables the operator to direct the planner to generate improved solutions, provides monitoring data to the operator and gives debugging feedback to developers. The provided software package consists of three major blocks that address world generation, planning and interfacing low-level motion algorithms. The framework has been successfully applied to four different full-size humanoid robots.},}


@ARTICLE{2016:FRAI-Kohlbrecher-etal, author = {Stefan Kohlbrecher and Alexander Stumpf and Alberto Romay and Philipp Schillinger and Oskar von Stryk and David C. Conner}, title = {A comprehensive software framework for complex locomotion and manipulation tasks applicable to different types of humanoid robots}, journal = {Frontiers in Robotics and AI}, year = {2016}, pages = {online}, doi = {10.3389/frobt.2016.00031}, url = {http://journal.frontiersin.org/article/10.3389/frobt.2016.00031}, abstract = {While recent advances in approaches for control of humanoid robot systems show promising results, consideration of fully integrated humanoid systems for solving complex tasks, such as disaster response, has only recently gained focus. In this paper, a software framework for humanoid disaster response robots is introduced. It provides newcomers as well as experienced researchers in humanoid robotics a comprehensive system comprising open source packages for locomotion, manipulation, perception, world modeling, behavior control, and operator interaction. The system uses the Robot Operating System (ROS) as a middleware, which has emerged as a de facto standard in robotics research in recent years. The described architecture and components allow for flexible interaction between operator(s) and robot from teleoperation to remotely supervised autonomous operation while considering bandwidth constraints. The components are self-contained and can be used either in combination with others or standalone. They have been developed and evaluated during participation in the DARPA Robotics Challenge, and their use for different tasks and parts of this competition are described.},}


@ARTICLE{2016:KI-Kohlbrecher-etal, author = {Stefan Kohlbrecher and Oskar von Stryk}, title = {From RoboCup Rescue to supervised autonomous mobile robots for remote inspection of industrial plants}, journal = {KI - Künstliche Intelligenz}, year = {2016}, volume = {30}, number = {3}, pages = {311-314}, doi = {10.1007/s13218-016-0446-8}, url = {http://link.springer.com/article/10.1007/s13218-016-0446-8}, pdf = {2016-KI_SKo_OvS.pdf}, abstract = {With increasing capabilities and reliability of autonomous mobile robots, inspection of remote industrial plants in challenging environments becomes feasible. With the ARGOS challenge, oil and gas company TOTAL S.A. initiated an international competition aimed at the development of the first autonomous mobile robot which can safely operate in complete or supervised autonomy over the entire onshore or offshore production site, potentially in hazardous explosive atmospheres and harsh conditions. In this work, the approach of joint Austrian–German Team ARGONAUTS towards solving this challenge is introduced, focussing on autonomous capabilities. These build on functional components developed during prior participation in the RoboCup Rescue Robot League.},}




@ARTICLE{2016:KI-Romay-etal, author = {Alberto Romay and Stefan Kohlbrecher and Oskar von Stryk}, title = {An object template approach to manipulation for humanoid avatar robots for rescue tasks}, journal = {KI - Künstliche Intelligenz}, year = {2016}, volume = {30}, number = {3}, pages = {279-287}, doi = {10.1007/s13218-016-0445-9}, url = {http://rdcu.be/ur29}, pdf = {2016_RomayEtAl_KI.pdf}, abstract = {Nowadays, the first steps towards the use of remote mobile robots to perform rescue tasks in disaster environments have been made possible. However, these environments still present several challenges for robots, which open new possibilities for research and development. For example, fully autonomous robots are not yet suitable for such tasks with a high degree of uncertainty, and purely teleoperated robots require high expertise and high mental workload, as well as fast communication, to be reliable. In this paper, we discuss a middle-ground approach to manipulation that leverages the strengths and abilities of a human supervisor and a semi-autonomous robot while at the same time tackling their weaknesses. This approach is based on the object template concept, which provides an interaction method to rapidly communicate to a remote robot the physical and abstract information for manipulation of the objects of interest. This approach goes beyond current grasp-centered approaches by focusing on the affordance information of the objects and providing flexibility to solve manipulation tasks in versatile ways. Experimental evaluation of the approach is performed using two highly advanced humanoid robots.},}




@INPROCEEDINGS{2016:ICRA_Schillinger-etal, author = {Philipp Schillinger and Stefan Kohlbrecher and Oskar von Stryk}, title = {Human-Robot Collaborative High-Level Control with Application to Rescue Robotics}, year = {2016}, pages = {2796-2802}, month = {May 16-21}, note = {Finalist for Best Human-Robot Interaction Paper Award}, booktitle = {Proc. IEEE Int. Conf. on Robotics and Automation (ICRA)}, doi = {10.1109/ICRA.2016.7487442}, url = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7487442}, abstract = {Motivated by the DARPA Robotics Challenge (DRC), the application of operator assisted (semi-)autonomous robots with highly complex locomotion and manipulation abilities is considered for solving complex tasks in potentially unknown and unstructured environments. Because of the limited a priori knowledge about the state of the environment and tasks needed to achieve a complex mission, a sufficiently complete a priori design of high level robot behaviors is not possible. Most of the situational knowledge required for such behavior design is gathered only during runtime and needs to be interpreted by a human operator. However, current behavior control approaches only allow for very limited adaptation at runtime and no flexible operator interaction. In this paper an approach for definition and execution of complex robot behaviors based on hierarchical state machines is presented, allowing to flexibly change the structure of behaviors on the fly during runtime through assistance of a remote operator. The efficiency of the proposed approach is demonstrated and evaluated not only in an example scenario, but also by application in two robot competitions.},}



@TECHREPORT{2015:ViGIR-Final-Report, author = {David Conner and Stefan Kohlbrecher and Alberto Romay and Alexander Stumpf and Spyros Maniatopoulos and Moritz Schappler and Benjamin Waxler}, title = {Team ViGIR: DARPA Robotics Challenge}, year = {2015}, institution = {TORC Robotics, Technische Universität Darmstadt, Cornell University, Leibniz University Hanover}, url = {http://www.dtic.mil/dtic/tr/fulltext/u2/a623035.pdf}, abstract = {This report documents Team ViGIR’s efforts in the DARPA Robotics Challenge (DRC) between October 2012 and August 2015. Team ViGIR, a multinational collaborative research and development effort that spanned nine time zones, began as a Track B participant in the simulation-based Virtual Robotics Challenge; after placing in the top six, we began working with the Atlas humanoid robotic system developed by Boston Dynamics. Team ViGIR competed in both the DRC Trials and DRC Finals. This report documents our performance, lessons learned along the way, and describes the novel contributions of our team. Specific focus areas include template-based manipulation, footstep planning, and autonomous behavior specification and execution. The software used in the competition and described in this report is being open sourced at http://github.com/team-vigir as part of our commitment to improving the capabilities of humanitarian rescue robotics.},}


@INPROCEEDINGS{2015:HUM-Romay-Driving, author = {A. Romay and A. Stein and M. Oehler and A. Stumpf and S. Kohlbrecher and D.C. Conner and O. von Stryk}, title = {Open source driving controller concept for humanoid robots: Teams Hector and ViGIR at 2015 DARPA Robotics Challenge Finals}, year = {2015}, pages = {video}, month = {Nov. 3-5}, address = {Seoul, Korea}, booktitle = {IEEE-RAS Intl. Conf. on Humanoid Robots}, pdf = {2015_RomayEtAl_Humanoids_DrivingVideo.mp4}, abstract = {Among the eight tasks of the DARPA Robotics Challenge (DRC), the driving task was one of the most challenging. Obstacles in the course prevented straight driving and restricted communications limited the situation awareness of the operator. In this video we show how Team Hector and Team ViGIR successfully completed the driving task with different robot platforms, THOR-Mang and Atlas respectively, but using the same software and compliant steering adapter. Our driving user interface presents to the operator image views from cameras and driving aids such as wheel positioning and the turn radius path of the wheels. The operator uses a standard computer game joystick to command steering wheel angles and gas pedal pressure. Steering wheel angle positions are generated off-line and interpolated on-line in the robot's onboard computer. The compliant steering adapter accommodates end-effector positioning errors. Gas pedal pressure is generated by a binary joint position of the robot's leg. Commands are generated in the operator control station and sent as target positions to the robot. The driving user interface also provides feedback from the current steering wheel position. Video footage with descriptions from the driving interface, the robot's camera and LIDAR perception, and external task monitoring is presented.},}


@INPROCEEDINGS{2015:HUM-Romay, author = {A. Romay and S. Kohlbrecher and D.C. Conner and O. von Stryk}, title = {Achieving versatile manipulation tasks with unknown objects by supervised humanoid robots based on object templates}, year = {2015}, pages = {to appear}, month = {Nov. 3-5}, address = {Seoul, Korea}, booktitle = {IEEE-RAS Intl. Conf. on Humanoid Robots}, pdf = {2015_RomayEtAl_Humanoids.pdf}, abstract = {The investigations of this paper are motivated by the scenario of a supervised semi-autonomous humanoid robot entering a mainly unknown, potentially degraded human environment to perform highly diverse disaster recovery tasks. For this purpose, the robot must be enabled to use any object it can find in the environment as a tool for achieving its current manipulation task. This requires the use of potentially unknown objects as well as known objects for new purposes (e.g. using a drill as a hammer). A recently proposed object template manipulation approach is extended to provide a semi-autonomous humanoid robot, assisted by a remote human supervisor, with the versatility needed to utilize objects in the described manner by applying affordances [1] from other previously known objects. For an Atlas humanoid robot it is demonstrated how a small set of such object templates with well defined affordances can be used to solve manipulation tasks using new unknown objects.},}


@INPROCEEDINGS{2014_rc_towards_highly_reliable, author = {Stefan Kohlbrecher and Florian Kunz and Dorothea Koert and Christian Rose and Paul Manns and Kevin Daun and Johannes Schubert and Alexander Stumpf and Oskar von Stryk}, title = {Towards Highly Reliable Autonomy for Urban Search and Rescue Robots}, year = {2015}, volume = {8992}, pages = {118-129}, publisher = {Springer}, editor = {R.A.C. Bianchi and H.L. Akin and S. Ramamoorthy and K. Sugiura}, series = {Lecture Notes in Artificial Intelligence (LNAI)}, booktitle = {RoboCup 2014: Robot World Cup XVIII}, url = {http://www.springer.com/br/book/9783319186146}, abstract = {This paper describes the approach used by Team Hector Darmstadt for participation in the 2015 RoboCup Rescue League competition. Participating in the RoboCup Rescue competition since 2009, the members of Team Hector Darmstadt focus on exploration of disaster sites using autonomous Unmanned Ground Vehicles (UGVs). The team has been established as part of a PhD program funded by the German Research Foundation at TU Darmstadt and combines expertise from Computer Science and Mechanical Engineering. We give an overview of the complete system used to solve the problem of reliably finding victims in harsh USAR environments. This includes hardware as well as software solutions and diverse topics like locomotion, SLAM, pose estimation, human robot interaction and victim detection. In 2015, the team focuses on improving the rough terrain motion capabilities of used platforms as well as manipulation capabilities. As a contribution to the RoboCup Rescue community, major parts of the used software have been released and documented as open source software for ROS.},}


@ARTICLE{2014:JFR-ViGIR-DRC-Trials, author = {S. Kohlbrecher and A. Romay and A. Stumpf and A. Gupta and O. von Stryk and F. Bacim and D.A. Bowman and A. Goins and R. Balasubramanian and D.C. Conner}, title = {Human-Robot Teaming for Rescue Missions: Team ViGIR's Approach to the 2013 DARPA Robotics Challenge Trials}, journal = {Journal of Field Robotics}, year = {2015}, volume = {32}, number = {3}, pages = {352-377}, note = {First published online 4 Dec 2014}, doi = {10.1002/rob.21558}, url = {http://onlinelibrary.wiley.com/doi/10.1002/rob.21558/full}, pdf = {2014_vigir_jfr_main.pdf}, abstract = {Team ViGIR entered the 2013 DARPA Robotics Challenge (DRC) with a focus on developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster response scenarios. The overarching philosophy was to make our operators full team members and not just simple supervisors. We designed our operator control station (OCS) to allow multiple operators to request and share information as needed to maintain situational awareness under bandwidth constraints, while directing the robot to perform tasks with most planning and control taking place onboard the robot. Given the limited development time we leveraged a number of open source libraries in both our onboard software and our OCS design; this included significant use of the Robot Operating System (ROS) libraries and toolchain. This paper describes the high level approach, including the OCS design and major onboard components, and describes our DRC Trials results. The paper concludes with a number of lessons learned that are being applied to the final phase of the competition and are useful for related projects as well.},}


@INPROCEEDINGS{2014:Humanoids-Stumpf, author = {A. Stumpf and S. Kohlbrecher and D.C. Conner and O. von Stryk}, title = {Supervised Footstep Planning for Humanoid Robots in Rough Terrain Tasks using a Black Box Walking Controller}, year = {2014}, pages = {287-294}, month = {Nov 18-20}, address = {Madrid, Spain}, booktitle = {Proc. IEEE-RAS Intl. Conf. Humanoid Robots}, pdf = {2014_Stumpf_footstep_planning_Humanoids.pdf}, abstract = {In recent years, the numbers of life-size humanoids as well as their mobility capabilities have steadily grown. Stable walking motion and control for humanoid robots are already well investigated research topics. This raises the question how navigation problems in complex and unstructured environments can be solved utilizing a given black box walking controller with proper perception and modeling of the environment provided. In this paper we present a complete system for supervised footstep planning including perception, world modeling, 3D planner and operator interface to enable a humanoid robot to perform sequences of steps to traverse uneven terrain. A proper height map and surface normal estimation are directly obtained from point cloud data. A search-based planning approach (ARA*) is extended to sequences of footsteps in full 3D space (6 DoF). The planner utilizes a black box walking controller without knowledge of its implementation details. Results are presented for an Atlas humanoid robot during participation of Team ViGIR in the 2013 DARPA Robotics Challenge Trials.},}


@INPROCEEDINGS{2014:Humanoids-Romay, author = {A. Romay and S. Kohlbrecher and D.C. Conner and A. Stumpf and O. von Stryk}, title = {Template-Based Manipulation in Unstructured Environments for Supervised Semi-Autonomous Humanoid Robots}, year = {2014}, pages = {979-986}, month = {Nov 18-20}, address = {Madrid, Spain}, booktitle = {Proc. IEEE-RAS Intl. Conf. Humanoid Robots}, pdf = {2014_RomayEtAl_Humanoids.pdf}, abstract = {Humanoid robotic manipulation in unstructured environments is a challenging problem. Limited perception, communications and environmental constraints present challenges that prevent fully autonomous or purely teleoperated robots from reliably interacting with their environment. In order to achieve higher reliability in manipulation we present an approach involving remote human supervision. Strengths from both human operator and humanoid robot are leveraged through a user interface that allows the operator to perceive the remote environment through an aggregated worldmodel based on onboard sensing, while the robot can efficiently receive perceptual and semantic information from the operator. A template based manipulation approach has been successfully applied to the Atlas humanoid robot; we show real world footage of the results obtained in the DARPA Robotics Challenge Trials 2013.},}

Abstract: This paper presents a generic framework for semi-autonomous manipulation for rescue robots. The presented framework concept is the outcome of the RoboCup short-term visit of team TEDUSAR Graz at Team Hector Darmstadt.


@TECHREPORT{2014:hector_rescue_tdp, author = {Stefan Kohlbrecher and Johannes Meyer and Thorsten Graber and Karen Petersen and Oskar von Stryk and Uwe Klingauf}, title = {RoboCupRescue 2014 - Robot League Team Hector Darmstadt (Germany)}, year = {2014}, institution = {Technische Universität Darmstadt}, pdf = {2014_tdp_hector.pdf}, abstract = {This paper describes the approach used by Team Hector Darmstadt for participation in the 2014 RoboCup Rescue League competition. Participating in the RoboCup Rescue competition since 2009, the members of Team Hector Darmstadt focus on exploration of disaster sites using autonomous Unmanned Ground Vehicles (UGVs). The team has been established as part of a PhD program funded by the German Research Foundation at TU Darmstadt and combines expertise from Computer Science and Mechanical Engineering. We give an overview of the complete system used to solve the problem of reliably finding victims in harsh USAR environments. This includes hardware as well as software solutions and diverse topics like locomotion, SLAM, pose estimation, human robot interaction and victim detection. In 2014, the team focuses on integration of newly acquired highly mobile tracked robot platforms as well as manipulation capabilities. As a contribution to the RoboCup Rescue community, major parts of the used software have been released and documented as open source software for ROS.},}


@INPROCEEDINGS{2013:RoboCup, author = {S. Kohlbrecher and J. Meyer and T. Graber and K. Petersen and O. von Stryk and U. Klingauf}, title = {Hector open source modules for autonomous mapping and navigation with rescue robots}, year = {2014}, volume = {8371}, pages = {624-631}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, booktitle = {RoboCup 2013: Robot World Cup XVII}, pdf = {2013_RoboCup_Kohlbrecher_open_source_tools.pdf}, abstract = {Key abilities for robots deployed in urban search and rescue tasks include autonomous exploration of disaster sites and recognition of victims and other objects of interest. In this paper, we present related open source software modules for the development of such complex capabilities which include Hector slam for self-localization and mapping in a degraded urban environment. All modules have been successfully applied and tested originally in the RoboCup Rescue competition. Up to now they have already been re-used and adopted by numerous international research groups for a wide variety of tasks. Recently, they have also become part of the basis of a broader initiative for key open source software modules for urban search and rescue robots.},}

2013


@TECHREPORT{2013:hector_rescue_tdp, author = {Thorsten Graber and Stefan Kohlbrecher and Johannes Meyer and Karen Petersen and Oskar von Stryk and Uwe Klingauf}, title = {RoboCupRescue 2013 - Robot League Team Hector Darmstadt (Germany)}, year = {2013}, institution = {Technische Universität Darmstadt}, pdf = {2013_tdp_hector.pdf}, abstract = {This paper describes the approach used by Team Hector Darmstadt for participation in the 2013 RoboCup Rescue League competition. Participating in the RoboCup Rescue competition since 2009, the members of Team Hector Darmstadt focus on exploration of disaster sites using autonomous Unmanned Ground Vehicles (UGVs). The team has been established as part of a PhD program funded by the German Research Foundation at TU Darmstadt and combines expertise from Computer Science and Mechanical Engineering. We give an overview of the complete system used to solve the problem of reliably finding victims in harsh USAR environments. This includes hardware as well as software solutions and diverse topics like locomotion, SLAM, pose estimation, human robot interaction and victim detection. As a contribution to the RoboCup Rescue community, major parts of the used software have been released and documented as open source software for ROS.},}


@INPROCEEDINGS{2013:SSRR-ViGIR, author = {S. Kohlbrecher and D. Conner and A. Romay and F. Bacim and D. Bowman and O. von Stryk}, title = {Overview of Team ViGIR's Approach to the Virtual Robotics Challenge}, year = {2013}, pages = {1-2}, month = {Oct 21-26}, publisher = {IEEE}, address = {Linkoping, Sweden}, booktitle = {11th IEEE Intl. Symposium on Safety, Security and Rescue Robotics}, doi = {10.1109/SSRR.2013.6719382}, pdf = {2013-IEEE-SSRR_ViGIR_VRC.pdf}, abstract = {With the DARPA Robotics Challenge (DRC), a call to an ambitious multi-part competition was sent out to the robotics community. In this paper, we briefly summarize the approach for addressing the Virtual Robotics Challenge (VRC) where software for control and supervision of a capable humanoid robot must be developed. Team ViGIR, comprising members from the US and Germany, leveraged previous robotics competition experience and a variety of open source tools, to achieve sixth place in the VRC out of 126 registrants, thereby advancing to the next round of the DRC and obtaining an Atlas robot.},}


@INPROCEEDINGS{2012simpar_meyer, author = {Johannes Meyer and Alexander Sendobry and Stefan Kohlbrecher and Uwe Klingauf and Oskar von Stryk}, title = {Comprehensive Simulation of Quadrotor UAVs using ROS and Gazebo}, year = {2012}, pages = {400-412}, booktitle = {3rd Int. Conf. on Simulation, Modeling and Programming for Autonomous Robots (SIMPAR)}, abstract = {Quadrotor UAVs have successfully been used both in research and for commercial applications in recent years and there has been significant progress in the design of robust control software and hardware. Nevertheless, testing of prototype UAV systems still means risk of damage due to failures. Motivated by this, a system for the comprehensive simulation of quadrotor UAVs is presented in this paper. Unlike existing solutions, the presented system is integrated with ROS and the Gazebo simulator. This comprehensive approach allows simultaneous simulation of diverse aspects such as flight dynamics, onboard sensors like IMUs, external imaging sensors and complex environments. The dynamics model of the quadrotor has been parameterized using wind tunnel tests and validated by a comparison of simulated and real flight data. The applicability for simulation of complex UAV systems is demonstrated using LIDAR-based and visual SLAM approaches available as open source software. },}


@INPROCEEDINGS{fast2012, author = {Philipp M. Scholl and Stefan Kohlbrecher and Vinay Sachidananda and Kristof van Laerhoven}, title = {Fast Indoor Radio-Map Building for RSSI-based Localization Systems}, year = {2012}, booktitle = {Demo Paper, International Conference on Networked Sensing Systems}, pdf = {2012_INSS_Scholl_indoor_radio_map.pdf}, abstract = {Wireless indoor localization systems based on RSSI values typically consist of an offline training phase and an online position determination phase. During the offline phase, georeferenced RSSI measurements, called fingerprints, are recorded to build a radiomap of the building. This radiomap is then searched during the position determination phase to estimate another node's location. Usually the radiomap is built manually, either by users pin-pointing their location on a ready-made floorplan or by moving in pre-specified patterns while scanning the network for RSSI values. This cumbersome process leads to inaccuracies in the radiomap. Here, we propose a system to build the indoor map and radiomap simultaneously by using a handheld mapping system employing a laser scanner in an IEEE 802.15.4-compatible network. This makes indoor and radio mapping for wireless localization less cumbersome, faster and more reliable.},}


@TECHREPORT{2012:rescue_tdp, author = {Thorsten Graber and Stefan Kohlbrecher and Johannes Meyer and Karen Petersen and Oskar von Stryk and Uwe Klingauf}, title = {RoboCupRescue 2012 - Robot League Team Hector Darmstadt (Germany)}, year = {2012}, institution = {Technische Universität Darmstadt}, pdf = {2012_tdp_hector.pdf}, abstract = {This paper describes the approach used by Team Hector Darmstadt for participation in the 2012 RoboCup Rescue League competition. Participating in the RoboCup Rescue competition since 2009, the members of Team Hector Darmstadt focus on exploration of disaster sites using autonomous Unmanned Ground Vehicles (UGVs). The team has been established as part of a PhD program funded by the German Research Foundation at TU Darmstadt and combines expertise from Computer Science and Mechanical Engineering. We give an overview of the complete system used to solve the problem of reliably finding victims in harsh USAR environments. This includes hardware as well as software solutions and diverse topics like locomotion, SLAM, pose estimation, human robot interaction and victim detection. As a contribution to the RoboCup Rescue community, major parts of the used software have been released and documented as open source software for ROS.},}


@TECHREPORT{2012:dd_tdp, author = {J. Kuhn and S. Kohlbrecher and K. Petersen and D. Scholz and J. Wojtusch and O. von Stryk}, title = {Team Description for Humanoid KidSize League of RoboCup 2012 }, year = {2012}, institution = {Technische Universität Darmstadt}, pdf = {2012_tdp_hum.pdf}, abstract = {This paper describes the hardware and software design and developments of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2012. The robots are used as a vehicle for research in humanoid robotics and teams of cooperating, autonomous robots. The Humanoid League of RoboCup provides an ideal testbed for investigation of topics like stability, control and versatility of humanoid locomotion, behavior control of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints, perception and world modeling based on very limited human-like, external sensing abilities as well as benchmarking of autonomous robot performance. The methodologies developed by the Darmstadt Dribblers to address reflex and cognitive control layers, image processing, perception, world modeling, behavior and motion control, robot simulation, monitoring, debugging and bio-inspired humanoid robot bodyware are briefly discussed. },}


@INPROCEEDINGS{KohlbrecherMeyerStrykKlingaufFlexibleSlamSystem2011, author = {S. Kohlbrecher and J. Meyer and O. von Stryk and U. Klingauf}, title = {A Flexible and Scalable SLAM System with Full 3D Motion Estimation}, year = {2011}, pages = {155-160}, month = {November 1-5}, address = {Kyoto, Japan}, booktitle = {Proc. IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)}, organization = {IEEE}, doi = {10.1109/SSRR.2011.6106777}, pdf = {2011_SSRR_KohlbrecherMeyerStrykKlingauf_Flexible_SLAM_System.pdf}, abstract = {For many applications in Urban Search and Rescue (USAR) scenarios robots need to learn a map of unknown environments. We present a system for fast online learning of occupancy grid maps requiring low computational resources. It combines a robust scan matching approach using a LIDAR system with a 3D attitude estimation system based on inertial sensing. By using a fast approximation of map gradients and a multi-resolution grid, reliable localization and mapping capabilities in a variety of challenging environments are realized. Multiple datasets showing the applicability in an embedded hand-held mapping system are provided. We show that the system is sufficiently accurate as to not require explicit loop closing techniques in the considered scenarios. The software is available as an open source package for ROS.},}

S. Kohlbrecher, A. Stumpf, O. von Stryk

Grid-Based Occupancy Mapping and Automatic Gaze Control for Soccer Playing Humanoid Robots

Abstract: With advances in walking abilities of autonomous soccer playing humanoid robots, the world modeling and state estimation problem moves into focus, as only sufficiently accurate and robust modeling allows to leverage improved locomotion capabilities. A novel approach for dense grid-based obstacle mapping in dynamic environments with an additional application for automatic gaze control is presented in this paper. It is applicable for soccer playing humanoid robots with external sensing limited to human-like vision and strongly limited onboard computing abilities. The proposed approach allows fusion of information from different sources and efficiently provides a single consistent and robust world state estimate despite strong robot hardware limitations.


@TECHREPORT{2011:rescue_tdp, author = {Thorsten Graber and Stefan Kohlbrecher and Johannes Meyer and Karen Petersen and Oskar von Stryk}, title = {RoboCupRescue 2011 - Robot League Team Hector Darmstadt (Germany)}, year = {2011}, institution = {Technische Universität Darmstadt}, pdf = {2011_tdp_hector.pdf}, abstract = {The team Hector Darmstadt has been established from a PhD program funded by the German Research Foundation at TU Darmstadt. It combines expertise from Computer Science and Mechanical Engineering. The team has successfully participated in the RoboCup Rescue League since 2009, with a focus on autonomous robots. Several team members have already contributed in the past to highly successful teams in the RoboCup Four-Legged and Humanoid League and in UAV competitions.},}


@TECHREPORT{2011:dd_tdp, author = {M. Friedmann and J. Kuhn and S. Kohlbrecher and K. Petersen and D. Scholz and D. Thomas and J. Wojtusch and O. von Stryk}, title = {Darmstadt Dribblers - Team Description for Humanoid KidSize League of RoboCup 2011}, year = {2011}, institution = {Technische Universität Darmstadt}, pdf = {2011_tdp_hum.pdf}, abstract = {This paper describes the hardware and software design and developments of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2011. The robots are used as a vehicle for research in humanoid robotics and teams of cooperating, autonomous robots. The Humanoid League of RoboCup provides an ideal testbed for investigation of topics like stability, control and versatility of humanoid locomotion, behavior control of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints, perception and world modeling based on very limited human-like, external sensing abilities as well as benchmarking of autonomous robot performance. The methodologies developed by the Darmstadt Dribblers to address reflex and cognitive control layers, image processing, perception, world modeling, behavior and motion control, robot simulation, monitoring and debugging are briefly discussed.},}


@INPROCEEDINGS{2010:KohlbrecherVonStryk_WsHumSoc, author = {S. Kohlbrecher and O. von Stryk}, title = {Modeling Observation Uncertainty for Soccer Playing Humanoid Robots}, year = {2010}, month = {Dec. 6 - Dec. 8}, address = {Nashville}, booktitle = {Proc. 5th Workshop on Humanoid Soccer Robots at the 2010 IEEE-RAS Int. Conf. on Humanoid Robots}, keywords = {World Modeling, RoboCup, Probabilistic Robotics, State Estimation}, pdf = {Humanoids10_Kohlbrecher_Stryk_Modeling_Observation_Uncertainty.pdf}, abstract = {In recent years, humanoid soccer robots have shown increasingly robust and fast locomotion, making perception, world modeling and behavior control important. In this paper, we present the world modeling approach of the Darmstadt Dribblers humanoid robot team, which won the competitions in the RoboCup Humanoid KidSize League in 2009 and 2010. The paper focuses on modeling observation uncertainties originating from different contributing factors centrally in one module. This allows different state estimators to use this data in a consistent way, independently of the specific state estimation approach used.},}


@TECHREPORT{2010:dd_tdp, author = {M. Friedmann and T. Hemker and S. Kohlbrecher and K. Petersen and S. Petters and K. Radkhah and M. Risler and D. Scholz and D. Thomas and O. von Stryk}, title = {Darmstadt Dribblers - Team Description for Humanoid KidSize League of RoboCup 2010}, year = {2010}, institution = {Technische Universität Darmstadt}, pdf = {2010-tdp-hum.pdf}, abstract = {This paper describes the hardware and software design of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2010. The robots are used as a vehicle for research in control of locomotion and behavior of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints. The Humanoid League of RoboCup provides an ideal testbed for such aspects of dynamics in motion and autonomous behavior as the problem of generating and maintaining statically or dynamically stable bipedal locomotion is predominant for all types of vision guided motions during a soccer game. A modular software architecture as well as further technologies have been developed for efficient and effective implementation and test of modules for sensing, planning, behavior, and actions of humanoid robots.},}

2009

Stefan Kohlbrecher

A Scalable, Platform-Independent SLAM System for Urban Search and Rescue


@TECHREPORT{2009:rescue_tdp, author = {Micha Andriluka and Martin Friedmann and Stefan Kohlbrecher and Johannes Meyer and Karen Petersen and Christian Reinl and Peter Schau{\ss} and Paul Schnitzspan and Armin Strobel and Dirk Thomas and Oskar von Stryk}, title = {RoboCupRescue 2009 - Robot League Team: Darmstadt Rescue Robot Team (Germany)}, year = {2009}, institution = {Technische Universität Darmstadt}, pdf = {2009_rescue_tdp.pdf}, abstract = {The Darmstadt Rescue Robot Team is a new team established from a PhD program funded by the German Research Foundation at TU Darmstadt. It combines expertise from Computer Science and Mechanical Engineering. Several team members have already contributed in the past to highly successful teams in the RoboCup four-legged and humanoid leagues.},}