Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must be able to adapt its behavior to its human counterpart. Building on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited to such close-contact interaction scenarios. We show that this algorithm improves the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.

Industrial manipulators and unmanned systems often address a large number of tasks with some type of human-in-the-loop method. In these systems, the robot is given responsibility for some portion of the control tasks, but the human retains a role for a variety of reasons: the current technology may not be sufficient for the robot to complete the entire task; there may be safety, liability, or regulatory constraints; or the economics may favor a human-in-the-loop process. One area where human-in-the-loop control is of increasing interest is telecommuting by health-care providers [1] and the general public [2], as well as data gathering for disaster response [3]. These remote presence applications allow humans to perceive and act from a distance through a mobile robot. From an interface perspective, remote presence is more challenging than telesurgery and space telepresence, as the operators are not expected to be highly trained on robots and will be working in dynamic or unpredictable environments.

Domestic and industrial robots, intelligent software agents, virtual-world avatars, and other artificial entities are being created and deployed in our society for various routine and hazardous tasks, as well as for entertainment and companionship. Over the past ten years or so, primarily in response to growing security threats and financial fraud, it has become necessary to accurately authenticate the identities of human beings using biometrics. For similar reasons, it may become essential to determine the identities of nonbiological entities. Trust and security issues associated with the large-scale deployment of military soldier-robots [55], robot museum guides [22], software office assistants [24], humanlike biped robots [67], office robots [5], domestic and industrial androids [93], [76], bots [85], robots with humanlike faces [60], virtual-world avatars [109], and thousands of other man-made entities require the development of decentralized, affordable, automatic, fast, secure, reliable, and accurate means of authenticating these artificial agents. The approach has to be decentralized to allow authority-free authentication, which is important for open-source and collaborative societies. To address these concerns, we proposed [117], [120], [119], [38] the concept of artimetrics, a field of study that identifies, classifies, and authenticates robots, software, and virtual reality agents. In this article, unless otherwise qualified, the term robot refers to both embodied robots (industrial, mobile, tele, personal, military, and service) and virtual robots or avatars, focusing specifically on those that have a human morphology.

The capability to monitor natural phenomena using mobile sensing is a benefit to the Earth science community, given the potentially large impact that humans have on naturally occurring processes. Such phenomena can be readily monitored using networks of mobile sensor nodes that are tasked to regions of interest by scientists. In our article, we home in on a very specific domain, elevation changes in glacial surfaces, to demonstrate a concept applicable to any spatially distributed phenomenon (e.g., temperature or humidity). Our article leverages the sensing of a vision-based odometry system and the design of robotic surveying navigation rules to reconstruct scientific areas of interest, with the goal of monitoring elevation changes in glacial regions. The reconstruction methodology presented makes use of Gaussian process (GP) regression to combine sparse visual landmarks extracted from the glacial scenery into a dense topographic map. Further, this method allows for the natural inclusion of a priori terrain knowledge, such as existing digital elevation models. Results from this system are presented from a three-dimensional (3-D) glacial simulation modeled after actual field trials on Alaskan glaciers. Additionally, we introduce a theory of spatial coverage, in the context of sampling, as achieved by an intelligently navigating agent. Finally, we validate the output of our methodology and show that the reconstructed terrain error complies with accepted mapping standards in the scientific community.
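The core of the reconstruction step described above, GP regression with a prior mean, can be sketched in a few lines of numpy. The kernel choice, hyperparameter values, and the flat-plane prior standing in for an existing digital elevation model are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=50.0, variance=4.0):
    """Squared-exponential kernel over 2-D (x, y) positions (meters)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_elevation(X_train, z_train, X_query, prior_mean, noise=0.1):
    """Posterior mean and variance of elevation at X_query, given sparse
    landmark elevations z_train at X_train. `prior_mean` plays the role
    of a priori terrain knowledge (e.g., a digital elevation model)."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_star = rbf_kernel(X_query, X_train)
    # Regress on the residual between observations and the prior map.
    resid = z_train - prior_mean(X_train)
    alpha = np.linalg.solve(K, resid)
    mean = prior_mean(X_query) + K_star @ alpha
    var = rbf_kernel(X_query, X_query).diagonal() - \
          (K_star * np.linalg.solve(K, K_star.T).T).sum(-1)
    return mean, var

# Hypothetical usage: four sparse visual landmarks, flat 100 m prior.
X = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([100.2, 101.0, 99.5, 100.8])
prior = lambda P: np.full(len(P), 100.0)
mean, var = gp_elevation(X, z, X, prior)
```

Evaluating `gp_elevation` on a dense grid of query points would yield the dense topographic map; the posterior variance additionally indicates where further surveying passes would be most informative.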

The Open Motion Planning Library (OMPL) is a new library for sampling-based motion planning that contains implementations of many state-of-the-art planning algorithms. The library is designed to allow users to easily solve a variety of complex motion planning problems with minimal input. OMPL facilitates the addition of new motion planning algorithms, and it can be conveniently interfaced with other software components. A simple graphical user interface (GUI) built on top of the library, a number of tutorials, demos, and programming assignments are designed to teach students about sampling-based motion planning. The library is also available for use through the Robot Operating System (ROS).
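To give a feel for the kind of algorithm OMPL implements, here is a minimal rapidly exploring random tree (RRT) in plain Python. This is not OMPL's API, just a stdlib-only sketch of the sampling-based planning idea: grow a tree from the start by steering toward random samples, rejecting motions that hit obstacles:

```python
import math, random

def rrt_2d(start, goal, is_free, step=0.5, goal_tol=0.5,
           bounds=(0.0, 10.0), max_iters=5000, seed=0):
    """Minimal RRT in a 2-D square workspace. `is_free(p)` is the
    user-supplied collision check; returns a start-to-goal path or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample the workspace uniformly, with a small goal bias.
        q = goal if rng.random() < 0.05 else (
            rng.uniform(*bounds), rng.uniform(*bounds))
        # Find the nearest node already in the tree.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), q)
        if d == 0.0:
            continue
        # Steer one fixed-length step from the nearest node toward q.
        new = (nx + step * (q[0] - nx) / d, ny + step * (q[1] - ny) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Hypothetical usage: plan around a circular obstacle at (5, 5).
free = lambda p: math.dist(p, (5.0, 5.0)) > 2.0
path = rrt_2d((1.0, 1.0), (9.0, 9.0), free)
```

In OMPL, the state space, sampler, nearest-neighbor structure, and collision checker in this sketch are each pluggable components, which is what lets the same planner code serve many different problems.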

At the End of a Great Year: Listen to the Voice of Young Roboticists [Student's Corner]

European Commission, Industry, and Academia Commit to Bigger and Better Robotics Sector [Regional]

The 2012 IEEE Robotics & Automation Society Safety, Security, and Rescue Robotics Summer School: An Event for the Dissemination of the Challenges and Best-in-Class Capabilities in the SSRR Community [Society News]

John McCarthy is best known as one of the founding fathers of artificial intelligence (AI), a term he coined in 1955, and much has been written about his pioneering work in computer and cognitive science (Figure 1). Less attention has been given to McCarthy's efforts related to robotics, though his AI research instigated and influenced development in the field. As the founder of the Stanford Artificial Intelligence Laboratory (SAIL) and its director from 1965 to 1980, McCarthy participated in research on computer vision, speech recognition, and planning in robotics, collaborated with innovators such as Bernie Roth and Vic Scheinman to develop some of the first robot arms, and advised 30 students, a number of whom have gone on to become leaders in robotics and AI.

Aims & Scope

IEEE Robotics and Automation Magazine is a unique technology publication that is peer-reviewed, readable, and substantive. The magazine is a forum for articles that fall between the academic and theoretical orientation of scholarly journals and vendor-sponsored trade publications.