Technical session talks from ICRA 2012


The conference registration code needed to access these videos is available via PaperPlaza. A step-by-step guide to accessing the videos is available here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Physical Human-Robot Interaction

Natural body gestures, as well as speech dialog, are crucial for human-robot interaction and human-robot symbiosis. We have previously proposed a real-time gesture planning method. In this paper, we give this method more flexibility by adding a motion parameterization function. This function is especially important in multi-person HRI because it adapts gestures to changes in a speaker's and/or object's location. We implement our method in a multi-person HRI system on the android Actroid-SIT and conduct two experiments to estimate the precision of the gestures and the human impressions of the Actroid. Through these experiments, we confirmed that our method gives humans a more sophisticated impression.

Robots are currently used in some applications to enhance human performance, and it is expected that human-robot interactions will become more frequent in the future. In order to achieve effective human augmentation, the cooperation must be very intuitive to the human operator. This paper presents a variable admittance control approach to improve the intuitiveness of the system. The proposed variable admittance law is based on the inference of human intentions using desired velocity and acceleration. Stability issues are discussed and a controller design example is given. Finally, experimental results obtained with a full-scale prototype of an intelligent assist device are presented in order to demonstrate the performance of the algorithm.
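The variable admittance idea can be sketched in a few lines. Everything below is invented for illustration: the paper infers intent from desired velocity and acceleration, whereas this toy version only checks whether the applied force and the current velocity agree in sign, and the masses and damping gains are arbitrary.

```python
# Minimal sketch of a variable admittance law (illustrative only; the
# paper's exact inference rule and gains are not reproduced here).
# Admittance: m * a + c * v = f_h, with damping lowered when the inferred
# human intent suggests an intentional acceleration.

def variable_damping(v, f_h, c_min=5.0, c_max=40.0):
    """Hypothetical damping schedule: low damping when force and velocity
    agree (human wants to accelerate), high when they oppose."""
    if v * f_h > 0:
        return c_min
    return c_max

def admittance_step(x, v, f_h, m=10.0, dt=0.01):
    c = variable_damping(v, f_h)
    a = (f_h - c * v) / m          # rendered acceleration
    v = v + a * dt
    x = x + v * dt
    return x, v

# Push with a constant 20 N force for 1 s from rest:
x, v = 0.0, 0.0
for _ in range(100):
    x, v = admittance_step(x, v, 20.0)
```

With the low damping engaged, the device accelerates toward the terminal velocity f/c_min; reversing the force immediately raises the damping and brakes the motion.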

In this study, we propose a control method for movement assistive robots using measured signals from human users. Some wearable assistive robots have mechanisms that can be adjusted to human kinematics (e.g., adjustable link length). However, since the human body has a complicated joint structure, it is generally difficult to design an assistive robot that mechanically fits human users well. We focus on the development of a control algorithm that generates movements of a wearable assistive robot corresponding to those of the human user, even when the kinematic structures of the robot and the user differ. We first extract the latent kinematic relationship between a human user and the assistive robot. The extracted relationship is then used to control the assistive robot by converting human behavior into the corresponding joint angle trajectories of the robot. The proposed approach is evaluated on a simulated robot model and on our newly developed exoskeleton robot, XoR.

In the near future, as robots become more advanced and affordable, we can envision their use as intelligent assistants in a variety of domains. An exemplar human-robot task identified in many previous works is cooperatively carrying a physically large object. An important task objective is to keep the carried object level. In this work, we propose an admittance-based controller that maintains a level orientation of a cooperatively carried object. The controller raises or lowers its end of the object with a human-like behavior in response to perturbations in the height of the other end of the object (e.g., the end supported by the human user). We also propose a novel tuning procedure, and find that most users are in close agreement about preferring a slightly under-damped controller response, even though they vary in their preferences regarding the speed of the controller's response.

We describe a model of "trust" in human-robot systems that is inferred from their interactions, and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human's expectations, with respect to task demands. Our trust model drives an adaptive mechanism that dynamically adjusts the robot's autonomous behaviors, in order to improve the efficiency of the collaborative team. We illustrate this trust-driven methodology through an interactive visual robot navigation system. This system is evaluated through controlled user experiments and a field demonstration using an aerial robot.

The 20-DOF miniature humanoid "MH-2", designed as a wearable telecommunicator, is a personal telerobot system. An operator can communicate with remote people through the robot, which acts as an avatar of the operator. To date, four prototypes of the wearable telecommunicator, T1, T2, T3, and MH-1, have been developed as research platforms. MH-1 is also a miniature humanoid robot, with 11 DOF for mutual telexistence. Although a human-like appearance might be important for such communication systems, MH-1 is unable to achieve sophisticated gestures due to its lack of both wrist and body motions. In this paper, to tackle this problem, a 3-DOF parallel wire mechanism with a novel wire arrangement is introduced for the wrist, and 3-DOF body motions are also adopted. Consequently, a 20-DOF miniature humanoid with dual 7-DOF arms has been designed and developed. Details of the concept and design are discussed, and fundamental experiments with a developed 7-DOF arm are executed to confirm the mechanical properties.

Robotic Software, Programming Environments, and Frameworks

During operation, a robot may unintentionally tip over, rendering it unable to move normally. The ability to self-right and recover in such situations is crucial to mission completion and safe robot recovery. However, nearly all self-righting solutions to date are point solutions, each designed for a specific platform. As a first step toward a generic solution, this paper presents a framework for analyzing the self-righting capabilities of any generic robot on sloped planar surfaces. Based on the planar assumption, interactions with the ground can be modeled entirely using the robot's convex hull. We begin by analyzing the stability of each robot orientation for all possible joint configurations. From this, we develop a configuration space map, defining stable state sets as nodes and the configurations where discontinuous state changes occur as transitions. Finally, we convert this map into a directed graph and assign costs to the transitions according to changes in potential energy between states. Based upon the ability to traverse this directed graph to the goal state, one can analyze a robot's ability to self-right. To illustrate each step in our framework, we use a two-dimensional robot with a one degree of freedom arm, and then show a case study of iRobot's PackBot. Ultimately, we project that this framework will be useful both for designing robots with the ability to self-right and for maximizing autonomous self-righting capabilities of fielded robots.
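Once the configuration-space map has been converted into a directed graph with potential-energy transition costs, deciding whether (and how cheaply) a robot can self-right reduces to a shortest-path query. A minimal sketch follows; the graph, state names, and costs are invented for illustration and are not taken from the PackBot case study.

```python
# Dijkstra over the self-righting transition graph: nodes are stable state
# sets, directed edges are feasible transitions, edge costs reflect
# potential-energy changes between states.
import heapq

def min_cost_to_upright(graph, start, goal):
    """Return the minimal total transition cost from start to goal,
    or None if the robot cannot reach the goal state."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return None

# Hypothetical 2-D robot: states are resting orientations of the convex hull.
graph = {
    "on_back": [("on_side", 3.0)],
    "on_side": [("upright", 1.5), ("on_back", 0.5)],
}
cost = min_cost_to_upright(graph, "on_back", "upright")  # 4.5
```

An unreachable goal (e.g., no outgoing transitions from the start state) yields None, which corresponds to a robot that cannot self-right from that configuration.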

Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state-of-the-art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and demonstrate the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios.

The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range- and color-data have been investigated and successfully used in varying robotic applications. Most of these systems suffer from the problems of noise in the range-data and resolution mismatch between the range sensor and the color cameras, since the resolution of current range sensors is much less than the resolution of color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly/repetitively textured scenes, or if the scene exhibits complex self-occlusions. Range sensors provide coarse depth information regardless of presence/absence of texture. The use of a calibrated system, composed of a time-of-flight (TOF) camera and of a stereoscopic camera pair, allows data fusion thus overcoming the weaknesses of both individual sensors. We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm which projects the TOF data onto the stereo image pair as an initial set of correspondences. These initial "seeds" are then propagated using a similarity score based on a Bayesian model which combines an image similarity score with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras at hand.
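The seed-growing idea can be illustrated with a toy best-first propagation. The matching score below is a plain absolute-difference measure rather than the paper's Bayesian model, the depth priors are ignored, and the images, seed, and disparity search window (parent disparity +/- 1) are all invented for illustration.

```python
# Toy seed growing: TOF points projected into the stereo pair act as
# initial (row, col, disparity, score) seeds. The best-scoring pixel is
# finalized and its 4-neighbours are matched at disparities near its own.
import heapq

def grow_seeds(left, right, seeds, max_rows, max_cols):
    disp = {}
    pq = [(-score, r, c, d) for (r, c, d, score) in seeds]
    heapq.heapify(pq)
    while pq:
        _, r, c, d = heapq.heappop(pq)
        if (r, c) in disp:
            continue
        disp[(r, c)] = d            # finalize best-scoring hypothesis
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < max_rows and 0 <= cc < max_cols):
                continue
            if (rr, cc) in disp:
                continue
            best = None               # search near the parent's disparity
            for dd in (d - 1, d, d + 1):
                if 0 <= cc - dd < max_cols:
                    s = -abs(left[rr][cc] - right[rr][cc - dd])
                    if best is None or s > best[0]:
                        best = (s, dd)
            if best is not None:
                heapq.heappush(pq, (-best[0], rr, cc, best[1]))
    return disp

# Synthetic pair: the right image is the left shifted by disparity 2.
left = [[10 * r + c for c in range(6)] for r in range(4)]
right = [[10 * r + c + 2 for c in range(6)] for r in range(4)]
disp = grow_seeds(left, right, [(1, 3, 2, 0.0)], 4, 6)
```

Starting from a single correct seed, the correct disparity spreads across the well-matched interior because perfect matches always outrank imperfect ones in the priority queue.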

Advancements in robotics have led to an ever-growing repertoire of software capabilities (e.g., recognition, mapping, and object manipulation). However, as robotic capabilities grow, the complexity of operating and interacting with such robots (such as through speech, gesture, scripting, or programming) increases. Language-based communication can offer users the ability to work with physically and computationally complex robots without diminishing the robot's inherent capability. However, it remains an open question how to build a common ground between natural language and goal-directed robot actions, particularly in a way that scales with the growth of robot capabilities. We examine using semantic frames -- a linguistics concept which describes scenes being acted out -- as a conceptual stepping stone between natural language and robot action. We examine the scalability of this solution through the development of RoboFrameNet, a generic language-to-action pipeline for ROS (the Robot Operating System) that abstracts verbs and their dependents into semantic frames, then grounds these frames into actions. We demonstrate the framework through experiments with the PR2 and Turtlebot robot platforms and consider the future scalability of the approach.
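The verb-to-frame-to-action abstraction can be sketched as a two-stage lookup. The frame names, role names, and action strings below are invented for illustration and are not the actual RoboFrameNet schema or ROS interface.

```python
# Toy frame-based language-to-action grounding: a parsed verb and its
# dependents are abstracted into a semantic frame, and the frame (not the
# verb itself) is what gets grounded to a robot action. New verbs can be
# added by mapping them to existing frames, which is what makes the
# abstraction scale.

FRAMES = {
    "go":   ("motion", ["goal"]),          # verb -> (frame, frame roles)
    "grab": ("manipulation", ["theme"]),
}

ACTIONS = {                                 # frame -> action grounding
    "motion":       lambda args: f"navigate_to({args['goal']})",
    "manipulation": lambda args: f"pick_up({args['theme']})",
}

def ground(verb, dependents):
    """Map a parsed verb + dependents to an action string via its frame."""
    if verb not in FRAMES:
        return None                          # no frame evoked by this verb
    frame, roles = FRAMES[verb]
    args = {role: dependents.get(role) for role in roles}
    return ACTIONS[frame](args)

cmd = ground("go", {"goal": "kitchen"})      # "navigate_to(kitchen)"
```

Adding a synonym such as "fetch" would only require one new entry in the verb table, while the frame and its grounding stay untouched.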

This paper proposes a new method for occupancy map building using a mixture of Gaussian processes. We treat occupancy mapping as a binary classification problem, positions being occupied or not, and apply Gaussian processes. In particular, since the computational complexity of Gaussian processes grows as O(n^3), where n is the number of data points, we divide the training data into small subsets and apply a mixture of Gaussian processes. Our map building procedure consists of three steps. First, we cluster the acquired data by grouping laser hit points on the same line into the same cluster. Then, we build local occupancy maps using Gaussian processes with the clustered data. Finally, the local occupancy maps are merged into one using a mixture of Gaussian processes. Simulation results are compared with previous research, demonstrating the benefits of the approach.
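The merging step can be illustrated with a precision-weighted combination of local estimates, a common way of fusing Gaussian-process experts. Note this is a simplified stand-in for the paper's mixture model (which may use a different gating scheme), and the numbers below are invented.

```python
# Fuse local occupancy estimates: each local GP returns a mean and variance
# of an occupancy "logit" at a query point; estimates are combined with
# precision weights, so confident local maps dominate uncertain ones.
import math

def merge_local_estimates(estimates):
    """estimates: list of (mean, variance) from the local GP maps covering
    a query point. Returns the fused (mean, variance)."""
    precision = sum(1.0 / var for (_, var) in estimates)
    mean = sum(mu / var for (mu, var) in estimates) / precision
    return mean, 1.0 / precision

def occupancy_probability(mean):
    # squash the fused logit through a sigmoid to get P(occupied)
    return 1.0 / (1.0 + math.exp(-mean))

m, v = merge_local_estimates([(2.0, 0.5), (1.0, 1.0)])
p = occupancy_probability(m)
```

The fused variance is always smaller than any individual one, reflecting that overlapping local maps reinforce each other.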

Minimally invasive interventions I

This paper reports the design requirements, practical challenges, and a preliminary design for a magnetic resonance imaging (MRI) guided, three degree-of-freedom (DOF) transrectal prostate intervention robot. We show the operational space constraints imposed by patient anatomy when performing transrectal prostate procedures in a magnetic resonance (MR) scanner bore, as determined by analyzing data from 12 patient procedures with a device. We also describe practical challenges arising in designing a compact actuated MR compatible needle placement robot for MRI-guided transrectal needle intervention in the prostate. We present a preliminary design which aims to improve upon previous un-actuated and partially-actuated devices with the addition of an actuated needle insertion module. Such an enhancement enables needle driving to take place inside the MR scanner bore and thereby may reduce the overall procedure time -- thus improving patient comfort and reducing the likelihood of needle targeting errors resulting from patient motion. We show that it is feasible to add such actuation while reducing the footprint of the device in accordance with the anatomical and MR scanner constraints and practical design requirements.

Robot-assisted surgical procedures are perpetually evolving due to potential improvements in patient treatment and healthcare cost reduction. Integrating an imaging modality intraoperatively further strengthens these procedures by incorporating information pertaining to the area of intervention. Such information needs to be effectively rendered to the operator as a human-in-the-loop requirement. In this work, we propose a guidance approach that uses real-time MRI to assist the operator in performing a robot-assisted procedure in a beating heart. Specifically, this approach provides both real-time visualization and force-feedback based guidance for maneuvering an interventional tool safely inside the dynamic environment of a heart's left ventricle. The functionality of this approach was evaluated on a simulated scenario of transapical aortic valve replacement and demonstrated improvement in control and manipulation, providing effective and accurate assistance to the operator in real-time.

The novel concept of Trans-abdominal Active Magnetic Linkage for laparoendoscopic single site surgery has the potential to enable the deployment of a bimanual robotic platform through a single laparoscopic incision. The main advantage of this approach consists in shifting the actuators outside the body of the patient, while transmitting a controlled robotic motion by magnetic field across the abdomen without the need for dedicated incisions. An actuation mechanism based on this approach can be comprised of multiple anchoring and actuation units, mixed depending upon the specific needs. A static model providing the anchoring and actuation forces and torques available at the internal side of the magnetic link was developed to provide a tool for navigating the many possibilities of such an open-ended design approach. The model was assessed through bench top experiments, showing a maximum relative error of 4% on force predictions. An example of a single degree of freedom manipulator actuated with the proposed concept and compatible with a 12-mm access port is able to provide an anchoring force of 3.82 N and an actuation force of 2.95 N.

This paper presents a method for estimating drive cable length in an underactuated, hyper-redundant, snake-like manipulator. The continuum manipulator was designed for the surgical removal of osteolysis behind total hip arthroplasties. Two independently actuated cables in a pull-pull configuration control the compliant manipulator in a single plane. Using a previously developed kinematic model, we present a method for estimating drive cable displacement for a given manipulator configuration. This calibrated function is then inverted to explore the ability to achieve local manipulator configurations from prescribed drive cable displacements without the use of continuous visual feedback. Results demonstrate effectiveness in predicting drive cable lengths from manipulator configurations. Preliminary results also show an ability to achieve manipulator configurations from prescribed cable lengths with reasonable accuracy without continuous visual feedback.
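The cable-length-to-configuration relationship can be illustrated under a planar constant-curvature assumption; this is an assumption made here for the sketch, not necessarily the paper's calibrated kinematic model, and the dimensions are invented.

```python
# Under a planar constant-curvature assumption, bending a segment of
# backbone length L by a total angle theta changes the two antagonistic
# (pull-pull) cable lengths by -r*theta and +r*theta, where r is the
# cable offset from the backbone. The map is linear, so it inverts
# trivially, mirroring the paper's invert-the-model strategy.

def cable_lengths(L, r, theta):
    """Pull-pull cable lengths for a planar constant-curvature segment."""
    return L - r * theta, L + r * theta

def theta_from_displacement(delta, r):
    """Inverse model: bend angle from the shortened cable's displacement
    delta (positive = cable pulled in)."""
    return delta / r

# 50 mm backbone, cables offset 2 mm, bent by 0.5 rad:
l1, l2 = cable_lengths(0.05, 0.002, 0.5)
```

Pulling one cable in by 1 mm at a 2 mm offset therefore corresponds to a 0.5 rad bend under this model.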

The focus of this paper is the design and evaluation of a robust drive mechanism intended to robotically steer a thermal ablation electrode or similar percutaneous instrument. We present the design of an improved screw-spline drive mechanism based on a profiled threaded shaft and nut that reduces the part count and simplifies manufacturing and assembly. To determine the optimal parameters for the profile shape, an analytical expression was derived that relates the tolerance between the nut and shaft to the angular backlash, which was validated using SolidWorks. We outline the forward kinematics of a steering mechanism that is based on the concept of substantially straightening a pre-curved Nitinol stylet by retracting it into a concentric outer cannula, and re-deploying it at a different position. This model was compared to data collected during targeting experiments in ex-vivo bovine tissue samples, in which the distal tip of the stylet was repositioned and its location recorded with CT imaging. Results demonstrated that the drive mechanism operated robustly and that targeting errors of less than 2 mm were achieved.
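The deploy-and-retract steering concept can be sketched with simple arc geometry. This treats the exposed portion of the pre-curved stylet as a circular arc of fixed radius, which is an idealization made for this sketch (the paper's forward kinematics may differ), and the dimensions are invented.

```python
# Planar arc model of a pre-curved stylet deployed from a straight cannula:
# exposing a length l of stylet pre-curved to radius R sweeps an arc angle
# phi = l / R, giving the tip an axial reach of R*sin(phi) and a lateral
# deflection of R*(1 - cos(phi)) from the cannula axis.
import math

def stylet_tip_offset(l_exposed, R):
    """Tip position (axial, lateral) relative to the cannula exit point."""
    phi = l_exposed / R
    axial = R * math.sin(phi)
    lateral = R * (1.0 - math.cos(phi))
    return axial, lateral

# 20 mm of stylet exposed, 50 mm pre-curve radius:
axial, lateral = stylet_tip_offset(0.02, 0.05)
```

Deploying more stylet increases the lateral deflection, while a gentler pre-curve (larger R) flattens the tip path toward the cannula axis, which is the knob the steering mechanism exploits when repositioning the tip.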