
Visuospatial Skill Learning for Robots

Handling Uncertainty and Networked Structure in Robot Control

A novel skill learning approach is proposed that allows a robot to acquire human-like visuospatial skills for object manipulation tasks. Visuospatial skills are attained by observing spatial relationships among objects through demonstrations. The proposed Visuospatial Skill Learning (VSL) is a goal-based approach that focuses on achieving a desired goal configuration of objects relative to one another while maintaining the sequence of operations. VSL is capable of learning and generalizing multi-operation skills from a single demonstration, while requiring minimal prior knowledge about the objects and the environment. In contrast to many existing approaches, VSL offers simplicity, efficiency, and user-friendly human-robot interaction. We also show that VSL can be easily extended to 3D object manipulation tasks, simply by employing point cloud processing techniques. In addition, a robot learning framework, VSL-SP, is proposed by integrating VSL, imitation learning, and a conventional planning method. In VSL-SP, the sequence of performed actions is learned using VSL, while the sensorimotor skills are learned using a conventional trajectory-based learning approach. Such integration easily extends robot capabilities to novel situations, even for users without programming ability. In VSL-SP, the internal planner of VSL is integrated with an existing action-level symbolic planner. Using the underlying constraints of the task and the symbolic predicates identified by VSL, the symbolic representation of the task is updated. The planner therefore maintains a generalized representation of each skill as a reusable action, which can be used in planning and performed independently during the learning phase. The proposed approach is validated through several real-world experiments.
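As a toy illustration of the goal-based idea (not the authors' implementation, which works on images and point clouds), the sketch below encodes each demonstrated operation as the moved object's offset from a reference object, and replays the sequence in a new scene. All object names and coordinates are hypothetical:

```python
import numpy as np

def learn_vsl(demo_ops):
    """From one demonstration, store each operation's goal: the moved object's
    position relative to a reference object (a simplified stand-in for the
    image-based spatial relations VSL actually captures)."""
    skill = []
    for obj, ref, obj_pos_after, ref_pos in demo_ops:
        skill.append((obj, ref, np.asarray(obj_pos_after) - np.asarray(ref_pos)))
    return skill

def reproduce_vsl(skill, scene):
    """Replay the learned sequence in a NEW scene: place each object at the
    demonstrated offset from its reference, in the demonstrated order."""
    plan = []
    for obj, ref, offset in skill:
        goal = np.asarray(scene[ref]) + offset
        plan.append((obj, tuple(goal)))
        scene[obj] = goal            # the scene changes as operations execute
    return plan

# One demonstration: put the cup next to the plate, then the spoon by the cup.
demo = [("cup",   "plate", (0.30, 0.10), (0.20, 0.10)),
        ("spoon", "cup",   (0.35, 0.10), (0.30, 0.10))]
skill = learn_vsl(demo)

# Novel scene: the plate has moved; the plan generalises the goal configuration.
scene = {"plate": (0.50, 0.40), "cup": (0.10, 0.10), "spoon": (0.90, 0.20)}
plan = reproduce_vsl(skill, scene)
```

Because only relative goal configurations are stored, the single demonstration generalises to arbitrary placements of the reference objects.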

This paper presents modular dynamics for dual-arms, expressed in terms of the kinematics and dynamics of each of the stand-alone manipulators. The two arms are controlled as a single manipulator in a task space that is relative to the two end-effectors of the dual-arm robot. A modular relative Jacobian, derived in a previous work, is used, which is expressed in terms of the stand-alone manipulator Jacobians. The task-space inertia is expressed in terms of the Jacobians and dynamics of each of the stand-alone manipulators. When manipulators are combined and controlled as a single manipulator, as in the case of dual-arms, our proposed approach does not require an entirely new dynamics model for the resulting combined manipulator; instead, the existing Jacobians and dynamics models of the stand-alone manipulators are used to derive the dynamics model of the combined manipulator. A dual-arm KUKA is used in the experimental implementation.
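A minimal numerical sketch of the modularity idea: a relative Jacobian is assembled from the two stand-alone Jacobians, and the task-space inertia is computed from the block-diagonal joint-space inertia of the two arms. The planar 2-link models and inertia values are hypothetical, and the wrench transformation terms of the full formulation are omitted here:

```python
import numpy as np

def planar_jacobian(q, lengths):
    """2x2 positional Jacobian of a planar 2-link arm (hypothetical toy model)."""
    l1, l2 = lengths
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def relative_jacobian(J_a, J_b):
    """Modular relative Jacobian sketch: maps the stacked joint velocities of
    both stand-alone arms to the velocity of arm B's end-effector relative to
    arm A's (wrench transformation terms omitted in this sketch)."""
    return np.hstack([-J_a, J_b])

def task_space_inertia(J, M):
    """Operational-space inertia Lambda = (J M^-1 J^T)^-1, built from the
    block-diagonal joint-space inertia of the two stand-alone arms."""
    return np.linalg.inv(J @ np.linalg.inv(M) @ J.T)

# Example: two identical planar arms with block-diagonal joint-space inertia.
q_a, q_b = np.array([0.3, 0.5]), np.array([-0.2, 0.8])
J_a = planar_jacobian(q_a, (1.0, 0.8))
J_b = planar_jacobian(q_b, (1.0, 0.8))
J_rel = relative_jacobian(J_a, J_b)          # 2x4: both arms as one manipulator
M = np.block([[np.eye(2)*2.0, np.zeros((2, 2))],
              [np.zeros((2, 2)), np.eye(2)*2.0]])
Lambda = task_space_inertia(J_rel, M)        # 2x2 relative task-space inertia
```

No dynamics model of the combined 4-DOF system is ever formed; only the stand-alone Jacobians and inertias enter the computation.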

Robot Learning for Persistent Autonomy

Chapter 1. Robot Learning for Persistent Autonomy. Petar Kormushev and Seyed Reza Ahmadzadeh. Abstract: Autonomous robots are not very good at being autonomous. They work well in structured environments, but fail quickly in the real ...

In this paper, a robot learning approach is proposed which integrates Visuospatial Skill Learning, Imitation Learning, and conventional planning methods. In our approach, the sensorimotor skills (i.e., actions) are learned through a learning from demonstration strategy. The sequence of performed actions is learned through demonstrations using Visuospatial Skill Learning. A standard action-level planner is used to represent a symbolic description of the skill, which allows the system to represent the skill in a discrete, symbolic form. The Visuospatial Skill Learning module identifies the underlying constraints of the task and extracts symbolic predicates (i.e., action preconditions and effects), thereby updating the planner representation while the skills are being learned. Therefore the planner maintains a generalized representation of each skill as a reusable action, which can be planned and performed independently during the learning phase. Preliminary experimental results on the iCub robot are presented.

It is essential for the successful completion of a robot object grasping and manipulation task to accurately sense the manipulated object's pose. Typically, computer vision is used to obtain this information, but it may not be available or reliable in certain situations. This paper presents a global optimisation method in which tactile and force sensing, together with the robot's proprioceptive data, are used to find an object's pose. The method is used either to improve an estimate of the object's pose given by vision or to estimate it globally when no vision is available. Results show that the proposed method consistently improves an initial estimate (e.g. from vision) and is also able to find the object's pose without any prior estimate. To demonstrate the performance of the algorithm, an experiment is carried out in which the robot is handed a small object (a pencil) and inserts it into a narrow hole without any use of vision.
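The following sketch shows the flavour of such a global search on a 2-D toy problem: candidate object poses are scored by how well they explain a set of contact points, and an exhaustive grid search (a crude stand-in for the paper's optimiser) recovers the pose. The object model, grids, and poses are all hypothetical:

```python
import numpy as np
from itertools import product

# Hypothetical 2-D toy: the object model is a set of surface points in its frame.
MODEL = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.05], [0.0, 0.05]])

def transform(points, pose):
    """Apply a planar pose (x, y, theta) to the model points."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return points @ R.T + np.array([x, y])

def contact_cost(pose, contacts):
    """Sum of squared distances from each tactile contact to the nearest model point."""
    pts = transform(MODEL, pose)
    d = np.linalg.norm(contacts[:, None, :] - pts[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))

def global_pose_search(contacts, xs, ys, ths):
    """Exhaustive grid search: a crude stand-in for the paper's global optimiser."""
    return min(product(xs, ys, ths), key=lambda p: contact_cost(p, contacts))

# Simulate contacts from a "true" pose and recover it on the grid.
true_pose = (0.2, 0.1, 0.0)
contacts = transform(MODEL, true_pose)
grid = np.linspace(0.0, 0.4, 21)
est = global_pose_search(contacts, grid, grid, [0.0])
```

Because the search is global, no initial estimate (e.g. from vision) is required; when one is available, it simply narrows the search grids.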

Recent efforts in the field of intervention autonomous underwater vehicles (I-AUVs) have started to show promising results in simple manipulation tasks. However, there is still a long way to go to reach the complexity of the tasks carried out by ROV pilots. This paper proposes an intervention framework based on parametric Learning by Demonstration (p-LbD) techniques in order to acquire multiple strategies to perform an autonomous intervention task adapted to different environment conditions. The manipulation skills of a pilot are acquired through a set of demonstrations performed under different environment circumstances, in our case different levels of water current. The proposed algorithm is able to learn these different strategies and, depending on the estimated water current, autonomously reproduce a combined strategy to perform the task. The p-LbD algorithm, as well as its interplay with the rest of the modules that take part in the proposed framework, is described in this paper. We also present results on a free-floating valve turning task, using the Girona 500 I-AUV equipped with a manipulator and a customized end-effector. The obtained results show the feasibility of the p-LbD algorithm for performing autonomous intervention tasks, combining the learned strategies depending on the environment conditions.
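A minimal sketch of the parametric idea (not the authors' p-LbD model): demonstrations recorded under different water currents are blended with inverse-distance weights for the estimated current at reproduction time. The trajectory representation and current values are made up:

```python
import numpy as np

def blend_strategies(demos, query_current):
    """Parametric blending sketch: `demos` maps a water-current value to a
    demonstrated trajectory (here a (T x DOF) array). The reproduction is an
    inverse-distance-weighted combination for the estimated current.
    (The paper's p-LbD model is richer; this only shows the parametric idea.)"""
    currents = np.array(sorted(demos))
    trajs = np.stack([demos[c] for c in currents])
    d = np.abs(currents - query_current)
    if np.any(d < 1e-9):                  # exact match: return that demonstration
        return trajs[int(np.argmin(d))]
    w = 1.0 / d
    w /= w.sum()
    return np.tensordot(w, trajs, axes=1)

# Trajectories demonstrated under 0.0 and 1.0 m/s current (toy 1-DOF ramps).
demos = {0.0: np.linspace(0.0, 1.0, 5)[:, None],
         1.0: np.linspace(0.0, 2.0, 5)[:, None]}
traj = blend_strategies(demos, 0.5)       # estimated current halfway between
```

For a current between two demonstrated conditions, the reproduced strategy interpolates between the corresponding demonstrations.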

Autonomous manipulation of objects requires reliable information on robot-object contact state. Underwater environments can adversely affect sensing modalities such as vision, making them unreliable. In this paper we investigate underwater robot-object contact perception between an autonomous underwater vehicle and a T-bar valve using a force/torque sensor and the robot's proprioceptive information. We present an approach in which machine learning is used to learn a classifier for different contact states, namely, a contact aligned with the central axis of the valve, an edge contact, and no contact. To distinguish between different contact states, the robot performs an exploratory behavior that produces distinct patterns in the force/torque sensor. The sensor output forms a multidimensional time-series. A probabilistic clustering algorithm is used to analyze the time-series. The algorithm dissects the multidimensional time-series into clusters, producing a one-dimensional sequence of symbols. The symbols are used to train a hidden Markov model, which is subsequently used to predict novel contact conditions. We show that the learned classifier can successfully distinguish the three contact states with an accuracy of 72% ± 12%.
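The classification stage can be sketched as follows, assuming the clustering step has already turned the force/torque time-series into discrete symbols. One discrete HMM per contact state is taken as given here (the paper learns them from data; all parameters below are made up), and a sequence is labelled by the highest forward-algorithm likelihood:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a symbol sequence under a discrete HMM, computed with
    the forward algorithm (scaled at each step to avoid underflow)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

def classify(obs, models):
    """Pick the contact-state HMM with the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Hypothetical 2-state HMMs over 3 cluster symbols, one per contact condition.
# (In the paper the symbols come from probabilistic clustering of F/T data.)
models = {
    "aligned": (np.array([0.9, 0.1]),
                np.array([[0.8, 0.2], [0.2, 0.8]]),
                np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]])),
    "edge":    (np.array([0.5, 0.5]),
                np.array([[0.6, 0.4], [0.4, 0.6]]),
                np.array([[0.1, 0.7, 0.2], [0.2, 0.6, 0.2]])),
    "none":    (np.array([0.5, 0.5]),
                np.array([[0.5, 0.5], [0.5, 0.5]]),
                np.array([[0.1, 0.1, 0.8], [0.2, 0.1, 0.7]])),
}

seq = [2, 2, 2, 1, 2, 2]   # mostly symbol 2, which "none" emits most often
pred = classify(seq, models)
```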

Modular Relative Jacobian for Dual-Arms and the Wrench Transformation Matrix

Proceedings of the 2015 7th IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and Robotics, Automation and Mechatronics (RAM), Publisher: IEEE

A modular relative Jacobian was recently derived and is expressed in terms of the individual Jacobians of the stand-alone manipulators. It includes a wrench transformation matrix, which was not shown in earlier expressions. This paper is an experimental extension of that recent work, which showed that at higher angular end-effector velocities the contribution of the wrench transformation matrix cannot be ignored. In this work, we investigate the dual-arm force control performance without necessarily driving the end-effectors at higher angular velocities. We compare experimental results for two cases: a modular relative Jacobian with and without the wrench transformation matrix. The experimental setup is a dual-arm system consisting of two KUKA LWR robots. Two experimental tasks are used: relative end-effector motion and coordinated independent tasks, where a force controller is implemented in both tasks. Furthermore, we show in an experimental design that the use of a relative Jacobian affords less accurate task specifications for a highly complicated task requirement for both end-effectors of the dual-arm. Experimental results on the force control performance are compared and analyzed.

Encoders have been an inseparable part of robots since the very beginning of modern robotics in the 1950s. As a result, the foundations of robot control are built on the concepts of kinematics and dynamics of articulated rigid bodies, which rely on explicitly measuring the robot configuration in terms of joint angles, done by encoders. In this paper, we propose a radically new concept for controlling robots called Encoderless Robot Control (EnRoCo). The concept is based on our hypothesis that it is possible to control a robot without explicitly measuring its joint angles, by measuring instead the effects of the actuation on its end-effector. To prove the feasibility of this unconventional control approach, we propose a proof-of-concept control algorithm for encoderless position control of a robot's end-effector in task space. We demonstrate a prototype implementation of this controller in a dynamics simulation of a two-link robot manipulator. The prototype controller is able to successfully control the robot's end-effector to reach a reference position, as well as to continuously track a desired trajectory. Notably, we demonstrate how this novel controller can cope with something that traditional control approaches fail to do: adapt on-the-fly to changes in the kinematics of the robot, such as changing the lengths of the links.

This paper challenges the well-established assumption in robotics that in order to control a robot it is necessary to know its kinematic information, that is, the arrangement of links and joints, the link dimensions and the joint positions. We propose a kinematic-free robot control concept that does not require any prior kinematic knowledge. The concept is based on our hypothesis that it is possible to control a robot without explicitly measuring its joint angles, by measuring instead the effects of the actuation on its end-effector. We implement a proof-of-concept encoderless robot controller and apply it to the position control of a physical 2-DOF planar robot arm. The prototype controller is able to successfully control the robot to reach a reference position, as well as to track a continuous reference trajectory. Notably, we demonstrate how this novel controller can cope with something that traditional control approaches fail to do: adapt to drastic kinematic changes such as 100% elongation of a link, a 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links.
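The hypothesis behind both encoderless-control papers can be sketched as follows: the controller never reads joint angles or a kinematic model; it estimates a local actuation-to-end-effector Jacobian by probing the actuators and observing only the end-effector (e.g. via external tracking), then takes resolved-rate steps. The plant model, gain, and probe size below are hypothetical, not the authors' implementation:

```python
import numpy as np

class TwoLinkPlant:
    """Simulated robot: internally kinematic, but it exposes ONLY the
    end-effector position to the controller (no joint encoders)."""
    def __init__(self, l1=1.0, l2=0.8):
        self.l = (l1, l2)
        self.q = np.array([0.4, 0.6])
    def apply(self, dq):            # actuation command: joint increments
        self.q = self.q + dq
    def ee(self):                   # the only feedback the controller sees
        l1, l2 = self.l
        return np.array([l1*np.cos(self.q[0]) + l2*np.cos(self.q[0]+self.q[1]),
                         l1*np.sin(self.q[0]) + l2*np.sin(self.q[0]+self.q[1])])

def estimate_jacobian(plant, eps=1e-4):
    """Probe each actuator and observe the end-effector displacement;
    no kinematic model or joint measurement is used."""
    J = np.zeros((2, 2))
    for i in range(2):
        before = plant.ee()
        probe = np.zeros(2)
        probe[i] = eps
        plant.apply(probe)
        J[:, i] = (plant.ee() - before) / eps
        plant.apply(-probe)         # undo the probe
    return J

def encoderless_reach(plant, target, steps=200, gain=0.5):
    """Resolved-rate steps using only the probed Jacobian and ee feedback."""
    for _ in range(steps):
        err = target - plant.ee()
        if np.linalg.norm(err) < 1e-4:
            break
        J = estimate_jacobian(plant)
        plant.apply(gain * np.linalg.pinv(J) @ err)
    return plant.ee()

plant = TwoLinkPlant()
final = encoderless_reach(plant, np.array([1.2, 0.6]))
```

Because the Jacobian is re-estimated at every step from observed effects of actuation, changing a link length in the plant would be absorbed automatically, which is the adaptation property both abstracts highlight.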

We propose a new algorithm capable of online regeneration of walking gait patterns. The algorithm uses a nonlinear optimization technique to find step parameters that will bring the robot from the present state to a desired state. It modifies online not only the footstep positions, but also the step timing, in order to maintain dynamic stability during walking. The inclusion of step time modification extends the robustness against rarely addressed disturbances, such as pushes towards the stance foot. The controller is able to recover dynamic stability regardless of the source of the disturbance (e.g. model inaccuracy, reference tracking error or external disturbance). We describe the robot state estimation and center-of-mass feedback controller necessary to realize stable locomotion on our humanoid platform COMAN. We also present a set of experiments performed on the platform that show the performance of the feedback controller and of the gait pattern regenerator. We show how the robot is able to cope with a series of pushes by adjusting step times and positions.
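A 1-D sketch of the step-regeneration idea, using the linear inverted pendulum's divergent component of motion and a brute-force search in place of the paper's nonlinear optimiser; the constants and cost weights are illustrative only:

```python
import numpy as np
from itertools import product

OMEGA = 3.0   # LIPM natural frequency sqrt(g/z_c); hypothetical value

def dcm_at(t, xi0, p):
    """1-D linear inverted pendulum divergent component of motion:
    xi(t) = (xi0 - p) * exp(omega t) + p, with the stance foot at p."""
    return (xi0 - p) * np.exp(OMEGA * t) + p

def regenerate_step(xi0, p_stance, offset_des, T_nom, L_nom):
    """Search over step time T and next footstep position so that the DCM
    offset at touchdown matches the desired value, while staying close to
    the nominal timing and step length (weights are illustrative)."""
    def cost(T, p_next):
        offset = dcm_at(T, xi0, p_stance) - p_next
        return ((offset - offset_des) ** 2
                + 0.01 * (T - T_nom) ** 2
                + 0.001 * (p_next - p_stance - L_nom) ** 2)
    Ts = np.linspace(0.2, 0.8, 61)
    Ps = np.linspace(p_stance, p_stance + 0.8, 81)
    return min(product(Ts, Ps), key=lambda tp: cost(*tp))

# Nominal walk vs. the same query after a forward push advances the DCM.
T1, p1 = regenerate_step(xi0=0.05, p_stance=0.0, offset_des=0.05,
                         T_nom=0.5, L_nom=0.3)
T2, p2 = regenerate_step(xi0=0.15, p_stance=0.0, offset_des=0.05,
                         T_nom=0.5, L_nom=0.3)
```

After the simulated push, the regenerated step lands noticeably farther forward, which mirrors the paper's joint adjustment of step positions and timing under disturbances.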

IFAC Workshop on Navigation, Guidance and Control of Underwater Vehicles (NGCUV'2015)

PANDORA is an EU FP7 project that is developing new computational methods to make underwater robots persistently autonomous, significantly reducing the frequency of assistance requests. The aim of the project is to extend the range of tasks that can be carried out autonomously and to increase their complexity while reducing the need for operator assistance. Dynamic adaptation to changing conditions is very important when addressing autonomy in the real world, and not just in well-known situations. The key of PANDORA is the ability to recognise failure and respond to it, at all levels of abstraction. Under the guidance of major industrial players, validation tasks of inspection, cleaning and valve turning have been trialled with partners' AUVs in Scotland and Spain.

This paper investigates learning approaches for discovering fault-tolerant control policies to overcome thruster failures in Autonomous Underwater Vehicles (AUVs). The proposed approach is a model-based direct policy search that learns on an on-board simulated model of the vehicle. When a fault is detected and isolated, the model of the AUV is reconfigured according to the new condition. To discover a set of optimal solutions, a multi-objective reinforcement learning approach is employed which can deal with multiple conflicting objectives. Each optimal solution can be used to generate a trajectory that navigates the AUV towards a specified target while satisfying multiple objectives. The discovered policies are executed on the robot in closed loop using the AUV's state feedback. Unlike most existing methods, which disregard the faulty thruster, our approach can also deal with partially broken thrusters to increase the persistent autonomy of the AUV. In addition, the proposed approach is applicable whether the AUV becomes under-actuated or remains redundant in the presence of a fault. We validate the proposed approach on a model of the Girona 500 AUV.
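One ingredient of such a multi-objective search is extracting the set of non-dominated (Pareto-optimal) policies from the candidates evaluated on the simulated model. A minimal sketch with made-up objective values (this is only the dominance filter, not the paper's full policy-search algorithm):

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated solutions, all objectives to be maximised.
    Solution i is dominated if some j is >= on every objective and > on one."""
    scores = np.asarray(scores, dtype=float)
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t >= s) and np.any(t > s)
                        for j, t in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical policy evaluations on the on-board AUV model:
# objectives = (negative distance to target, negative energy use).
evals = [(-1.0, -5.0),   # close to target, high energy
         (-3.0, -1.0),   # far from target, cheap
         (-2.0, -2.0),   # balanced trade-off
         (-3.5, -4.0)]   # dominated by the second policy
front = pareto_front(evals)
```

Each index in `front` corresponds to a policy representing a different trade-off, any of which could then be executed in closed loop on the vehicle.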

Learning by demonstration applied to underwater intervention

Seventeenth International Conference of the Catalan Association of Artificial Intelligence (CCIA 2014)

Performing subsea intervention tasks is a challenge due to the complexities of the underwater domain. We propose to use a learning by demonstration algorithm to intuitively teach an intervention autonomous underwater vehicle (I-AUV) how to perform a given task. Taking as input a few operator demonstrations, the algorithm generalizes the task into a model and simultaneously controls the vehicle and the manipulator (using 8 degrees of freedom) to reproduce the task. A complete framework has been implemented in order to integrate the LbD algorithm with the different onboard sensors and actuators. A valve turning intervention task is used to validate the full framework through real experiments conducted in a water tank.

This paper studies the effect of passive and active impedance for protecting jumping robots from landing impacts. The theory of force transmissibility is used for selecting the passive impedance of the system to minimize the shock propagation. The active impedance is regulated online by a joint-level controller. On top of this controller, a reflex-based leg retraction scheme is implemented which is optimized using direct policy search reinforcement learning based on particle filtering. Experiments are conducted both in simulation and on a real-world hopping leg. We show that although the impact dynamics is fast, the addition of passive impedance provides enough time for the active impedance controller to react to the impact and protect the robot from damage.

Covariance Analysis as a Measure of Policy Robustness in Reinforcement Learning

OCEANS'14 MTS/IEEE

In this paper we propose covariance analysis as a metric for reinforcement learning to improve the robustness of a learned policy. The local optima found during the exploration are analyzed in terms of the total cumulative reward and the local behavior of the system in the neighborhood of the optima. The analysis is performed in the solution space to select a policy that exhibits robustness in uncertain and noisy environments. We demonstrate the utility of the method using our previously developed system where an autonomous underwater vehicle (AUV) has to recover from a thruster failure. When a failure is detected, the recovery system is invoked, which uses simulations to learn a new controller that utilizes the remaining functioning thrusters to achieve the goal of the AUV, that is, to reach a target position. In this paper, we use covariance analysis to examine the performance of the top n policies output by the previous algorithm. We propose a scoring metric that uses the output of the covariance analysis, the time it takes the AUV to reach the target position, and the distance between the target position and the AUV's final position. The top policies are simulated in a noisy environment and evaluated using the proposed scoring metric to analyze the effect of noise on their performance. The policy that exhibits more tolerance to noise is selected. We show experimental results where covariance analysis successfully selects a more robust policy that was ranked lower by the original algorithm.
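The scoring idea can be sketched as follows: each top-ranked policy is rolled out repeatedly under noise, and a score combines the spread of the outcomes (the trace of their covariance) with the mean final distance and completion time. All numbers and weights below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(policy, noise=0.05, n=200):
    """Hypothetical noisy rollouts of a policy described by
    (mean final distance, mean time, noise sensitivity); each rollout
    returns (final distance to target, completion time)."""
    mean_d, mean_t, sensitivity = policy
    d = np.abs(mean_d + noise * sensitivity * rng.standard_normal(n))
    t = mean_t + noise * sensitivity * rng.standard_normal(n)
    return np.column_stack([d, t])

def robustness_score(outcomes, w_cov=1.0, w_dist=1.0, w_time=0.1):
    """Lower is better: penalise outcome spread (trace of the covariance),
    mean final distance, and mean completion time. Weights are illustrative."""
    cov = np.cov(outcomes.T)
    return (w_cov * np.trace(cov)
            + w_dist * outcomes[:, 0].mean()
            + w_time * outcomes[:, 1].mean())

# Three top-ranked policies: (mean distance, mean time, noise sensitivity).
policies = {"p1": (0.10, 20.0, 10.0),   # best nominal reward, noise-sensitive
            "p2": (0.15, 22.0, 0.5),    # slightly worse nominal, far more robust
            "p3": (0.30, 25.0, 1.0)}
scores = {k: robustness_score(rollout(v)) for k, v in policies.items()}
best = min(scores, key=scores.get)
```

With these made-up numbers, the nominally best policy `p1` is penalised for its large outcome covariance under noise, so the more robust `p2` is selected, mirroring the paper's observation that a lower-ranked policy can be the more noise-tolerant choice.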
