This video demonstrates the iCub robot learning about the most novel object in its field of view. It was produced through a collaboration between UU and AU.

PR2 Segments Objects from the Background at the ISRC

In this video our PR2 (Personal Robot 2 from Willow Garage) successfully segments objects from the background using 3D point clouds and then places a bounding box around each object.
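
As an illustration of the kind of pipeline described (the video does not show the robot's actual code), the sketch below removes the dominant plane from a point cloud, clusters the remaining points into objects, and fits a bounding box to each cluster. It assumes the Open3D library and a hypothetical input file; all parameter values are illustrative guesses.

# Minimal sketch of point-cloud object segmentation, assuming Open3D.
# File name and parameter values are illustrative, not the robot's.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")  # hypothetical input

# Remove the dominant plane (the table/background) with RANSAC.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# Cluster the remaining points into candidate objects.
labels = objects.cluster_dbscan(eps=0.02, min_points=50)

# Fit an axis-aligned bounding box around each cluster.
for label in set(labels):
    if label < 0:
        continue  # DBSCAN labels noise points as -1
    idx = [i for i, l in enumerate(labels) if l == label]
    box = objects.select_by_index(idx).get_axis_aligned_bounding_box()
    print(label, box.get_center())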

PR2 Learning Novel Objects at the ISRC

In this video our PR2 identifies the most novel object in its view and learns its features by picking up and inspecting the object.

PR2 uses composition of skills to push an object at the ISRC

In this video our PR2 learns to move to the correct position to carry out a pushing action on an object. It does this by using an autonomously generated composition of innate and previously learnt skills.

PR2 predicts the outcome of a push action on an object at the ISRC

In this video our PR2 successfully predicts the outcome of a pushing action on an object and places a blue virtual bounding box at the predicted outcome position. The red bounding box in the video marks the object in its original position, before the action is carried out.

PR2 predicts the outcome of a toppling action on an object at the ISRC

In this video our PR2 successfully predicts the outcome of a toppling action on an object and places a blue virtual bounding box at the predicted outcome position. The red bounding box in the video marks the object in its original position, before the action is carried out.
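
The two prediction videos above share the same idea: given an action, predict where the object will end up and draw the blue box there. The videos do not specify the model; a minimal sketch of one possible predictor is a least-squares linear map from push parameters to object displacement, fitted on hypothetical logged trials.

# Illustrative sketch of outcome prediction for a push action; the
# model used in the videos is not specified. Here a linear map from
# push parameters to object displacement is fitted by least squares.
import numpy as np

# Hypothetical logged trials: [push_dir_x, push_dir_y, push_distance].
actions = np.array([[1.0, 0.0, 0.05],
                    [0.0, 1.0, 0.05],
                    [1.0, 0.0, 0.10],
                    [0.7, 0.7, 0.08]])
# Observed object displacement (metres) after each push.
displacements = np.array([[0.04, 0.00],
                          [0.00, 0.05],
                          [0.09, 0.01],
                          [0.05, 0.06]])

# Forward model: displacement ~= action @ W.
W, *_ = np.linalg.lstsq(actions, displacements, rcond=None)

original_center = np.array([0.50, 0.20])             # red box
new_action = np.array([1.0, 0.0, 0.07])
predicted_center = original_center + new_action @ W  # blue box
print(predicted_center)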

PR2 learns to stack objects at the ISRC-1

In this video our PR2 learns to stack objects by placing a bottle on top of boxes. It does this using an autonomously generated composition of already-known skills.

PR2 Learns to place an object into a bag at ISRC

In this video our PR2 places a bottle into a bag by using an autonomously generated sequence of previously learnt skills.

Development of hand-eye coordination

This video shows the process of learning hand-eye coordination on the iCub using constraints. The complete learning process took around one hour from start to finish.

Mechatronic Board for monkeys

This video shows the Mechatronic Board for monkeys equipped with the three mechatronic modules designed and developed during the first year of the project.

Mechatronic Board for children

This video shows the Mechatronic Board for children equipped with the three mechatronic modules designed and developed during the first year of the project.

Experimental protocol with children

This video shows the protocol used with children during the preliminary trials. The Mechatronic Board equipped with pushbuttons was used.

The video shows a "curious" active vision system that autonomously explores its environment and learns object representations without any human assistance.

Infants in control: Learning of action outcomes in a gaze contingent experiment

Eye movement recording of a 6-month-old infant during a gaze contingent experiment. By looking at the red “button” the infant triggered the display of an animal picture adjacent to the button.

Prof. Andrew Barto

Autonomous skill acquisition on a mobile manipulator

CLEVER-B2 Demonstrator on simulated robot.
The video shows learning and testing within the simulated environment.

CLEVER-B2 Demonstrator on real robot.
The video shows learning and testing with the real robot.

Toward Intelligent Humanoids: iCub 2012 CLEVER-K3 Demonstrator on real robot

"Toward Autonomous Humanoids" is about our ongoing efforts to apply AI algorithms to create autonomous, adaptive, intelligent behaviours on a humanoid robot. It explains many of the problems that humanoids pose for state-of-the-art AI approaches, and it introduces some of our solutions to these problems.

CLEVER-B3 visually elicited reaching

This video shows the Aberystwyth iCub learning to perform visually triggered reaching and grasping by progressing through a series of developmental stages similar to those in infancy.

CLEVER-B3 learning through play

This video shows the Aberystwyth iCub learning using play-like behaviour. It experiments with behaviours it has previously learnt, storing the results in memory-like structures called 'schemas'. These schemas record how various actions can change the state of the world, and can be used to plan a sequence of actions to reach a desired goal. The iCub can make generalisations, and learn about exceptions, based on these schemas.
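
As a rough illustration of the schema idea described above (not the project's actual implementation), the sketch below represents a schema as an action with precondition and postcondition facts, and chains schemas with a naive forward search to reach a goal. All facts and schema names are hypothetical.

# Minimal sketch of schemas and schema-based planning over a world
# state represented as a set of facts. Facts, actions, and the search
# are hypothetical illustrations, not the project's implementation.
from collections import namedtuple

Schema = namedtuple("Schema", ["action", "preconditions", "postconditions"])

schemas = [
    Schema("reach_bottle", {"bottle_visible"}, {"hand_at_bottle"}),
    Schema("grasp_bottle", {"hand_at_bottle"}, {"holding_bottle"}),
    Schema("place_on_box", {"holding_bottle"}, {"bottle_on_box"}),
]

def plan(state, goal, depth=5):
    """Naive forward search: chain schemas until the goal fact holds."""
    if goal in state:
        return []
    if depth == 0:
        return None
    for s in schemas:
        # Apply a schema only if it is enabled and adds something new.
        if s.preconditions <= state and not s.postconditions <= state:
            rest = plan(state | s.postconditions, goal, depth - 1)
            if rest is not None:
                return [s.action] + rest
    return None

print(plan({"bottle_visible"}, "bottle_on_box"))
# ['reach_bottle', 'grasp_bottle', 'place_on_box']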

Skill Chaining using Curious Dr. MISFA: iCub 2013 CLEVER-K4 Demonstrator on real robot

We developed a curiosity-driven autonomous system for learning perceptual invariances and subsequent skills, called Curious Dr. MISFA, that learns from high-dimensional raw image data generated from the eyes of an exploring iCub robot. Curious Dr. MISFA enables the iCub to continually learn skills: toppling an object leads to grasping the object, which finally leads to pick & place.

Task Relevant Roadmaps: iCub 2013 CLEVER-K4 Demonstrator on real robot

We introduce a new, flexible framework for building task-relevant roadmaps (TRMs) that can produce task-directed motions. Each task is specified by (1) a number of constraints, such as 'keep the left hand behind the table', and (2) a number of task functions that specify the freedom of movement within those constraints, such as 'move the right hand to different places'. We show that, together, the constraints and task functions can be used to build a map that supports movements related to the desired tasks.
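
The video does not give the TRM algorithm itself, but a minimal sketch of the idea, under assumed stand-in kinematics, is to sample configurations, keep only those satisfying the constraints, and connect nearby ones into a roadmap whose nodes carry their task-function values.

# Illustrative sketch of building a task-relevant roadmap by rejection
# sampling; the kinematics and functions below are hypothetical
# stand-ins, not the actual TRM algorithm.
import numpy as np

def left_hand_pos(q):      # hypothetical forward kinematics
    return q[:3]

def constraint(q):
    """'Keep the left hand behind the table': x below a threshold."""
    return left_hand_pos(q)[0] < 0.3

def task_function(q):
    """Freedom within the constraint: where the right hand ends up."""
    return q[3:6]          # stand-in for right-hand position

rng = np.random.default_rng(0)
nodes = []
while len(nodes) < 200:
    q = rng.uniform(-1.0, 1.0, size=7)   # sample a configuration
    if constraint(q):
        nodes.append((q, task_function(q)))

# Connect configurations that are close in joint space, so moving
# along roadmap edges keeps the motion near the constraint manifold.
edges = [(i, j)
         for i, (qi, _) in enumerate(nodes)
         for j, (qj, _) in enumerate(nodes)
         if i < j and np.linalg.norm(qi - qj) < 1.0]
print(len(nodes), "nodes,", len(edges), "edges")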

CLEVER-B4 Timelapse Development

This video shows the iCub learning hand-eye coordination in a single sitting. Learning on the robot takes just over half an hour, and the reaching is bootstrapped with approximately 30 minutes of real-time learning in simulation. The result is a robot that can learn to gaze and reach to objects, even those just out of reach, from scratch in around one hour. The video ends with a demonstration of the robot using the actions learnt to reach to various objects.

CLEVER-B4 Simulated Reach Learning

This video shows simulated reach learning for a humanoid robot in the later stages of arm control learning. Here, vision and motor babbling are used to discover how joint movement affects the movement of the hand in space.
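
A minimal sketch of what motor babbling can recover is below: small random joint perturbations, observed hand displacements, and a least-squares fit of the joint-to-hand Jacobian. The observation function is a hypothetical stand-in for the robot's vision.

# Sketch of motor babbling: small random joint perturbations, observed
# hand displacements, and a least-squares fit of the joint-to-hand
# Jacobian. The observation function is a stand-in for vision.
import numpy as np

rng = np.random.default_rng(1)
true_J = rng.normal(size=(3, 4))   # stand-in for the real arm's Jacobian

def observe_hand_delta(dq):
    """Hypothetical visual observation of hand displacement, with noise."""
    return true_J @ dq + rng.normal(scale=0.001, size=3)

# Babble: random small joint movements and the resulting hand movements.
dqs = rng.normal(scale=0.05, size=(100, 4))
dxs = np.array([observe_hand_delta(dq) for dq in dqs])

# Least squares: find J such that dx ~= J @ dq across all samples.
J_T, *_ = np.linalg.lstsq(dqs, dxs, rcond=None)
print(np.round(J_T.T - true_J, 3))  # near zero if the fit succeeded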

CLEVER-B4 Pointing with VAM

In this video, the VAM identifies individual objects in the environment based on their features and distance from the robot. This is achieved by comparing the images provided by the two cameras on the robot. The robot then selects an object to gaze at and point to.
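
The description mentions comparing the two camera images; one standard way to do this (not necessarily the VAM's method) is block-matching stereo, where nearer objects show larger disparity between views. Below is a sketch assuming OpenCV, with illustrative file names and camera parameters.

# Sketch of stereo comparison, assuming OpenCV; file names and camera
# parameters are illustrative, not the VAM's actual method.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching yields a disparity map: nearer objects show larger
# disparity between the two camera views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right) / 16.0  # fixed-point output

# With focal length f (pixels) and baseline b (metres): depth = f*b/d.
f, b = 500.0, 0.068
depth = (f * b) / disparity  # invalid pixels (d <= 0) need masking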

CLEVER-B4 Button Pressing with a Tool

In previous experiments the iCub discovered, through play-like behaviour, that it could press buttons (CLEVER-B2) and perform a pressing action whilst holding an object (CLEVER-B3). Here we show how these actions can be combined with the refined reaching behaviours to press buttons positioned in front of the robot.

PR2 intrinsically motivated to learn useful actions at ISRC

The video shows the extension of the CLEVER-B model using Probabilistic Biased Selection (PBS) methods instead of a random selection approach in the Striatal learning block.
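
PBS is not specified in detail here, but the contrast with random selection suggests weighted sampling: actions are drawn in proportion to some learned bias rather than uniformly. A minimal sketch with hypothetical action names and weights:

# Illustrative sketch of probabilistic biased selection: actions are
# sampled in proportion to a learned weight instead of uniformly at
# random. Action names and weights are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
actions = ["reach", "push", "topple", "grasp"]
weights = np.array([0.5, 2.0, 1.0, 3.5])   # assumed learned biases

def select(temperature=1.0):
    """Softmax-biased selection; high temperature approaches uniform."""
    p = np.exp(weights / temperature)
    p /= p.sum()
    return rng.choice(actions, p=p)

print([select() for _ in range(5)])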