
Revision as of 11:29, 19 July 2010

This is a list of participants at the VVV10 summer school - please add yourself (for inspiration, see VV09/VVV08/VVV07/VVV06).

For editing, you can ask for your own username/password (email
paulfitz@liralab.it) or use the username "vvv10" and password
"s*str*" with the first star replaced with "e" and the last star
replaced with "i".

If you would like to, include in your profile a list of possible ideas for collaboration. See also Groups and experiments.

ISLab - Intelligent Systems Lab (ISR/IST, TULisbon)
I'm interested in Cognitive Science and Robotics. My goal at the VVV'10 Summer School is to implement
a proof-of-concept for an online event segmentation (and adaptive strong adaptation) framework (for more
details click here). The result should be the iCub following a ball (on a series of inclined planes,
maybe on a pendulum) with its head - do any of you guys have a colored ball?
Email: bnery <at> isr.ist.utl.pt
Homepage
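The vision half of that demo could start as simply as this: find the centroid of ball-coloured pixels and drive the head proportionally towards it. A minimal sketch, assuming a pre-segmented image; the `is_ball` predicate and the gain are placeholders, not part of the actual framework:

```python
def ball_centroid(image, is_ball):
    """Return the (row, col) centroid of pixels where is_ball(pixel) is True.

    image: 2-D list of pixels (e.g. RGB tuples). Returns None if no match."""
    rows = cols = n = 0
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            if is_ball(px):
                rows += r
                cols += c
                n += 1
    if n == 0:
        return None
    return (rows / n, cols / n)

def head_command(centroid, shape, gain=0.1):
    """Proportional gaze command: pixel error from the image centre, scaled."""
    cy, cx = shape[0] / 2, shape[1] / 2
    return (gain * (centroid[0] - cy), gain * (centroid[1] - cx))
```

Running this per frame and feeding the command to the head joints gives basic smooth pursuit of the (coloured!) ball.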

Alexis Maldonado

I love robots, and during my PhD I have worked with several different ones: I
started with medium-sized league RoboCup robots, then a big B21 robot with two
industrial 6-DOF arms (Powercubes from Amtec), and finally have the pleasure of
working with the iCub and our self-made platform with Kuka LWR-4 arms and
4-fingered hands from the DLR: TUM-Rosie in action
I have concentrated on arm/hand movements for grasping. Lately I have been working
on grasping unmodeled objects, using a Mesa SR4k time-of-flight camera
and finger torque sensors to adjust the grasp.
This is a video that we submitted to IROS2010 together with a paper:
[link to youtube]
Comparison of two approaches for grasp pose selection: kinematic mesh planner vs statistical
[link to youtube]
Want to see what our robot 'sees' when he grasps?
[link to youtube]
For the summer school, I am interested in two-handed manipulation with the iCub.
One idea for the demo is to build stacks of Lego pieces. Please let me know if you
would like to do something in this direction (Manipulation, reaching, tool use).
Email: maldonad_(at)_cs.tum.edu
Introduction (pdf file)
work-homepage

Federico Ruiz Ugalde

Let's do some Lego building!!
A Costa Rican in Germany working on robot manipulation: mechanical models of objects,
and prediction of the effects of robot actions on objects. At the last summer school we
managed to put two Lego pieces together via teleoperation. In earlier summer schools we
implemented a potential-field-based end-effector arm movement system; it can avoid
obstacles while following goal positions and orientations. I'm pretty sure this will be
a Summer School as "Tuanis!" as the last three.
email: ruizf<AT>in.tum.de pdf file
for instant messaging: memeruiz<at>gmail.com
homepage
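The potential-field idea mentioned above can be sketched in a few lines. This is a toy 2-D version with made-up gains and a fixed step size, not the actual arm controller: the goal exerts an attractive force, each nearby obstacle a repulsive one, and the end-effector steps along the resulting force.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=1.0, step=0.05):
    """One step of potential-field motion in 2-D.

    pos, goal: (x, y) tuples; obstacles: list of (x, y) tuples.
    Returns the new end-effector position after a small step."""
    # Attractive force pulls straight towards the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Each obstacle within its influence radius pushes the point away,
    # with a magnitude that grows sharply as the distance shrinks.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += mag * dx
            fy += mag * dy
    # Normalise and take a fixed-size step along the resulting force.
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

Iterating this from the current pose traces a path that bends around obstacles while converging on the goal (with the usual caveat that plain potential fields can get stuck in local minima).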

University of Plymouth, Devon, UK
Email: salomon.ramirez-contla <at> plymouth.ac.uk
I am interested in developing a peripersonal space representation for humanoid robots. iCub will use this
representation in object manipulation and avoidance of obstacles within the volume around it that is reachable
with its arms. I am using a bottom-up approach working with computer vision as well. I am from Mexico and
look forward to making this a great summer school.
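The simplest possible stand-in for such a peripersonal-space test is a fixed reach sphere around the shoulder; the real representation would of course be learned bottom-up rather than hard-coded. A crude sketch, with an invented shoulder position and arm length:

```python
import math

def in_peripersonal_space(point, shoulder=(0.0, 0.0, 0.0), arm_length=0.6):
    """True if a 3-D point lies within arm_length of the shoulder --
    a placeholder for a learned reachability volume around the robot."""
    return math.dist(point, shoulder) <= arm_length
```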

I'm working at the University of Plymouth as a Marie Curie Early Stage Researcher
in the European project RobotDoC, which focuses on the development of cognitive processes
in robotic platforms. My research topic is the grounding of language in humanoid robots,
with particular attention to abstract words.
Email: francesca.stramandinoli <at> plymouth.ac.uk

Chris Larcombe

I am currently in the first year of my PhD, working at Plymouth University (on the ITALK project). I have a background in Computer Science (B.Sc., Exeter, UK) and Evolutionary and Adaptive Systems (M.Sc., Sussex, UK). The latter was focused on intelligence in animals and machines from an embodied/cybernetic/dynamical systems perspective (adaptive behaviour).

Right now I'm researching the issue of scalability in 'teleonomic' (apparently purposeful) adaptive systems, in particular how dynamical systems can learn in real-time, with the help of an 'intelligent other', to synthesise and recombine action primitives in complex ways, in order to solve multiple tasks in complex environments. This should lead to more task non-specific architectures/frameworks.

On a more practical level I am teaching the iCub to solve two object manipulation problems: stacking and sorting three coloured cubes. I'm interested in applying and further developing cybernetics and neocybernetics work, in particular the research of cyberneticist W. Ross Ashby.

As for the summer school, I'd love to have a go with force control (torque sensors), as I think this is rather useful, and it would be great to play around with some computer vision techniques.

Perhaps these two (+ something else?) could be integrated to solve a task that requires selection of different learned postures (specified by force control) in different environments (determined through computer vision); e.g., reaching towards a coloured object in three or four different positions?

Email: christopher.larcombe <at> plymouth.ac.uk
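At its core the integration is just a lookup: postures taught by physically guiding the compliant, force-controlled arm, keyed by the colour label the vision module reports. A toy sketch; the posture vectors and colour names below are invented for illustration:

```python
# Joint postures (stub 3-element vectors) taught through the force-controlled
# arm, keyed by the colour label reported by the vision system.
taught_postures = {
    "red":   [0.3, -0.2, 0.1],
    "green": [0.1,  0.4, 0.0],
    "blue":  [-0.2, 0.2, 0.3],
}
home_posture = [0.0, 0.0, 0.0]

def select_posture(detected_colour):
    """Map a colour label from vision to a learned reach posture,
    falling back to a safe home posture for unknown objects."""
    return taught_postures.get(detected_colour, home_posture)
```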