Executing a manipulation after learning it from demonstration
often requires intricate planning and control systems, or some
form of manual guidance of the robot.
Here we present a framework for manipulation execution based
on the so-called “Semantic Event Chain”, which is an abstract
description of relations between the objects in the scene. It
captures the change of those relations during a manipulation and
thereby provides the decisive temporal anchor points by which a
manipulation is critically defined. Using semantic event chains, a
model of a manipulation can be learned. We will show that it is
possible to add the required control parameters (the spatial anchor
points) to this model, which can then be executed by a robot in a
fully autonomous way. The process of learning and execution of
semantic event chains is explained using a box-pushing example.
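To make the representation concrete, the idea of an event chain can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: the relation symbols ('N' for not touching, 'T' for touching) and the box-pushing object pairs are hypothetical choices made here only for the example.

```python
# Minimal sketch of a Semantic Event Chain (SEC). Each "column" holds the
# pairwise object relations at one frame; the chain keeps only the columns
# where some relation changes, i.e. the decisive temporal anchor points.
# Relation symbols and the scene below are assumptions for illustration.
from itertools import groupby

# Hypothetical box-pushing scene: relations between (hand, box) and (box, goal).
frames = [
    {("hand", "box"): "N", ("box", "goal"): "N"},  # hand approaches the box
    {("hand", "box"): "N", ("box", "goal"): "N"},  # (no relational change)
    {("hand", "box"): "T", ("box", "goal"): "N"},  # hand touches the box
    {("hand", "box"): "T", ("box", "goal"): "N"},  # pushing, still no change
    {("hand", "box"): "T", ("box", "goal"): "T"},  # box reaches the goal
    {("hand", "box"): "N", ("box", "goal"): "T"},  # hand withdraws
]

def event_chain(frames):
    """Drop consecutive duplicate relation sets, keeping only change points."""
    return [column for column, _ in groupby(frames)]

sec = event_chain(frames)
for pair in sec[0]:
    print(pair, " ".join(column[pair] for column in sec))
```

Running this prints one row per object pair across the four anchor-point columns, mirroring how an event chain abstracts away frame timing and retains only the relational transitions that define the manipulation.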