A cognitive robot companion must be able to evolve and learn in an open environment and to interact with
humans. This requires capabilities in terms of decision-making, attribution of intentionality and expression
of robot intentions.

In RA6, a conceptual architecture has been studied that provides a framework integrating all these
capabilities. This conceptual architecture has been partially implemented in the Key Experiments (RA7),
where several decisional issues linked to human-robot interaction have been illustrated. The contribution
of RA6 is the construction of several high-level decisional control components of the architecture. Studies
on intentionality attribution have also been conducted within this RA and have served as inspiration for
the design of the developed capabilities.

Results from Research Activity 6 have been integrated essentially in Key Experiment 2 ('The Curious
Robot'), in close relation with the other project Research Activities.

The issue of architecture has been studied in RA6, but it is also closely related to the integration of the functions (RA7). The challenge of devising "the" generic architecture for a cognitive robot is daunting, and we do not pretend to have provided a final answer. From these ambitious objectives of integrated demonstrators, and from our coordinated efforts to elaborate these systems, we have learned some fundamental concepts on how to design the architecture of a cognitive robot. This was achieved through a step-by-step procedure that combined required capabilities from the Cogniron Functions with architecture concepts gained from former implementations and experiments. In the last period, the focus was on further refining the ideas and concepts elaborated in the previous periods, with an emphasis on making them concrete in order to effectively run and use complete instances and draw lessons.

Two main architectural implementations have been achieved:

KE1 - Memory-focused Instantiation of the Cognitive Architecture

KE2 - A task-oriented architecture for an interactive robot

In addition, several software tools, in particular the Go environment, have been developed in the
framework of KE3.

WP6.2 Decision-making for Interactive Human-Robot task achievement

Our efforts toward the development and integration of a scheme where the robot reasons not only
on its own capabilities in a given context but also on a human model have led to two main results:

One contribution is a task planner, called "Human-Aware Task Planning", designed to produce
and incrementally update plans for an assistive robot. This planner has been fully implemented
and integrated in the KE2 setup.

Another focus is the detection and classification of human-robot interaction states, for which we
investigate the use of POMDPs to deal with uncertainty in observation and in human-robot
interaction.

Human-Aware Task Planning

The key features of the "Human-Aware Task Planner" (HATP) are:

the use of a temporal planning framework with explicit management of two timelines (one for the
human and one for the robot),

a hierarchical task structure allowing incremental context-based refinement, fully compatible
with the BDI approach adopted at the level of the robot supervisor,

plan elaboration and selection algorithms that search for minimum-cost plans satisfying a set of
so-called "social rules".

The social rules have been designed to allow a flexible specification of social conventions, either in a
"declarative" way (e.g. undesirable states) or in a "procedural" way (e.g. undesirable sequences or plan
intricacy).
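As an illustration of the minimum-cost selection under social rules described above, the following sketch scores candidate two-agent plans. The plan encoding, action costs and the two example rules (one procedural, one declarative) are invented for illustration and do not reflect the actual HATP implementation.

```python
# Hypothetical sketch of HATP-style plan selection.
# A plan is a list of (agent, action) steps; costs and rules are illustrative.

ACTION_COST = {"fetch": 2, "handover": 1, "wait": 1, "move": 2}

def undesirable_sequence(plan):
    """Procedural rule: penalise a handover immediately followed by
    another robot action (the human is left waiting with the object)."""
    penalty = 0
    for (a1, s1), (a2, s2) in zip(plan, plan[1:]):
        if s1 == "handover" and a2 == "robot":
            penalty += 5
    return penalty

def idle_human(plan):
    """Declarative rule: penalise states where the human merely waits."""
    return 3 * sum(1 for agent, step in plan if agent == "human" and step == "wait")

SOCIAL_RULES = [undesirable_sequence, idle_human]

def plan_cost(plan):
    """Total cost = sum of action costs + penalties for violated social rules."""
    base = sum(ACTION_COST[step] for _, step in plan)
    return base + sum(rule(plan) for rule in SOCIAL_RULES)

def select_plan(candidates):
    """Return the candidate plan with minimum total (base + social) cost."""
    return min(candidates, key=plan_cost)
```

A plan that keeps the human occupied is preferred over a cheaper-looking one that violates a social rule, which is the intended effect of the penalty terms.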

Detection and classification of Human-Robot interaction states

A new approach for the detection and classification of robot task states during interaction with humans
has been developed in a joint LAAS-UniKarl effort. The approach uses a novel service robot reasoning
system based on partially observable Markov decision processes (POMDPs) to deal with uncertainty in
observation and in human behaviour. A modular Bayesian forward filter detects possible task states
probabilistically and can thus cope with sensor limitations and non-deterministic human behaviour. This
filter, embedded in a hierarchical perceptive architecture, preserves information about sensory
uncertainty while including model-based, predictive elements, and transforms perceptions into more
abstract task state representations. Task states are represented symbolically as POMDP states, while the
environment and human behaviour are represented by statistical (POMDP) models which the robot uses
when making decisions.
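The forward-filtering step described above can be sketched as a standard Bayesian belief update over symbolic task states: predict through the transition model, weight by the observation likelihood, then renormalise. The task states, transition model and observation model below are illustrative assumptions, not the models used in the actual system.

```python
# Minimal sketch of a Bayesian forward filter over symbolic POMDP task states.
# States and probability tables are invented for illustration.

STATES = ["wait_order", "fetch_cup", "handover"]

# T[s][s2]: probability of moving from task state s to s2
T = {
    "wait_order": {"wait_order": 0.6, "fetch_cup": 0.4, "handover": 0.0},
    "fetch_cup":  {"wait_order": 0.0, "fetch_cup": 0.7, "handover": 0.3},
    "handover":   {"wait_order": 0.2, "fetch_cup": 0.0, "handover": 0.8},
}

# O[s][z]: probability of observing z in state s
O = {
    "wait_order": {"speech": 0.7, "motion": 0.3},
    "fetch_cup":  {"speech": 0.1, "motion": 0.9},
    "handover":   {"speech": 0.4, "motion": 0.6},
}

def belief_update(belief, z):
    """One forward-filter step: predict through T, weight by O(z), renormalise."""
    predicted = {s2: sum(belief[s1] * T[s1][s2] for s1 in STATES) for s2 in STATES}
    unnorm = {s: O[s][z] * predicted[s] for s in STATES}
    eta = sum(unnorm.values())
    return {s: p / eta for s, p in unnorm.items()}
```

Because the belief is a full distribution rather than a single state estimate, sensor limitations and ambiguous human behaviour degrade the belief gracefully instead of forcing a hard (possibly wrong) classification.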

The POMDP belief state is assembled from self-localisation, speech recognition and human activity
recognition, each contributing information about measurement uncertainty. The whole system architecture
has been demonstrated in experiments with human-robot interaction in realistic settings on a physical
robot that serves cups completely autonomously.
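Assembling a belief from several perception modules can be sketched as multiplying per-module observation likelihoods under a conditional-independence assumption and renormalising over the task states. The module names and numbers below are illustrative, not taken from the described system.

```python
# Hedged sketch: fusing independent per-module observation likelihoods
# into one joint likelihood per task state. All values are illustrative.

STATES = ["wait_order", "fetch_cup", "handover"]

def fuse(likelihoods):
    """Multiply per-module likelihoods (conditional-independence
    assumption) and renormalise over the task states."""
    joint = {}
    for s in STATES:
        p = 1.0
        for module in likelihoods.values():
            p *= module[s]
        joint[s] = p
    eta = sum(joint.values())
    return {s: p / eta for s, p in joint.items()}

# Hypothetical per-module likelihoods over the task states
observations = {
    "self_localisation":    {"wait_order": 0.5, "fetch_cup": 0.3, "handover": 0.2},
    "speech_recognition":   {"wait_order": 0.7, "fetch_cup": 0.1, "handover": 0.2},
    "activity_recognition": {"wait_order": 0.2, "fetch_cup": 0.2, "handover": 0.6},
}
```

The fused distribution then plays the role of O(z) in the forward-filter update, so each module's measurement uncertainty is carried into the belief rather than discarded.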

WP6.3 - Intentionality Attribution

Attribution of Intentionality in HRI Proxemics

Based on our previous work on attribution of intentionality using video methodologies, we had established
that participants tended to rate robots with a humanoid appearance as more humanlike, as well as more
extraverted, agreeable and emotionally stable. Participants also tended to state a preference for interacting
with robots with a more humanoid appearance. Based on these results, as well as general research in the
field of human proxemics (Burgoon & Walther, 1990; Gillespie & Leffler, 1983), we addressed the
question of whether these attributions would affect the proxemic aspects of the interaction in a live
trial.

These experiments were conducted jointly with the research activities in RA3. They consisted of the robot
approaching the participants according to three different interaction types as well as from different
directions. On the basis of our previous results mentioned above, and of human proxemics research, we
hypothesised that if participants attributed more humanlike traits to a humanoid robot, this would also
entail expectancies as to robot behaviour, leading participants to expect more socially appropriate
behaviour from the humanoid robot, which in this particular experiment would manifest itself as
maintaining a greater social distance.

Our results from these studies (Koay et al., 2007; Syrdal, Koay et al., 2007) supported this hypothesis.
An overall effect was found across the experimental conditions: participants preferred the
humanoid robot to maintain a greater distance from them than the mechanoid robot.

On the basis of these results, we found experimental evidence of a direct link between our previously
observed questionnaire-based attribution of personality and behavioural preferences within a live trial:
these attributions translate into behavioural expectations, in terms of proxemics, that are similar to those
we would have for other humans. This effect was linear and did not interact significantly with individual
differences.

Attribution of Intentionality and Perception of Privacy

In an exploratory study, we also investigated the role of intentionality with respect to how participants
viewed a robot companion recording and storing information about its users.
This investigation was based on issues raised in the EURON Roboethics Roadmap (Veruggio, 2006) as
well as on more general discussions within the field of HCI (Mayer-Schoenberger, 1997). The main focus
of this exploratory trial was the extent to which participants attributed agency to the robot in terms of
divulging personal information to third parties. The trial exposed the participants to an interaction with a
PeopleBot robot and an experimenter, in which the robot would divulge personal information about the
experimenter during the course of a conversation between the experimenter and the participant.

The results from this exploratory trial suggested that participants found the issue of a personal robot
storing and divulging personal information about its users problematic, but also saw the need for
retaining this information. Participants tended to see this issue as best resolved by reducing the robot's
agency to divulge this information, both by reducing robot autonomy and by tying the use of such
information directly to tasks explicitly requested by its users. While this particular aspect of intentionality
attribution is still very much an open field, the results from this study (Syrdal, Walters et al., 2007)
suggest that participants' attributions of robot intentionality have a clear impact, not only on the
particulars of given interactions, but also on how participants perceive the impact of a robot companion
on their wider everyday experience beyond these interactions.

In this film, two levels of plan and robot action representations are shown:
(1) On the bottom-left, one can see a plan as produced by HATP: a hierarchical task structure with precedence links at each level and decomposition links from one level to the next. The leaves correspond to elementary tasks (for HATP) that may be further refined by SHARY when executed. The currently executed task is shown in green.
(2) The top of the figure shows the current state of execution maintained by SHARY. SHARY traverses and updates the plan tree, but also further refines the tasks depending on the actual context. Tasks produced by HATP are represented by diamonds, while tasks refined on-line are represented by ellipses. Several types of links are shown with different colors: grey arcs correspond to task decomposition, orange arcs to causal/precedence links. Finally, a color code indicates the status of a task: green means "under execution", red means "impossible or stopped" and blue means "achieved".
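The diamond/ellipse distinction and the task status color code described above can be captured in a small tree data structure. This is a hypothetical sketch, not the actual SHARY representation; the class and field names are invented.

```python
# Illustrative plan-tree node (not the actual SHARY data structure).
# Status colors follow the code above: green = executing,
# red = impossible/stopped, blue = achieved.

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    EXECUTING = "green"
    STOPPED = "red"       # impossible or stopped
    ACHIEVED = "blue"

@dataclass
class Task:
    name: str
    refined_online: bool = False   # ellipse if refined on-line, diamond if from HATP
    status: Status = Status.EXECUTING
    children: list = field(default_factory=list)  # decomposition links

    def achieve(self):
        """Mark this task and all of its subtasks as achieved."""
        self.status = Status.ACHIEVED
        for child in self.children:
            child.achieve()
```

Traversing and recoloring such a tree is essentially what the film visualises as SHARY executes and refines the HATP plan.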

SHARY execution of a "give object" task with a suspension

This film illustrates the internal data structures manipulated by SHARY when the robot hands an object to a person. We have chosen the case where the person is disturbed by a phone call and consequently turns his head away from the robot. (The color code used in the previous video is also used here.)