Researchers from University College London and the University of Bristol created a humanoid named Bert to help users make an omelette. Bert has an expressive face with moveable eyes, eyebrows and a mouth, enabling it to appear happy and sad.

The machine was tasked with helping willing human participants cook an omelette by passing them eggs, salt and oil. In one experiment the bot performed flawlessly, with a permanent smile on its "face"; in another it made a mistake and tried to rectify it without changing its expression or speaking.

In a third version of the experiment, an expressive Bert dropped an egg, causing its eyes to widen in shock and its mouth to form a frown. Bert tried to rectify the situation while apologising to the human 'chef' it was meant to be helping.

The study, which aimed to investigate how a robot can recover a user's trust after making a mistake, suggests humans prefer working with an expressive but less efficient robot over a non-communicative, perfect one. This held true even when tasks took 50 per cent longer to complete.

Researchers also found users reacted well to an apology from Bert when it was able to communicate, and were particularly receptive to its sad facial expression. They believe this is likely to have reassured participants that the android 'knew' it had made a mistake.

At the end of the cooking session, the communicative robot was programmed to ask participants whether they would give it the job of kitchen assistant. Users could answer only 'yes' or 'no', and were unable to qualify their answers.

Some were reluctant to answer and most looked uncomfortable, the researchers said. One person was under the impression the robot looked sad when he said 'no', even though it had not been programmed to appear so; another complained of emotional blackmail; and a third went as far as to lie to the robot, seemingly to avoid hurting its feelings.

A staggering 15 of the 21 participants in the experiment picked the apologetic clumsy robot as their favourite for the role of robotic sous-chef.

Adriana Hamacher, who conceived the study as part of her MSc in Human Computer Interaction at UCL, said: "We would suggest that, having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress."

She explained that human-like attributes such as regret can be powerful tools in stopping people getting annoyed with robots when they invariably make mistakes. "But we must identify with care which specific traits we want to focus on and replicate," she said. "If there are no ground rules then we may end up with robots with different personalities, just like the people designing them."

Previous studies have shown humans can react strangely to robots, from experiencing the unpleasant "uncanny valley" effect when androids appear a little too human, to giving them human characteristics and deriving pleasure from them.

For example, in April, a study by Stanford University found humans often become aroused when touching the 'intimate areas' of robots.

Participants were asked to touch thirteen areas of the body of a robot called Nao, developed by Aldebaran Robotics, while they were fitted with sensors on their non-dominant hands that measured skin conductance and reaction time.

When they were asked to touch the robot in "intimate areas", which included the robot's "buttocks" and "genitals", they were "more emotionally aroused when compared to touching parts like the hands and neck". Participants were also "more hesitant" to touch intimate areas.