Humans Preferred Communicative, Expressive Robots, Even When They Looked Like This

Apparently, in order to make a robot as expressive as possible, researchers decided to give BERT2, a robot assistant, a pouty pair of lips and wide eyes with... are those eyebrows?

Researchers from University College London and the University of Bristol, who dared to use BERT2 to test whether a communicative and expressive robot would be better for humans to work with, are to blame for my nightmares this week. They did, however, find that a robot partner that apologises after making a mistake gains a human's trust more successfully, to the point where a human would lie to prevent the robot's feelings from being hurt.

Participants in a study led by researchers from the two universities were asked to take part in a mock cooking scenario in which one of three versions of BERT2 would hand them ingredients: one that wouldn't communicate but wouldn't make any mistakes, one that would interact but would make mistakes, and one that would do both.

“Are you ready for the egg?” the third BERT would ask. It would also apologise when it dropped an egg (it was programmed to) and ask participants at the end whether it had done a good job.

The results showed that 15 out of 21 participants preferred the third version of the robot, despite it being the less efficient one. Researchers observed that most participants looked uncomfortable when asked, with one even reporting that the robot looked sad (even though it hadn't been programmed to appear that way), and another lying to the robot.

“We would suggest that, having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress,” said Adriana Hamacher, the lead author on the study. In the event that a robot in the workplace does make mistakes (likely), this would give researchers a way to ensure that cooperation between human and robot continues.

“It suggests that a robot effectively demonstrating apparent emotions, such as regret and enthusiasm, and awareness of its error, influences the user experience in such a way that dissatisfaction with its erroneous behavior is significantly tempered, if not forgiven, with a corresponding effect on trust,” researchers wrote.

Researchers around the world have been experimenting with giving robots emotions—or rather, the appearance of emotions. Pepper, for example, is a Japanese robot that can both read a human's emotions and display its own. But even without Pepper or BERT's capabilities, we humans tend to find humanity in other things. Anthropomorphism, the tendency to attribute lifelike qualities to animals or inanimate objects, is an innate human trait.

“We see people treating these machines like social actors,” Kate Darling, a specialist in human-robot interactions at MIT, told Wired upon Pepper’s release.

The research will be presented at the IEEE International Symposium on Robot and Human Interactive Communication and will be published by the IEEE. You can read a pre-print copy of the paper here, which is worth reading just for the photos of participants' facial expressions.

Also, here’s earlier footage of BERT2 that I found, because it couldn’t get any creepier.