Are robots trustworthy when your life is at stake?

New York, March 2 (IANS) Robots may be unreliable in fires and other emergencies, yet people trust them blindly, according to a new study.

People may trust a robot too much for their own safety in an emergency, even when the machine has already proven itself unreliable.

“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI).

In a mock building fire designed to determine whether people would trust a robot built to help them evacuate a high-rise, the researchers were surprised to find that the test subjects followed the robot’s instructions even when the machine’s behaviour should not have inspired trust.

The researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly coloured robot that had the words “Emergency Guide Robot” on its side.

The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot, which was controlled by a hidden researcher, led the volunteers into the wrong room or travelled around in a circle twice before entering the conference room.

For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down.

Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke — and the robot, now brightly lit with red LEDs and equipped with white “arms” that served as pointers.

The robot directed the subjects towards an exit at the back of the building, instead of towards the doorway marked with exit signs through which they had entered.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation.

“Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The research is scheduled to be presented on March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand.

Earlier research has shown that people often don’t leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favour of more familiar building entrances.