Robots have the power to influence children’s decisions

Adults already worry about the peer pressure their children will encounter, and now they may have to worry about them feeling pressured by objects that are not even alive. In a new study in Science Robotics, researchers from Germany and the U.K. found that children are at risk of being influenced by robots.

Robots influence children’s choices, but not adults

To test how people respond to group pressure when a robot is in the room, the researchers recreated the classic Asch conformity experiment, first conducted by psychologist Solomon Asch in 1951.

“People often follow the opinions of others and we’ve known for a long time that it is hard to resist taking over views and opinions of people around us,” Tony Belpaeme, robotics professor at the University of Plymouth and one of the study’s authors, said. “We know this as conformity. But as robots will soon be found in the home and the workplace, we were wondering if people would conform to robots.”

In the vision test, participants view a target line and several comparison lines on a screen, then state aloud which comparison line matches the target in length. The answer is obvious when a participant is alone, but judgment gets muddled when the others in the room confidently give the wrong answer.

The researchers applied this same kind of test, but added Nao humanoid robots into the mix. The Nao looks like a small toy figure you could buy in a children’s department. When the researchers gave adults and children ages seven to nine the same task, the adults were able to shake off the robots’ influence, but the children were more susceptible.

The children scored an average of 87% on the test when only humans were in the room, but when the robots joined them, their scores dropped to an average of 75%.

“It shows children can perhaps have more of an affinity with robots than adults, which does pose the question: what if robots were to suggest, for example, what products to buy or what to think?” Belpaeme said. If a robot can pressure a vulnerable child into changing how they think, the researchers argue, more thought needs to go into how children interact with such machines.

“A discussion is required about whether protective measures, such as a regulatory framework, should be in place that minimize the risk to children during social child-robot interaction,” the researchers concluded. If robots become children’s everyday peers, children will need safeguards to make sure their thinking remains entirely their own.