Bristol AI experts say rights for robots could be ‘dangerous for humanity’

Leader signs open letter to EU discouraging 'electronic personhood'

18th May 2018

Professor Sanja Dogramadzi of the Bristol Robotics Laboratory is among the 156 artificial intelligence experts to have signed an open letter to the European Commission regarding discussions to grant robots ‘electronic personhood’ status.

In early 2017, a European Parliament report proposed that robots be granted ‘electronic personality’ status, holding them to account for any property damage or harm to a person that they cause.

The open letter (dated April 5, 2018) states that ‘electronic personhood’ status is unnecessary and that focus should instead be placed on creating an “actionable framework” for AI and robotics. The letter claims that the European Parliament’s discussion is based on an overvaluation of the capabilities of robots, spurred on by science fiction and sensationalist press coverage. The danger, the signatories argue, is a misrepresentation of how advanced AI technology currently is.

Dogramadzi, known for her work in medical robotics at the Bristol Robotics Laboratory at UWE, maintains that responsibility for a machine’s actions lies with its human owners and manufacturers, not with the machine itself.

She says: “Our society is responsible that robots equipped with AI operate according to strict safety and ethics rules created by the governing bodies. Creators of AI are solely responsible for its actions.

“I signed the petition because I agreed with its demands – to establish governing principles of AI rather than give AI a legal status.”

She points to the complications in creating a distinct legal status. As the open letter makes clear, if an ‘electronic person’ model were to exist, it would be inappropriate for it to derive from either the Natural Person model or the Legal Entity model (under which companies fall). The signatories argue that the former would grant the robot human rights, while the latter would incorrectly imply that a human being represents and directs it. Neither model is applicable to a robot.

Programmers are liable for a robot’s actions

GWS Robotics’ Philip Graves has said: “To establish such rights for robots could be extremely dangerous for humanity.

“It could elevate robots, which are essentially machines constructed and initially programmed by humans, to the status of organic beings over which we have no right of control.

“I believe we should legislate from the standpoint that they are machines, under full human responsibility and without independent rights.”

Insuring a robot to protect against property damage is practical, he says, but he warns: “To give true personhood to a robot, which includes rights as well as responsibilities, could be seen as extreme.

“Personhood does not exist in law even for non-human animals or other life forms, whose due rights can be demonstrated from an ethical perspective to be much greater than those of artificially intelligent machines.”

Laws to ensure liability for AI should be carefully considered, without risking a misrepresentation of the technology.

James Hacker is interested in digital culture, innovation in the cultural sector and anything slightly unusual. An English undergraduate at Exeter University, he is looking to pursue a career in tech journalism.