AI Rules Are Necessary, Say European Regulators

AI rules will be needed to standardize safeguards and liability, just as global regulations have developed for drones and self-driving cars. Even if sentient robots won’t be around anytime soon, policy makers need to consider human-robot interactions.

A lot of science fiction, from Star Trek and Ex Machina to Westworld, makes it seem as though human-level artificial intelligence is just around the corner. In reality, sentient machines remain far off. In the meantime, ethicists and regulators in Europe and elsewhere have begun considering robotics and AI rules.

The most pressing use cases involve autonomous machines in factories and warehouses. The more functions are performed by the onboard (and cloud-based) “brains” of robots, the more we need to anticipate potential outcomes and decide how to assign legal blame in the case of an accident.

In my previous article, I looked at how the EU may soon create an agency governing safety, liability, and other standards for mobile and social robots and for autonomous vehicles. The concept of electronic “personhood” essentially boils down to the legal status of these machines. Ownership versus independence will be a key factor in determining responsibility for any incident.

Business Takeaways:

Public agencies and private companies need to collaborate on a consistent legal framework for determining responsibility in the case of accidents involving robots and AI.

The novel issue of “robot rights” will require innovative solutions to be enforceable.

The EU is leading efforts to anticipate technology development with robotics and AI rules.

A bright future?

In the 20th century, flight evolved from the introduction of biplanes before World War I through the moon landing and space shuttle. We are on the verge of an evolution of computer science and robotics in which truly autonomous, thinking machines could emerge.

Self-driving cars, such as Volvo’s Drive Me test vehicles, are leading the way for AI rules.

Social robots, self-driving cars, and various applications of machine learning offer many commercial opportunities. However, each of these involves not only technical challenges, but societal ones as well.

Policy makers and businesses need to consider the effects of robotics and AI. If many production and service jobs are replaced through physical and software automation, how will taxation support programs such as Social Security?

Regulations address the physical safety of humans sharing workspaces or roads with robots, but they will also need to account for interactions and employment. Automation has its roots on the manufacturing floor, but now the very nature of work is changing all the way up the corporate ladder.

A legal person is an entity subject to human justice, yet I do not believe a well-designed machine could experience the kind of suffering that is intrinsic to human punishment.

In the U.S., the Federal Aviation Administration now requires that all drones and drone pilots be registered. I would expect a similar licensing ecosystem to emerge for certain robots and AI systems. Such a registration process could also be one source of funding for an insurance pool to cover accident liability.

Sorting it all out

Governments must determine how their legal systems should be modernized to deal with these potential scenarios, up to and including legal personhood for a sentient machine.

Even before then, lawmakers and industry need to devise a new insurance solution for autonomous vehicles and AI: something akin to “no-fault” insurance for accidents in which the machine’s logic is to blame. The question is how to fund such an insurance system.

One benefit of untangling any eventual “criminal AI” incident should be access to the machine’s “black box” data recorder, because unlike humans, a robot shouldn’t be able to forget or lie. The facts and data from any incident should be available to accident investigators.

This information should include both the sensor data and the logic choices made by the AI. Having this information will make fault determination much more straightforward in the courtroom.
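The kind of record investigators would need can be sketched as a minimal, append-only decision log. This is purely illustrative: the class and field names (`BlackBoxRecorder`, `DecisionRecord`, and so on) are hypothetical, not any vendor’s or regulator’s actual format.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One hypothetical 'black box' entry: what the robot sensed and chose."""
    timestamp: float
    sensor_data: dict   # e.g. lidar ranges, detected object labels
    chosen_action: str  # the action the control logic selected
    rationale: str      # the rule or model output behind the choice

class BlackBoxRecorder:
    """Append-only event log; records are never edited or deleted."""
    def __init__(self):
        self._records = []

    def log(self, sensor_data, chosen_action, rationale):
        self._records.append(
            DecisionRecord(time.time(), sensor_data, chosen_action, rationale)
        )

    def export(self):
        """Dump the full history as JSON for accident investigators."""
        return json.dumps([asdict(r) for r in self._records], indent=2)

# Usage: log one decision cycle, then export the evidence trail.
box = BlackBoxRecorder()
box.log({"lidar_min_m": 0.4}, "emergency_brake",
        "obstacle inside 0.5 m safety threshold")
print(box.export())
```

The essential property is that both inputs (sensor data) and outputs (the chosen action, with its rationale) are captured in one tamper-resistant trail, which is exactly what a court would need to determine fault.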

Is a “kill switch” necessary for automation rules?

The European Parliament Committee on Legal Affairs has even gone so far as to propose the requirement of a robot “kill switch.” On the factory floor this has historically been implemented as an emergency stop. “Kill switch” sounds more dramatic, but the question remains whether every robot needs some way to be halted in case it malfunctions.
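In software terms, the emergency-stop pattern is straightforward: a latched flag that gates every motion command until a human deliberately resets it. The sketch below is a simplified illustration of that pattern (the names `EStop` and `send_motion_command` are my own), not a safety-certified design, which would require redundant hardware interlocks.

```python
import threading

class EStop:
    """Software emergency stop: once tripped, it latches until reset."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()    # halt immediately; stays latched

    def reset(self):
        self._tripped.clear()  # requires a deliberate human action

    def allows_motion(self):
        return not self._tripped.is_set()

def send_motion_command(estop, command):
    """Every motion command is gated by the e-stop check."""
    if not estop.allows_motion():
        return "HALTED"        # drop the command; the robot stays stopped
    return f"EXECUTING {command}"

# Usage: commands flow normally until the switch is tripped.
estop = EStop()
print(send_motion_command(estop, "move_forward"))  # EXECUTING move_forward
estop.trip()
print(send_motion_command(estop, "move_forward"))  # HALTED
```

The key design choice is that the stop latches: motion does not resume when the fault clears on its own, only when a person decides it is safe.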

Global debate requires unified solutions

As robotics and AI companies continue to innovate and produce ever more autonomous machines, it is critical that we understand the impact of laws in various regions. The debate over allowing self-driving vehicles on the road alongside human drivers has led some cities to outlaw them, while other cities have welcomed them.

Across the broad spectrum of robotic applications, it could be chaotic for manufacturers to comply with a patchwork of local restrictions. This is why it is important to start now on exploring reasonable “rules of the road” for all regions. I hope uniform and sane guidelines will emerge from the early adopters.

The Market Spec Group was founded to deliver best practices, market insights, and strategic guidance for the industrial automation and service robotics markets. For over 25 years, Mike Oitzman worked for market leading companies such as Adept Technology, Remedy/BMC Software, and Hewlett-Packard. At HP, he helped introduce the first generation of HP’s cloud-based solutions and led the product management team for HP’s largest SaaS solution. Oitzman is a recognized speaker and expert in the “Robots-as-a-Service” and mobile robot markets.