Artificial intelligences could be set to be selfish

Published 1 February 2017

From
Tony Castaldo, San Antonio, Texas, US

I can think of two solutions to the major moral dilemma facing autonomous cars. The first is for their owners to have general liability insurance, much as physicians must have cover for everything that might cause a patient injury or death.

Second, if the car is intelligent enough to recognise moral dilemmas, it should be smart enough to know or discover the owner's preference: do they always choose self-preservation, or kids first, or women and children first?

Insurance companies could then set premiums based on the predicted risk of paying damages, given what the car is programmed to choose.