The Ethical Problems of the Driverless Car

A situation where Artificial Intelligence must actually decide who will live and who will die is the near future. Who should a driverless car save if an accident is inevitable?

MINSK, BELARUS, February 01, 2019 /24-7PressRelease/ — A situation where Artificial Intelligence must actually decide who will live and who will die is no longer a topic for theoretical discussion or a science-fiction plot, but the near future. The probability of any single such event is very low, but at Big Data scale the event becomes inevitable.

But now we are interested not so much in the probability as in the ethical solution to the problem. Imagine an accident at a pedestrian crossing. All the objects and people involved are subject to the laws of physics, but your car is also subject to Artificial Intelligence control. Let's consider a problem where the driverless car has only three options:

to kill a mother with a child
to kill an elderly couple
to kill you (for example, by leaving the road), even though it was you who bought this AI.

What should be done if the tragedy is inevitable? Who will die? And who has the right to make such a decision?

The question "Who is to be the sacrifice?" takes us to the so-called "Trolley problem", a thought experiment introduced by the English philosopher Philippa Foot. Its classical variant is the following:

A heavy runaway trolley is rolling down the tracks. Five people lie in the trolley's path, tied to the tracks by a mad philosopher. Fortunately, you can pull a lever and divert the trolley onto a side track. Unfortunately, another person is tied up on that side track. What will you do?

The main point of such a task is not finding the exact solution but the way the solution is found. People choose a variant in accordance with their own moral principles, and the majority sacrifices the life of one person in order to save more. AI, however, will be guided in its actions by an assessment of risks and probabilities. And it is not yet clear how it should act under such circumstances.

According to utilitarian ethics, a car must minimize damage, even if such an action leads to someone's death. On the other hand, according to Isaac Asimov's "laws of robotics", a robot cannot harm a human. So such an AI is in zugzwang: every one of its actions is the wrong one!
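To make the utilitarian side of this dilemma concrete, here is a minimal sketch of a "minimize expected harm" decision rule. The options, probability figures, and scoring function are invented for this illustration; no real autopilot is known to score lives this way.

```python
# Toy illustration of a utilitarian decision rule: pick the maneuver
# with the lowest expected number of casualties. All numbers are invented.

def expected_harm(option):
    """Expected casualties: sum of each person's estimated death probability."""
    return sum(option["death_probabilities"])

def choose_action(options):
    """Return the option that minimizes expected harm."""
    return min(options, key=expected_harm)

options = [
    {"name": "hit mother and child", "death_probabilities": [0.9, 0.9]},
    {"name": "hit elderly couple",   "death_probabilities": [0.8, 0.8]},
    {"name": "leave the road",       "death_probabilities": [0.5]},  # the passenger
]

print(choose_action(options)["name"])  # the purely utilitarian choice
```

Under these made-up numbers the rule sacrifices the passenger, which is exactly the outcome Asimov-style constraints and buyers' self-interest push against, so the conflict in the text shows up even in three lines of arithmetic.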

Scientists from the Massachusetts Institute of Technology surveyed more than a million people from all over the world: what do people think is the right thing to do in such situations?

The majority of us stick to the utilitarian logic of fewer victims. Why not take this as the rule an autopilot is taught? The thing is, when the adherents of damage minimization are asked whether they would buy a Tesla that, in a certain situation, would sacrifice their lives for the greater good, they usually refuse.

The instinct for self-preservation is impossible to overcome. The ideal variant for people is to buy a car that protects them, while everyone else takes care of victim minimization. This brings us to an older ethical problem, the "tragedy of the commons". Its essence: when using a common resource, each person strives to gain profit for themselves first and shift the risks onto others, and as a result everyone loses.

In general, humankind regulates such situations with laws. But in the field of Artificial Intelligence ethics, legal norms have only just begun to be developed. Currently, a driverless car is most likely to maneuver to avoid an accident, even if the maneuver is ineffective. As a result, who lives and who dies is decided by the laws of physics and chance rather than by anyone's deliberate choice. That, too, is an option, but one born of despair: at least we did our best.

So, the answer to the question posed is the following. At the moment, AI cannot sacrifice its passenger, which is why the pedestrian is going to die. And who goes to jail? This question also remains open, for no one at the corporations wants to go to jail. The driver had no say in the situation, and it is too late for the pedestrians to care.

P.S. By the way, a cruel but not senseless option is to sacrifice those who are of the least use to society. For more details, see the article by Andersen's expert, "Why democracy, when there are data?". So we could act in the following way: sacrifice those whose social rating is lower (those who do not pay off loans on time, buy alcohol too often, or speak ill of the authorities).

—For the original version of this press release, please visit 24-7PressRelease.com