Ethical dilemmas of a driverless world

University of Auckland's Alex Sims looks at why we can't apply a universal code of ethics to driverless cars

Results from a huge online game designed to discover whether people across the globe think alike about the decisions a driverless car should make in the face of an unavoidable accident have been published in Nature, the leading international science journal.

The game, Moral Machine, has echoes of the “trolley problem”, which was once just a thought experiment in ethics. The problem goes like this: you see a runaway trolley (in New Zealand we would call it a train carriage) moving towards five people lying on train tracks. Next to you is a lever that controls a switch. If you pull the lever, the trolley will be diverted onto another set of tracks, saving the five people. But one person is lying on the other set of tracks, and pulling the lever will kill that person. Which choice is ethically correct?

Autonomous cars raise the stakes. If a crash is inevitable (an autonomous car's brakes fail, say) and the car has to choose between running over and killing three elderly people or swerving into a brick wall and killing its own occupants, what should the car do? The authors state, quite rightly, that we as a society cannot leave the ethical principles to engineers or ethicists alone.

We need rules. It would be unconscionable for people to drive cars programmed to put the occupants' safety ahead of everyone else's. For example, a car must not be programmed to run three people over to spare its sole occupant from crashing into a parked car.

The Moral Machine experiment, run by researchers at MIT (the Massachusetts Institute of Technology), collected nearly 40 million decisions from respondents in 233 countries and territories. The project's scope was ambitious: it sought to explore whether a universal machine ethics is possible. While the authors acknowledged the study's limitations (significantly more men than women responded, for example), the number of respondents and their spread across the world made a welcome change from experiments carried out on college students in the United States.

The game presented different scenarios that focused on nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalkers), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status).
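
To see how such scenarios reduce to structured choices, here is a minimal sketch, in Python, of how one dilemma might be represented as data. The field names and example values are hypothetical illustrations, not the experiment's actual code:

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        """One branch of a dilemma: who dies if the car takes this action."""
        humans_killed: int            # sparing more lives versus fewer
        pets_killed: int              # sparing humans versus pets
        victims_are_passengers: bool  # passengers versus pedestrians
        crossing_legally: bool        # legal crossing versus jaywalking
        average_age: float            # the young versus the elderly

    # One scenario: stay on course and hit three elderly jaywalkers,
    # or swerve into a wall and kill the car's sole occupant.
    stay = Outcome(humans_killed=3, pets_killed=0,
                   victims_are_passengers=False,
                   crossing_legally=False, average_age=75)
    swerve = Outcome(humans_killed=1, pets_killed=0,
                     victims_are_passengers=True,
                     crossing_legally=True, average_age=40)

    # Each respondent's click is, in effect, a preference between two
    # such outcomes; millions of clicks reveal which factors matter.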

Four levels of analysis were used. First, what is the relative importance of the nine factors, when data are aggregated worldwide? Second, does the intensity of each preference depend on respondents’ individual characteristics? Third, can clusters of countries with homogeneous moral preferences be identified? And fourth, do cultural and economic variations between countries predict variations in their moral preferences?

While there were differences between countries, the research found three strong preferences: sparing human lives over animal lives, sparing more lives over fewer, and sparing young lives over old.

Respondents from individualistic cultures had a stronger preference for sparing the greatest number of people. Those from collectivistic cultures showed a weaker preference for saving younger people, which is not surprising given the respect they show to older members of their society.

The study found that countries could be grouped into three clusters in terms of moral preferences. The first cluster was Western: in addition to New Zealand, it included North America and a number of European countries, with sub-clusters of Scandinavian and Commonwealth countries. The second was Eastern, including Japan, China and Taiwan as well as Islamic countries such as Indonesia, Pakistan and Saudi Arabia. The third was Southern, comprising the Latin American countries as well as France and countries with a French influence.

There were differences between the clusters that make for fascinating reading. Respondents from countries with a strong rule of law were more likely to spare more of the characters in a scenario, more likely to favour humans over non-humans, and less likely to favour higher-status people over lower-status people. Also, respondents from countries with higher socio-economic inequality (measured by the Gini coefficient) were more likely to spare higher-status people over lower-status people. Thus the danger is that in the latter countries autonomous cars would protect their wealthy owners at the expense of others.
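
For readers unfamiliar with the measure, the Gini coefficient expresses inequality on a scale from 0 (everyone has the same income) to 1 (one person has everything). A minimal sketch of the standard pairwise-difference calculation, in Python:

    def gini(incomes):
        """Gini coefficient: the mean absolute difference between all
        pairs of incomes, divided by twice the mean income."""
        n = len(incomes)
        mean = sum(incomes) / n
        total_diff = sum(abs(x - y) for x in incomes for y in incomes)
        return total_diff / (2 * n * n * mean)

    print(gini([10, 10, 10, 10]))  # 0.0: perfect equality
    print(gini([0, 0, 0, 100]))    # 0.75: highly unequal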

Curiously, while the authors set out to discover whether a universal machine ethics is possible, and found common preferences, they do not argue for a universal machine ethic; rather, they expect each country to set its own rules. Moreover, the authors note that ethical preferences should not necessarily dictate ethical policy, although people's willingness to buy autonomous vehicles and tolerate their use will depend on how palatable the adopted rules are.

To be sure, the authors are being pragmatic, because imposing a universal machine ethics would be difficult, but decisions will have to be made, such as whether the lives of a few should be sacrificed to save many. Those decisions should not be left to individual car companies, which is effectively what the ethical rules proposed in 2017 by the German Ethics Commission on Automated and Connected Driving allow, saying simply that “General programming to reduce the number of personal injuries may be justifiable”.

However, on an equally pragmatic note: what happens when you drive a car from one country into another with different rules? The car would have to update its operating system to the new country's rules, and that would not always go smoothly.

And what of the law? Ethical rules are interesting because there is often a large gap between ethical and legal rules. Breaking a legal rule can result in sanctions: fines, imprisonment, even being unable to travel to certain countries, as well as difficulty finding employment and obtaining insurance. Ethical breaches do not incur the same sanctions. While some legal rules are ethically based (do not steal, for example), others are not. Legally, an employer is entitled to pay employees the minimum wage even while knowing they go without food so that their children can eat, though ethically an employer making extremely large profits should pay them more. Thus, when the “ethical” rules for autonomous cars are determined, those rules must be enshrined in law.

While the ethical issues are problematic, it must be borne in mind that more than one million people currently die each year in car crashes and tens of millions are injured. Autonomous cars should greatly reduce those fatalities and injuries, which can only be a good thing.

Also, we must remember that new technology is always viewed with suspicion. In the United Kingdom, the first cars were limited to a speed of four miles per hour on public highways and two miles an hour in cities, towns and villages. A person was required to walk in front of the car carrying a red flag to warn riders and drivers of horses about the oncoming vehicle.
