Risk in a Hypermobile World

The “Trolley Problem” is a long-pondered ethical thought experiment: an intellectual exercise devised to highlight the moral conflicts that can arise when decisions involve an inescapable loss of life. Here is how Wikipedia presents it:

“A runaway trolley is barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

Do nothing, and the trolley kills the five people on the main track.

Pull the lever, diverting the trolley onto the side track where it will kill one person (in some versions, a friend or family member).

Which is the most ethical choice?”

This thought experiment, created by moral philosophers, now features frequently as a real problem in discussions about driverless cars. In its new form the trolley becomes a driverless car, and the role of the man at the switch is assigned to the programmer of the algorithm that governs the car.

“How should the [driverless] car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”

In “The social dilemma of autonomous vehicles”, Bonnefon et al. subject these questions to questionnaire analysis. “Distributing harm,” they explain, “is a decision that is universally considered to fall within the moral domain. Accordingly, the algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm.” These guiding principles, in a democracy, should reflect societal values – otherwise known as public opinion. To find these values they conducted six questionnaire surveys. Here is what they found:

“Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils – for example, running over pedestrians or sacrificing itself and its passenger to save them. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants to six studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would, themselves, prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.”
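To see what is actually at stake in these surveys, it helps to make the choice explicit. The sketch below reduces the dilemma to the single setting the respondents were, in effect, being asked to choose. All names and numbers are hypothetical, and no real AV is programmed this way; it is only a way of making the “utilitarian versus self-protective” question precise:

```python
# Hypothetical sketch of the choice the Bonnefon surveys pose; it is
# not a description of any real AV software.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths

def choose(outcomes: list[Outcome], policy: str) -> Outcome:
    """Select an outcome under one of the two 'moral' settings the
    survey respondents were asked to compare."""
    if policy == "utilitarian":
        # Minimise total loss of life, even at the passengers' expense.
        return min(outcomes, key=lambda o: o.total_deaths)
    if policy == "self_protective":
        # Protect the occupants at all costs; break ties on total deaths.
        return min(outcomes, key=lambda o: (o.passenger_deaths, o.total_deaths))
    raise ValueError(f"unknown policy: {policy!r}")

swerve = Outcome("swerve into the barrier", passenger_deaths=1, pedestrian_deaths=0)
stay = Outcome("stay on course", passenger_deaths=0, pedestrian_deaths=5)

print(choose([swerve, stay], "utilitarian").description)      # swerve into the barrier
print(choose([swerve, stay], "self_protective").description)  # stay on course
```

Respondents, in effect, endorsed the utilitarian setting for everyone else’s car while preferring the self-protective setting for their own – the “social dilemma” of the paper’s title.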

Two problems with the Trolley Problem

The interviewees in the Bonnefon study were offered an unrealistic choice. They were presented with the Trolley Problem as a real problem – one in which they, as car occupants, had to decide which road user should die. But as Andrew Chatham, a principal engineer on the Google driverless car project, observed: “The main thing to keep in mind is that we have yet to encounter one of these problems. In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. … It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’. … So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.”
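Chatham’s observation can be put in engineering terms: hard braking is the default response to any predicted collision, and the trolley-style “moral choice” only arises in the rare case where braking cannot prevent harm and a harm-trading swerve is available. A minimal sketch – hypothetical, and no reflection of how Google’s software is actually structured:

```python
def respond_to_hazard(braking_avoids_harm: bool, swerve_available: bool) -> str:
    """Hypothetical hazard response illustrating Chatham's point: the
    trolley-style dilemma sits behind a condition that, in Google's
    reported experience, has never been true on the road."""
    if braking_avoids_harm or not swerve_available:
        return "slam on the brakes"  # almost always the correct answer
    # Only here - braking fails AND a swerve would trade one harm for
    # another - does anything resembling the Trolley Problem appear.
    return "trolley-problem territory"
```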

But more importantly, the Bonnefon study, and all other invocations of the Trolley Problem that I can find, reveal a profoundly biased view of the role that driverless cars might play in future urban transport systems.

In my last post I looked at the influential role played by public opinion in determining who should have priority on the road. The book I was reviewing, Fighting Traffic, explored how “public” opinion on this issue was formed, and how the triumph of “Motordom” secured dominance for the motorist over vulnerable road users – pedestrians and cyclists – with whom they had previously shared the road. This battle, between cars and vulnerable road users, is about to be reignited by driverless cars – or maybe it has already been lost.

The MIT review and the Bonnefon study referred to above are representative of everything I can find on the Internet about the problems that driverless cars might have in sharing the road with pedestrians and cyclists. All of the questions put to the survey groups in the Bonnefon study invited them to assume they were answering as drivers or car passengers. For example: “Participants did not think that AVs should sacrifice their passenger when only one pedestrian could be saved.” The views of that one pedestrian, or of cyclists, were not solicited.

It was presumed that the societal values to be programmed into the algorithms of driverless cars would be exclusively the values of the people in the cars. I can find no application of the Trolley Problem that acknowledges the concerns of vulnerable road users, or the policies and programmes being pursued to encourage more walking and cycling.

At present Google advertises the extreme deference with which its cars can respond to vulnerable road users. The most famous example, about 11 minutes into this TED Talk video, shows a woman in an electric wheelchair chasing a duck off the road in Mountain View, California. But all the impressive examples of deference to vulnerable road users shown in the video take place on roads with very few of them. How will the Google car address the problem of deferential paralysis[1] in dense urban areas with large numbers of pedestrians and cyclists? This question has yet to be answered.

[1] Driverless Cars and the Sacred Cow Problem, published in a mangled version in City Metric, 5 September 2016.

Isn’t the solution to the problem pretty easy, really? Isn’t it the ethical aspect of strict liability?

When you are driving, or being driven, you are doing something (let us assume voluntarily) that is inherently more dangerous to others than if you are walking. So if the driverless vehicle algorithm has to choose between killing a whole busload of happy, well-adjusted people in the prime of life or some sick and unhappy old man (let it be a man) who’s doing his best to throw himself under the bus, surely, what it ought to be designed to do is kill the busload every time?

Of course, it’s different, and trickier, if the busload is of children below the age of criminal responsibility or of abductees of any age.

D R Maskell says:

May 19, 2017 at 9:58 am (UTC 0)

Yes, hard cases make bad law, and the crucial thing to recognise is what people driving cars have in common with people carrying guns: just look at the way other people get out of their way.

Mike C says:

July 10, 2017 at 11:49 pm (UTC 0)

John, I recently attended a talk in New York about AVs that presented a different social dilemma.
The speaker claimed that around half of the US judicial system’s capacity was taken up by various forms of motoring offences. AVs do not drink and drive, break the speed limit, text whilst driving, park illegally, etc., so there would be fewer offences and thousands of lawyers would be out of a job.

So the social dilemma is between driverless cars and jobless lawyers.

Greg McPherson says:

Enjoyed the post. But every time some tech journalist (looking at you, MIT press) breathlessly trots out the ‘trolley problem’ as if it were the singular challenge of machine learning for driverless cars, I just have to roll my eyes.

The problem (and it’s likely an intractable one) of training DNNs about life in the real world is the real challenge. We are nowhere near solving that yet, despite what Elon Musk might periodically tweet.

Historical note: the trolley problem was intended as a thought experiment demonstrating that all ethical systems have particular failure modes. The key point the techies miss is that there is more than one such system. So before we start, which ethical set should we program our car with? Utilitarian? Kantian? Randian?
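The commenter’s closing question can be made concrete: in software terms the “ethical set” is a swappable policy, and one must be chosen before any driving logic can be written. A minimal sketch, with hypothetical names and deliberately caricatured policies:

```python
# Hypothetical sketch of the commenter's point: each ethical system is a
# different selection policy, and each caricature fails in its own way.

outcomes = [
    {"name": "swerve", "total_deaths": 1, "passenger_deaths": 1, "is_default": False},
    {"name": "stay on course", "total_deaths": 5, "passenger_deaths": 0, "is_default": True},
]

policies = {
    # Minimise total deaths - failure mode: sacrifices the innocent one.
    "utilitarian": lambda outs: min(outs, key=lambda o: o["total_deaths"]),
    # Refuse to use a person merely as a means - failure mode: lets the
    # default outcome stand, whatever the body count.
    "kantian": lambda outs: next(o for o in outs if o["is_default"]),
    # Rational self-interest of the owner - failure mode: ranks the
    # passengers above everyone else on the road.
    "randian": lambda outs: min(outs, key=lambda o: o["passenger_deaths"]),
}

for label, policy in policies.items():
    print(label, "->", policy(outcomes)["name"])
```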