Will Self-Driving Cars “Choose” to Kill Cyclists?

A new study sheds light on how self-driving cars will make ethical decisions on the road.

By Jordan Smith

Nov 8, 2018

Alexander Ryumin / Getty Images

In March, a self-driving car struck and killed a woman walking her bike across a road in Tempe, Arizona. The car, operated by the ride-hailing company Uber, had a test driver behind the wheel. But according to police, the driver was looking down at her phone until moments before impact, and didn’t hit the brakes until just after the crash.

Elaine Herzberg, 49, was struck at nearly 40 mph. She later died of her injuries at the hospital, becoming the first known pedestrian killed by an autonomous vehicle.

According to a preliminary report by the National Transportation Safety Board, the car’s sensors detected Herzberg about six seconds before the crash. But it had trouble recognizing her as a person, at first classifying her as an “unknown object,” then as another vehicle, and finally as a bicycle.

The self-driving system would treat each of these classifications differently, with different expectations for how to avoid a crash. A little more than a second before impact, the system decided that the car should simply stop. However, Uber had disabled automatic emergency braking to prevent sudden stops during test drives, relying instead on human drivers to take over as needed, a risk that in this case proved fatal.

In the intervening five seconds, the car’s self-driving technology tried to make “choices” about the best way to avoid a deadly situation. It failed, but the incident points to how autonomous vehicles are supposed to act in future crash scenarios as the technology improves: Software will gather data via sensors, and the car will take what it deems the best and safest course of action.
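That classify-then-respond loop is easier to see in code. What follows is a minimal, hypothetical Python sketch, not Uber’s actual software: the class names, labels, thresholds, and speeds are all assumptions made for illustration. It shows how an object’s label changes the predicted behavior, and how disabling automatic braking leaves the system with nothing to do but hand off to the human driver.

# Hypothetical sketch of a classify-then-respond loop. All names, labels,
# thresholds, and speeds are illustrative assumptions, loosely based on the
# NTSB report's description of the crash, not any company's actual code.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str                # e.g. "unknown", "vehicle", "bicycle"
    distance_m: float         # distance from the car, in meters
    closing_speed_mps: float  # how fast the gap is shrinking, meters per second

def predicted_path(detection: Detection) -> str:
    """Each label carries a different assumption about how the object will move."""
    if detection.label == "vehicle":
        return "stays in its lane, moves with traffic"
    if detection.label == "bicycle":
        return "may cross the travel lane"
    return "no reliable motion model"  # "unknown" objects are hardest to plan around

def plan_response(detection: Detection, auto_braking_enabled: bool) -> str:
    """Pick an action; note how disabling automatic braking shifts the burden."""
    time_to_impact_s = detection.distance_m / max(detection.closing_speed_mps, 0.1)
    if time_to_impact_s > 3.0:
        return "monitor and re-classify"
    if auto_braking_enabled:
        return "emergency brake"
    # With automatic braking disabled, the system can only hand off.
    return "alert human driver to intervene"

if __name__ == "__main__":
    # The same object, re-classified over time, as described in the NTSB report.
    # 17 m/s is roughly 38 mph, close to the reported impact speed.
    for label, distance in [("unknown", 70.0), ("vehicle", 45.0), ("bicycle", 20.0)]:
        d = Detection(label=label, distance_m=distance, closing_speed_mps=17.0)
        print(label, "->", predicted_path(d), "|", plan_response(d, auto_braking_enabled=False))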

But what if its choices present an ethical conflict? What if the car must choose between, say, swerving into a bike lane and potentially putting cyclists in danger, or risking a collision that would injure or kill its passengers?

That question—how self-driving tech will distribute risk among everyone on the road—was at the center of a recent study published in the scientific journal Nature. In what they called the “Moral Machine” experiment, researchers used data from an online game (you can play it yourself) that gives players a set of 13 randomly generated ethical dilemmas that self-driving cars may face on the road, and asks them to choose which they think is the best option.

For example, you might have to choose between crashing a car into a barrier, killing everyone inside, or avoiding the barrier but killing pedestrians in the process—a classic example of the so-called trolley problem. You also get personal information, such as age and profession, about each of the potential victims.

A still from the Moral Machine game, which asks users to choose between different courses of action for self-driving cars.

MIT Media Lab

Tough choice, right? But ultimately, real humans will program the self-driving software that makes these decisions, and their attitudes will be reflected in how the technology acts. Researchers said they factored in more than 40 million decisions logged in the game by millions of people from 233 countries and territories.

According to Azim Shariff, an associate professor of psychology at the University of British Columbia and the study’s co-author, the results showed that the moral principles guiding people’s decision-making in the game varied by country. And while the game did not specifically ask about cyclists, he said, the ways players viewed pedestrians and others could illuminate how self-driving cars will eventually be programmed to treat people on bikes.

For this study, people made decisions as uninvolved third parties. But in his previous research, Shariff found that when people imagine themselves as passengers, they would much rather have a self-driving car that prioritizes their own lives over those of pedestrians outside the car—even if the pedestrians outnumber the passengers.

Theoretically, self-driving cars will use sensors to determine who is on the road and process the safest course of action.

Dong Wenjie / Getty Images

So if those who program self-driving cars can’t imagine themselves as cyclists—if they don’t know or understand how cyclists act—it stands to reason that cyclists won’t be given the same consideration as car passengers when the software has to make quick life-or-death decisions. Which means that cyclists could face a disproportionate risk of injury or death, even when sharing the road with “neutral” nonhuman drivers.

Of course, the promise of autonomous vehicles is that they can react faster than humans, and will never get tired, drunk, or distracted the way people do. Theoretically, this is supposed to mean fewer deaths overall, a point Shariff made sure to emphasize.

“We may see these differences emerge when we look at the ratio of traffic casualties between cyclists and other stakeholders on the road,” he said. “But this just determines the relative level of risk between the different stakeholders on the road. The total risk should decrease and should decrease dramatically—in theory, anyhow.”
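To see how the ratio and the total can move in opposite directions, here is a small arithmetic sketch in Python. The casualty figures are invented purely for illustration; they do not come from the study or from any crash data.

# Hypothetical numbers, invented for illustration only: they show how cyclists'
# share of casualties can rise even while their absolute risk falls, provided
# the overall total drops sharply.

today = {"cyclists": 100, "others": 900}      # assumed annual casualties today
automated = {"cyclists": 30, "others": 170}   # assumed casualties with AVs

for scenario, counts in [("today", today), ("with AVs", automated)]:
    total = sum(counts.values())
    share = counts["cyclists"] / total
    print(f"{scenario}: total={total}, cyclist share={share:.0%}, cyclists={counts['cyclists']}")

# The cyclist share climbs from 10% to 15%, yet cyclist casualties fall
# from 100 to 30 because total casualties drop from 1,000 to 200.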

Before self-driving cars hit the streets, engineers and programmers will have to decide how the cars distribute risk between everyone on the road. That includes passengers, other drivers, pedestrians, cyclists, and even pets. Shariff said that in cases where a tradeoff is necessary, cyclists may bear more risk than, say, pedestrian children. On the other hand, there may be scenarios where cyclists shoulder less risk than others.

Hopefully, cyclists are in the room while these programming decisions are being made.
