Well, you do have scenarios like 'tiny child suddenly appearing from among the parked cars on the side of the road'. Does the vehicle swerve into the oncoming traffic on the other lane, or does it run over the kid? There may be time to steer even when there's no time to stop, even if the car is adhering to speed limits.

Everyone wants to be the one person whose life all self-driving cars are designed to spare at all costs, whether they're in them or not; this is no different from what they want from human-driven cars, but people are more comfortable making impossible demands of robots.

Well, then, why not sell that? (That question is rhetorical.) Auction off avoidance priorities, and just have the cars communicate with each other and with the pedestrians' phones to decide who to hit based on a greedy algorithm that steers them away from the highest-priority individuals in the vicinity.

Yes, it's awful in countless ways. But it makes the automated car executives more money than any other solution I've seen so far, so it's probably going to be the one we end up implementing.
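The "greedy algorithm" idea above can be sketched in a few lines. This is a hypothetical toy, not anyone's actual system: the maneuver names, priority scores, and hit probabilities are all invented for illustration. The car just picks whichever maneuver minimizes expected harm weighted by each person's auctioned priority.

```python
# Toy sketch of the "auctioned avoidance priority" scheme described above.
# All names and numbers are invented for illustration.

def pick_maneuver(maneuvers, priorities):
    """Greedily pick the maneuver whose expected harm, weighted by each
    person's auctioned priority, is lowest."""
    def weighted_harm(maneuver):
        # Each maneuver maps person -> probability of hitting them.
        return sum(prob * priorities[person]
                   for person, prob in maneuver["hit_risk"].items())
    return min(maneuvers, key=weighted_harm)

# Priorities bought at auction (higher = the cars try harder to avoid you).
priorities = {"exec_in_suv": 100, "commuter": 10, "pedestrian": 1}

maneuvers = [
    {"name": "swerve_left",  "hit_risk": {"exec_in_suv": 0.9}},   # cost 90.0
    {"name": "swerve_right", "hit_risk": {"pedestrian": 0.8}},    # cost  0.8
    {"name": "brake",        "hit_risk": {"commuter": 0.5,
                                          "pedestrian": 0.2}},    # cost  5.2
]

print(pick_maneuver(maneuvers, priorities)["name"])  # → swerve_right
```

Note that the cheapest option for the fleet is to hit the lowest bidder, which is exactly the awfulness being described: the pedestrian who paid the least becomes the designated target.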

Jeez, is this thread gonna go back to normal eventually, or is it gonna keep being a circular debate about game theory? It may have actually transcended being a circle; there seem to be like three sides to this argument already, despite it being game theory...

Ah well. Might as well throw my own perspective into the multi-dimensional ring. Sure, it's almost incoherent that the trolley problem would literally come up, but it's still useful for political reasons. If someone is told the car is "suicidal" (bonus points if they don't get the full explanation of what that means), they'll be less likely to buy it. Same if it's "murderous". Of course, you could just leave the car incapable of weighing the relative value of human lives, but then we'll eventually get a self-driving car that plows into a crowd. Roll the dice enough times and you'll get enough simultaneous failures eventually.

Yeah, bringing up e.g. the Trolley Problem is usually the starting point for discussing the ethics, but people here are completely fixated on whether the actual Trolley Problem, exactly as described, would occur. That entirely misses the point and reduces the whole discussion to inanity. There's no traction to talk about the ethical issues as they affect robot cars when people fixate on the least relevant details. If this is how discussions are always going to go, then it's probably better that we go back to just posting "wow super tech" articles and stop trying to have an actual thoughtful discussion about the pros and cons of new technology.

Let me sum it up "for dummies":

- There's a thing called "The Trolley Problem" - perhaps you should look this up and read a bit about it and understand why it exists before critiquing parts of it.

- It's a framework for discussing ethics in any situation that involves trade-offs. The details of the 1967 Trolley Problem formulation aren't important; it's a shared framework so that people aren't starting from zero each time we discuss trade-off situations. There's a huge body of research related to it, and a lot of data about how people feel very differently when you change the formulation even slightly, even when the "math" works out the same. So new situations are often phrased as variants of the Trolley Problem so that we can make use of 50 years of ethics research rather than going in blind to every new situation. That's the value of looking at new problems in terms of how they relate to the Trolley Problem; it's not a claim that e.g. robot cars will have to pick between killing exactly 5 pedestrians vs 1 pedestrian. That's not a relevant detail.

- Robot cars will need to make ethical trade-offs. These could involve whether to harm the driver vs kill pedestrians, whether to kill the driver vs kill two other drivers, or basically any situation where there's some trade-off between harm, death, and monetary loss. Basically, the Trolley Problem framework is just a way to abstract some of the details away.

- Fixating on whether the actual 1967 Trolley Problem, exactly as stated (an uncontrollable trolley hurtling down the tracks toward 5 people lying on them), will literally occur is just silly, and it misses the opportunity to start an actual sensible and productive discussion about what ethics we should build into robot cars.

- e.g. robot cars are going to make trade-offs. Someone, and not the driver, will program which variables the car should consider when forced to make a trade-off. Should drivers be allowed to buy "selfish" cars that prioritize their own driver's health above other people's lives? Should we regulate so that people cannot buy "selfish" cars and must buy "community-minded" cars that minimize total harm to all people, even if this means a higher chance of death for the driver? Who decides what system of ethics governs robot cars' decision-making in those worst-case scenarios where it's you-vs-them?

^ Those things in the final point are the only important things to talk about here. I mentioned the Trolley Problem at all because that's how the TED talk on this was structured; it's just a shared point of reference for starting a discussion on the concept of trade-offs in ethics. Saying that pedestrians aren't bunched up like that so it would never happen is going off on entirely the wrong tangent.

Other than how it makes the owner feel, how important is the distinction really? If you own such a car you're unlikely to ever see it have to make such a decision because the programming defaults to "make sure nobody dies", and it's pretty damn good at accomplishing that.

Well, if there are 50,000 auto deaths a year now (USA), and fully optimized robot cars would be 90% safer, then there will still be 5,000 deaths. If, however, everyone decides to buy "me first" robot cars that prioritize themselves, instead of "collectivist" robot cars that work in a network to minimize total road casualties (even if that means one of the robot cars "self-sacrificing"), then you might only see an 80% reduction instead. That works out to another 5,000 preventable deaths (plus injuries and property damage), because people didn't want to buy the cars that are safest as a whole, just in case their own car decided they were expendable to the system.

I think a system where all cars are designed to be "selfish" on behalf of the occupants of the car, and only the occupants, vs one where all cars are designed to be "selfless" and weigh all human lives in their decision-making, would be clearly delineated futures, and they'd likely have vastly disparate outcomes and issues to deal with. Hell, if the "selfish" cars were in fact talking in a network, how do we know they'd benefit from talking honestly with the other cars around them? What if your "Chevy me-me-me 3000" decided to send bogus data about its movements to nearby cars in a near-crash situation, because it could game the system to maximize the survival chances of just its owner, at the expense of the other owners? Once you decide "selfish" robot cars are OK, "gaming the system" becomes a thing: it's the Prisoner's Dilemma, with suboptimal outcomes for all, because nobody will choose to cooperate if there's a risk of being backstabbed and losing out, so we all assume everyone is a backstabber to be prudent, and we hide information. The problem is that "selfish" cars will need to be built around these sorts of trade-offs, which demonstrably give very different outcomes from utilitarian systems.
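The Prisoner's Dilemma structure here can be shown with a toy payoff table. The numbers are invented "survival scores" for illustration: lying about your movements while the other car is honest lets you game the maneuver at their expense, so lying dominates, and both cars end up worse off than if both had been honest.

```python
# Toy Prisoner's Dilemma for two "selfish" cars deciding whether to share
# honest position data in a near-crash. Payoffs are invented survival scores.

# (my_payoff, their_payoff) indexed by (my_choice, their_choice)
payoffs = {
    ("honest", "honest"): (3, 3),  # cooperate: best combined outcome
    ("honest", "lie"):    (0, 5),  # I'm gamed
    ("lie",    "honest"): (5, 0),  # I game them
    ("lie",    "lie"):    (1, 1),  # mutual distrust: worst combined outcome
}

def best_response(their_choice):
    """What a selfish car does, given what it expects the other car to do."""
    return max(["honest", "lie"],
               key=lambda mine: payoffs[(mine, their_choice)][0])

# Lying dominates either way, so both cars lie and both do worse (1, 1)
# than if both had been honest (3, 3).
print(best_response("honest"), best_response("lie"))  # → lie lie
```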

However, and there's a real point here, people in the future with robot cars might decide that they're OK with higher driving speeds, tighter cornering, and less space between cars, because that's a trade-off between convenience and safety that they're willing to make. Robot cars would be safest if driven exactly like we drive now, but as safety margins increase, people might be willing to push it, trading off some safety for convenience. Hell, most people are already OK with the current level of road fatalities, so why wouldn't they push things with robot cars even further? So a claimed 90% reduction in fatalities might not actually happen if we decide to spend the safety margin in other ways because we feel safer.

So no, I don't think saying that it's not worth thinking about because robot cars will be too perfect to care is a good argument. It's highly likely to be a flawed assessment.

(Unrelated Aside: Last night I said 'fantasy tech' and perhaps I should have said 'fantastic tech' instead. But there is also the old quote about how sufficiently advanced tech is indistinguishable from magic, so 'fantasy tech' isn't really that far out of reason...)

True enough on that.

And yeah, as reelya said, it's less of a moral-and-logic test than an ethical problem, and any fatalities with robot cars at this early stage are going to generate backlash and set back people's trust in the technology.

Also, I think focusing on the pedestrian/trolley-problem specifics misses the point. The vast majority of road deaths are not pedestrians; the vast majority are deaths of people in other cars. So saying we can trade off car deaths against pedestrian deaths is a red herring that takes the TP too literally.

So the thing is, and this links back to my previous energy thing (which was in fact a metaphor for pollution if you didn't get that):

- People could buy (A) cars that minimize all road deaths, or people could buy (B) cars that minimize their own chance of death at the expense of everyone else.

- Almost all people say they want to buy type (B) cars.

- But they almost all agree that everyone should get type (A), just not them.

- However, they oppose legislation that would make everyone get type (A).

- So everyone will probably buy into type (B) cars programmed to focus on "me-first".

- and anyone who then bought a type (A) car is a schmuck, because you have the lone self-sacrificing car among a bunch of assholes who'll run you into a wall to save themselves.

- However, the paradox is that each individual driver is in fact statistically less safe because of their adamant demand that their personal safety be ensured by the specific car they bought. The paradox arises in any situation where we're each trying to push extra costs onto other people rather than minimize total costs to all people. Since we're each pushing additional costs onto each other in order to minimize our own share of the costs, we all end up paying more.

Everyone wants to be the one person whose life all self-driving cars are designed to spare at all costs, whether they're in them or not; this is no different from what they want from human-driven cars, but people are more comfortable making impossible demands of robots.

Well, then, why not sell that? (That question is rhetorical.) Auction off avoidance priorities, and just have the cars communicate with each other and with the pedestrians' phones to decide who to hit based on a greedy algorithm that steers them away from the highest-priority individuals in the vicinity.

Yes, it's awful in countless ways. But it makes the automated car executives more money than any other solution I've seen so far, so it's probably going to be the one we end up implementing.

With any luck that idea would be immediately struck down in courts. This is essentially a "pay-to-not-die" scenario; it violates basic human rights. Yes, it's true we generally have to pay for food, shelter, and other basic necessities, but if we are unable to do so, our rights state that we are provided them nonetheless. If we are unable to pay to not die, we are (usually) assured we won't die anyway. This whole idea circumvents that; what if you're on a minimum wage salary, have to support your family, and thus aren't able to pay for this service? Your chance of death compared to those around you has increased, perhaps greatly.

The reasoning of this idea is like that behind an anarcho-capitalist society. We haven't degraded to that point yet, fortunately, so we won't be seeing it anytime soon.

Everyone wants to be the one person whose life all self-driving cars are designed to spare at all costs, whether they're in them or not; this is no different from what they want from human-driven cars, but people are more comfortable making impossible demands of robots.

Well, then, why not sell that? (That question is rhetorical.) Auction off avoidance priorities, and just have the cars communicate with each other and with the pedestrians' phones to decide who to hit based on a greedy algorithm that steers them away from the highest-priority individuals in the vicinity.

Yes, it's awful in countless ways. But it makes the automated car executives more money than any other solution I've seen so far, so it's probably going to be the one we end up implementing.

You're probably right about that, actually; I'd sort of missed this post before, and it ties into what I've written too. One way this trade-off could be monetized is by building it into your insurance premiums. E.g. if you get a "greedy" algorithm such as "steeravoid 3000", which always maximizes your chance of survival at the expense of others, then the insurance companies are going to start taking note of which algorithm you've chosen to run, and they're going to gouge you on third-party insurance. It would be similar for all forms of damage and potential medical expenses: the people who can afford the least will end up with AI algorithms that minimize potential costs for all insured road users, whereas the rich won't care; they'll happily pay extra insurance to be covered for having an AI that is willing to sacrifice others for their own safety. But it won't be phrased as "willing to run down a woman with a baby if it protects the car occupant", it will be phrased as "maximize safety at all costs". The AI you bought that "maximizes safety at all costs" might then even make some terrible decisions that a real driver wouldn't, like running over 20 schoolchildren instead of increasing risk for the car occupants.

Car AI is going to matter in cases such as accident forensics, so I imagine a TON of laws will be passed so that the choice of AI you run is something you have to disclose. It's not like Linux vs Windows on your PC: if you change the code on a life-or-death machine that can kill other people, the government and insurance agencies will want that information, and they'll legislate to get it. E.g. if they find out you switched your self-driving algorithm to a "greedy" one to avoid damage, and it smashed someone else up, good luck with insurance payouts/premiums. And with full computer logs from both robot cars, if you ran someone off the road to protect yourself, they're going to notice and make you pay for it.

So an ecosystem should evolve around this, where car-makers, AI designers, insurance companies, police, and government all play off each other.

With any luck that idea would be immediately struck down in courts. This is essentially a "pay-to-not-die" scenario; it violates basic human rights. Yes, it's true we generally have to pay for food, shelter, and other basic necessities, but if we are unable to do so, our rights state that we are provided them nonetheless.

Risks are different from necessities. Living in a well-funded, low-crime area isn't a right, even though it affects your chances of dying. Death panels!

So new situations are often phrased as being variants of the Trolley Problem so that we can make use of 50 years of ethics research rather than going in blind to every new situation.

It's cute that you think the other people here care to actually educate themselves on the subject. Yeah, there's plenty of information out there, but you don't need that information to form an opinion and then defend it to the death.

Nah, I wasn't expecting them to do that, just pointing out that the specifics being discussed come from a 50-year-old academic thought experiment. The TP is analogous to a lot of real-world decisions, but it's a clear mistake to apply it literally to every new situation, as if a new ethical situation were only analogous to the TP if it involved 5 pedestrians being run over. If we go that route, then something that's supposed to be a tool for abstracting ethical problems, and deciding whether they're really comparable, gets reduced to discussing inane details instead.