Why do people always assume that the trolley running over one person instead of five is the best outcome? I say both outcomes are undesirable, and one is not necessarily better than the other. There are too many other unknowns. It makes me think of this parable.

A farmer and his son had a beloved stallion who helped the family earn a living. One day, the horse ran away and their neighbors exclaimed, “Your horse ran away, what terrible luck!” The farmer replied, “Maybe so, maybe not. We’ll see.”

A few days later, the horse returned home, leading a few wild mares back to the farm as well. The neighbors shouted out, “Your horse has returned, and brought several horses home with him. What great luck!” The farmer replied, “Maybe so, maybe not. We’ll see.”

Later that week, the farmer’s son was trying to break one of the mares and she threw him to the ground, breaking his leg. The villagers cried, “Your son broke his leg, what terrible luck!” The farmer replied, “Maybe so, maybe not. We’ll see.”

A few weeks later, soldiers from the national army marched through town, recruiting all the able-bodied boys for the army. They did not take the farmer’s son, still recovering from his injury. Friends shouted, “Your boy is spared, what tremendous luck!” To which the farmer replied, “Maybe so, maybe not. We’ll see.”

"Less bad" is definitely better. If you're choosing out of two bad options, you want one that does least lasting harm. One person's death is bad enough on its own, but it beats the hell out of five persons' death.

I've finally gotten around to reading Dan Ariely's The (Honest) Truth About Dishonesty. The research presented in that book shows that one factor that makes people more likely to cheat is being a step or more removed from the act of cheating itself.

For example, in one study, participants were asked how likely they thought the average player, and they themselves, would be to move a golf ball 4 inches to a more advantageous position. Each participant was presented with three methods of moving the ball: with their golf club, with their foot, or by picking it up directly. The results suggested that players using the golf club to move the ball would be the most likely to cheat, with the foot and the hand coming second and third respectively. The hypothesized reason was that, in using the golf club, the player was not in direct contact with the ball and thus somewhat removed from the act of cheating, which allowed them to more easily rationalize their actions.

Could the same sort of mentality be at play here? Throwing a switch is a less direct way of causing a person's death than physically throwing someone off a bridge. In that way, it allows people to more easily rationalize the action, even if the result is the same.

What people should strive for, in Greene’s estimation, is moral consistency that doesn’t flop around based on particulars that shouldn’t determine whether people live or die.

Easy to say, but much harder in practice when people need to overcome internal biases that they might not even be aware of.

Easy to say, but much harder in practice when people need to overcome internal biases that they might not even be aware of.

Welcome to the world of social justice.

Could the same sort of mentality be at play here?

That's what I was thinking. Pushing the fat man means being directly involved in his death, while switching the track means you're basically just redirecting the bullet slightly, the bullet already being in the air, set in motion by something else.

Because of course, the greatest philosophical question of our time is why humans aren't as coldly rational as machines

One of the more prominent ones at the moment, I'd say.

and the greatest ethical problem to be solved is how we can fix that.

Fix the fact that we aren't coldly rational? That's not the point - of the problem or of the quote. The point is: can we streamline our consciousness so that people from around the world would reach similar conclusions by default? A very far-fetched question, I grant you, and one could argue against such normalization.

So, you think there's enough of a difference between moral intuition and morality to justify poking fun at the confusion when it happens? I'd never have thought that, since the two seem directly related to each other.

You can get a lot of people to agree that torturing suspected terrorists is justifiable so long as it gets you actionable information. That doesn't make torture okay. You can get a lot of people to agree that murder is justifiable in the case where the person killed someone you care about. That doesn't make vigilante justice okay. The entire existence of the field of ethics is predicated on the idea that morals are not intuitive, that we need to look closer and think harder, and that we can't just go with our gut. More generally, philosophy assumes that "common sense" is a contradiction because if all reasonable people agreed on everything, we wouldn't have anything at all to discuss.

It might be a little mean to poke fun here, but when a Harvard professor has based his entire career on the trolley problem, and he hasn't understood the trolley problem, making fun seems nicer than the alternative.

The trolley problem was largely invented to illustrate why utilitarianism doesn't make sense, why it isn't valid to say something is moral because you're saving five lives while sacrificing one. There's something fundamentally different about pushing a fat man onto the track vs. flipping a switch to divert a trolley, and there's a difference between examining why those two cases are different, and examining why most people agree that those two cases are different. The former is philosophy, the latter seems a bit like someone trying to construct a science of why the people who disagree with them are irrational.

Another article that seems to think we can treat programs as subjects. AI would be better off without the "AI" moniker, I think, because it attracts people who think that software and wetware are comparable.

I was hoping for it to go deeper into the mentality of the monks that would've led them to believe the solution they chose to be the correct one. Overall, however, the topic of the article is "This may be our biological base for morality going haywire, and it relates to the prospect of driverless cars".

From the little I know of Buddhism, I think the reason that the monks pick the direct route of pushing the fat man is intention.

In Buddhism, there's a story of a man who killed the captain of a ship because the captain was leading the ship into danger that would kill all the men. As long as the man's intention was to save the crew, then killing the captain wasn't seen as bad.

Using the switch to divert the trolley would be deception, which is not a pure intention.

The idea is that if you're going to do a wrong thing, then don't disguise it by trying to deceive others about what you did, because that's not a pure intention.

My question would be whether you can program intention, and more importantly, whether that would be a good thing. Perhaps not: when a human acts with a pure intention, it's clear who is taking responsibility. If the car is doing the killing, is it the car (or the owner of the car) that takes responsibility?

My question would be whether you can program intention, and more importantly, whether that would be a good thing.

This is a sentiment I hear most often from people far from the field of artificial intelligence: that building a machine with intent may not be a good thing. I believe it's driven by the same idea as all anti-robotic sentiment: that human life is somehow special and more precious, and that the human mind is equally special and precious.

To that, I raise a different question: what is so special about us? Sapience? It is, in fact, special - we know of no other species capable of such progressive reasoning - but why is it so precious that we want to prohibit ourselves from developing something similar by hand?

To answer "Why is it not?" is escaping the question. There's no obvious and reasonable rationale behind having ourselves as the only sapient species beyond our species-wide sense of natural superiority and control over our environment. Let me be clear: we're not talking about killing human beings here - it's a completely different argument on the preciousness of human life. We're talking about making something equally precious in its ability to reason.

Once we build something similarly sapient - let's dub it an artificial general intelligence, or AGI, for simplicity of terminology, though similar issues arise with human cloning - we're letting go of our solitary control over our environment and giving it away, consciously, to a mind we have no sway over - beyond, of course, turning it off, much as with a human being. This makes us no longer the king of the hill when it comes to advanced thought: we'd have to make space - or, worse, give up the position altogether - for something similarly or more capable than us. This runs entirely contrary to our idea of natural superiority as a species, irrational and entirely narcissistic as it is.

But let's put AGI aside and talk about something simpler: AI as the sole driver of an automobile, with no support from a human being. It makes all the decisions, communicates with other cars on the road for optimal routing, and does its best to avoid collision when an incident seems imminent. We no longer have control over our environment: we're giving it away completely to an automaton we have no sway over. Naturally, this is terrifying for humanity: that we might submit to an outside intelligence for decisions that we can't review or argue over. We couldn't even if we wanted to: an AI driver processes environmental information far more swiftly than we could ever hope to at the conscious level.

So now we have a black-box mechanical mind that we have no control over in our daily lives, which drives us around as we need and decides for itself on all the required questions: how fast to go, which street to take, and whether to avoid hitting the man crossing the road illegally. Terrified yet?

However, I think the concept of such a monstrous mechanism is a lie. It's a product of many biases coupled with a common misunderstanding of the whole process. We don't usually think about it this way, instead presenting such mechanisms as holistic, but they're nothing more than their parts working together. You can only have a murderous, rampaging machine if something in its programming led it to believe this is the most efficient way of solving the problem it was built to solve. That might involve external data messing with the native assumptions of the thought system, but that's a different story. Let's just say that, given an intelligence capable of learning from observation, you can teach it anything, and its learning what we would consider a bad thing is not the intelligence's fault but the teacher's.

This has gotten way long. To summarize: that we can create something capable of thought is not a bad thing. That we can teach the newly-born intelligence to live according to our values is a distinct possibility reliant entirely on the builders and their intentions. An AI driver is not, in itself, a bad thing: it's how you program it that matters. Can we build one? We absolutely can. Can we build one with intent in its "blood"? We absolutely can - as long as we clearly define what "intent" is and how it is expressed in the machine.
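To make "intent in its blood" a bit more concrete: here's a toy sketch (all names and rules invented for illustration, not any real driving system) of one way a machine's intent could be clearly defined and expressed, namely by recording the goal behind every action so each decision is auditable after the fact.

```python
# Toy illustration: make a driver's "intent" an explicit, recorded part of
# every decision, rather than something inferred afterward. All names and
# rules here are invented; a real system would be vastly more complex.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # what the car does
    intent: str   # why it does it, stated explicitly at decision time

def choose_action(obstacle_ahead: bool, clear_lane_available: bool) -> Decision:
    """Pick an action and record the intention behind it."""
    if obstacle_ahead and clear_lane_available:
        return Decision(action="swerve", intent="avoid collision")
    if obstacle_ahead:
        return Decision(action="brake", intent="minimize impact speed")
    return Decision(action="continue", intent="follow planned route")

d = choose_action(obstacle_ahead=True, clear_lane_available=False)
print(d.action, "-", d.intent)  # brake - minimize impact speed
```

The point of the sketch is only that intent can be a first-class, inspectable output: when the question of responsibility comes up, there is a record of what the machine was trying to do, not just what it did.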

P.S. Sounds like I need to read up on Buddhism. Their concepts about living and deeds are interesting, to say the least.