How much money would you need to be paid to kill a random person? How much money would you need to be paid to rob a bank? Depending on your answer to the trolley problem, how much to go the other way?

Moral positions are generally not thought of as subject to pricing. Giving up one's moral positions in exchange for money is seen as a form of corruption, even as a lack of morals. But any pragmatic, consequentialist morality must take such a payment into account, and acknowledge that it can do enough good (however defined) to outweigh the harm (however defined). There are at least two ways to do this: 1) as above, we can examine how much the moral belief is 'worth' by asking how much it would cost to ignore it in a specific case; and 2) we can ask how much someone violating a moral rule would need to pay to compensate society for the harm they cause.

The questions are uncomfortable, but there isn't anything irrational in using a universal medium of exchange to trade moral value for other forms of value. And doing so enables us to compare moral beliefs. I propose an analogy: pricing moral beliefs is to the trolley problem as currency-mediated transactions are to barter. Trolley problems place two competing outcomes directly against each other, in much the way that barter places two goods against each other. Introducing pricing in either case allows all goods, or all values, to be compared simultaneously.

No amount of money would get me to kill a random person. In fact, I think the discussion of payment would make me even more reluctant. A terrible job and chronic pain, coupled with badly timed access to a weapon, well, that's another story.

And I can't see how one could figure out punishment sums of money, except to the extent that the immoral act had a financial impact. And even if we worked with a percentage of income, or somehow tried to weigh in the actual cost to the immoral person, I can't see how it would work out.

Karpel Tunnel wrote:No amount of money would get me to kill a random person.

It may be true that you value not killing a random person more than literally all the liquid value humanity can produce, but I doubt it. In any case, we know from observation that plenty of people do in fact kill random people for substantially less than everything.

If instead it is suggested, as Phyllo does, that one might prefer to die rather than kill an innocent, that only entails that the person values their own continued existence less than they value the life of the other (or rather, the moral belief that they shouldn't kill).

Let's put it differently: Take Singer's parable of the drowning child. How much do you sacrifice to save the drowning child? Put differently, how much do you currently contribute to charitable causes that demonstrably save the lives of innocents for around a thousand dollars? If the amount you currently donate to those causes is zero, you probably aren't as committed to not killing people as you claim. Moreover, if you currently choose to buy yourself food instead of contributing every cent you earn to saving those lives until you die from hunger and exhaustion, you probably aren't so strongly committed to dying rather than killing.

It may be that you just aren't a consequentialist, and you base your morality on the opinion of some all-loving god who couldn't care less about what happens to anyone around you so long as you don't participate directly in the causal chain that leads to their death. That morality is nonsense for a number of reasons, but that is a different discussion and my argument here doesn't address it.

On the other hand, it may just be that the idea of accepting money in exchange for committing moral wrongs is seen as taboo. That's understandable, because signaling a commitment to moral positions that is strong enough to overcome any personal gain is important for social cohesion and alliance building.

Here is one test: if you think that it's right to pull the lever in the trolley problem, to save five lives by killing one, then why would it be the case that you can't accept $10,000 to kill a random person and then donate that money to a charitable initiative that reliably saves a life for each $1,000 it receives? You would on net save 9 lives, 5 lives better than in the trolley problem. What gives? We're just replacing the trolley switch with a check for killing the one person followed by an alms collection to save the five.

Here's another way to approach the problem, which is probably where we should have started: I think lying is always wrong, but surely you would tell a white lie for $1 million, right? Think of all the orphans you could save! Murder is an extreme case, and when we start there it's easy to take our gut rejection as an indication that there's nothing to this price-of-morality argument. But start with tiny moral wrongs, and (I hope) it's clear that we would take money for small moral wrongs. If nothing else, we can differentiate moral wrongs for which it's not taboo to discuss accepting money to violate, and ones for which it is.

Rereading my previous post, I'd like to clarify that most uses of the word "you" in that post were intended in the generic form. Replacing them with "one" would have been more accurate/diplomatic, but at the cost of readability/zest.

I apologize if it comes off as a personal attack, that is not my intent.

If instead it is suggested, as Phyllo does, that one might prefer to die rather than kill an innocent, that only entails that the person values their own continued existence less than they value the life of the other (or rather, the moral belief that they shouldn't kill).

In the OP, you were pricing the dropping of a moral belief. Here, instead, you are putting a price on some "random person" and the person who refuses to take money for killing him. IOW, now there are three prices proposed.

And honestly, is a person who is willing to die rather than kill someone, really thinking in these terms - looking at the value of the other person and his own value - pricing these things out?

You can put a price on all kinds of things, Phyllo. Price just mediates value. Anything you can value you can price.

Suppose you are faced with two possible worlds: World A: a random person is killed and an orphanage gets an X dollar donation; World B: neither of those things happened.

Is there some X where you choose World A? Shouldn't there be? Do you say don't pull the switch in the trolley problem?

My position is that X exists. It's less than a trillion. And furthermore, that X is the price at which you should prefer a world where you are paid to kill a random person. Whatever value X is, you can just take X as payment and give it to the orphanage, and create what, by hypothesis, is the better world. In the event we choose between a better or worse world, it is moral to choose the better world.

And we might not know precisely what X is. But we can narrow in on it. It's less than a trillion. It's greater than $1. Greater than $2. $50? $1000? Less than $999 billion? But we don't need to know what X is, or think about what X is, or base our actions on what X is. None of that changes whether or not X exists.

Dan~ wrote:Life is right now about money.

I agree, and I think it's a bad thing as practiced. But I don't think that it has to be a bad thing.

You can put a price on all kinds of things, Phyllo. Price just mediates value. Anything you can value you can price.

Some people will price anything but others do not think in that way. It's not a universal attitude.

Suppose you are faced with two possible worlds: World A: a random person is killed and an orphanage gets an X dollar donation; World B: neither of those things happened.

Is there some X where you choose World A? Shouldn't there be? Do you say don't pull the switch in the trolley problem?

You have brought up the trolley problem a few times now.

The "fat man" version of the trolley problem shows the general aversion to these ideas.

The fat man

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Resistance to this course of action seems strong; when asked, a majority of people will approve of pulling the switch to save a net of four lives, but will disapprove of pushing the fat man to save a net of four lives.[11] This has led to attempts to find a relevant moral distinction between the two cases.

One clear distinction is that in the first case, one does not intend harm towards anyone – harming the one is just a side effect of switching the trolley away from the five. However, in the second case, harming the one is an integral part of the plan to save the five. This is an argument which Shelly Kagan considers (and ultimately rejects) in The Limits of Morality.[12]

A claim can be made that the difference between the two cases is that in the second, you intend someone's death to save the five, and this is wrong, whereas, in the first, you have no such intention. This solution is essentially an application of the doctrine of double effect, which says that you may take action which has bad side effects, but deliberately intending harm (even for good causes) is wrong.

Another distinction is that the first case is similar to a pilot in an airplane that has lost power and is about to crash into a heavily populated area. Even if the pilot knows for sure that innocent people will die if he redirects the plane to a less populated area—people who are "uninvolved"—he will actively turn the plane without hesitation. It may well be considered noble to sacrifice your own life to protect others, but morally or legally allowing murder of one innocent person to save five people may be insufficient justification.

Suppose you are faced with two possible worlds: World A: a random person is killed and an orphanage gets an X dollar donation; World B: neither of those things happened.

Is there some X where you choose world A? Shouldn't there be?

My position is that X exists. It's less than a trillion. And furthermore, that X is the price at which you should prefer a world where you are paid to kill a random person. Whatever value X is, you can just take X as payment and give it to the orphanage, and create what, by hypothesis, is the better world. In the event we choose between a better or worse world, it is moral to choose the better world.

The difference between World A and World B is not just one dead person plus well-off orphans versus one live person plus suffering orphans. World A contains an individual who has sold his beliefs and he has blood on his own hands. He has accepted that a price can be put on anyone/anything and there are other people out there prepared to kill him and those he cares about for a price. You can call World A the "better world" if you like.

phyllo wrote:Some people will price anything but others do not think in that way. It's not a universal attitude.

Willingness to do something is not the same as ability to do something. In theory, there exists a break-even price for anything one values. It may fail in practice for any number of reasons (e.g. transaction costs are too high; the good is not excludable; that price is more than the total value produced by all humans ever; people find this kind of thinking icky; etc.).

phyllo wrote:The "fat man" version of the trolley problem shows the general aversion to these ideas.

But is that general aversion consistent? We know that people's beliefs are often inconsistent depending on how a choice is framed, so it is not a given that the general aversion would survive examination; it may turn out to be irrational. Particularly where the aversion is found among non-philosophers in a non-reflective mode, it isn't clear that we should put much weight on the moral consensus.

And one may simply reject consequentialist morality, or prefer not to swallow its hard pills. If the outcome isn't the basis of whether or not an action is moral, then whether World A is better than World B is irrelevant to the question of which action is required or permissible. So, maybe put this a different way: if we take as a given that the correct morality is consequentialist and that that entails that we should push the fat man, in that case do you agree that X must exist?

phyllo wrote:The difference between World A and World B is not just one dead person plus well-off orphans versus one live person plus suffering orphans.

One response to this line is to point out that, whatever other consequences you want to load into World A, there should still be an X that outweighs those consequences. Count up all the children in the world who will die of malnutrition in the next ten years, figure out how much they need to not die of malnutrition, plus the cost of distributing that much to each one. Is the life of every child who would die of malnutrition in the next ten years really not worth another source of existential dread and a little blood on your cuffs?

But I prefer another approach: to quote my torts professor, "don't buck the hypo" (not sure if that's original to her). You can make the math not work by adding additional terms, but those aren't the hypothetical being considered. If what you're saying is, "yes, in the case you presented, it's morally permissible to kill for money, but that case could never happen in the real world for [reasons]", then fine, say that. But if you don't agree that the hypo as presented justifies killing for money, then let's keep discussing the hypo as presented before we embellish it.

I think the fat man problem suffers from a similar problem: people implicitly read reality into a fanciful intuition pump. It's actually difficult to conceive of a situation where you know 100% that pushing a fat person in front of a trolley will stop the trolley and save lives, and so even when people's conscious minds acknowledge that's a given here, their moral intuitions can't be readily separated from the real world in which they were honed, in which the fat man problem is outlandish because there's no one in the world fat enough to stop a speeding trolley. (Maybe a way to test the general aversion under this hypothesis: do a survey where you ask half the people the traditional fat man problem, and ask the other half a variation in which the fat man is sitting above the switch and pushing him off will kill him and hit the switch. If I'm right, there should be less aversion to the latter.)

Karpel Tunnel wrote:No amount of money would get me to kill a random person.

It may be true that you value not killing a random person more than literally all the liquid value humanity can produce, but I doubt it.

Why on earth would you doubt it? People have refused to kill people trying to kill them. IOW despite losing everything possible. How much money would it take for you to rape a child, Carleas? I mean, you could use that money to help other children.

In any case, we know from observation that plenty of people do in fact kill random people for substantially less than everything.

Sure. People will kill over a couple of bucks or over who drank the last beer. Does this mean the price of a beer is the value of killing someone? Your proposal rests on some kind of at least vague consensus, not only on money as the measure, but on what the measures tend to be.

If instead it is suggested, as Phyllo does, that one might prefer to die rather than kill an innocent, that only entails that the person values their own continued existence less than they value the life of the other (or rather, the moral belief that they shouldn't kill).

Well, if we are accepting it, as you do here, we are accepting that money is not the correct measure.

Let's put it differently: Take Singer's parable of the drowning child. How much do you sacrifice to save the drowning child? Put differently, how much do you currently contribute to charitable causes that demonstrably save the lives of innocents for around a thousand dollars? If the amount you currently donate to those causes is zero, you probably aren't as committed to not killing people as you claim. Moreover, if you currently choose to buy yourself food instead of contributing every cent you earn to saving those lives until you die from hunger and exhaustion, you probably aren't so strongly committed to dying rather than killing.

So for you, not saving someone is the same as killing someone.

Here's a thought experiment. You need a babysitter or a coworker. You can have a person who does not donate to charities or you can have someone who does donate to charities but who will kill a random person for 1000 dollars. Which category of person would you consider hiring?

Me, there is not a chance in hell I would hire a hit man.

It may be that you just aren't a consequentialist, and you base your morality on the opinion of some all-loving god who couldn't care less about what happens to anyone around you so long as you don't participate directly in the causal chain that leads to their death. That morality is nonsense for a number of reasons, but that is a different discussion and my argument here doesn't address it.

Wow, look at the assumptions here:
1) there are no consequentialist arguments against your position
2) all deontologists are theists

On the other hand, it may just be that the idea of accepting money in exchange for committing moral wrongs is seen as taboo. That's understandable, because signaling a commitment to moral positions that is strong enough to overcome any personal gain is important for social cohesion and alliance building.

Ah, there you go, one possible consequentialist argument. AND NOTE: IT IS VERY VERY HARD TO TRACK THE CONSEQUENCES of such things. I say this because most consequentialists tend to treat only those effects that can be tracked as the set of effects, and/or show a kind of hubris about the ability to track consequences.

Here is one test: if you think that it's right to pull the lever in the trolley problem, to save five lives by killing one, then why would it be the case that you can't accept $10,000 to kill a random person and then donate that money to a charitable initiative that reliably saves a life for each $1,000 it receives? You would on net save 9 lives, 5 lives better than in the trolley problem. What gives? We're just replacing the trolley switch with a check for killing the one person followed by an alms collection to save the five.

I've made it so far without having to make such decisions. And what are the consequences of having these kinds of scenarios BEING A REGULAR PART OF HUMAN INTERACTIONS? Oh, we don't have to think of that. Indirect effects, those that deal with how we think and the way that affects how we view others, oh, those are hard to track, so we don't have to consider them.

The idea that decisions and effects can be narrowed down this way, especially when we are talking about a new system for evaluating behavior, is pathological and confused.

Here's another way to approach the problem, which is probably where we should have started: I think lying is always wrong, but surely you would tell a white lie for $1 million, right? Think of all the orphans you could save! Murder is an extreme case, and when we start there it's easy to take our gut rejection as an indication that there's nothing to this price-of-morality argument. But start with tiny moral wrongs, and (I hope) it's clear that we would take money for small moral wrongs. If nothing else, we can differentiate moral wrongs for which it's not taboo to discuss accepting money to violate, and ones for which it is.

And again, what are the side effects of making this kind of thinking the main guideline in a society? How does that monetary evaluation, when taught to children, when it becomes the common way of evaluating actions in adult society... how does that affect how we view and then treat each other? Ah, that's hard to figure out, so we don't have to think of that.

Morality is like a chess puzzle. Causes and effects can be easily broken down and tracked.

I tell white lies for free.

So, Carleas, in the time you wrote these posts, you could have worked for enough money to help a starving child somewhere. It's nice that you never tell white lies, but you just contributed to the starvation of an African child.

Seriously, there is something extremely unpleasant here. Not you personally, but what such thinking is and does.

I mean that from the bottom of both my consequentialist and deontological hearts.

Probably. It would be disingenuous to claim that small lies and big ones are separable in the long run, but they may be optically motivated in the short term. Myopia is a common short-term affliction, and Stalin's observation gives it depth perception: he declared that it is very difficult to murder someone close to you, but entirely easy when the victims number in the millions.

More moderately: these optical illusions are hard to notice, except through the cliché that small arrears lead to big crimes.

The value of these beliefs is based partly on the factual worth of human beings, as familiarity with the victim(s) becomes a factor for the one doing the evaluation of a basis for murder.

During the mass killings of the Holocaust, some friends and relations were given preferential treatment.

Willingness to do something is not the same as ability to do something. In theory, there exists a break-even price for anything one values. It may fail in practice for any number of reasons (e.g. transaction costs are too high; the good is not excludable; that price is more than the total value produced by all humans ever; people find this kind of thinking icky; etc.).

You appear to think in terms of numbers, so this all seems reasonable to you.

Not everyone thinks in terms of numbers.

But is that general aversion consistent? We know that people's beliefs are often inconsistent depending on how a choice is framed, so it is not a given that the general aversion would survive examination; it may turn out to be irrational. Particularly where the aversion is found among non-philosophers in a non-reflective mode, it isn't clear that we should put much weight on the moral consensus.

So you are saying that "ordinary people" really don't understand it ... only philosophers are able to understand and reason it out correctly.

It seems reasonable that someone is repelled by having to physically kill the fat man. That being above and beyond the mathematics of the situation. It also seems reasonable that someone believes that mathematics do not enter into life and death decisions.

So, maybe put this a different way: if we take as a given that the correct morality is consequentialist and that that entails that we should push the fat man, in that case do you agree that X must exist?

You mean if I accept your beliefs that these things can be reduced to numbers and simple math operations of addition and subtraction, then would I agree that X must exist? The answer is embedded in that particular formulation of the question.

One response to this line is to point out that, whatever other consequences you want to load into World A, there should still be an X that outweighs those consequences. Count up all the children in the world who will die of malnutrition in the next ten years, figure out how much they need to not die of malnutrition, plus the cost of distributing that much to each one. Is the life of every child who would die of malnutrition in the next ten years really not worth another source of existential dread and a little blood on your cuffs?

But other consequences are hidden in the original presentation of the options. World A is a place where people will be routinely killed for the benefit of others. That will be the norm and it will be called good. And that's not all, because theft and sales of humans can clearly be justified on the same basis as the killings - for a net benefit. Anything is acceptable as long as you demonstrate the net benefit.

But I prefer another approach: to quote my torts professor, "don't buck the hypo" (not sure if that's original to her). You can make the math not work by adding additional terms, but those aren't the hypothetical being considered. If what you're saying is, "yes, in the case you presented, it's morally permissible to kill for money, but that case could never happen in the real world for [reasons]", then fine, say that. But if you don't agree that the hypo as presented justifies killing for money, then let's keep discussing the hypo as presented before we embellish it.

The hypothetical has been stripped of most of the consequences. It's a sanitized world.

Karpel Tunnel, is your position on the original trolley problem (not the fat man problem) the same as your position on what I'm saying here? If not, how do you distinguish them? Your arguments seem equally applicable ("what are the consequences of having these kinds of scenarios BEING A REGULAR PART OF HUMAN INTERACTIONS", such that everyone around you would be pulling the switch to kill you to save five other people all the time.)

I'm not suggesting any change in social order -- I posted this in the Philosophy forum because it's not a policy proposal. This is a question of morality no more horrible in the asking (and taking no more time/orphan lives in the discussing) than the trolley problem or its more visceral variations.

Meno_, I think your point about the Stalin quote is apt: human cognition is not consistent, we think differently about questions when posed differently, including, as Stalin notes, when they deal with concrete vs. abstract concepts. Our cognition evolved to have a pretty good intuitive grasp of what a single person is, but not at all of what a million people are. We literally engage different brain structures to reason about those two things.

But we can reflect on those differences and see if they're consistent. If the value we place on the life of one person is greater than the value we place on the life of a million people, we know that something is going wrong and we need to examine the intuitions to find out which is correct. If you think that pressing a switch so a train hits one person is OK, but that hitting that person so they press the switch isn't, we can tell there's something more to the story.

phyllo wrote:Not everyone thinks in terms of numbers.

Granted, but that doesn't bear on how well numbers describe the world. You might be an artist of housebuilding, and eyeball every length with perfect precision, and never once resort to a tape measure or calculator. But someone else can still measure every piece you cut, and can say with great confidence what the lengths are and what you would measure the lengths to be if you were to measure them.

Again, this isn't about how people do think, it's about how they can think.

phyllo wrote:So you are saying that "ordinary people" really don't understand it ... only philosophers are able to understand and reason it out correctly.

It seems perfectly reasonable that someone is repelled by having to physically kill the fat man.

I'm saying surveys aren't a great way to get at a consistent moral framework. People who haven't analyzed their moral intuitions are not likely to notice if they are inconsistent.

I would also say it seems understandable that people are repelled by the fat man hypo, but not reasonable. See my comments to Meno_ above; people's intuitions are derived from cognitive mechanisms that evolved to solve very different problems from the trolley problem and the fat man variation. They don't tend to think about it, or be bothered by the possibility that it's inconsistent upon analysis. That doesn't mean that it is not inconsistent upon analysis.

phyllo wrote:It's also perfectly reasonable that someone believes that mathematics do not enter into life and death decisions.

Not if they've already happily answered the original trolley problem in favor of pulling the switch, in which case it's special pleading to complain about using mathematics in hypothetical life and death decisions only when we get to a life and death decision that feels icky.

phyllo wrote:But other consequences are hidden in the original presentation of the options...The hypothetical has been stripped of most of the consequences. It's a sanitized world.

These statements conflict, and I agree with the latter. This is a hypothetical limited to its terms; the hidden consequences are removed by hypothesis. We're talking about an artificially pure scenario that gets only at a specific question of morality. The only difference between World A and World B, by hypothesis, is that in World A someone is dead and an orphanage has X dollars.

And if your only problem with the hypo is that a more realistic situation would have a whole lot else going on, then it seems like you agree: it is morally permissible to choose World A and to act to bring it about.

as I am approaching this question from the side of Kantian thought, see the latest in my "new theory of space, time" thread...

My approach over there has been via Kant and his three questions... 1. what can I know? 2. what ought I do? 3. what may I hope?

and my current statements are about the second question... what ought I do? Carleas is offering us one possibility in this question of "what ought I do"?

can we consider morality/ethics in terms of monetary prices?

that is certainly one way to approach this problem.....

what standards should I use to engage with or act with... can I build a morality/ethics system by understanding it via monetary prices? How much does being ethical actually cost us? thinking about it in terms of money does raise a new way of thinking about the question...

what ought I do? ought I be ethical and what exactly does that mean? and how do we judge that?

by our actions, we make judgements all the time... we send money to the Red Cross... that is a judgement... and an ethical decision...

upon what criteria should we make such a judgement or ethical decision? should we use money or the bible or our own judgement?

the question that Carleas really raises is this: upon what criteria should we base our judgements or ethical standards? should we use authority or money or some other standard to make ethical decisions?...

if it is the "fat man problem" then we are using Bentham's theory of utilitarianism... the acts we make must be for the greater good... we decide upon the number of people who benefit, and if the greater number benefit, that should be our action... so under the "fat man" problem, we decide based on the greater number who benefit... so clearly we toss the fat man into the path to save a greater number... one dies so that four may live...

but the problem becomes this: we can use several different, alternative and equally convincing theories upon which to decide this problem...

for example, if the "fat man" were an Einstein.... would we toss Einstein into the path to save 4 rather ordinary people?

who becomes more important, the number of people or the "relative" value of each person? and once again, we run into the problem of how do we judge or create a criteria to decide which one is the answer?

each path is blocked by another consideration of equal value...

Kropotkin

"Those who sacrifice liberty for securitywind up with neither." "Ben Franklin"

Carleas wrote:Again, this isn't about how people do think, it's about how they can think.

Again, some people can't think it.

Carleas wrote:I'm saying surveys aren't a great way to get at a consistent moral framework.

But you brought up the trolley problem in the first place.

Carleas wrote:I would also say it seems understandable that people are repelled by the fat man hypo, but not reasonable. See my comments to Meno_ above; people's intuitions are derived from cognitive mechanisms that evolved to solve very different problems from the trolley problem and the fat man variation. They don't tend to think about it, or be bothered by the possibility that it's inconsistent upon analysis. That doesn't mean that it is not inconsistent upon analysis.

They have valid reasons. They see the two problems as fundamentally different in an important sense. You deny the existence of that difference.

Carleas wrote:Not if they've already happily answered the original trolley problem in favor of pulling the switch, in which case it's special pleading to complain about using mathematics in hypothetical life and death decisions only when we get to a life and death decision that feels icky.

I was thinking of those who refuse to participate in pulling the switch or pushing the fat man.

And I was also thinking of those who decide on another basis ... for example, Einstein is on one track and a bunch of skinheads on the other. I understand that you would say that's putting a dollar value on those people's lives. But how is the morality "supposed" to work here? ... Each life has equal value, so the greater number saved is the moral decision? Or some people are worth more than others? How do you know their value based on looking at them at a distance on the tracks? That decision would be based solely on visible physical characteristics.

Carleas wrote:These statements conflict, and I agree with the latter. This is a hypothetical limited to its terms, the hidden consequences are removed by hypothesis, we're talking about an artificially pure scenario that gets only at a specific question of morality. The only difference between world A and world B, by hypothesis is that in world A someone is dead and an orphanage has X dollars.

Person A: What is the value of that person in US dollars?
Person B: People can't be valued in terms of dollars.
Person A: That's not answering the question.
Person B: Obviously.

for example, if the "fat man" were an Einstein.... would we tossEinstein into the path to save 4 rather ordinary people?

who becomes more important, the number of people or the "relative" value of each person and once again, we run into the problem of how do we judge or create a criteria to decide which one is the answer?

Right. The merit/value of the people involved is only superficially known by the person at the switch or beside the fat man. How can he assign value to them? Therefore, the trolley problem defaults to "all lives have equal value".

There is another version where the fat man is a villain who has placed the people on the tracks and made the trolley go out of control. What's the decision in that case?

Peter Kropotkin wrote:who becomes more important, the number of people or the "relative" value of each person and once again, we run into the problem of how do we judge or create a criteria to decide which one is the answer?

I intend to offer an answer to this question: we judge by comparing prices.

We can fiddle with a thousand knobs in the thought experiment, and when we adjust the knobs by e.g. putting Einstein on one side and some lowlifes on the other, we can create situations where the question becomes harder (Phyllo makes this point as well). That's not a problem for my proposed solution, though: it's harder to decide whether we want to trade stock X for stock Y when the price of X and Y are nearly equal, than it is when X >> Y. Similarly, we should expect moral hypotheticals to become harder as we fiddle with knobs that make the prongs of the dilemma more balanced, i.e. when we make the moral prices closer to each other.

Here's a variation that's useful for getting at the moral pricing question: suppose there's one person on each side of the tracks, but each has a terminal illness for which we have the cure. Suppose further that one person's treatment will cost $X, and the other's will cost $Y > $X. Shouldn't a consequentialist have an obvious preference about the outcome here?

phyllo wrote:Again, some people can't think it.

Some people are in a coma. Others are sleeping. Clearly the term "non-sequitur" has no meaning because people who are in a coma or sleeping are unable to think it.

Sorry to be glib, but "some people can't think it" is just not relevant to how well math can describe the world. People don't have to understand general relativity or statistics for those things to accurately describe the world. And people don't have to think about how much money something would be worth to them for there to be an amount of money that that thing is worth to them. If someone thinks "I'd rather die than kill a stranger", it follows from that that the price of their life is less than the price of their killing a stranger. We don't need to know what either price is, and the person doesn't need to think about it explicitly for it to be the case: the person values A over B, price mediates value, therefore the price of A is higher than the price of B.

But let's set this question even further aside, and just ignore the people who can't think this way, and ignore whatever their moral obligations might be. If someone is mathy and familiar with pricing things, is it morally permissible for that person to do the mathing and pricing and come up with a break-even price where they can accept money to kill a stranger conditional on their then donating it to some value-maximizing cause?

Carleas wrote:[S]urveys aren't a great way to get at a consistent moral framework.

phyllo wrote:But you brought up the trolley problem in the first place.

This is interesting. I think of the trolley problem as useful primarily as an intuition pump, and that was its original use. I know it is widely used in surveys to look at moral intuitions among the lay public, but that wasn't the context in which I learned about it, and not the reason I bring it up now. I invoke it as an intuition pump for moral intuitions about consequentialism and the role of action in morality.

Sorry for any confusion that's causing; the problem has a dual life in abstract moral philosophy and in various areas of social science, and I should have clarified what role it was playing here.

phyllo wrote:They see the two problems as fundamentally different in an important sense. You deny the existence of that difference.

Of course the problems are different. The question is whether their solutions are commensurable, and I hold that they are. If price can mediate how I value chewing gum and an appendectomy and my daughter's education and how much pain I'm in from that car accident, then it can mediate both pulling the switch and pushing the fat man too. The problems are different, maybe "fundamentally" (whatever function that plays here), but so long as they are forms of value they can be mediated by price.

phyllo wrote:Person A: What is the value of that person in US dollars?
Person B: People can't be valued in terms of dollars.
Person A: That's not answering the question.
Person B: Obviously.

1) I took you to be taking the position that World A is not better than World B. But if Person B here is you, shouldn't you be agnostic between them? If you can't value a person in dollars, then you can't know which is better. Your answer to the question, "Is X dollars worth more than P's life" must be "mu": the answer isn't yes or no, because yes or no both imply weighing the value of a person against the value of dollars.

2) This isn't anything like how I read this part of our exchange until now. I read it more like:
Person A: Is World A better than World B?
Person B: World A-plus-a-bunch-of-other-things is really really bad!
Person A: That's not answering the question.
Person B: Here's a dialog we didn't have.

Carleas wrote:Again, this isn't about how people do think, it's about how they can think.

phyllo wrote:Again, some people can't think it.

Or/and 'would prefer not to'. I could think of my wife, when she is crying, as a chemical machine, mull over which hormones are most likely in action, consider the event as the mammalian limbic system...etc. But I prefer not to. I'll do it as a thought experiment. I am curious about biology and neuroscience in particular. But it's not how I want to conceive of other humans. Nor do I wish to think of them or my acts in relation to them in monetary terms. And when I do, as a thought experiment, I do not always come up with sums. For some things yes, others no.

Further, I think that dehumanizing conceptions of others and interpersonal dynamics have effects that are detrimental AND also hard to quantify. Consequentialists think we should always do the math. I know sometimes we have to follow our gut.