If price can mediate how I value chewing gum and an appendectomy and my daughter's education and how much pain I'm in from that car accident, then it can mediate both pulling the switch and pushing the fat man too.

Would you care to price those things for us? And maybe price them again tomorrow to see if the prices change.

And please tell me how I ought to price your pain from a car accident, since I know nothing at all about it.

I don't know anything about the fat man. What price should I put on him?

The problems are different, maybe "fundamentally" (whatever function that plays here), but so long as they are forms of value they can be mediated by price.

"Fundamentally" since it's at the core of the ethical question rather than some extraneous addition.

I took you to be taking the position that World A is not better than World B.

My position was that some critical consequences were removed from the options. Even accepting that the situation will be simplified for the sake of argument, it was too simplified. Since you insist that I respond to the two options as you gave them, my current position is that the example is absurd.

Your answer to the question, "Is X dollars worth more than P's life" must be "mu": the answer isn't yes or no, because yes or no both imply weighing the value of a person against the value of dollars.

"Mu" suggests a legitimacy in the question which warrants a paradoxical yes/no/both/neither answer. I don't see that it applies here.

Sorry to be glib, but "some people can't think it" is just not relevant to how well math can describe the world.

You haven't shown any math that describes the morality of the world. All you have stated is that you think that some X price exists.

People don't have to understand general relativity or statistics for those things to accurately describe the world.

The difference is that the physical world functions independently of what people think, so they don't need to understand it. Morality, on the other hand, is constructed by and for people - it's not independent of their thoughts. If people are expected to calculate prices in order to act morally, then how can they be moral if they don't think in terms of prices and calculations? They can't.

Or/and 'would prefer not to'. I could think of my wife, when she is crying, as a chemical machine, mull over which hormones are most likely in action, consider the event as the mammalian limbic system... etc. But I prefer not to. I'll do it as a thought experiment. I am curious about biology, and neuroscience in particular. But it's not how I want to conceive of other humans. Nor do I wish to think of them, or my acts in relation to them, in monetary terms. And when I do, as a thought experiment, I do not always come up with sums. For some things yes, for others no.

Further, I think that dehumanizing conceptions of others and of interpersonal dynamics have effects that are detrimental AND also hard to quantify. Consequentialists think we should always do the math. I know sometimes we have to follow our gut.

I think people will do that for various reasons:

- the numbers are impossible to determine correctly

- the numbers don't capture the complexity of the world. This is like someone who rejects materialism because he believes that there is more to the world than material.

But some people just don't make evaluations by using numbers and mathematics. If you try to explain things to them in terms of numbers, they don't understand what you are saying. They're not stupid. They're successful in their professions and their social lives.

Ecmandu wrote:A stranger by definition has no calculation of price less than the sum of the world's prices.

I reject whatever definition leads to this conclusion.

Let's test it: I saw a random guy on the street with a cough today. I take as a given that there is a nonzero possibility that he's dying of some treatable disease and money can save his life. If, as you allege, there's a nonzero chance that his life is worth infinity, you should be willing to give him all your money on the off chance that he is that priceless person and he is dying from a treatable disease from which only money can save him. Nonzero times nonzero times infinity equals infinity, so contributing everything you have won't quite balance the scales, but it's a start. Deal? You can mail it to me, I'll pass it along. (And of course, I might be lying and there was no guy, but there's a nonzero chance I'm telling the truth, and another nonzero times infinity is still infinity.)
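The arithmetic behind this reductio can be made explicit. A minimal sketch, with probabilities I've made up purely for illustration: under IEEE floating point, any nonzero probability times an infinite value stays infinite, which is exactly why a "worth infinity" premise swallows every discount.

```python
import math

# Illustrative numbers only - the point is the structure, not the values.
p_sick = 1e-9    # hypothetical chance the stranger is dying of a treatable disease
p_truth = 1e-9   # hypothetical chance the story is true at all
worth = math.inf # the premise under attack: a life "worth infinity"

# However small the nonzero probabilities, the expected value stays infinite.
expected = p_sick * p_truth * worth
print(expected)  # prints: inf
```

No finite stack of discounting factors ever brings the product back down to a finite number, which is the "deal" being offered above.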

Karpel Tunnel wrote:I'll do it as a thought experiment.

Great, looking forward to it!

phyllo wrote:Since you insist that I respond to the two options as you gave them, my current position is that the example is absurd.

I must not be following you. Let me make this simpler, to make sure we're on the same page. We have two worlds A and B, which are identical except that in World A, a random person is dead while an effective and life-saving charity has 1 trillion extra dollars that will be used to save lives at roughly $1000/life. Your position is that asking which world is better is absurd? It's unanswerable absurd, not-good-enough-for-mu absurd? Is that really what you're claiming?

Doctors charge money to save people's lives, right? Hospitals charge money, food sellers charge money to people who are starving, foreign aid costs money, inoculations cost money. People in the real world put actual dollar values on the lives of random people in order to calculate risk and safety and insurance. That's not just a hypothetical thing that could happen; it does happen. It's a totally mundane fact about the world: lives are assigned prices.

And your position, if I'm reading you right, is that even though this actually happens daily all over the world, because this hypothetical world includes a random person's death, you can't even deign to unask the question of whether a world in which we can expect millions of lives to be improved is better than one in which they aren't?

Carleas wrote:I must not be following you. Let me make this simpler, to make sure we're on the same page. We have two worlds A and B, which are identical except that in World A, a random person is dead while an effective and life-saving charity has 1 trillion extra dollars that will be used to save lives at roughly $1000/life. Your position is that asking which world is better is absurd? It's unanswerable absurd, not-good-enough-for-mu absurd? Is that really what you're claiming?

The question is absurd because the options are absurd.

You've removed important consequences from the options.

You've formulated the options in such a way that people will tend to vote for World A, which supports your thesis. You're pushing them in that direction.

World A: a random person is killed and an orphanage gets an X dollar donation

Define random as you mean here. Does it mean that we would have no idea who it is we are killing before the contract was accepted and signed? How could anyone make a decision based on not knowing? Would we not have to know who it is, so that we could estimate their value in relation to the amount of money (which would have to be known) given to the orphanage ~~ albeit I was raised in one. I have a few people in mind who would not be worth the cost of a pizza. The world would be a much safer and happier place without them.

Perhaps it could only be when we have been able to make the decision based on some high form of altruism (or what we ourselves at least base on such) ~~ after all, as Nietzsche said "Love is beyond good and evil".

Anyway, this is something which would have to be well thought out, at least to me. We have to measure the sacrifice if we are caught. As a mother, it would be my children who were being sacrificed and making the sacrifice being without my presence if I were caught. Murder is still illegal despite who it is who might be killed and a jury could convict or perhaps not ~~ based on who the murderee lol might be.

We never really know how our actions or behavior might affect us or others in the short or long term and very often we do not even consider these things.

I have a few people in mind who would not be worth the cost of a pizza. The world would be a much safer and happier place without them.

Would you kill them for money?

No it would not be for me. I cannot see myself doing that but then again how can we ever really know unless faced with something like this though we would like to believe otherwise. I am not so money hungry. The money would go to the orphanage or to St. Jude's or to rescue animals.

I might have to stipulate in a contract that it would not go to me - just those places - just in case.

“How can a bird that is born for joy / Sit in a cage and sing?” ― William Blake

If you enjoy killing, does the buyer need to pay less to keep things moral?

Psychopaths will be well off, in any case, even if they have to tally up a few more murders.

I suppose the price would also have to be higher, like if they wanted you to kill your kid or your mother and you also, coincidentally, felt affection for these family members.

But there is always a price Carleas and I guess most people would go for.

And again, while standing, feeling a little guilty, at mom's cemetery plot, you can comfort yourself with the fact that you got a bigger sum of money which you can give to Doctors Without Borders to help even more children get rid of harelips, or even get life-saving operations. Even, with the extra sum, spend a little on yourself - a vacation, perhaps. I mean, one death in the family, six kids saved, and the family-murder bonus could go to a week in Barcelona. Still a net gain for others.

People killing loved ones or random people will have no negative side effects on how we bond and function as societies.

And then there's the bonuses for raping members of your own family.

There's always a price that convinces. If you think you wouldn't rape your own daughter, you just don't realize how tempting a billion dollars is. Your self-assessment must be wrong.

Arcturus Descending wrote:Define random as you mean here. Does it mean that we would have no idea who it is we are killing before the contract was accepted and signed? How could anyone make a decision based on not knowing?

Yes, by random I mean there is an equal probability of it being any person. It's the same 'random person' that stands on one side in the trolley problem, and five of whom stand on the other side. The people being saved by the charity are also random people.

And you can make the decision without knowing who they are specifically, because expected value is well defined, and because whatever the expected value of 1 person, the expected value of 5 people is 5 x [expected value of 1].
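The linearity claim above can be checked numerically. A small simulation sketch, with per-person "values" drawn from a distribution I've invented just for illustration: the average value of a random group of five converges to five times the average value of one random person, whoever the five turn out to be.

```python
import random

random.seed(0)

# Hypothetical per-person values (illustrative numbers, not from the thread).
pool = [random.uniform(500, 5000) for _ in range(100_000)]
v = sum(pool) / len(pool)  # expected value of one random person

# Sample many random groups of five and average their total value.
groups = [sum(random.choice(pool) for _ in range(5)) for _ in range(20_000)]
avg_five = sum(groups) / len(groups)

# By linearity of expectation, avg_five should be close to 5 * v,
# even though we never learn which individuals were drawn.
print(avg_five / v)  # close to 5
```

This is why the decision doesn't require knowing the specific people involved: the expectation is fixed by the distribution, not by the draw.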

We can plug in specific people to change the question, but that's just a different question.

I think the original idea was not a random person, but rather an anonymous stranger, a 'known-unknown' person. I don't think this changes the math, but feel free to substitute if it keeps the question on track.

Carleas, you assume that the non-zero in randomness must be another person ... what if the infinitely valuable person was you or I ?

It's defined as random, after all. Why should an infinitely valuable person give power to those who, again, seek to kill it? See, in your thought experiment, everyone, including us, is wearing the mask of randomness. So as a cost-benefit analysis, it makes no sense for anyone to give all their sustenance to another random person.

The difference between World A and World B is not just one dead person plus well-off orphans versus one live person plus suffering orphans. World A contains an individual who has sold his beliefs and he has blood on his own hands. He has accepted that a price can be put on anyone/anything and there are other people out there prepared to kill him and those he cares about for a price. You can call World A the "better world" if you like.

Later Carleas wrote

We have two worlds A and B, which are identical except that in World A, a random person is dead while an effective and life saving charity has 1 trillion extra dollars that will be used to save lives at roughly $1000/life. Your position is that asking which world is better is absurd? It's unanswerable absurd, not-good-enough-for-mu absurd? Is that really what you're claiming?

Perhaps some folks believe that a life, any life, is priceless and not interchangeable with another life. Is this a thought experiment in how to be evil and justify it?

As Karpel Tunnel said, the psychopaths would get rich; their workloads would be tremendous, especially if they were willing to off people for $1.99 without any donations to charity. Why bother saving any lives through a charity if lives aren't priceless? Letting people perish due to their poor luck and lot in life would be extremely cost-efficient. In World A, the mindset that possibly everyone is expendable for possibly a 1-cent payment does make a life-saving charity absurd. That type of mentality was not advertised in World B. World B would be better for everyone, for everyone would have greater odds of surviving without the rampant kill-for-a-buck mentality.

I AM OFFICIALLY IN HELL!

I live my philosophy, it's personal to me and people who engage where I live establish an unspoken dynamic, a relationship of sorts, with me and my philosophy.

Cutting folks for sport is a reality for the poor in spirit. I myself only cut the poor in spirit on Tues., Thurs., and every other Sat.

phyllo wrote:The vanilla trolley problem does not appear to have hidden consequences in the options. At least, I don't see them.

I'm sympathetic to the 'hidden consequences' argument, but that argument does not make the question meaningless or unanswerable or absurd. Hidden consequences distinguish the hypothetical world in which anything is possible from the real world. So what you're saying in appealing to it seems to be, yes, in the hypothetical, we should kill the person if offered a trillion dollars, but in the real world we shouldn't because XYZ.

I feel like you're resisting that pretty strongly, but the rejection of an absurd hypo is missing the point. Look at Hilary Putnam's twin earth thought experiment: it's as absurd as can be and it doesn't matter, because it helps to isolate certain concepts.

phyllo wrote:[The vanilla trolley problem] doesn't suggest that killing the people on the tracks is good. It doesn't suggest that random people ought to be killed in the future.

First, the original problem does suggest that killing the one person is good, at least to a consequentialist who values human life: It is a moral good to cause the death of one person who would not die but for your intervention in order to save five people who will die but for your intervention.

Second, the problem I'm proposing doesn't suggest anything about the future. Let's concede, if you require it, that this will be just the worst if it happens all the time, and just mentally insert into the hypo whatever additional props you need to limit it to a one-time offer to you and only you.

In World B, Joe Random had some kind of "right" to exist and to be free of harm. He doesn't have that in World A.

I see that as very important - more important than the math.

It's not stated but it's there.

If you lose it once, then it's very hard or perhaps impossible to get it back.

Ecmandu wrote:All cost benefit analysis would be null and void if no future for anyone existed after the event.

To clarify, I'm just not trying to extend the analysis to rearranging society so that what we're talking about happens all the time. I see the questions of "Should you do X in this one-off situation" and "Should we as a society make doing X a regular part of our everyday lives" as separate questions that it is consistent to answer differently.

phyllo wrote:In World B, Joe Random had some kind of "right" to exist and to be free of harm. He doesn't have that in World A.

Sure, but the same is true if you pull the switch in the vanilla problem, right?

Karpel Tunnel wrote:No amount of money would get me to kill a random person.

It may be true that you value not killing a random person more than literally all the liquid value that humanity can produce, but I doubt it. In any case, we know from observation that plenty of people do in fact kill random people for substantially less than everything.

If you can enjoy your wealth after having obtained it by killing a random person, your life must have been supremely shitty beforehand. That there are indeed a lot of such humans is the reason I rank all other mammals above humans (in general) qua degree of sentience.

Spending wealth badly or selfishly is a separate moral question. If rather than "enjoy[ing] your wealth", you use that wealth to do more good than you have done wrong, you can leave the world better off for having done that wrong, and a consequentialist should conclude that the transaction was a good thing, i.e. if World A is better than World B, a consequentialist should be OK with someone taking actions that lead to World A instead of World B. If feeling bad about it weighs against World A, increase X to compensate.
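The "increase X to compensate" move can be sketched as a simple ledger. All numbers below are hypothetical stand-ins (the $1000/life figure is the thread's; the guilt penalty is mine): the point is only that any finite disvalue is offset by a large enough X.

```python
# Illustrative consequentialist tally - hypothetical units and numbers.
cost_of_death = 1.0          # disvalue of one random death, in "life units"
cost_per_life_saved = 1000   # dollars per life saved, per the thread's figure
guilt_penalty = 2.0          # disvalue of the actor's remorse, say

def net_good(x_dollars):
    """Net change in value if the offer of x_dollars is accepted."""
    lives_saved = x_dollars / cost_per_life_saved
    return lives_saved - cost_of_death - guilt_penalty

# A small X leaves the world worse off; a large enough X flips the sign.
print(net_good(1_000) > 0, net_good(1_000_000) > 0)  # prints: False True
```

Whatever finite weight is assigned to the death and to feeling bad about it, there is some X past which the ledger goes positive - which is precisely the claim being debated.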