Against utilitarianism

Since impossible thought experiments that include unbelievably advanced aliens and pick at bizarre edge cases are acceptable in the relevant branch of philosophy…

You are a young parent. You have no siblings, and you and your spouse were both rendered infertile by a rare but otherwise asymptomatic disease right after the birth of your first child. An unbelievably advanced alien named Angarag appears in your bedroom and makes you an offer.

Angarag reads your mind and computes the coherent extrapolation of your favored version of utilitarianism, then models human psychology at such a high resolution that he can identify the smallest possible increase in aggregate utility—the Planck utilon—and then he proves this all to you. You have no reason to doubt Angarag. You do not doubt Angarag.

Angarag then makes you an offer. He has computed the loss of aggregate utility that would result from him brutally murdering your only child; if you let him do it, he will act in such a way as to set aggregate utility to one Planck utilon above where it would be in the counterfactual world where he doesn’t brutally murder your only child. By, if your preferred version of utilitarianism is into this sort of thing, wireheading lots and lots of chickens.

Whether or not you accept the offer, he will, after either killing your only child and wireheading some chickens or whatever, use his advanced alien technology to teleport away from Earth and never interact with it again.

Remember: if Angarag doesn’t kill your child, aggregate utility after he stops interacting with Earth will be N; but if he does, aggregate utility after he stops interacting with Earth will be N plus one Planck utilon.

You have no reason to doubt Angarag. You do not doubt Angarag. What do you do?

If you’re a utilitarian, you have to let Angarag kill your child. I hope it can go without saying that this is wrong.


26 responses to “Against utilitarianism”

Pretty sure this is racist. How can you possibly justify placing your interests and the interests of your child and immediate circle of acquaintances over the eudaimonia of a bunch of chickens? In this, the most Current of all Years, the only response I can conceive is wow. Just wow.

Well, your response to utilitarianism was to assert the basis of utilitarianism, this being that good is objectively knowable and not dependent on circumstance and society. From this epistemological basis, we get modernity and liberalism.
If you don’t start from this epistemological basis, and instead accept that people can’t objectively know what is good, but must learn within a shared culture and political structure that precedes them, then you enter dark territory.

Don’t think there are enough chickens on Earth to compensate for utilons lost via child-murder. Regardless, this seems pretty simple: the /moral/ thing to do (contingent on Angarag telling the truth about utilon setting etc.) would be to accept the offer. However, pretty much no one would actually do this and I wouldn’t blame them whatsoever. You would have to be entirely selfless, which nobody is.

This is essentially a retelling of Torture vs. Dust Specks with the would-be tortured making the decision.

See, I don’t think it’s isomorphic at all. It’s not a trolley problem either. The social connections are completely different: in the dust speck problem, it’s you vs. lots of random people; in the trolley problem, it’s one random person vs. five random people; but here, it’s your child vs. lots and lots of chickens. That makes a difference.

You posted this before, and all the utilitarians were like “Well seems really suspicious and all but if it really is as it says then yes, I’d take it”

A perhaps more interesting, less strawman situation for utilitarians to consider:
There’s a group that promises to stamp out child molesters, rapists, murderers, cannibals, and hypocrites who criticize others while secretly engaging, for their own benefit, in the very activities they criticize. They are wildly successful and stamp out about 50,001 child molestations a year, 100 cannibalizations, etc. However, while they get huge amounts of credit for stopping all these crimes and scumbags, they themselves engage in 50,000 child molestations a year, 100 cannibalizations, etc., also enjoying the fact that they’re getting away with this while being heralded as great moral reformers. There is a very marginal utility increase overall.

Hell, we could extend this so that they don’t actually stop any child molestations at all, but increase them threefold, while convincing so many people that they’re doing good that they still increase total utilons or utility or whatever.

What about a group that has so many orgies and parties for 300 years that it barely comes out ahead, by one positive utilon, while causing the whole world to revert to the civilization level of 5000 BC for the rest of mankind’s existence, versus a world of sustained technological, scientific, philosophical, ethical, moral, spiritual, etc. growth but fewer orgies, so minus one utilon?

Well yeah, but couldn’t you overload the hedonism to the point where the utilons or whatever you call it outweighed the philosophical and spiritual growth? Isn’t that the whole point, that ultimately everything is interchangeable and it all depends on how people subjectively value stuff or whatever?

I like how this piece suggests that if you were able to “replace” the child by making another, it would be okay to sacrifice them for the greater good. This says quite a lot about your moral assumptions, namely, that there is one thing that trumps the maximization of utility, and does so absolutely: the biological imperative of passing on your genes.
We can agree that utilitarianism leads to perverse conclusions, but your implied alternative leads to even greater perversion. Taken to the extreme, the supreme biological imperative could justify just about anything. Except complete cosmicide.
Now maybe I’m jumping to conclusions, so for the sake of clarity, answer me this: if the child was also infertile, would that change things?

Even if it’s only slightly worse that Angarag killing your child means the end of your entire line, it’s still worse, so it goes in the thought experiment. The point is to make things as inconvenient as possible for utilitarianism. Ideally, the thought experiment would hit even more non-utilitarian buttons than it does.

“It’s still worse,” yes, but is it demonstrably worse in a non-utilitarian sense? This is anything but obvious. Furthermore, insinuating that the continuation of the line is a valid objection to utilitarian considerations, especially in an attempt to hit as many buttons as possible, counterbalances and diminishes the impact of uncontroversially non-utilitarian arguments from virtue, dignity, and autonomy. Those make for the much stronger case that you shouldn’t even kill a random homeless guy in order to have Angarag wirehead humans.

It’s a clever little scenario and all, but it still accepts utilitarianism’s false and ridiculous premise. The whole system is a quantophrenic cargo cult. “Utility” is not a measurable, quantifiable thing; pretending that it is and then worshiping it certainly makes moral decisions easy and paints them in a pleasant coating of science and objectivity, but it’s all still a ludicrous lie.

There’s no such thing as a “util,” much less a “Planck utilon.” I like nectarines. I also like sport fishing. Do I like hooking a brook trout exactly 4.89 times as much as biting into a ripe nectarine? Who the hell knows! I sure can’t say that, and that’s within the confines of one man’s mind. How can one compare the utility I get from fly fishing with the utility that, say, a devout Pakistani youth gets from memorizing a surah, or an ant gets from finding a grain of sugar, or a chicken from having its pleasure center electrically stimulated? It is all utter nonsense.

The whole philosophy is nothing but Enlightenment hubris. It invents an imaginary quantitative measure, then optimizes for it. Because it involves math and numbers, it must all be very scientific, accurate, objective, and modern, right?

Yes, that’s another objection, but utilitarianism fails even if you assume that utility is measurable and quantifiable, and it’s useful and common to reply to an opponent who believes four things you don’t that, even if you grant the first three things, the fourth one still doesn’t hold. I don’t even believe that thought experiments are useful in human-scale ethics, but I can still provide one, because the people I’m arguing against do.

The Less Wrong crowd are suckers for a good thought experiment, no question. Remember Yudkowsky’s “Timeless Decision Theory”? He’s immensely proud of his over-elaborate solution to the pressing issue of omniscient gods flying around and offering people semi-paradoxical choices. We’d all better adopt it immediately or else we won’t be acting “rationally.”

Good chance somebody else has mentioned this already, but of course this does bring to mind God, Abraham, and his son. Lots of differences as well, though: God had already done many good things for Abraham and his family, there is the possibility that Abraham could have had one or more additional children, in the end it turned out God was testing Abraham’s faith, etc.

The moral is different there, since the point is that The Lord’s commands resist rational interpretation and you have to submit to His power, see also Job. Wisdom is about accepting the limits of human cleverness, while Nydwracu’s construction is the opposite.

Nihilism is true. It’s neither wrong for Angarag to kill the child, nor wrong for you to refuse Angarag.

Realistically, games are not played once, though. Iterated prisoner’s dilemma. Angarag comes to most families, possibly every family. So every child is killed in favour of chickens. At length, no entities which appreciate utilitarianism continue to exist, only chickens – who are similarly unable to reproduce, as wireheading is non-adaptive.

I guess this is a proof that chicken welfare is irrelevant. Holding chicken welfare above moral-agent welfare leads to there being no moral agents, whereupon welfare is not upheld. Result: time-integral total is finite instead of infinite. I’m pretty sure I can extend this to humans, though.