I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be "acclimating" to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?
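A toy sketch (with entirely invented numbers and an invented square-root discounting rule) of how sublinear aggregation can make every individual defection look positive while the collective outcome is catastrophic:

```python
import math

POPULATION = 1_000_000_000
WARMING_PER_JUMP = 1e-6            # degrees Celsius per skydive, from the comment
FUN_PER_JUMP = 1.0                 # private utility of one skydive (invented unit)
HARM_PER_DEGREE_PER_PERSON = 0.01  # disutility per person per degree (invented)

def sublinear_harm(degrees, population):
    """Invented sublinear rule: harm spread over a population is discounted
    to sqrt(population) instead of scaling linearly with everyone affected."""
    return math.sqrt(population) * degrees * HARM_PER_DEGREE_PER_PERSON

# One person's marginal reasoning: my fun minus the (discounted) harm I cause.
my_net = FUN_PER_JUMP - sublinear_harm(WARMING_PER_JUMP, POPULATION)
print(my_net > 0)          # True: each jump looks like a clear win

# But the physical warming is linear in the number of jumps:
total_warming = POPULATION * WARMING_PER_JUMP
print(total_warming)       # 1000 degrees: we all catch on fire
```

Each individual's calculation comes out positive, so there is no particular person in the chain for whom refusing looks rational.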

I think I understand why one should derive the conclusion to torture one person, given these premises.

What I don't understand is the premises. The article about scope insensitivity you linked to made it very clear that scope makes things worse. But I don't understand why it should be wrong to round the dust speck, or similarly tiny disutilities, down to zero. Basically, what Scott Clark said: 3^^^3 * 0 disutility = 0 disutility.

Rounding to zero is odd. In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?

It also violates the structure of the thought experiment: the dust speck was chosen as the least bad bad thing that can happen to someone. If you round it to zero, then you need to pick a slightly worse thing instead, and I can't imagine your intuitions will be any less shocked by preferring torture to that.

It seems to have been. Since the criteria for the choice were laid out explicitly, though, I would have hoped that more people would notice that the thought experiment they solved so easily was not actually the one they had been given, and perform the necessary adjustment. This is obviously too optimistic, but perhaps it can itself serve as some kind of lesson about reasoning.

I concede that it is reasonable within the constraints of the thought experiment. However, it should be noted that this will never be more than a thought experiment: with real-world numbers and real-world problems it becomes less clear-cut, and the intuition against the 50 years of torture is a good starting point in some cases.

It's odd. If you think about it, Eliezer's argument is absolutely correct. But it seems rather unintuitive even though I KNOW it's right. We humans are a bit silly sometimes. On the other hand, we did manage to figure this out, so it's not that bad.

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.

Sum(1/n^2, 1, 3^^^3) < Sum(1/n^2, 1, inf) = (pi^2)/6

So an algorithm like "order utilities from least to greatest, then sum with a weight of 1/n^2, where n is the position in the list" could pick dust specks over torture while still recommending that most people not go skydiving (their benefit is outweighed by the detriment to those less fortunate).
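A minimal sketch of this weighting rule, with invented disutility values for the speck and the torture. The key property is that the weighted total of many identical small harms is bounded by pi^2/6 times the worst single harm:

```python
import math

def weighted_disutility(disutilities):
    """The rule proposed above: order harms worst-first (least utility to
    greatest) and sum with weight 1/n^2, n being the position in the list."""
    ordered = sorted(disutilities, reverse=True)
    return sum(d / n**2 for n, d in enumerate(ordered, start=1))

TORTURE = 1_000_000.0   # invented disutility of 50 years of torture
SPECK = 1e-10           # invented disutility of one dust speck

# No number of specks can push the weighted total past SPECK * pi^2/6:
many_specks = weighted_disutility([SPECK] * 1_000_000)
one_torture = weighted_disutility([TORTURE])
print(many_specks < SPECK * math.pi**2 / 6)   # True: bounded
print(many_specks < one_torture)              # True: specks lose
```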

This would mean that scope insensitivity, beyond a certain point, is a feature of our morality rather than a bias; I am not sure what my opinion of that outcome should be.

That said, while this gives an answer to the first problem that some seem more comfortable with, and an answer to the second that everyone agrees on, I expect it has clear failure modes I haven't thought of.

Edited to add:

This of course holds for weights of 1/n^a for any a>1; the most convincing defeat of this proposition would be showing that weights of 1/n (or 1/(n log(n))) drop off quickly enough to lead to bad behavior.
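The difference between these weightings can be checked numerically: the partial sums of 1/n^2 stay under pi^2/6 forever, while the partial sums of 1/n grow without bound, roughly like ln(n):

```python
import math

def partial_sum(weight, terms):
    """Sum of weight(n) for n = 1 .. terms."""
    return sum(weight(n) for n in range(1, terms + 1))

# 1/n^2 partial sums are bounded by pi^2/6 no matter how many terms:
for k in (10**2, 10**4, 10**6):
    assert partial_sum(lambda n: 1 / n**2, k) < math.pi**2 / 6

# 1/n partial sums diverge, growing roughly like ln(terms) + 0.577,
# so a 1/n weighting can still be overwhelmed by enough people:
print(partial_sum(lambda n: 1 / n, 10**6))   # about 14.39
```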

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

And why should they consider 3^^^^3 differently, if their function asymptotically approaches a limit? Besides, a human utility function would take in the whole situation, and then perhaps consider duplicates, uniqueness (you don't want your prehistoric tribe to lose the last man who knows how to make a stone axe), and so on, rather than evaluate people one by one and then sum.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

The false allure of oversimplified morality lies in the ease of inventing hypothetical examples where it works great.

One could, of course, posit a colder planet. Most of the population would prefer that planet to be warmer, but if the temperature rise exceeds 5 degrees Celsius, the gas hydrates melt and everyone dies. And they all have to decide on one day. Or one could posit a planet, Linearium, populated entirely by people who really love skydiving, who would want to skydive every day, but that would raise the global temperature by 100 degrees Celsius, and they'd rather be alive than skydive every day and boil to death. They opt to skydive on their birthdays at the expense of a 0.3-degree global temperature rise, which each of them finds an acceptable price to pay for getting to skydive on one's birthday.

But still, WHY is torture better? What is even the problem with the dust specks? Some of the people who get dust specks in their eyes will die in accidents caused by the dust particles? Is that why the dust specks are so bad? But then, have we considered that dust specks may save an equal number of people who would otherwise die? I really don't get it, and it bothers me a lot.

I disagree. From my moral standpoint AND from my utility function, where I am a bystander who perceives all humans as one cooperating system and wants to minimize the damage to it, I think it is better for 10^30 persons to put up with 1 second of intense pain than for a single one to have to endure a whole minute. It is much, much easier to recover from one second of pain than from being tortured for a minute.

And a dust speck is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.

And 10^30 is really a lot of people. That's what Eliezer meant by "scope insensitivity". And all of them would be really grateful if you spared them their second of pain. Could that be worth a minute of pain?

You have to treat this option as a net win of 0 then, because you have no more information to go on, so the probabilities are 50/50.
Option A: Torture. Net win is negative. Option B: Dust specks. Net win is zero. Make your choice.

I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that out of 3^^^3 people who were dust-specked, one person would have gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.

Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn't make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.

If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or another. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And computational processes are not infinitely divisible into smaller lengths of time.

Agreed. Having lived in chronic pain supposedly worse than untrained childbirth, I'd say that even an hour has a seriously different capacity for suffering than a day, and a day than a week. For me it breaks down somewhere, even when multiplying between the 10^15 people for 1 day and the 10^21 for one minute. You can't really feel THAT much pain in a minute, pain comparable to a day's, even allowing for orders of magnitude; it's just qualitatively different. Interested to hear pushback on this.

We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.

I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn't mean we can't find exponential factors that dominate it at every point at least along the "less than 50 years" range.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I'd much prefer to have a [large number of exact copies of me] experience 1 second of headache than to have one me suffer it for a whole day, because those copies don't have any mechanism that could compound their suffering. They aren't even different subjectivities. I don't see any reason why a hypothetical mind upload of me running on multiply redundant hardware should be a utility monster, if it can't even tell subjectively how redundant its hardware is.

Some anaesthetics do something similar, preventing any new long-term memories, and people have no problem taking those for surgery. Something is still experiencing pain, but it's not compounding into anything really bad (unless the drugs fail to work, or unless some form of long-term memory still functions). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.

It's not a continuum fallacy because I would accept "There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T" as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I'm not sure what you mean by this. I don't believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that's ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.

Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?

I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.

This is where the argument for choosing torture falls apart for me, really. I don't think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the amount of disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years.

I'm not sure what you mean by this. I don't believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that's ridiculous.

I think he's questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he's referring to.

Personally, I think the problem stems from dust specks being such a minor inconvenience that it's basically below the noise threshold. I'd almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don't cause crashes or anything).

There's the question of linearity, but if you use big enough numbers you can brute-force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly's statement:

"There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T"

We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that

"For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds."

Sure, the value of A may be larger than 10^100... But then, 3^^^3 is already vastly larger than 10^100. And if it weren't big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post...

Well, you basically have to concede that "torture" wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.
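The size of the chain's endpoint can be sketched directly. Assuming A = 10^100 at every one-second step (the figures here are purely illustrative), walking from 50 years down to 1 second multiplies the population by 10^100 about 1.6 billion times:

```python
# Walking down the chain: each step trades one second of torture duration
# for an A-fold increase in the number of people tortured. A = 10^100 as in
# the quoted statement; track log10 of the population, since the raw number
# is astronomical.
SECONDS_50_YEARS = 50 * 365 * 24 * 3600          # 1,576,800,000 seconds
LOG10_A = 100.0

steps = SECONDS_50_YEARS - 1                     # from 50 years down to 1 second
log10_people = steps * LOG10_A                   # log10 of people at the chain's end

print(log10_people)   # 157679999900.0, i.e. about 10^(1.6e11) people
# 3^^^3 is incomprehensibly larger than 10**157679999900, so the original
# problem's speck count still dwarfs the chain's endpoint.
```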

The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky's original post: All forms of suffering and inconvenience are represented by some real number quantity, with commensurate units to all other forms of suffering and inconvenience.

In other words, "torture one person rather than allow 3^^^3 dust specks" wins, quite predictably, if and only if the 'pain' component of the utility function is measured in one and only one dimension.

So the question is, basically, do you measure your utility function in terms of a single input variable?

If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity... or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.

If you don't, that raises a large complex of additional questions, but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.

It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.

One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose "utilons," and one for... call them "red flags." As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you've accumulated.

The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)... but the overall weighted average of all human moral reasoning suggests that people who think they've done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.
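One way to formalize the two-dimensional scheme (an invented formalization, not a standard one) is a lexicographic comparison in which any red flag outweighs any quantity of utilons:

```python
from functools import total_ordering

@total_ordering
class Outcome:
    """Two-dimensional value: general-purpose 'utilons' plus 'red flags'.
    Comparison is lexicographic, so any red flag outweighs any number of
    utilons (an invented formalization of the scheme above)."""
    def __init__(self, utilons, red_flags=0):
        self.utilons = utilons
        self.red_flags = red_flags

    def _key(self):
        # Fewer red flags always wins; utilons only break ties.
        return (-self.red_flags, self.utilons)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

# A clever scheme promising astronomical utilons via one red-flagged act
# (torture ONLY one scapegoat!) still loses to doing nothing:
scapegoat_plan = Outcome(utilons=10**100, red_flags=1)
do_nothing = Outcome(utilons=0)
print(do_nothing > scapegoat_plan)   # True: the flag dominates
```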

Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.

The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.

Take as a thought-experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to sudden population decrease. The equilibrium level is assumed to be stable in and of itself.

Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.

The world population circa 1000 was about 300 million (roughly,) so we estimate that this process would kill 600 million people.

Now consider as an alternative, said aliens simply killing everyone, all at once. 300 million dead.

Which outcome is worse?

If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...

Well, the "linear harm" theory smacks into a wall. Because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site. The first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.

We are forced to conclude that there is a "total extinction" term in our calculation of harm, one that rises very rapidly in an 'inflationary' way as the destruction wrought upon humanity reaches and passes a level beyond which the species could not recover. The aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population short of a complete breeding population; but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.
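The threshold model just described can be sketched as a harm function that is linear in deaths until the survivors drop below a viable breeding population (all constants below are invented):

```python
POPULATION = 300_000_000      # world population circa the year 1000, per the text
BREEDING_MINIMUM = 5_000      # invented minimum viable population
EXTINCTION_PENALTY = 10**15   # invented harm for losing all future human value

def harm(deaths):
    """Linear in deaths, plus an 'inflationary' extinction term once the
    survivors fall below a viable breeding population."""
    total = deaths            # one unit of harm per death
    if POPULATION - deaths < BREEDING_MINIMUM:
        total += EXTINCTION_PENALTY
    return total

# Four rounds of killing half (with recovery in between): 600M deaths total.
gradual = 4 * harm(POPULATION // 2)
# Killing everyone at once: "only" 300M deaths, but the extinction term fires.
instant = harm(POPULATION)

print(instant > gradual)   # True: half the deaths, far more harm
```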

Now, again, in itself this does not strictly invalidate the Torture/Specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some "big enough" number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.

But I can't recall a similar argument for nonlinear harm measurement being presented in any of the comments I've sampled, so I wanted to mention it.

But I thought it was interesting and couldn't recall seeing it elsewhere.

I mean, suppose that you got yourself a function that takes in a description of what's going on in a region of spacetime, and it spits out a real number of how bad it is.

Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people. For example, it could be counting distinct subjective experiences (otherwise a mind upload running on multiply redundant hardware is a utility monster, despite having a subjective experience identical to the same upload running once; that's much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).

One thing that function can't do is have the general property that f(a union b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which are feeling anything.

Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other.

The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person.

What, I wonder, are the consequence of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don't know how bad being in prison is, but it probably becomes much worse than I imagine if you're there for 50 years, and we don't think about that at all when arguing (or voting) about prison sentences.

My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter's Law: however bad you imagine it to be, it's worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I've yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.

My feeling is that situations like being caught doing something horrendous might or might not be subject to psychological adjustment; many situations of suffering are subject to such adjustment and so might actually not be as bad as we thought. But chronic intense pain is literally unadjustable to some degree: you can adjust to being in intense suffering, but that doesn't make the intense suffering go away. That's why I think it's a special class of states of being, one that demands action. What do people think?

Okay, here's a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The "planet" part is obviously not possible, and the "distinct" part may or may not be possible, but for the purposes of a discussion about morality, it's fine to assume these.)

Now let's suppose that you are given a choice: (a) everyone on the planet can get a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who "wins" (or "loses", more accurately) will be tortured for 50 years. Which would you choose?

If you are against torture (as you seem to be, from your comment), you will presumably choose (a). But now let's suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a).

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only carries a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).

Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a).
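The transitivity chain can be sketched with stand-in numbers, since 3^^^3 itself is not computable; exact rational arithmetic keeps the tiny probabilities honest. The disutility values and the blink-risk figure below are invented:

```python
from fractions import Fraction

# Stand-in numbers: 3^^^3 is not computable, so use a large N while keeping
# the structure of the argument. All disutility values are invented.
N = 10**30                     # stand-in for the planet's population
SPECK = Fraction(1)            # disutility of one dust speck
TORTURE = Fraction(10**20)     # disutility of 50 years of torture
BLINK_RISK = Fraction(2, N)    # extra kidnap probability from one blink,
                               # assumed to exceed the 1/N lottery odds

option_a = N * SPECK                                 # everyone gets a speck
option_b = Fraction(1, N) * TORTURE                  # lottery: 1/N chance of torture
option_c = option_a - SPECK + BLINK_RISK * TORTURE   # you blink; everyone else is specked

assert option_c < option_a   # you prefer dodging your own speck
assert option_b < option_c   # the lottery beats blinking
print("(b) < (c) < (a): transitivity forces the lottery")
```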

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.

Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.

Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.

Well, I mean, obviously a single person can't be kidnapped more than once every 50 years (assuming that's how long each torture session lasts), and certainly not several times a day, since he/she wouldn't have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I'd say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.

And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can't stop time, so there's no reason to prefer (c) to (b).

In fact, the probability goes up anyway in that fraction of a second, whether you blink or not.

Ah, sorry; I wasn't clear. What I meant was that blinking increases your probability of being tortured beyond the normal "baseline" probability of torture. Obviously, even if you don't blink, there's still a probability of you being tortured. My claim is that blinking affects the probability of being tortured so that the probability is higher than it would be if you hadn't blinked (since you can't see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it would increase by more than 1/3^^^3. So basically what I'm saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.

The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery.

It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone's eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you'd prefer to be in the present instead of the future.

The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.

If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well... you're weird.

It's not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), the cost you have from having it fly into your eye is not zero.

Now, something nonzero multiplied by a sufficiently large number will necessarily be larger than the cost of one human being's life in torture.

Now you are getting it completely wrong. You can't add up harm from dust specks when it is happening to different people. Every individual has the capacity to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If the people in charge reasoned that way we might have Armageddon in no time.

That's ridiculous. So mild pains don't count if they're done to many different people?

Let's give a more obvious example. It's better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.

Scaling down, we can say that it's better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.

Keep repeating this in your head (see how consistent it feels, how it makes sense).

Now just extrapolate to the instance that it's better to torture one person to death than to have 3^^^3 people get dust specks in their eyes, because the total pain will be less. The hair-ripping argument isn't good enough because [(people on Earth) * (pain from hair rip)] < [(people in New York) * (pain of being nuked)]. The math doesn't add up in your straw-man example, unlike with the actual example given.

I think Okeymaker was actually referring to all the people in the universe. While the number of "people" in the universe (defining a "person" as a conscious mind) isn't a known number, let's do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn't nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker's modus tollens and reply that I would prefer to nuke New York.)

Each human death has only finite cost. We sure act this way in our everyday lives, exchanging human lives for the convenience of driving around with cars etc.

If by "our universe" you do not mean only the observable universe, but include the Level I multiverse,

then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number is eventually larger than the fixed cost of nuking New York.

Unless you can tell my why my model for the costs of suffering distributed over multiple people is wrong, I don't see why I should change it. "I don't like the conclusions!!!" is not a valid objection.

If people in charge reasoned that way we might have harmageddon in no time.

If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we'll have larger problems than the potential nuking of New York.

Okay, I was trying to learn from this post, but now I see that I have to explain some things myself for this communication to become useful. When it comes to pain, it is hard to explain why one person's great suffering is worse than many people suffering very, very little if you don't understand it by yourself. So let us change the currency from pain to money.

Let's say that you and I need to fund a large plantation of algae to let the Earth's population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however... Should we:

a) Take one dollar from every person around the world earning at least a minimum wage, who can still afford housing, food, etc. even without that dollar?

or should we

b) Take all the money (instantly) from Denmark and watch it collapse into bankruptcy?

If you ask me, it is obvious that we don't want Denmark to go bankrupt just to spare some people the annoyance of sacrificing one dollar.

Now I see what is fundamentally wrong with the article and your reasoning, from MY perspective: you don't seem to understand the difference between a permanent sacrifice and a temporary one.

If we substitute index fingers for the dust specks, for example, I agree it is reasonable to think that killing one person is far better than having 3 billion (we don't need 3^^^3 for this one) people lose their index fingers, because that is a permanent sacrifice; at least for now, we can't just regrow fingers. Getting dust in your eye, on the other hand, is only temporary: you get over it quickly and forget all about it. But 50 years of torture is something you never fully heal from; it ruins a person's life and causes permanent damage.

If you ask me, it is obvious that we don't want Denmark to go bankrupt just to spare some people the annoyance of sacrificing one dollar.

The trouble is that there is a continuous sequence from

Take $1 from everyone

Take $1.01 from almost everyone

Take $1.02 from almost almost everyone

...

Take a lot of money from very few people (Denmark)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, that taking $20 each from 1/20 of the world's population is good, but taking $20.01 each from slightly less than 1/20 of the world's population is bad. Can you say that?
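The sequence can be made concrete with a quick sketch. The population figure and dollar amounts below are illustrative assumptions: hold the total raised fixed at $1 from everyone, and watch the per-person levy trade off against the number of payers.

```python
# A sketch of the "continuous sequence" argument: keep the total amount
# raised fixed, and trade a slightly larger per-person levy for a
# slightly smaller pool of payers.  All figures are illustrative.

POPULATION = 8_000_000_000        # assumed world population
TOTAL = POPULATION * 1.00         # total raised by taking $1 from everyone

def payers(per_person):
    """How many people must pay `per_person` dollars to raise TOTAL."""
    return TOTAL / per_person

for amount in (1.00, 1.01, 20.00, 20.01, 1_000_000.00):
    print(f"${amount:>12,.2f} each from {payers(amount):>16,.0f} people")
```

Adjacent rows differ by a cent per person, yet the sequence walks continuously from "$1 from everyone" to "a fortune from very few," which is the point of the sorites-style challenge above.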

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

If you think that 100C water is hot and 0C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

No, because temperature is (very close to) a continuum, whereas good/bad is binary. To see this more clearly, replace the question "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see that the answer can only be "yes" (good) or "no" (bad).

(Of course, it's not always clear which answer is correct--hence why so many argue over it--but the answer has to be, in principle, either "yes" or "no".)

To see this more clearly, replace the question "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see that the answer can only be "yes" (good) or "no" (bad).

My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.

You will have to say, for instance, that taking $20 each from 1/20 of the world's population is good, but taking $20.01 each from slightly less than 1/20 of the world's population is bad. (emphasis mine)

YES, because that is how economics works! You can't take a lot of money from ONE person without making him poor, but you CAN take a little money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

I think my last response starting with YES got lost somehow, so I will clarify here. I don't follow the sequence because I don't know where the critical limit is. Why? Because the critical limit depends on other factors which I can't foresee. Read up on basic global economics. But YES, in theory I can take a little money from everyone without ruining any one of them, since it balances out; but if I take a lot of money from one person, I make him poor. That is how economics works: you can recover from small losses easily, while some losses are too big to ever recover from, which is why banks sometimes go bankrupt. And pain is similar: I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.

I don't follow the sequence because I don't know where the critical limit is.

You may not know exactly where the limit is, but the point isn't that the limit is at some exact number, the point is that there is a limit. There's some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?

Now, do you have any actual argument as to why the 'badness' function computed over a box containing two people with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)?

I don't think you do. This is why this stuff strikes me as pseudomath. You don't even state your premises let alone justify them.

What I need is a cost function C(e, n) - where e is some event and n is the number of people subjected to it, each getting their own instance - such that for a fixed ε > 0 and any n, C(e, n+m) > C(e, n) + ε for some m. I guess we can limit e to "torture for 50 years" and "dust speck" so that this makes sense at all.

The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don't think there should ever be a point where you can go "Meh, not much of a big deal, no matter how many more people suffer."

If however the number of possible distinct people should be finite - even after taking into account level II and level III multiverses - due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.
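Linear aggregation is one cost function with the property described above. A minimal sketch follows; the per-person disutility values are arbitrary assumed integers (in made-up "micro-disutil" units), not canonical figures:

```python
# A minimal sketch of a cost function with the requested property:
# for any n and any eps > 0, some m makes C(e, n + m) exceed C(e, n) + eps.
# Linear aggregation has it.  The per-person disutilities are arbitrary
# integers in assumed "micro-disutil" units, kept integral so the
# comparison below is exact rather than subject to float rounding.

U = {"dust_speck": 1, "torture_50y": 10**9}   # assumed per-person disutility

def C(event, n):
    """Linear aggregation: total cost of `event` happening to n people."""
    return U[event] * n

def m_for(event, n, eps):
    """Smallest m with C(event, n + m) > C(event, n) + eps."""
    return eps // U[event] + 1

n, eps = 10**6, 5 * 10**9
m = m_for("dust_speck", n, eps)
assert C("dust_speck", n + m) > C("dust_speck", n) + eps
```

Because the increments never shrink, iterating this step drives the total past any fixed bound, which is exactly what the comment asks of the dust-speck side.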

Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways to arrange the atoms in the volume of a human head so as to compute something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).

I don't think that, e.g., I must massively prioritize the happiness of a brain upload of me running on multiple redundant machines (which subjectively feels the same as running as a single instance; it doesn't feel any stronger because there are more 'copies' of it running in perfect unison - it can't even tell the difference, and the subjective experience isn't affected if the CPUs running the same computation are slightly physically different).

edit: also, again, pseudomath: you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds, but it is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.

Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It feels intuitively as if, with your epsilon, the cost will keep growing without limit, but that's simply not true.

I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.

edit: also, again, pseudomath: you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds, but it is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.

No. I will always find a larger number whose cost is at least ε greater. I fixed ε before I talked about n and m, so I can find numbers m_1, m_2, ... such that C(dustspeck, m_j) > jε.

Besides which, even if I had somehow messed up, you're not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.

Well, in my view, some details of implementation of a computation are totally indiscernible 'from the inside' and thus make no difference to the subjective experiences, qualia, and the like.

I definitely don't care whether there is 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where the copies think and perceive everything exactly the same over their lifetimes. I'm not sure how counting copies as distinct would cope with an infinity of copies anyway: you have torture of infinitely many persons vs. dust specks for infinity times 3^^^3 persons - then what?

Although it would be quite hilarious to see someone here pick up the idea and start arguing that because they're 'important', there must be many copies of them in the future, and thus they are rightfully a utility monster.

Consider the flip side of the argument: would you rather get a dust speck in your eye or have a 1 in 3^^^3 chance of being tortured for 50 years?

We take much greater risks without a moment's thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause incredible pain to you for the rest of your life may be very small; but it's probably not smaller than 1 in 10^100, let alone than 1 in 3^^^3.
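For a sense of the scale involved, here is a sketch of Knuth's up-arrow notation (3^^^3 is three up-arrows); only the tiniest cases are computable, which is exactly the point:

```python
# A sketch of Knuth's up-arrow notation, to give a sense of scale.
# 3^^^3 is a power tower of 3s whose height is itself
# 3^^3 = 7,625,597,484,987 -- far too large to ever compute, which is
# the point: 1 in 3^^^3 is unimaginably smaller than 1 in 10^100.

def up_arrow(a, n, b):
    """a (up-arrow^n) b: iterated exponentiation (only tiny cases finish)."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3**3**3 = 7625597484987
# up_arrow(3, 3, 3) would be a tower of 7,625,597,484,987 threes: hopeless.
```

Even the everyday street-crossing risk of roughly 1 in 10^100 mentioned above dwarfs 1 in 3^^^3 beyond any meaningful comparison.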

I think that's what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.

I find it hard to believe that you believe that. Under that metric, for example, "pick a thousand happy people and kill their dogs" is a completely neutral act, along with lots of other extremely strange results.

Or, for a maybe more dramatic instance: "Find the world's unhappiest person and kill them". Of course total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, on considering just how wretched the lives of the world's unhappiest people probably are) -- but min-utilitarianism continues to endorse doing this even if everyone in the world -- including the soon-to-be-ex-unhappiest-person -- is extremely happy and very much wishes to go on living.

The specific problem which causes that is that most versions of utilitarianism don't allow the fact that someone desires not to be killed to affect the utility calculation, since after they have been killed, they no longer have utility.

Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it's completely morally OK to do very bad things to huge numbers of people - in fact, it's no worse than radically improving huge numbers of lives - as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.

You can attempt to mitigate this property with too-clever objections, like "aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all". I don't think that actually works, but didn't want it to obscure the point, so I picked "kill their dog" as an example, because it's a clearly bad thing which definitely doesn't bump anyone to the bottom.
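The objection can be made concrete with a tiny sketch (the welfare numbers are illustrative assumptions): harming everyone except the worst-off person leaves a min() aggregator completely unmoved.

```python
# A sketch of why min() aggregation is insensitive to harms that miss
# the worst-off person.  Welfare numbers are illustrative assumptions.

def min_utility(welfares):
    """Min-utilitarian social welfare: only the worst-off person counts."""
    return min(welfares)

world = [-50] + [90] * 1000          # one miserable person, 1000 happy ones

before = min_utility(world)
# "Kill their dogs": drop every happy person's welfare, sparing the worst-off.
after = min_utility([world[0]] + [w - 30 for w in world[1:]])

assert before == after == -50        # the metric registers no change
```

A thousand lives made substantially worse, and the aggregate is identical, which is the "very silly property" described above.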