Today's post is a tad gloomier than usual, as I measure such things. It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me. Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading. (Unless they have something to protect, including their own life.)

So! Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong. Not as the result of any explicit propositional verbal belief. More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo, "it's definitely going to be all right"), and others would say that it's a thing necessary for mental health.

But we don't live in that world. We live in the world beyond the reach of God.

It's been a long, long time since I believed in God. Growing up in an Orthodox Jewish family, I can recall the last time I asked God for something, though I don't remember how old I was. I was putting in a request on behalf of the boy next door, I forget what exactly—something along the lines of, "I hope things turn out all right for him," or maybe "I hope he becomes Jewish."

I remember what it was like to have some higher authority to appeal to, to take care of things I couldn't handle myself. I didn't think of it as "warm", because I had no alternative to compare it to. I just took it for granted.

Still I recall, though only from distant childhood, what it's like to live in the conceptually impossible possible world where God exists. Really exists, in the way that children and rationalists take all their beliefs at face value.

In the world where God exists, does God intervene to optimize everything? Regardless of what rabbis assert about the fundamental nature of reality, the take-it-seriously operational answer to this question is obviously "No". You can't ask God to bring you a lemonade from the refrigerator instead of getting one yourself. When I believed in God after the serious fashion of a child, so very long ago, I didn't believe that.

Postulating that particular divine inaction doesn't provoke a full-blown theological crisis. If you said to me, "I have constructed a benevolent superintelligent nanotech-user", and I said "Give me a banana," and no banana appeared, this would not yet disprove your statement. Human parents don't always do everything their children ask. There are some decent fun-theoretic arguments—I even believe them myself—against the idea that the best kind of help you can offer someone is to always immediately give them everything they want. I don't think that eudaimonia is formulating goals and having them instantly fulfilled; I don't want to become a simple wanting-thing that never has to plan or act or think.

So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers. Even a Friendly AI might not respond to every request.

But clearly, there exists some threshold of horror awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child.

The God who does not intervene at all, no matter how bad things get—that's an obvious attempt to avoid falsification, to protect a belief-in-belief. Sufficiently young children don't have the deep-down knowledge that God doesn't really exist. They really expect to see a dragon in their garage. They have no reason to imagine a loving God who never acts. Where exactly is the boundary of sufficient awfulness? Even a child can imagine arguing over the precise threshold. But of course God will draw the line somewhere. Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it.

What if you build your own simulated universe? The classic example of a simulated universe is Conway's Game of Life. I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of "physical law". Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, though it might be rather fragile and awkward. Other cellular automata would make the task simpler.

Could you, by creating a simulated universe, escape the reach of God? Could you simulate a Game of Life containing sentient entities, and torture the beings therein? But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors. If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene. God being omnipresent, there is no refuge anywhere for true horror: Life is fair.

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities. Even as a very young child, I don't remember believing that. (And why would you need to believe it, if God can modify anything that actually exists?)

What does Life look like, in this imaginary world where every step follows only from its immediate predecessor? Where things only ever happen, or don't happen, because of the cellular automaton rules? Where the initial conditions and rules don't describe any God that checks over each state? What does it look like, the world beyond the reach of God?

That world wouldn't be fair. If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place, and complex life might or might not evolve, and that life might or might not become sentient, with no God to guide the evolution. That world might evolve the equivalent of conscious cows, or conscious dolphins, that lacked hands to improve their condition; maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.

If in a vast plethora of worlds, something like humans evolved, then they would suffer from diseases—not to teach them any lessons, but only because viruses happened to evolve as well, under the cellular automaton rules.

If the people of that world are happy, or unhappy, the causes of their happiness or unhappiness may have nothing to do with good or bad choices they made. Nothing to do with free will or lessons learned. In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who prevents it? God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan's heart. But in the mathematical answer to the question What if? there is no God in the axioms. So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question. There is nothing, absolutely nothing, to prevent it.

And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps? They will call out for help, perhaps imagining a God. And if you really wrote that cellular automaton, God would intervene in your program, of course. But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn't any God in the system. Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—then the victims will be saved only if the right cells happen to be 0 or 1. And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that. So the victims die, screaming, and no one helps them; that is the answer to the what-if question.

Could the victims be completely innocent? Why not, in the what-if world? If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), the rules are really very simple: a cell with exactly three living neighbors is alive in the next generation; a living cell with exactly two living neighbors stays alive; all other cells die or stay empty. There isn't anything in there about innocent people being exempted from horrible torture for indefinite time periods.
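Those rules really are the entire physics. As a minimal illustration (not anything from the post itself), here is one sketch of a single Life step, assuming the board is represented as a Python set of (x, y) coordinates of living cells; the name `life_step` is chosen here for the example:

```python
from collections import Counter

def life_step(live):
    """Return the next generation of a Life board, given as a set of
    (x, y) coordinates of living cells."""
    # Count how many live neighbors each cell on the board has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has exactly three live neighbors,
    # or exactly two and is already alive. Nothing else is consulted.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 1), (1, 1), (2, 1)}
```

The point of writing it out is how little is there: a neighbor count and two comparisons, and that is the whole and only law of that universe.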

Is this world starting to sound familiar?

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited: Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate. The Divine Plan ought to make more sense than that. You can believe in a Divine Plan without believing in God—Karl Marx surely did. You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum. It ought not to be allowed. It's too disproportionate. Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn't any clause in the physical axioms which says "things have to make sense" or "big effects need big causes" or "history runs on reasons too important to be so fragile". There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe. Many who are atheists, still think as if certain things are not allowed. They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect. But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors. There is no particular empirical justification that I happen to have heard of, for doubting this. The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not? What prohibits it?

In the God-universe, God prohibits it. To recognize this is to recognize that we don't live in that universe. We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else. Whatever physics says will happen, will happen. Absolutely anything, good or bad, will happen. And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable.

Reading William Shirer's The Rise and Fall of the Third Reich, listening to him describe the disbelief that he and others felt upon discovering the full scope of Nazi atrocities, I thought of what a strange thing it was, to read all that, and know, already, that there wasn't a single protection against it. To just read through the whole book and accept it; horrified, but not at all disbelieving, because I'd already understood what kind of world I lived in.

Once upon a time, I believed that the extinction of humanity was not allowed. And others who call themselves rationalists may yet have things they trust. They might be called "positive-sum games", or "democracy", or "technology", but they are sacred. The mark of this sacredness is that the trustworthy thing can't lead to anything really bad; or that it can't be permanently defaced, at least not without a compensatory silver lining. In that sense it can be trusted, even if a few bad things happen here and there.

The unfolding history of Earth can't ever turn from its positive-sum trend to a negative-sum trend; that is not allowed. Democracies—modern liberal democracies, anyway—won't ever legalize torture. Technology has done so much good up until now, that there can't possibly be a Black Swan technology that breaks the trend and does more harm than all the good up until this point.

There are all sorts of clever arguments why such things can't possibly happen. But the source of these arguments is a much deeper belief that such things are not allowed. Yet who prohibits? Who prevents it from happening? If you can't visualize at least one lawful universe where physics say that such dreadful things happen—and so they do happen, there being nowhere to appeal the verdict—then you aren't yet ready to argue probabilities.

Could it really be that sentient beings have died absolutely for thousands or millions of years, with no soul and no afterlife—and not as part of any grand plan of Nature—not to teach any great lesson about the meaningfulness or meaninglessness of life—not even to teach any profound lesson about what is impossible—so that a trick as simple and stupid-sounding as vitrifying people in liquid nitrogen can save them from total annihilation—and a 10-second rejection of the silly idea can destroy someone's soul? Can it be that a computer programmer who signs a few papers and buys a life-insurance policy continues into the far future, while Einstein rots in a grave? We can be sure of one thing: God wouldn't allow it. Anything that ridiculous and disproportionate would be ruled out. It would make a mockery of the Divine Plan—a mockery of the strong reasons why things must be the way they are.

You can have secular rationalizations for things being not allowed. So it helps to imagine that there is a God, benevolent as you understand goodness—a God who enforces throughout Reality a minimum of fairness and justice—whose plans make sense and depend proportionally on people's choices—who will never permit absolute horror—who does not always intervene, but who at least prohibits universes wrenched completely off their track... to imagine all this, but also imagine that you, yourself, live in a what-if world of pure mathematics—a world beyond the reach of God, an utterly unprotected world where anything at all can happen.

If there's any reader still reading this, who thinks that being happy counts for more than anything in life, then maybe they shouldn't spend much time pondering the unprotectedness of their existence. Maybe think of it just long enough to sign up themselves and their family for cryonics, and/or write a check to an existential-risk-mitigation agency now and then. And wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step... but aside from that, if you want to be happy, meditating on the fragility of life isn't going to help.

What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature's little challenges aren't always fair. When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That's how it is for people, and it isn't any different for planets. Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against: Absolute, utter, exceptionless neutrality.

Knowing this won't always save you. It wouldn't save a twelfth-century peasant, even if they knew. If you think that a rationalist who fully understands the mess they're in, must surely be able to find a way out—then you trust rationality, enough said.

Some commenter is bound to castigate me for putting too dark a tone on all this, and in response they will list out all the reasons why it's lovely to live in a neutral universe. Life is allowed to be a little dark, after all; but not darker than a certain point, unless there's a silver lining.

Still, because I don't want to create needless despair, I will say a few hopeful words at this point:

If humanity's future unfolds in the right way, we might be able to make our future light cone fair(er). We can't modify fundamental physics, but on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe. There's a lot of stuff out there that we can't touch—but it may help to consider everything that isn't in our future light cone, as being part of the "generalized past". As if it had all already happened. There's at least the prospect of defeating neutrality, in the only future we can touch—the only world that it accomplishes something to care about.

Someday, maybe, immature minds will reliably be sheltered. Even if children go through the equivalent of not getting a lollipop, or even burning a finger, they won't ever be run over by cars.

And the adults wouldn't be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn't seem so harsh; it would be only another problem to be solved.

The problem is that building an adult is itself an adult challenge. That's what I finally realized, years ago.

If there is a fair(er) universe, we have to get there starting from this world—the neutral world, the world of hard concrete with no padding, the world where challenges are not calibrated to your skills.

Not every child needs to stare Nature in the eyes. Buckling a seatbelt, or writing a check, is not that complicated or deadly. I don't say that every rationalist should meditate on neutrality. I don't say that every rationalist should think all these unpleasant thoughts. But anyone who plans on confronting an uncalibrated challenge of instant death must not avoid them.

What does a child need to do—what rules should they follow, how should they behave—to solve an adult problem?

Depends on the version of Buddhism and who you ask... but yes, even the utter destruction of the mind.

Of course, 'utter destruction' is not a well-defined term. Depending on who you ask, nothing in Buddhism is ever actually destroyed. Or in the Dust hypothesis, or the Library of Babel... the existence of the mind never ends, because we've never beaten our wives in the first place.

Summary: "Bad things happen, which proves God doesn't exist." Same old argument that atheists have thrown around for hundreds, probably thousands, of years. The standard rebuttal is that evil is Man's own fault, for abusing free will. You don't have to agree, but at least quit pretending that you've *proven* anything.

"In sober historical fact", clear minds could already see in 1919 that the absurdity of the Treaty of Versailles (with its total ignorance of economic realities, and entirely fueled by hate and revenge) was preparing the next war -- each person (in both nominally winning and nominally defeated countries) being put in such unendurable situations that "he listens to whatever instruction of hope, illusion or revenge is carried to him on the air".

This was J.M. Keynes writing in 1919, when A. Hitler was working as a police spy for the Reichswehr, infiltrating a tiny party then named the DAP (and only later renamed the NSDAP); Keynes' dire warnings had nothing specifically to do with this "irrelevant" individual, of whom he had no doubt never even heard -- there were plenty of other matches ready to set fire to a tinderbox world, after all; for example, at that time, Benito Mussolini was a much more prominent figure, a well-known and controversial journalist, and had just founded the "Fasci Italiani di Combattimento".

So your claim, that believing the European errors in 1919 made another great war extremely likely, "is an unreasonable belief", is absurd. You weaken your interesting general argument by trying to support it with such tripe; "inevitable" is always an overbid, but to opine that the situation in 1919 made another great war all too likely within a generation, quite independently of what individuals would be leading the various countries involved, is perfectly reasonable.

Keynes's strong and lucid prose could not make a difference in 1919 (even though his book was a best-seller and may have influenced British and American policies, France was too dead-set in its hate and thirst for revenge) -- but over a quarter of a century later, his ideas prevailed: after a brief attempt to de-industrialize Germany and push it back to a pastoral state (which he had already argued against in '19), ironically shortly after Keynes's death, the Marshall Plan was passed (in rough outline, what Keynes was advocating in '19...) -- and we *didn't* get yet another great European war after that.

Without Hitler, but with Versailles and without any decent reconstruction plan after the Great War, another such great war WAS extremely likely -- it could have differed in uncountable details and even in strategic outline, from the events as they actually unfolded, just like the way a forest fire in dry and thick woods can unfold in many ways that differ in detail... but what exact match or spark lights the fire is in a sense a detail -- the dry and flame-prone nature of the woods makes a conflagration far too likely to avoid it by removing one specific match, or spark: there will be other sparks or matches to play a similar role.

The claim isn't that Germany would have been perfectly fine, and would never have started a war or done anything else extreme. And the claim is not that Hitler trashed a country that was ticking along happily.

The claim is that the history of the twentieth century would have gone substantially differently. World War II might not have happened. The tremendous role that Hitler's idiosyncrasies played in directing events, doesn't seem to leave much rational room for determinism here.

Well, the rise of fascism and anti-Semitism in Europe at that time was widespread. It was not just one man. From the Dreyfus affair in France, to Mussolini and Franco, to the heated rivalries between the fascist leagues and the Popular Front in France... the whole of Europe after WW1 and the unfair Versailles treaty, then the disaster of the 1929 crisis, was fertile ground for all fascist movements.

World War II feels much more like a "natural consequence" of previous events (WW1, the Versailles treaty, the 1929 crisis) and general historical laws (that "populist" politicians thrive when the economic situation is bad), than of a single man. It would have been different with different leaders in the various major countries involved, sure. If Léon Blum had helped Republican Spain against Franco instead of letting them stand alone, things could have changed a lot. And many other events could have gone differently - of course, without Hitler, it would have been different.

But different enough so that WWII wouldn't occur? Very unlikely, to me - not impossible, but very unlikely with only a single turning point.

Reminds me of this: "Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

But my question would be: Is the universe of cause and effect really so much less safe than the universe of God? At least in this universe, someone who has an evil whim is limited by the laws of cause and effect, e.g. Hitler had to build tanks first, which gave the Allies time to prepare. In that other universe, the Supreme Being decides he's bored with us and zap, we're gone; there are no rules he has to follow to achieve that outcome.

So why is relying on the goodness of God safer than relying on the inexorability of cause and effect?

Given how widespread white nationalism is in America (i.e., it's a common phenomenon), and how intimately tied to fascism it is, I think there's a substantial chance that the leader who would have taken Hitler's place would have shared his predilection for ethnic cleansing, even if not world domination.

It looks more and more like all of this 'Friendly AI' garbage is just a reaction to Eliezer's departure from Judaism.

Which is not to say that this hasn't been obvious for a long time. But this is the closest Eliezer's ever come to simply acknowledging it.

He's already come to terms with the fact that reality doesn't conform to his ideas of what it should be. Now he just has to come to terms with the fact that his ideas do not determine what reality should be.

There isn't going to be a magical sky daddy no matter what we do. There's no such thing as magic. There's no such thing as 'Friendly'. It's not possible to make reality a safe and cozy haven, and trying to make it so will only cause harm.

There isn't going to be a magical sky daddy no matter what we do. There's no such thing as magic. There's no such thing as 'Friendly'. It's not possible to make reality a safe and cozy haven, and trying to make it so will only cause harm.

All major human goals have involved trying to make reality more of a safe and cozy haven. This is true not just for things like trying to make Friendly AI, and cryonics, that seem far off, but also for simple everyday things like discovering new antibiotics or trying to find cures for diseases.

The concept of "should" is not one the universe recognizes; it exists only in the human mind. So yes, his ideas do determine what should be.

Besides, "life sucks, let's fix it" and "God doesn't exist, let's build one" are far more productive viewpoints than "life sucks, deal with it" and "God doesn't exist, how terrible", even if they never amount to as much as they hope to. The idea that they "will only cause harm" is incredibly nebulous, and sounds more like an excuse to accept the status quo than a valid argument.

How so? "No good will come of this" is an incredibly old argument that's been applied to all kinds of things, and as far as I know rarely has a specific basis. What aspect of his argument am I missing?

I fail to see how the age of the argument is relevant. And it was not an argument; it was a proposition.

Caledonian was asserting that "trying to make reality a safe and cozy haven will only cause harm". This is a fairly well-specified prediction (to the extent that one can observe whether or not X is "trying to" Y in general) and may be true or false. It is not an excuse, nor an argument, nor particularly nebulous.

Though as I mentioned, in general (if taken strictly) assertions that a real-world action will have precisely one sort of effect are false.

The age of the proposition and the ease with which it can be applied to a variety of situations is an indication that, when such a proposition is made, it should be examined and justified in more detail before being declared a valid argument. Causing harm, given the subject matter, could mean a variety of things from wasted funds to the death of the firstborn children of every family in Egypt. Lacking anything else in the post to help determine what kind and degree of harm was meant or even where the idea that failed attempts will be harmful came from, the original assertion comes across, to me, as a vague claim meant to inspire a negative reaction. It may be true or false, but the boundaries of "true" are not very clearly defined.

I understand that it is probably wrong, and I understand that you know that too. I'm discussing this because I want to know if I'm doing something wrong when determining the validity of an argument. We also seem to be using different definitions of "argument"; I merely see it as a better-sounding synonym of proposition. No negative connotations were meant to be invoked.

An argument is a series of statements ("propositions") that are intended to support a particular conclusion. For example, "Socrates is a man. All men are mortal. Therefore, Socrates is mortal." Just as one sentence is not a paragraph, one proposition is not an argument.

There is no question of whether "trying to make reality a safe and cozy haven will only cause harm" is a valid argument because it's not an argument at all. This is an argument:

If we try to make reality a safe and cozy haven, then we will only cause harm.

We are trying to make reality a safe and cozy haven.

Therefore, we will only cause harm.

Note that this is a valid argument; the truth of the conclusion follows necessarily from the truth of its premises. If you have any problems with it, it is with its soundness, the extent to which the propositions presented are true. It sounds like you think the first proposition is false, but you are claiming Caledonian made an invalid argument instead. If that is the case, you're making a category mistake.

And now we're disputing definitions. I was using argument to mean what you've defined as propositions; it was a mistake in labeling, but the category is the same. Regardless, the falseness of his proposition is not an issue. The issue I have is that his initial proposition, though it may possibly be true, has a wide range of possible truenesses, no indication which trueness the poster was aiming for, and may very possibly have been made without a particular value of potential truth in mind. If that's soundness, then yeah, I took issue with the soundness of his proposition.

The issue I have is that his initial proposition, though it may possibly be true, has a wide range of possible truenesses, no indication which trueness the poster was aiming for, and may very possibly have been made without a particular value of potential truth in mind.

I don't see how that's the case. It seems very specific to me. In the statement "X will only cause Y" are you confused about the meaning of X, Y, "will only cause", or something else I'm missing? (X="trying to make ... reality a safe and cozy haven", Y="harm")

I take issue with Y. "Harm", though it does have a definition, is a very, very broad term, encompassing every negative eventuality imaginable. Saying "X will cause stuff" only doubles the number of applicable outcomes. That does not meet my definition of "specific".

Remove whatever cultural or personal contextual trappings you find draped over a particular expression of Buddhism, and you'll find it very clear that Buddhism does "allow" that, or more precisely, un-asks that question.

As you chip away at unfounded beliefs, including the belief in an essential self (however defined), or the belief that there can be a "problem to solved" independent of a context for its specification, you may arrive at the realization of a view of the world flipped inside-out, with everything working just as before, less a few paradoxes.

The wisdom of "adult" problem-solving is not so much about knowing the "right" answers and methods, but about increasingly effective knowledge of what *doesn't* work. And from the point of view of any necessarily subjective agent in an increasingly uncertain world, that's all there ever was or is.

By the way, I should clarify that my total disagreement with your thesis on WW2 being single-handedly caused by A. Hitler does in no way imply disagreement with your more general thesis. In general I do believe the "until comes steam-engine-time" theory -- that many macro-scale circumstances must be present to create a favorable environment for some revolutionary change; to a lesser degree, I also do think that _mostly_, when the macro-environment is ripe, one of the many sparks and matches (that are going off all the time, but normally fizz out because the environment is NOT ripe) will tend to start the blaze. But there's nothing "inevitable" here: these are probabilistic, Bayesian beliefs, not "blind faith" on my part. One can look at all available detail and information about each historical situation and come to opine that this or that one follows or deviates from the theory. I just happen to think that WW2 is a particularly blatant example where the theory was followed (as Keynes could already dimly see it coming in '19, and he was NOT the only writer of the time to think that way...!); another equally blatant example is Roman history in the late Republic and early Empire -- yes, many exceptional individuals shaped the details of the events as they unfolded, but the nearly-relentless march of the colossus away from a mostly-oligarchic Republic and "inevitably" towards a progressively stronger Principate looms much larger than any of these individuals, even fabled ones like Caesar and Octavian.

But for example I'm inclined to think of more important roles for individuals in other historically famous cases -- such as Alexander, or Napoleon. The general circumstances at the time of their accessions to power were no doubt a necessary condition for their military successes, but it's far from clear to me that they were anywhere close to *sufficient*: e.g., without a Bonaparte, it does seem quite possible to me that the French Revolution might have played itself out, for example, into a mostly-oligarchic Republic (with occasional democratic and demagogic streaks, just like Rome's), without foreign expansionism (or, not much), without anywhere like the 20 years of continuous wars that in fact took place, and eventually settling into a "stable" state (or, as stable as anything ever is in European history;-). And I do quite fancy well-written, well-researched "alternate history" fiction, such as Turtledove's, so I'd love to read a novel about what happens in 1812 to the fledgling USA if the British are free to entirely concentrate on that war, not distracted by Napoleon's last hurrahs in their backyard, because Napoleon was never around...;-) [To revisit "what if Hitler had never been born", btw, if you also like alternate history fiction, Stephen Fry's "Making History" can be recommended;-)]

After Napoleon, France was brought back to the closest status to pre-Revolutionary that the Powers could achieve -- and ("inevitably" one might say;-) 15 years later the Ancien Regime crumbled again; however, that time it gave birth somewhat peacefully to a bourgeois-dominated constitutional monarchy (with no aggressive foreign adventures, except towards hopefully-lucrative colonies). Just like the fact that following Keynes' 1919 advice in 1947 did produce lasting peace offers some support to Keynes' original contention, so the fact that no other "strong man" emerged to grab the reins in 1830 offers some support to the theory that there was nothing "inevitable" about a military strong man taking power in 1799 -- that, had a military and political genius not been around and greedy for power in '99, France might well have evolved along different and more peaceful lines, as it later did in '30. Of course, one can argue endlessly about counterfactuals... but one should have better support before trying to paint a disagreement with oneself as "absurd"!-)

BTW, in terms of human death and suffering (although definitely not in terms of "sheer evil" in modern ethical conception), the 16 years of Napoleon's power were (in proportion to the population at the time) quite comparable to, or higher than, Hitler's 12; so, switching from Hitler to Napoleon as your example would not necessarily weaken it in this sense.

I thought I already knew all this, but this post has made me realize that I've still, deep down, been thinking as you describe - that the universe can't be *that* unfair, and that the future isn't *really* at risk. I guess the world seems like a bit scarier of a place now, but I'm sure I'll go back to being distracted by day-to-day life in short order ;).

As for cryonics, I'm a little interested, but right now I have too many doubts about it and not enough spare money to go out and sign up immediately.

Ian C., that is half the philosophy of Epicurus in a nutshell: there are no gods, there is no afterlife, so the worst case scenario is not subject to the whims of petulant deities.

If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0. If there is any probability of your annihilation, no matter how small, you will not survive for an infinite amount of time. That is what happens in an infinite amount of time: everything possible. If all your backup plans can fail at once, even at P=1/(3^^^3), that number will come up eventually with infinite trials.

What's the point of despair? There seems to be a given assumption in the original post that:

1) there is no protection, the universe is allowed to be horrible --> 2) let's despair

But number 2 doesn't change number 1 one bit. This is not a clever argument to disprove number 1; I'm just saying despair is pointless if it changes nothing. It's like how babies cry automatically when something isn't the way they like: evolution programmed them to, because crying reliably attracted the attention of adults. Despairing about the universe will not attract the attention of adults to make it better. We are the only adults; that's it. I would rather reason along the lines of:

1) there is no protection, the universe is allowed to be horrible --> 2) what can I do to make it better?

Agreed with everything else except the part where this is really sad news that's supposed to make us unhappy.

In a Universe beyond the reach of God, who is to say that the first civilization technologically advanced enough to revive you will not be a "death gives meaning to life" theocracy which has a policy of reviving those who chose to attempt to escape death in order to submit them and their defrosted family members to 1000 years of unimaginable torture followed by execution?

Sure, there are many reasons to believe such a development is improbable. But you are still rolling those dice in a Universe beyond God's reach, are you not?

Putting aside the fact that theocracy doesn't really lend itself to technological advancement, the utility * likelihood of living longer outweighs the (dis)utility * likelihood of being tortured for 1000 years.

Don't get bored with the small shit. Cancers, heart disease, stroke, safety engineering, suicidal depression, neurodegenerations, improved cryonic tech. In the next few decades I'm probably going to see most of you die from that shit (and that's if I'm lucky enough to persist as an observer), when you could've done a lot more to prevent it, if you didn't get bored so easily of dealing with the basics.

For the benefit of those who haven't been following along with Overcoming Bias, I should note that I actually intend to fix the universe (or at least throw some padding atop my local region of it, as disclaimed above) - I'm not just complaining here.

"If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0. If there is any probability of your annihilation, no matter how small, you will not survive for an infinite amount of time. That is what happens in an infinite amount of time: everything possible. If all your backup plans can fail at once, even at P=1/(3^^^3), that number will come up eventually with infinite trials."
Zubon, this seems to assume that the probabilities in different periods are independent. It could be that there is some process that allows the chance of obliteration to decline exponentially with time (it's easy enough to imagine a cellular-automaton world where this could be), such that the total chance of destruction converges to a finite number. Of course, our universe's apparent physics seem to preclude such an outcome (the lightspeed limit, thermodynamics, etc), but I wouldn't assign a probability of zero to our existing as simulations in a universe with laws permitting such stability, or to (improbable) physics discoveries permitting such feats in our apparent universe.
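The exchange above can be made concrete with a toy calculation (the hazard rates here are made up purely for illustration, not claims about actual risks): with a constant per-period hazard, cumulative survival probability decays toward zero as trials accumulate, but if the per-period hazard declines fast enough that the hazards sum to a finite number, the survival probability converges to a positive limit instead.

```python
def survival_probability(hazards):
    """Probability of surviving every period, given per-period hazard rates."""
    p = 1.0
    for h in hazards:
        p *= 1.0 - h
    return p

periods = 10_000

# Constant hazard: survival decays geometrically toward zero as periods mount.
constant = survival_probability(1e-3 for _ in range(periods))

# Exponentially declining hazard: the hazards sum to less than 0.002 in total,
# so the product converges to a positive limit instead of vanishing.
declining = survival_probability(1e-3 * 0.5 ** k for k in range(periods))

print(constant, declining)  # constant is tiny; declining stays near 1
```

This is just Zubon's point and the reply in miniature: the infinite-trials argument goes through only if the per-trial hazard is bounded away from zero.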

Without Hitler it's likely Ludendorff would have been in charge and things would have been even worse. So perhaps we should be grateful for Hitler!

I gather there are some Orthodox Jews involved in Holocaust denial who went to Iran for that conference, but this post gets me thinking that there should be more of them, if they really believe in a benevolent and omnipotent God who won't allow sufficiently horrible things to happen.

How widespread is white nationalism in America? I would think it's one of the least popular things around, although perhaps I'm taking the Onion too seriously.

There is no evil. There is neutrality. The universe isn't man's fault; it isn't anyone's fault.

I'm not at all saddened by these facts. My emotional state is unaltered. It's because I take them neutrally.

I've experienced severe pain enough to know that
A) Torture works. Really. It does. If you don't believe it, try it. It'll be a short lesson.
B) Pain is not such a big deal. It's just an avoid-this-at-all-costs signal. Sure, I'm in agony; sure, I'd hate to remain in a situation where that signal doesn't go away; but it still is just a signal.

Perhaps as you look at some spot in the sky, they've already - neutrality allowing - tamed neutrality there; made it Friendly.

More parents might let their toddler get hit by a car if they could fix the toddler afterwards.

There are an awful lot of types of Buddhism. Some allow mind annihilation, and even claim that it should be our goal. Some strains of Epicureanism hold that mind annihilation is a) neutral, and b) better than what all the religions believed in. Some ancient religions seemed to believe in the same awful universal fate as quantum immortality believers do, e.g. eternal degeneration, progressively advanced Alzheimer's forever, more or less. Adam Smith suggests that this is what most people secretly believe in.

It would take quite a black swan tech to undo all the good from tech up to this point. UFAI probably wouldn't pass the test, since without tech humans would go extinct with a smaller total population of lives lived anyway. Hell worlds seem unlikely. 1984 or Brave New World (roughly) are a bit more likely, but are they worse than extinction? I don't generally feel that way, though I'm not sure.

Good post, but how to deal with this information so that it is not so burdensome? Conway himself, upon creating the Game of Life, didn't believe that a cellular-automaton pattern could 'live' indefinitely, but was proven wrong shortly after his game's creation by the discovery of the glider gun. We cannot assume that the cards were dealt perfectly and that the universe or our existence is infinite, but we can hope that the pattern we have put down will continue to stand the test of time. Belief that we are impervious to extinction, or that the universe will not ultimately implode and squish everything within it into a single particle, can only do us harm as we try to create new things and discover ways to transcend this existence. Hope that we will make it, and that there is ultimately a way off what may be a sinking ship, is what keeps us going.

I don't understand why the end of the universe bugs people so much. I'll just be happy to make it to next decade, thanks very much. When my IQ rises a few thousand points, I'll consider things on a longer timescale.

Alas, most people on the planet either:
1. haven't heard of cryonics / useful life extension,
2. don't take it seriously,
3. have serious misunderstandings about it, or
4. reject it for social reasons.

"What can a twelfth-century peasant do to save themselves from annihilation? Nothing."

She did something. She passed on a religious meme whose descendants have inspired me, in turn, to pass on the idea that we should engineer a world that can somehow reach backward to save her from annihilation. That may not prove possible, but some possibilities depend on us for their realization.

A Jewish prophet once wrote something like this: "Behold, I will send you Elijah the prophet before the coming of the great and dreadful day of the Lord: And he shall turn the heart of the fathers to the children, and the heart of the children to their fathers, lest I come and smite the earth with a curse." The Elijah meme has often turned my heart toward my ancestors, and I wonder whether we can eventually do something for them.

Unless we are already an improbable civilization, our probable future will be the civilization we would like to become only to the extent that such civilizations are already probable. The problem of evil is for the absolutely omnipotent God -- not for the progressing God.

And I do quite fancy well-written, well-researched "alternate history" fiction, such as Turtledove's, so I'd love to read a novel about what happens in 1812 to the fledgling USA if the British are free to entirely concentrate on that war, not distracted by Napoleon's last hurrahs in their backyard, because Napoleon was never around...

Nitpick:

The "War of 1812" was basically an offshoot of the larger Napoleonic Wars; Britain and France were both interfering with the shipping of "neutral" nations, such as the United States, in order to hurt their enemy. After France dropped its restrictions (on paper, at least), the United States became a lot less neutral, and James Madison and Congress eventually declared war on Britain. (Several years earlier, Jefferson, in response to the predations of the two warring European powers, got Congress to pass an Embargo Act that was to end foreign trade for the duration of the war, so as to keep the U.S. from getting involved. It didn't work out so well.)

In other words, without Napoleon, there probably wouldn't have been a War of 1812 at all.

What I don't understand is why we live on a planet where we don't have all people with significant loose change

A) signing up for cryonics
B) super-saturating the coffers of life-extensionists, extinction-risk-reducers, and AGI developers.

Instead we currently live on a planet, where their combined (probably) trillions of currency units are doing nothing but bloating as 1s and 0s on hard drives.

Can someone explain why?

Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?

It is a strange thing. I often feel the impulse to not believe that something would really be possible - usually when talking about existential risks - and I have to make a conscious effort to suppress that feeling, to remind myself that anything the laws of physics allow is possible. (And even then, I often don't succeed - or don't have the courage to entirely allow myself to succeed.)

A) Torture works. Really. It does. If you don't believe it, try it. It'll be a short lesson.

That depends on what you're trying to use it for. Torture is very good at getting people to do whatever they believe will stop the torture. For example, it's a good way to get people to confess to whatever you want them to confess to. Torture is a rather poor way to get people to tell you the truth when they have motive to lie and verification is difficult; they might as well just keep saying things at random until they say something that ends the torture.

Consequentialist: Is it a fair universe where the wealthy live forever and the poor die in the relative blink of an eye? It seems hard for our current society to look past that when setting public policy. This doesn't necessarily explain why there isn't more private money put to the purpose, but I think many of the intelligent and wealthy at the present time would see eternal-life quests as a millennia-old cliché of laughable selfishness, not in tune with leaving a respectable legacy.

Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?

That's probably not the explanation, since there are many millions of atheists who heard about cryonics and/or extinction risks.
I figure the actual explanation is a combination of conformity, the bystander effect, the tendency to focus on short term problems, and the Silliness Factor.

I can only speak for myself on this, but wouldn't sign up for cryonics even if it were free, because I don't want to be revived in the future after I'm dead. (Given the choice, I would rather not have existed at all. However, although mine was not a life worth creating, my continued existence will do far less harm than my abrupt death.)

This is roughly equivalent to stating you don't want to be revived after you fall asleep tonight. If revival from cryosuspension is possible, there is no difference. You want to wake up tomorrow (if you didn't really, there are many easy ways for you to remedy that), therefore you want to wake up from cryonic suspension. You would rather fall asleep tonight than die just before it, therefore you would/should, rationally speaking, take free cryonics.

There's a corollary mystery category which most of you fall into: why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds? Where's the fight against a global ban on reproductive human cloning? Where's the fight to increase legal organ markets? Where's the defense of China's (and other illiberal nations') rights to use prisoners (including political prisoners) for medical experimentation? Until you square away your own repugnancy-bias-based inaction, criticisms of that of the rest of the population on topics like cryonics read as incoherent to me as debating angels dancing on the heads of pins. My blog shouldn't be so anomalous in seeking to overcome repugnancy bias to maximize persistence odds. Where are the other anonymous advocates? Our reality is the Titanic -- who wants to go down with the ship for the sake of a genetic aesthetic? Your repugnancy-bias memes are likely to persist in the form of future generations only if you choose to value them over your personal persistence odds.

To show that hellish scenarios are worth ignoring, you have to show not only that they're improbable, but also that they're improbable enough to overcome the factor (utility of oblivionish scenario - utility of hellish scenario)/(utility of heavenish scenario - utility of oblivionish scenario), which as far as I can tell could be anywhere between tiny and huge.
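The factor in that comment is just an expected-utility threshold, and a minimal sketch makes it concrete (all function names, probabilities, and utility numbers below are hypothetical, chosen only to illustrate the inequality):

```python
def hell_scenario_ignorable(p_heaven, p_hell, u_heaven, u_oblivion, u_hell):
    """The hellish scenario is ignorable iff the weighted loss it threatens
    is smaller than the weighted gain the heavenish scenario offers, i.e.
    p_hell / p_heaven < (u_heaven - u_oblivion) / (u_oblivion - u_hell)."""
    gain = p_heaven * (u_heaven - u_oblivion)
    loss = p_hell * (u_oblivion - u_hell)
    return gain > loss

# Made-up numbers: hell is 10,000x worse than heaven is good, so its
# probability must be under 1/10,000th of heaven's to be ignorable.
print(hell_scenario_ignorable(0.05, 1e-7, 100.0, 0.0, -1_000_000.0))
```

The point of the comment survives the arithmetic: since the utility ratio could be anywhere between tiny and huge, "improbable" alone does not settle whether the hellish term dominates.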

As for global totalitarian dictatorships, I doubt they'd last for more than millions of years without something happening to them.

Steve Sailer is also widely read among conservative (and some other) elites, and there's a whole network of anonymous bloggers associated with him.

"Where's the fight against a global ban on reproductive human cloning?"
Such bans have been fought, primarily through alliance with those interested in preserving therapeutic cloning.

"Where's the fight to increase legal organ markets?"
Smart people can go to Iran, where legal markets already exist.

"Where's the defense of China's (and other illiberal nations') rights to use prisoners (including political prisoners) for medical experimentation?"
There are some defenses of such practices, but it's not obviously a high-return area to invest your energies in, given the alternatives. A more plausible route would be suggesting handy experiments to Chinese partners.

I can only speak for myself on this, but wouldn't sign up for cryonics even if it were free, because I don't want to be revived in the future after I'm dead.

I would probably sign up for cryonics if it were free, with a "do not revive" sticker and detailed data about me, so that future brain researchers would have another data point when trying to figure out how it all works.

I don't wish that I hadn't been born, but I figure I have a part to play, a purpose that no one else seems to be doing. Once that has been done, then unless I see something that needs doing, is important, and is sufficiently left-field that no one else is doing it, I'll just potter along doing random things until I die.

"I figure I have a part to play, a purpose that no one else seems to be doing"

How do you figure that? Aren't you a materialist? Or do you just mean that you might find a niche to fill that would be satisfying and perhaps meaningful to someone? I'm having trouble finding a non-teleological interpretation of your comment.

"If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods."

While I of course I agree with the general sentiment of the post, I don't think this argument works. There is a relevant quote by John McCarthy:

"In the 1950s I thought that the smallest possible (symbol-state product) universal Turing machine would tell something about the nature of computation. Unfortunately, it didn't. Instead, as simpler universal machines were discovered, the proofs that they were universal became more elaborate, and so did the encodings of information." (http://cs.nyu.edu/pipermail/fom/2007-October/012141.html)

One might add that the existence of minimalistic universal machines also tells us very little about the nature of metaphysics and morality. The problem is that the encodings of information get very elaborate: a sentient being implemented in Life would presumably take terabytes of initial state, and that state would be encoding some complicated rules for processing information, making inferences, etc. It is those rules that you need to look at to determine whether the universe is perfectly unfair or not.

Who knows, perhaps there is a deep fundamental fact that it is not possible to implement sentient beings in a universe where the evaluation rules don't enforce fairness. Or, slightly more plausible, it could be impossible to implement sentient tyrants who don't feel a "shade of gloom" when considering what they've done.

Neither scenario sounds very plausible, of course. But in order to tell whether such fairness constraints exist or not, the 3 rules of Life itself are completely irrelevant. This can be easily seen, since the same higher-level rules could be implemented on top of any other universal machine equally easily. So invoking them does not give us any more information.
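For concreteness, the three rules quoted from the post fit in a handful of lines; this is a standard minimal Life step (using a sparse set-of-live-cells representation chosen for brevity), and, as the comment argues, nothing about fairness or morality is visible at this level:

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Life: a cell is alive next step iff it has
    exactly 3 live neighbors, or it is already alive and has exactly 2."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker oscillates with period 2 -- simple, deterministic, and silent
# on any question of who deserves what.
blinker = {(0, -1), (0, 0), (0, 1)}
assert life_step(life_step(blinker)) == blinker
```

Everything interesting, on this view, lives in the terabytes of initial state, not in the update rule.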

Doug, Will: There is no fundamental difference between being revived after dying, waking up after going to sleep, or receiving a neurotransmitter in a synapse after it was released. There is nothing special about 10^9 seconds as opposed to 10^4 seconds or 10^-4 seconds. Unless, of course, these times figure into your morality; but these are considerations far outside the scope of the ancestral environments humans evolved in. This is a case where an unnatural category meets unnatural circumstances, so figuring out a correct answer is going to be difficult, and relying on intuitively reinforced judgment would be reckless.

I do think we get a little: if such constraints exist, they are a property of the patterns themselves, and not a property of the low-level substrate on which they are implemented. If such a thing were true in this world, it would be a property of people and societies, not a metaphysical property. That rules out a lot of religion and magical thinking, and could be a useful heuristic.

Not that I am willing to sign up for cryonics but I don't see this as a problem.

Presumably some monkeys will be placed on ice at some point in the testing of defrosting, and you will not be defrosted until they are sure that the defrosting process does not cause brain damage. Also, presumably, there should be some way of determining whether brain damage has occurred before defrosting happens, and hopefully no one who has brain damage is defrosted until a way to fix the brain damage has been discovered.

I suppose that if the brain damage could be fixed, you might lose some important information, which does leave the question of whether you are still you. However, if you believe that you are still yourself with the addition of new information, such as is received each day just by living, then you should likewise believe that you will still be yourself if information is lost. Also, one of the assumptions of cryonics is that the human lifespan will have been greatly expanded, so if you have major amnesia from the freezing, you can look at it as trading your current life up to the point of freezing for one that is many multiples in length.

This is assuming that cryonics works as intended, of which I am not convinced.

Eliezer, I think there's a slight inconsistency in your message. On the one hand, there are the posts like this, which can basically be summed up as: "Get off your asses, slackers, and go fix the world." This is a message worth repeating many times and in many different ways.

On the other hand are the "Chosen One" posts. These posts talk about the big gaps in human capabilities - the idea being that some people just have an indefinable "sparkliness" that gives them the power to do incredible things. I read these posts with uneasiness: while agreeing with the general drift, I think I would interpret the basic observations (e.g. CEOs really are smarter than most other people) in a different way.

The inconsistency is that on the one hand you're telling people to get up and go do something, because the future is uncertain and could be very, very good or very, very bad; but on the other hand you're essentially saying that if a person is not a Chosen One, there's not much he can really contribute.

So, what I'd like to see is a discussion of what the rank-and-file members of Team Rational should be doing to help (and I hope that involves more than donating lots of money to SIAI).

"So, what I'd like to see is a discussion of what the rank-and-file members of Team Rational should be doing to help (and I hope that involves more than donating lots of money to SIAI)."
How 'rank-and-file' are we talking here? With what skillset, interests, and level of motivation?

It is extraordinarily difficult to figure out how to use volunteers. Almost any nonprofit trying to accomplish a skilled-labor task has many more people who want to volunteer their time than they can use. The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share.

I'm surprised by the commenters who cannot conceive of a future life that is more fun than the one they have now - who can't imagine a future they would want to stick around for. Maybe I should bump the priority of the Fun Theory sequence.

"The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share."

There's always Amazon's Mechanical Turk (https://www.mturk.com/mturk/welcome). It's an inefficient use of people's time, but it's better than just telling people to go away. If people are reluctant to donate money, you can ask for donations of books- books are actually a fairly liquid asset (http://www.cash4books.net/).

@Hidden: just a "typical" OB reader, for example. I imagine there are lots of readers who read posts like this and say to themselves "Yeah! There's no God! If we want to be saved, we have to save ourselves! But... how...?" Then they wake up the next day and go to their boring corporate programming jobs.

@pdf23ds: This feels like tunnel vision. Surely the problem SIAI is working on isn't the ONLY problem worth solving.

@Eliezer: I recognize that it's hard to use volunteers. But members of Team Rational are not herd thinkers. They probably don't need to be led, per se - just kind of nudged in the right direction. For example, if you said, "I really think project X is important to the future of humanity, but it's outside the scope of SIAI and I don't have time to dedicate to it", probably some people would self-motivate to go and pursue project X.

The obvious example of a horror so great that God cannot tolerate it, is death - true death, mind-annihilation. I don't think that even Buddhism allows that.
This is sort of a surprising thing to hear from someone with a Jewish religious background. Jews spend very little attention and energy on the afterlife. (And your picture of Buddhism is simplistic at best, but other people have already dealt with that). I've heard the interesting theory that this stems from a reaction against their Egyptian captors, who were of course obsessed with death and the afterlife.

Religion aside, I truly have trouble understanding why people here think death is so terrible, and why it's so bloody important to deep-freeze your brain in the hopes it might be revved up again some time in the future. For one thing, nothing lasts forever, so death is inevitable no matter how much you postpone it. For another, since we are all hard-core materialists here, let me remind you that the flow of time is an illusion, spacetime is eternal, and the fact that your own personal self occupies a chunk of spacetime that is not infinite in any direction is just a fact of reality. It makes about as much sense to be upset that your mind doesn't exist after you die as it does to be upset that it didn't exist before you were born. Lastly, what makes you so damn important that you need to live forever? Get over yourself. After you die, there will be others taking over your work, assuming it was worth doing. Leave some biological and intellectual offspring and shuffle off this mortal coil and give a new generation a chance. That's how progress gets made -- "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it". (Max Planck, quoted by Thomas Kuhn)

"but on the other hand you're essentially saying that if a person is not a Chosen One, there's not much he can really contribute."

Do you think there aren't at least a few Neos whom Eliezer, and transhumanism in general, haven't reached and influenced? I'm sure there are many, though I put the upper limit on the number of people capable of doing anything worthwhile below 1M (whether they're doing anything is another matter). Perhaps the figure is much lower. But the "luminaries", boy, they are rare.

Millions of people are capable of hoovering money well in excess of their personal need. Projects aiming for post-humanity only need to target those people to secure unlimited funding.

"what makes you so damn important that you need to live forever? Get over yourself. After you die, there will be others taking over your work, assuming it was worth doing. Leave some biological and intellectual offspring and shuffle off this mortal coil and give a new generation a chance"

I vehemently disagree. What makes me so damn important, huh? What makes you so damn unimportant that you're not even giving it a try? The answer to both of these: you, yourself; you make yourself damn important or you don't. Importance and significance are self-made. No one can give them to you. You must earn them.

There are damn important people. Unfortunately most of them were. Think of the joy if you could revive the best minds who've ever walked the earth. If you aren't one of them, try to become one.

Mtraven: "I truly have trouble understanding why people here think death is so terrible [...] [S]ince we are all hard-core materialists here, let me remind you that the flow of time is an illusion, spacetime is eternal [...]"

I actually think this one goes the other way. You choose to live right now, rather than killing yourself. Why not consistently affirm that choice across your entire stretch of spacetime?

Even if you're only capable of becoming an average, main sequence star, and not a quasistellar object outshining billions of others, what you must do is to become that star and not remain unlit. Oftentimes those who appear to shine brightly do so only because there's relative darkness around.

What if Eliezers weren't so damn rare; what if there were 100,000 such "luminaries"? Which Eliezer's blog would you read?

"Important to whom?"
Important to the development of the universe. It's an open-ended project where we, its sentient part, decide what the rewards are, we decide what's important. I've come to the conclusion that optimizing, understanding, and controlling that which is (existence) asymptotically perfectly, is the most obvious goal. Until we have that figured out, we need to stick around.

Quite the contrary. I'd prefer it if Eliezer were a dime a dozen. It's the relative darkness around him that keeps him in the spotlight. I suspect there's nothing special - in the von Neumann sense - about this chap, just that I haven't found anyone like him so far. Care to point out some others like him?

Eliezer, if that last comment was in response to mine it is a disappointingly obtuse misinterpretation which doesn't engage with any of the points I made. "Life" is worth something; that doesn't mean that striving for the infinite extension of individual lives should be a priority.

I'm surprised by the commenters who cannot conceive of a future life that is more fun than the one they have now - who can't imagine a future they would want to stick around for. Maybe I should bump the priority of the Fun Theory sequence.

I have a different type of fun helping people perform a somewhat meaningful task than I do when I am just hanging out, solving puzzles, doing adventure sports, or going on holiday. I have a little nagging voice asking, "What was the point of that?", which needs to be placated every so often, else the other types of fun lose their shine.

If I'm revived into a culture with technology sufficient to revive me, then it is likely that it will not need any help I could provide. My choices would be either to stop being myself, by radically altering my brain to be of some use to people, or to immerse myself in make-work fantasy tasks. If I pick the first, it makes little difference to my current self whether it is a radically altered me or someone else being useful. The second choice is also not appealing; it would require lying to myself well enough to fool my pointfulness meter.

You gain experience and new neuron connections all the time; do these things not still leave you yourself? If you are not yourself after gaining experience, then the "you" that finishes this sentence is not the "you" that started it - may that "you" rest in peace. Further, I wear glasses, which augment my abilities greatly; do the glasses make me a different "me" than I would be if glasses had not been invented? If not, how is that different from adding new neurons to the brain?

Further, is learning new things not a meaningful experience for you? If you are required to learn lots of new things, shouldn't that make the experience more enticing, especially if one knew one would have the time both to learn whatever one wished and to apply what one had learned?

Who knows, perhaps there is a deep fundamental fact that it is not possible to implement sentient beings in a universe where the evaluation rules don't enforce fairness. Or, slightly more plausible, it could be impossible to implement sentient tyrants who don't feel a "shade of gloom" when considering what they've done. Neither scenario sounds very plausible, of course.

The rule of thumb is: if you can imagine it, you can simulate it (because your brain is a simulator). The simulation may not be easy, but at least it's possible.

You name specific excuses for why life in the future will be bad for you. It sounds like you see the future as a big abandoned factory, where you are a shadow and the strange mechanisms do their spooky dance. Think instead of what changes could make the future right specifically for you, with a tremendous amount of effort applied to that goal. You are just a human, so the attention your comfort can receive starts far above the order of the whole of humanity thinking about every tiny gesture that could make you a little more comfortable, for millions of years, and thinking about it from your own point of view. There is huge potential for improvement in the joy of life: just as human intelligence is nowhere near the top, which is ten orders of magnitude away, the human condition is nowhere near optimal. You can't trust your estimate of how limited the future will be, of how impossible it will be for the future to find a creative solution to your problem. Give it a try.

"The claim isn't that Germany would have been perfectly fine, and would never have started a war or done anything else extreme. And the claim is not that Hitler trashed a country that was ticking along happily.

The claim is that the history of the twentieth century would have gone substantially differently. World War II might not have happened. The tremendous role that Hitler's idiosyncrasies played in directing events, doesn't seem to leave much rational room for determinism here."

I disagree. Hitler did not depart very far from the general beliefs of the time. The brand of socialism and nationalism that became what Hitler preached had been growing in prominence for decades, in academia and in the middle-class consciousness. The alliance between the socialists and the conservatives against the liberals probably would have happened whether or not Hitler was at the top.

So you don't think you could catch up? If you had been frozen somewhere between 10,000 and 100 years ago and revived now, don't you think you could start learning what the heck it is people are doing and understanding nowadays? Besides, a lot of the pre-freeze life experience would be fully applicable to the present. Everyone starts learning from the point of birth. You'd have a head start compared to those who start from nothing.
There are things we can meaningfully contribute to even in a Sysop universe filled with Minds. We, after all, are minds too, which have the inherent quality of creativity - creating new, ever more elegant and intricate patterns - at whatever our level is, and of self-improvement: optimization.

This is a big do-it-yourself project. Don't complain about there not being enough opportunities to do meaningful things. If you don't find anything meaningful to do, that's your failure, not the failure of the universe. Searching for meaningful problems to solve is part of the project.

I find Eliezer's (and many of the others here) total and complete obsession with the "God" concept endlessly fascinating. I bet you think about "God" more often than the large majority of the nominally religious. This "God" fellow has seriously pwn3d your wetware. . .

Giant cheesecake fallacy. Even if the future could do everything you wanted to do, it doesn't mean it would do so, especially if that would be bad for you. If the future decides to let you work on a problem, even though it could solve it without you, you can't appeal to the uselessness of your action: if the future refuses to perform it, only you can make a difference. You can grow to vastly expand the number of things you are capable of doing; this source never dwindles. If someone or something else solved a problem, it doesn't necessarily spoil the fun for everyone else for all eternity. Or it was worth spoiling the fun -- lifting poverty and disease -- robbing you of the possibility of working on those things yourself. Take joy in your personal discoveries. Inequality can only be bad because you are not all you could be, not because there are things greater than you. Seek personal growth, not universal misery. Besides, doing something "inherently worthwhile" is only one facet of life, so even if there were no solution to that, there are other wonders worth living for.

Imagine a priest in the temple of Zeus, back in Ancient Greece. Really ancient. The time of Homer, not Archimedes. He makes how best to serve the gods the guiding principle of his life. Now, imagine that he is resurrected in the world of today. What do you think would happen to him? He doesn't speak any modern language. He doesn't know how to use a toilet. He'd freak out at the sight of a television. Nobody worships the gods any more. Our world would seem not only strange, but blasphemous and immoral, an abomination that ought to be destroyed. His "head start" has given him far more anti-knowledge than actual knowledge.

Of course, he'd know things about the ancient world that we don't know today, but even twenty years after he arrived, would you hire him, or a twenty-year-old born and raised in the modern world? From almost every perspective I can think of, it would be better to invest resources in raising a newborn than to recreate and rehabilitate a random individual from our barbaric past.

----

Me, to The Future: Will you allow me to turn myself into the equivalent of a Larry Niven-style wirehead?
The Future: No.
Me: Will you allow me to end my existence, right now?
The Future: No.
Me: Then screw you, Future.

Doug: From almost every perspective I could think of, it would be better to invest resources in raising a newborn than to recreate and rehabilitate a random individual from our barbaric past.

No, for him it won't be better. The altruistic aspect of humane morality will help, even if it's more energy-efficient to incinerate you. For that matter, why raise a newborn child instead of making a paperclip?

In the interest of helping folks here to "overcome bias", I should add just how creepy it is to outside observers to see the unswervingly devoted members of "Team Rational" post four or five comments to each Eliezer post that consist of little more than homilies to his pronouncements, scattered with hyperlinks to his previous scriptural utterances. Some of the more level-headed here like HA have commented on this already. Frankly it reeks of cultism and dogma, the aromas of Ayn Rand, Scientology and Est are beginning to waft from this blog. I think some of you want to live forever so you can grovel and worship Eli for all eternity. . .

"I think some of you want to live forever so you can grovel and worship Eli for all eternity"

The sentence is funnier when one knows that Eli is God in some languages. (not Eliezer but Eli)

I would change it to say that reason or rationalism (or Bayesianism) is the object of worship and Eliezer is the Prophet. It certainly makes pointing out errors (in reasoning) in some of the religion posts a less enticing proposition.

However, it does seem that not everyone is like that. Also, if actual reason is trusted, and not dogmatic assertions that such-and-such is reasonable, then error will eventually give way to truth. I certainly believe Eliezer to be mistaken about some things, and missing influential observations about others, but for the most part what he advocates - not shutting down thinking just because you disagree, and the like - is correct.

If one's faith in whatever one believes is so fragile that it cannot be questioned, then that is precisely when one's faith needs to be questioned. One should discover what one's beliefs actually are and what is essential to them, and then see if the questions are bothersome. If so, one should face the questions head-on and figure out why, and what they do to one's beliefs.

I'd like a future where people were on a level with me, so I could be of some meaningful use.

However, a future without massive disparities of power and knowledge between myself and its inhabitants would not be able to revive me from cryosleep.

I already guessed that might be the wish of many people. That's one reason why I would like to acquire the knowledge to deliberately create a single not-person, a Very Powerful Optimization Process. What does it take to not be a person? That is one of those moral questions that runs into empirical confusions. But if I could create a VPOP that did not have subjective experience (or the confusion we name subjective experience), and did not have any pleasure or pain, or valuation of itself, then I think it might be possible to have around a superintelligence that did not, just by its presence, supersede us as an adult; but was nonetheless capable of guarding the maturation of humans into adults, and, a rather lesser problem, capable of reviving cryonics patients.

If there is anything in there that seems like it should be impossible to understand, then remember that mysteries exist in the map, not in the territory.

The only thing more difficult than creating a Friendly AI, involving even deeper moral issues, is creating a child after the right and proper fashion of creating a child. If I do not wish to be a father, I think that it is acceptable for me to avoid it; and certainly the skill to deliberately not create a child would be less than the skill to deliberately create a healthy child.

So yes I do confess: I wish to create something that has no value of itself, so that it can safeguard the future and our choices, without, in its own presence, automatically setting the form of humankind's adulthood according to the design decisions of a handful of programmers, and superseding our own worth. An emergency measure should do as little as possible, and everything necessary; if I can avoid creating a child I should do so, if only because it is not necessary.

The various silly people who think I want to keep the flesh around forever, or constrain all adults to the formal outline of an FAI, are only, of course, making things up; their imagination is not wide enough to understand the concept of some possible AIs being people, and some possible AIs being something else. A mind is a mind, isn't it? A little black box just like yours, but bigger. But there are other possibilities than that, though I can only see them unclearly, at this point.

Every person on the planet who is trying to act somewhat like an adult will find they are no longer needed to do what is necessary. It doesn't matter that they are obsoleted by a process rather than a person; they are still obsolete. This seems like a step back for the maturation of humanity as a whole. It does not encourage taking responsibility for our actions. Life, and life decisions, come to seem less important and less meaningful.

You see it as the only way for humanity to survive; if I bought that, I might support your vision, even though I would not necessarily want to live through it.

On a side note, do any of your friends jokingly call you Hanuman? The allusion is unfair, I'll grant you, he is far more insane and cruel than you, but the vision and motivation is eerily similar, on the surface details at least.

"Chad: if you seriously think that Turing-completeness does not imply the possibility of sentience, then you're definitely in the wrong place indeed."

gwern: The implication is certainly there, and it's one I am sympathetic with, but I'd say it's far from proven. The leap in logic there is one that will keep the members of the choir nodding along but is not going to win over any converts. A weak argument is a weak argument whether or not you agree with its conclusion -- it's better for the cause if the arguments are held to higher standards.

"If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0."

You're only correct if the per-period probability is constant with respect to time. Consider, however, that some uncertain events keep a non-zero probability even as *infinite* time passes. For example, a random walk in three dimensions (or more) is not guaranteed to ever return to its origin, even over infinite time.
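The random-walk claim above can be checked numerically. Below is a minimal Monte Carlo sketch (parameter values are arbitrary illustration choices): by Pólya's theorem, a simple random walk on the 3D integer lattice returns to its origin with probability only about 0.34, so "never returning" retains substantial probability no matter how long the walk runs.

```python
import random

# Monte Carlo sketch: estimate the probability that a simple random walk on
# the 3D integer lattice ever returns to its starting point. Polya's theorem
# says this is ~0.34 for Z^3, so the estimate should stay well below 1 even
# as the step budget grows. Trial/step counts here are arbitrary.

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def returns_to_origin(steps, rng):
    """Walk `steps` random steps on Z^3; True if (0,0,0) is ever revisited."""
    x = y = z = 0
    for _ in range(steps):
        dx, dy, dz = rng.choice(MOVES)
        x, y, z = x + dx, y + dy, z + dz
        if x == y == z == 0:
            return True
    return False

def estimate_return_probability(trials=2000, steps=4000, seed=0):
    """Fraction of independent walks that revisit the origin within `steps`."""
    rng = random.Random(seed)
    hits = sum(returns_to_origin(steps, rng) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    p = estimate_return_probability()
    # Most walks escape for good; the estimate approaches ~0.34, not 1.
    print(f"estimated 3D return probability: {p:.3f}")
```

Increasing `steps` barely moves the estimate, since almost all returns happen early; that is exactly the commenter's point that the hazard is not constant over time.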

gwern: The implication is certainly there and it's one I am sympathetic with, but I'd say its far from proven.

1) Consciousness exists.
2) There are no known examples of 'infinite' mathematics in the universe.
3) It is therefore more reasonable to say that consciousness can be constructed with non-infinite mathematics than to postulate that it can't.

Disagree? Give us an example of a phenomenon that cannot be represented by a Turing Machine, and we'll talk.

I may hold a different belief, but this is certainly a working hypothesis, and one that should be explored to the fullest extent possible. That is, I am not inclined to believe that we are Turing machines, but I could be wrong, as I do not know it to be the case. Even if we are not Turing machines, exploring the hypothesis that we are is worth pursuing, as it will get us closer to understanding what we are.

Turing machines rely on a tape of infinite length, at least in conception. I imagine the theory has been looked at with tapes of finite length?

Eliezer: imagine that you, yourself, live in a what-if world of pure mathematics

Isn't this true? It seems the simplest solution to "why is there something rather than nothing". Is there any real evidence against our apparently timeless, branching physics being part of a purely mathematical structure? I wouldn't be shocked if the bottom was all Bayes-structure :)

RI, it shouldn't literally be Bayes-structure because Bayes-structure is about inference is about mind. I have certainly considered the possibility that what-if is all there is; but it's got some problems. Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically - to suppose that it is necessary just because you can't find yourself not believing it, hardly unravels the mystery. And much worse, it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one (why our memories are so highly ordered).

Consequentialist, I just had a horrifying vision of a rationalist cult where the cultists are constantly playing pranks on the Master out of a sincere sense of duty.

I worry a bit less about being a cult leader since I noticed that Paul Graham also has a small coterie following him around accusing him of being a cult leader, and he's not even trying to save the world. In any case, I've delivered such warnings as I have to offer on both the nature of cultish thinking, and the traps that await those who are terrified of "being in a cult"; that's around as much as I can do.

Eliezer,
I'm a little disappointed, frankly. I would have thought you'd be over both God and the Problem of Evil by now. Possibly it goes to show just how difficult it is for people raised as (or by) theists to kill God in themselves.

But possibly you'll get there as you go along. I'd tell you what that was like but I don't know myself yet.

In an argument that is basically attempting to disprove the existence of God, it seems a little disingenuous to me to include premises that effectively rule out God's existence. If you aren't willing to at least allow the possibility of dualism for the sake of argument, then why bother talking about God at all?

Also, I am not sure what your notion of "infinite" mathematics is about. Can you elaborate or point me to some relevant resources?

Well, there's also the perspective of the newborn and the person it grows up into; if we consider that perspective, it probably would prefer that it exists. I don't want The Future to contain "me"; I want it to contain someone better than "me". (Or at least happier, considering that I would prefer to not have existed at all.) And I really doubt that my frozen brain will be of much help to The Future in achieving that goal.

They don't have to look good, they just have to beat the probabilities of your mind surviving the alternatives. Current alternatives: cremation, interment, scattering over your favourite football pitch. Currently I'm wavering between cryonics and Old Trafford.

Eliezer, I'm ridiculously excited about the next fifty years, and only slightly less excited about the fun theory sequence. Hope it chimes with my own.

Excellent post, agree with every single line of it. It's not depressing for me -- I went through that depression earlier, after finally understanding evolution.

One nitpick -- I find the question at the end of the text redundant.

We already know that all this world around us is just an enormous pattern arising out of physically determined interactions between particles, with no 'essence of goodness' or other fundamental forces of this kind.

So the answer to your question seems obvious to me -- if we don't like patterns we see around us (including us ourselves, no exceptions), all we need to do is to use physics to arrange particles into a certain pattern (for example superintelligence) that, in future, produces patterns desirable for us. That's all.

On the existential question of our pointless existence in a pointless universe, my perspective tends to oscillate between two extremes:

1.) In the more pessimistic (and currently the only rationally defensible) case, I view my mind and existence as just a pattern of information processing on top of messy organic wetware, and that is all 'I' will ever be. Uploading is not immortality; it's just duplicating that specific mind pattern at that specific time instance. An epsilon unit of time after the 'upload' event, that mind pattern is no longer 'me' and will quickly diverge as it acquires new experiences. An alternative would be a destructive copy, where the original copy of me (i.e. the me that is typing this right now) is destroyed after, or at the instant of, upload. Or I might gradually replace each synapse of my brain one by one with a simulator wirelessly transmitting the dynamics to the upload computer, until all of 'me' is in there and the shell of my former self is just discarded. Either way, 'I' is destroyed eventually -- maybe uploading is a fancier form of preserving one's thoughts for posterity, as creating culture and forming relationships is pre-singularity, but it does not change the fact that the original meatspace brain is going to be destroyed eventually, no matter what.

The second case, what I might refer to as an optimistic appeal to ignorance, is to believe that though the universe appears pointless according to our current evidence, there may be some data point in the future that reveals something more that we are ignorant of at the moment. Though our current map reveals a neutral territory, the map might be incomplete. One speculative position taken directly from physics is the idea that I am a Boltzmann brain. If such an idea can be taken seriously (and it is), then surely there are other theoretically defensible positions where my consciousness persists in some timeless form one way or another. (Even Bostrom's simulation argument gives another avenue of possibility.)

I guess my two positions can be simplified into:
1.) What we see is all there is and that's pretty fucked up, even in the best case scenario of a positive singularity.

2.) We haven't seen the whole picture yet, so just sit back, relax, and as long as you have your towel handy, don't panic.

Regardless of whether you want to argue that being in a cult might be OK or nothing to worry about, the fact is this sort of thing doesn't look good to other people. You're not going to win many converts -- at least the kind you want -- by continuing to put on quasi-religious, messianic airs and welcoming the sort of fawning praise that seems to come up a lot in the comments here. There's obviously some sharp thinking going on in these parts, but you guys need to pay a bit more attention to your PR.

The request that we should 'fix the world' suggests that (a) we know that it is broken and (b) we know how to fix it; I am not so sure that this is the case. When one says 'X is wrong/unfair/undesirable', etc., one is more often than not actually making a statement about one's state of mind rather than the state of reality, i.e., one is saying 'I think or feel that X is wrong/unfair/undesirable'. Personally, I don't like to see images of suffering and death, but I'm not sure that my distaste for suffering and death is enough to confidently assert that they are wrong or that they should be avoided. For example, without the facility for pain that leads to suffering, we probably wouldn't make it past infancy, and without death the world would be even more overpopulated than it currently is. No matter how rigorous and free from preconditioned religious thinking our reasoning is, 'what we would like to see' is still a matter of personal taste to some extent. Feeling and reasoning interact in such an intricate and inseparable way that, while one may like to think one has reached conclusions about right/wrong/good/bad/just/fair etc. in a wholly dispassionate and rational way, it is likely that personal feelings have slipped in there unnoticed and unquestioned and added a troublesome bias.

You've said the bit about Paul Graham twice now in this thread; do you actually consider that good reasoning, or are you merely being flip? Paul Graham's followers may or may not be cultish to some degree, but that doesn't bear on the question of whether your own promotional strategies are sound ones. Let me put it this way: you will need solid, technically-minded, traditionally-trained scientists and engineers in your camp if you ever hope to do the things you want to do. The mainstream science community, as a matter of custom, doesn't look favorably upon uncredentialed lone wolves making grandiose pronouncements about "saving the world." This smacks scarily of religion and quackery. Like it or not, credibility is hugely important; be very careful about frittering it away.

I take your point... if your point is 'we gotta start somewhere'. Nonetheless, the use of 'obviously' is problematic and misleading. To whom is it obvious? To you? Or perhaps to you and your friends, or to you and other people on the internet who tend to think the same way as you and with whom you generally agree? Don't get me wrong, I have a very clear idea of what I think is crap (and I strongly suspect it'd be similar to yours), and I'm just as keen to impose my vision of the 'uncrap' on the world as the next person. However, I can't help but be troubled by the thought that the mass murder of Jews, Gypsies, the mentally retarded, and homosexuals was precipitated by the fact that Hitler et al. thought it was 'obvious' that they were crap and needed fixing.

Those who (want to) understand and are able, joyously create things that have always existed as potentials.

Those who don't (want to) understand and can't do anything real, make stuff up that never was possible and never will be.

The former last forever in eternal glory, spanning geological timescales and civilizations, for the patterns they create are compatible with the structure of the universe and sustained by it, while oblivion is reserved for the latter.

In an argument that is basically attempting to disprove the existence of God, it seems a little disingenuous to me to include premises that effectively rule out God's existence.

How exactly can you construct a disproof of X without using premises that rule out X? That's what disproving is.

Non-infinite mathematics: also known as finite mathematics, also known as discrete mathematics. Non-continuum. Not requiring the existence of the real numbers.

To the best of our knowledge, reality only seems to *require* the integers, although constructing models that way is a massive pain. If you can give us an example of a physical phenomenon that cannot be generated by a Turing Machine's output -- just one example -- then I will grant that we have no grounds for presuming human cognition can be so constructed. Also, you'll win several Nobel Prizes and go down in history as one of the greatest scientist-thinkers ever.

You can accept X as a premise and come to a contradiction of X with the other accepted premises. Coming to something that merely seems absurd may also be grounds for doubting X, but doesn't disprove X. It might also be possible to prove that both X and ~X are consistent with the other premises, which, if the desire is to disprove X, should be enough to safely ignore the possibility that X is correct without further information.

I think, for the Turing machine part of this, P vs NP would need to be resolved first, so he would also win a million dollars (or, if P = NP, then depending on his preferences he might not want to publish, and could instead use his code to solve every open question out there and get himself pretty much as much money as he wished).

Caledonian was a combination contrarian and curmudgeon back in the OvercomingBias days, and hasn't been around in years; so you probably won't get a direct reply.

However, if I understand this comment correctly as a follow-up to this one, you may want to look into the Church-Turing Thesis. The theory "physics is computable" is still somewhat controversial, but it has a great deal of support. If physics is computable, and humans are made out of physics, then by the Church-Turing Thesis, humans are Turing Machines.

I am actually familiar with the Church-Turing Thesis, as well as both Godel's incompleteness proof and the Halting problem. The theory that humans are Turing machines is one that needs to be investigated.

is creating a child after the right and proper fashion of creating a child. If I do not wish to be a father, I think that it is acceptable for me to avoid it; and certainly the skill to deliberately not create a child would be less than the skill to deliberately create a healthy child.

I would agree; I am not trying to create a child either. I'm trying to create brain stuff, and figure out how to hook it up to a human so that it becomes aligned to that human's brain. Admittedly it is giving more power to children, but I think the only feasible way to get adults is for humans to get power and the self-knowledge of what we are, and grow up in a painful fashion. We have been remarkably responsible with nukes, and I am willing to bet on humanity becoming more responsible as it realises how powerful it really is. You and SIAI are good data points for this supposition.

Going back 9 years or so, after my degree I spent some time thinking about seed AIs of a variety of fashions, though not calling them such. I could see no way for a pure seed AI to be stable unless it starts off perfect. To exist is to be imperfect, from everything I have learnt of the world. You always see the edited highlights of people's ruminations, so when you see us disagree you think we have not spent time on it.

That is also probably part of why I don't want to be in the future: I see it as needing us to be adults, and my poor simian brain is not suited to perpetual responsibility, nor willing to change to become so.

>How exactly can you construct a disproof of X without using
>premises that rule out X? That's what disproving is.

Sure, a mathematical proof proceeds from its premises, and therefore any results achieved are entailed in those premises. I am not sure we are really in the realm of pure mathematics here, but I probably should have been more precise in my statement. In a non-mathematical discussion, a slightly longer chain of reasoning is generally preferred -- starting with the premise that dualism is false is a little uncomfortably close, for my taste, to starting with the premise that God doesn't exist.

>If you can give us an example of a physical phenomenon that
>cannot be generated by a Turing Machine's output -- just
>one example -- then I will grant that we have no grounds
>for presuming human cognition can be so constructed.

For the record (not that I have any particular standing worthy of note): I am not a dualist and I believe 100% that human cognition is a physical phenomenon that could be captured by a sufficiently complex Turing Machine. Can I prove this to be the case? No I can't and I don't really care to try -- and it's likely "above my level". The only reason I piped up at all is because I think strawman arguments are unconvincing and do a disservice to everyone.

>Also, you'll win several Nobel Prizes and go down in history
>as one of the greatest scientist-thinkers ever.
>I'm not holding my breath.

Don't worry -- I have no such aspiration, so you can comfortably continue with your respiration.

The various silly people who think I want to keep the flesh around forever, or constrain all adults to the formal outline of an FAI, are only, of course, making things up; their imagination is not wide enough to understand the concept of some possible AIs being people, and some possible AIs being something else.

Presuming that I am one of these "silly people": Quite the opposite, and it is hard for me to imagine how you could fail to understand that from reading my comments. It is because I can imagine these things, and see that they have important implications for your ideas, and see that you have failed to address them, that I infer that you are not thinking about them.

And this post reveals more failings along those lines; imagining that death is something too awful for a God to allow is incompatible with viewing intelligent life in the universe as an extended system of computations, and again suggests you are overly-attached to linking agency and identity to discrete physical bodies. The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely; this, also, is evidence of not thinking deeply about identity in deep time. The way you speak about the danger facing you - not the danger facing life, which I agree with you about; but the personal danger of death - suggests that you want to personally live on beyond the Singularity; whereas more coherent interpretations of your ideas that I've heard from Mike Vassar imply annihilation or equivalent transformation of all of us by the day after it. It seems most likely to me either that you're intentionally concealing that the good outcomes of your program still involve the "deaths" of all humans, or that you just haven't thought about it very hard.

What I've read of your ideas for the future suffers greatly from your not having worked out (at least on paper) notions of identity and agency. You say you want to save people, but you haven't said what that means. I think that you're trying to apply verbs to a scenario that we don't have the nouns for yet.

It is extraordinarily difficult to figure out how to use volunteers. Almost any nonprofit trying to accomplish a skilled-labor task has many more people who want to volunteer their time than they can use. The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share.

The SIAI is Eliezer's thing. Eliezer is constitutionally disinclined to value the work of other people. If the volunteers really want to help, they should take what I read as Eliezer's own advice in this post, and start their own organization.

it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one

I didn't think it would solve all our questions; I just wondered if it was both the simplest solution and lacking good evidence to the contrary. Would there be a higher chance of being a Boltzmann brain in a universe identical to ours that happened to be part of a what-if-world? If not, how is all this low entropy around me evidence against it?

Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically

How would our "Block Universe" look different from the inside if it was a what-if-Block-Universe? It all adds up to...

Eliezer, doesn't "math mysteriously exists and we live in it" have one less mystery than "math mysteriously exists and the universe mysteriously exists and we live in it"? (If you don't think math exists it seems like you run into indispensability arguments.)

IIRC the argument for a low-entropy universe is anthropic, something like "most non-simple universes with observers in them look like undetectably different variants of a simple universe rather than universes with dragons in them".

Consider a comparison of all possible combinations of bit/axiom strings up to any equal finite (long) length. Many of these strings represent not only a world but also, using 'spare' string segments inside the total length, extraneous features such as other worlds, nothing in particular, or perhaps 'invisible' intra-world entities. It is reasonable to suppose that the simplest worlds (i.e. those with the shortest representing string segments) will occur most often across all strings, since they have more 'spare' irrelevant bit/axiom combinations up to that equal comparison length than more complex worlds do (and similarly for all long finite comparison lengths).

Thus out of all worlds inhabitable by SAS's, we are most likely to be in one of the simplest (other things being equal) - any physics-violating events like flying rabbits or dragons would require more bits/axioms to (minimally) specify their worlds, and so we should not expect to find ourselves in such a world, at any time in its history.
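The counting claim above can be sketched with a toy calculation (the specific lengths here are my own illustrative assumptions, not from the comment): among all bit strings of a fixed total length N, a world minimally specified by a k-bit prefix leaves N - k "spare" bits, and so is represented by 2**(N - k) of the 2**N strings.

```python
# Toy model of the counting argument: a shorter specification leaves more
# spare bits, so it occurs exponentially more often among equal-length strings.
N = 20  # total comparison length (illustrative)

def representations(spec_len: int, total_len: int = N) -> int:
    """Count total_len-bit strings whose first spec_len bits are a fixed specification."""
    return 2 ** (total_len - spec_len)

simple = representations(3)     # a world with a 3-bit specification
complex_ = representations(10)  # a world with a 10-bit specification
print(simple // complex_)       # the simpler world occurs 2**7 = 128x more often
```

The ratio depends only on the difference in specification lengths, which is why the argument goes through for any sufficiently long comparison length.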

Re: The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely [...]

Er... ;-) Many futurists seem to have it in for death. Bostrom, Kurzweil, Drexler, spring to mind. To me, the main problem seems to be uncopyable minds. If we could change our bodies like a suit of clothes, the associated problems would mostly go away. We will have copyable minds once they are digital.

"Death" as we know it is a concept that makes sense only because we have clearly defined loci of subjectivity.

If we imagine a world where

- you can share (or sell) your memories with other people, and borrow (or rent) their memories

- most of "your" memories are of things that happened to other people

- most of the time, when someone is remembering something from your past, it isn't you

- you have sold some of the things that "you" experienced to other people, so that legally they are now THEIR experiences and you may be required to pay a fee to access them, or to erase them from your mind

- you make, destroy, augment, or trim copies of yourself on a daily basis; or loan out subcomponents of yourself to other people while borrowing some of their components, according to the problem at hand, possibly by some democratic (or economic) arbitration among "your" copies

- and you have sold shares in yourself to other processes, giving them the right to have a say in these arbitrations about what to do with yourself

- "you" subcontract some of your processes - say, your computation of emotional responses - out to a company in India that specializes in such things

- which is advantageous from a lag perspective, because most of the bandwidth-intensive computation for your consciousness usually ends up being distributed to a server farm in Singapore anyway

- and some of these processes that you contract out are actually more computationally intensive than the parts of "you" that you own/control (you've pooled your resources with many other people to jointly purchase a really good emotional response system)

- and large parts of "you" are being rented from someone else; and you have a "job" which means that your employer, for a time, owns your thoughts - not indirectly, like today, but is actually given write permission into your brain and control of execution flow while you're on the clock

- but you don't have just one employer; you rent out parts of you from second to second, as determined by your eBay agent

- and some parts of you consider themselves conscious, and are renting out THEIR parts, possibly without notifying you

- or perhaps some process higher than you in the hierarchy is also conscious, and you mainly work for it, so that it considers you just a part of itself, and can make alterations to your mind without your approval (it's part of the standard employment agreement)

- and there are actually circular dependencies in the graph of who works for whom, so that you may be performing a computation that is, unknown to you, in the service of the company in India calculating your emotional responses

- and these circles are not simple circles; they branch and reconverge, so that the computation you are doing for the company in India will be used to help compute the emotions of trillions of "people" around the world

Phil: [. . .] In such a world, how would anybody know if "you" had died?

Perhaps anyone else knowing whether you're alive or dead wouldn't matter. You die when you lose sufficient component magnitudes and claim strengths on your components. If you formulate the sufficient conditions, you know what counts as death for your decisions, thus for you. If you formulate the sufficiency also as instance in a greater network, you and others know what counts as death for you. In either case, unless you're dying to be suicidally abstract, you're somebody and you know what it means for you to die.

What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, if they do not have centralized control, but have a variety of competing agents, and some randomness.

There are a variety of proposals floating about for ways to get the benefits of competition without actually having competition. The problem with competition is that it opens the doors to many moral problems. Eliezer may believe that correct Bayesian reasoners won't have these problems, because they will agree about everything. This ignores the fact that it is not computationally efficient, physically possible, or even semantically possible (the statement is incoherent without a definition of 'agent') for all agents to have all available information. It also ignores the fact that randomness, and using a multitude of random starts (in competition with each other), are very useful in exploring search spaces.

I don't think we can eliminate competition; and I don't think we should, because most of our positive emotions were selected for by evolution only because we were in competition. Removing competition would unground our emotional preferences (eg, loving our mates and children, enjoying accomplishment), perhaps making their continued presence in our minds evolutionarily unstable, or simply superfluous (and thus necessarily to be disposed of, because the moral imperative I have most confidence that a Singleton would follow is to use energy efficiently).

The concept of a singleton is misleading, because it makes people focus on the subjectivity (or consciousness; I use these terms as synonyms) of the top level in the hierarchy. Thus, just using the word Singleton causes people to gloss over the most important moral questions to ask about a large hierarchical system. For starters, where are the loci of consciousness in the system? Saying 'just at the top' is probably wrong.

Imagining a future that isn't ethically repugnant requires some preliminary answers to questions about consciousness, or whatever concept we use to determine which agents need to be included in our moral calculations. One line of thought is to impose information-theoretic requirements on consciousness, such as that a conscious entity has exactly one possible symbol grounding connecting its thoughts to the outside world. You can derive lower bounds for consciousness from this supposition. Another would be to posit that the degree of consciousness is proportional to the degree of freedom, and state this with an entropy measurement relating a process's inputs to its possible outputs.

Having constraints such as these would allow us to begin to identify the agents in a large, interconnected system; and to evaluate our proposals.

I'd be interested in whether Eliezer thinks CEV requires a singleton. It seems to me that it does. I am more in favor of an ecosystem or balance-of-power approach that uses competition, than a totalitarian machine that excludes it.

>To exist is to be imperfect
A thing that philosophical types like to do that I dislike is making claims about what it is to exist in general, claims that presumably would apply to all minds or 'subjects', when in fact those claims concern at most only the particular Homo sapiens condition, and are based only on the experiences of one particular Homo sapiens.

>However, I can't help but be troubled by the thought that the
>mass murder of jews, gypsies, the mentally retarded and
>homosexuals was precipitated by the fact that Hitler et al
>thought it was 'obvious' that they were crap and needed fixing.
To point out the obvious, Alice at least judges Hitler's actions as crap, and judges the imposition on the world of values that have not been fully considered as crap, and would like to impose those values on the world. This was a good post on the subject: http://www.overcomingbias.com/2008/08/invisible-frame.html

A thing that philosophical types like to do that I dislike is making claims about what it is to exist in general, claims that presumably would apply to all minds or 'subjects', when in fact those claims concern at most only the particular Homo sapiens condition, and are based only on the experiences of one particular Homo sapiens.

My claim is mainly based on physics of one sort or another. For one, the second law of thermodynamics. All systems will eventually degrade to whatever is most stable. Neutrons, IIRC. And unless a set of neutrons in thermodynamic equilibrium happens to be your idea of perfection, or your idea of perfection is impermanent, then my statement stands.

Another comes from acting: a quantum system decoheres, or splits the universe into two possible worlds. The agent doesn't know which of the possible worlds it is in (unless it happens to have a particle in superposition with the decohering system), so it has to split the difference and act as if it could be in either. As such it is imperfect.

What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, if they do not have centralized control, but have a variety of competing agents, and some randomness.

The idea of one big organism is not really that it will be, in some sense "optimal". It's more that it might happen - e.g. as the result of an imbalance of the powers at the top.

We have very limited experience with political systems. The most we can say is that so far, we haven't managed to get communism to be as competitive as capitalism. However, that may not matter. If all living systems fuse, they won't have any competition, so how well they operate together would not be a big issue.

In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright.

In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright.

We're talking about competition between optimization processes. What would it mean to be a simulation of a computation? I don't think there is any such distinction. Subjectivity belongs to these processes; and they are the things which must compete. If the winner could be determined by a simpler computation, you would be running that computation instead; and the hypothetical consciousness that we were talking about would be that computation instead.

If the winner could be determined by a simpler computation, you would be running that computation instead [...]

Well, that's the point. Usually it can be, and often we're not. There's a big drive towards virtualising combat behaviour in nature. Deer snort at each other, sea lions bellow - and so on: signalling who is going to win without actually fighting. Humans do the same thing with national sports - and with companies - where a virtual creature dies, and the people mostly walk away. But we are still near the beginning of the curve. There are still many fights, and a lot of damage done. Huge improvements in this area could be made.

Tim - I'm asking the question whether competition, and its concomitant unpleasantness (losing, conflict, and the undermining of CEV's viability), can be eliminated from the world. Under a wide variety of assumptions, we can characterize all activities, or at least all mental activities, as computational. We also hope that these computations will be done in a way such that consciousness is still present.

My argument is that optimization is done best by an architecture that uses competition. The computations engaged in this competition are the major possible loci for consciousness. You can't escape this by saying that you will simulate the competition, because this simulation is itself a computation. Either it is also part of a possible locus of consciousness, or you have eliminated most of the possible loci of consciousness, and produced an active but largely "dead" (unconscious) universe.

Alex, I admit I hope the fawning praisers, who are mostly anonymous, are Eliezer's sockpuppets, rather than a dozen or more people on the internet who read Eliezer's posts and feel some desire to fawn. But it's mostly an aesthetic preference - I can't say it makes a real difference in accomplishing shared goals, beyond being a mild waste of time and energy.

Aren't you a bit biased here? If one expresses positive views about Eliezer, that's fawning, obsequiousness, or some other rather exaggerated word, but negative views and critique are just business as usual. As usual.

It would be better if talking about people ceased and ideas and actions got 100% attention.

Remove the talk about people from politics and what's left? Policies? I don't know what the people/policies ratio in political discussion in the media is, but often it feels like most of the time is spent on talking about the politicians, not about policies. I guess it's supposed to be that way.

My argument is that optimization is done best by an architecture that uses competition.

Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it.

Remember the old joke: "Why is there only one Monopolies Commission?"

The evidence for the advantages of cooperation is best interpreted as a lack of our ability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that so obviously involves. Companies that develop competing products to fill a niche in ignorance of each other's efforts are often the stupid waste of time that they seem. In the future, our management skills will improve.

Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it.

Some of your problems will be so complicated, that each trial will be undertaken by an organization as complex as a corporation or an entire nation.

If these nations are non-intelligent, and non-conscious, or even unemotional, and incorporate no such intelligences in themselves, then you have a dead world devoid of consciousness.

If they do incorporate agents, then for them not to be "harmed", they need not to feel bad if their trial fails. What would it mean to build agents that weren't disappointed if they failed to find a good optimum? It would mean stripping out emotions, and probably consciousness, as an intermediary between goals and actions. See "dead world" above.

Besides being a great horror that is the one thing we must avoid above all else, building a superintelligence devoid of emotions ignores the purpose of emotions.

First, emotions are heuristics. When the search space is too spiky for you to know what to do, you reach into your gut and pull out the good/bad result of a blended multilevel model of similar situations.

Second, emotions let an organism be autonomous. The fact that they have drives that make them take care of their own interests, makes it easier to build a complicated network of these agents that doesn't need totalitarian top-down Stalinist control. See economic theory.

Third, emotions introduce necessary biases into otherwise overly-rational agents. Suppose you're doing a Monte Carlo simulation with 1000 random starts. One of these starts is doing really well. Rationally, the other random starts should all copy it, because they want to do well. But you don't want that to happen. So it's better if they're emotionally attached to their particular starting parameters.
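A toy sketch of this diversity argument (the landscape f, the starting points, and the "copying" rule are all my own illustrative assumptions, not part of the thread): on a landscape with a broad mediocre hump and a narrow tall spike, starts that all copy the initially best parameters converge on the hump, while starts that stay attached to their own parameters can find the spike.

```python
import math

def f(x):
    """Toy landscape: a broad hump near x=0.25 (height ~0.6) and a
    narrow, taller spike near x=0.75 (height ~1.0)."""
    broad = 0.6 * math.exp(-((x - 0.25) / 0.2) ** 2)
    spike = 1.0 * math.exp(-((x - 0.75) / 0.02) ** 2)
    return broad + spike

def climb(x, step=0.005, iters=500):
    """Deterministic hill climb: move to the better neighbour until neither helps."""
    for _ in range(iters):
        best = max((x, x + step, x - step), key=f)
        if best == x:
            break
        x = best
    return x

starts = [0.2, 0.3, 0.7]

# "Rational copying": every start abandons its parameters for the current
# leader's. The leader sits on the broad hump, so everyone tops out there.
leader = max(starts, key=f)
copying_result = f(climb(leader))

# "Emotional attachment": each start climbs from its own parameters; the
# 0.7 start is poor initially but climbs the spike to the global optimum.
diverse_result = max(f(climb(x)) for x in starts)

print(copying_result, diverse_result)
```

The deterministic hill climb stands in for the local search each random start performs; the point is only that copying the leader collapses exploration to a single basin of attraction.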

It would be interesting if the free market didn't actually reach an optimal equilibrium with purely rational agents, because such agents would copy the more successful agents so faithfully that risks would not be taken. There is some evidence of this in the monotony of the movies and videogames that large companies produce.

The evidence for the advantages of cooperation is best interpreted as a lack of our ability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that so obviously involves. Companies that develop competing products to fill a niche in ignorance of each other's efforts are often the stupid waste of time that they seem. In the future, our management skills will improve.

This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? I don't think there are any such conditions that don't require stripping your superintelligence of most of the possible niches where smaller consciousnesses could reside inside it.

In a finite universe there are no true Turing machines, as there are no infinite tapes; thus if you are going to be assigning some philosophical heft to Turing-completeness, you are being a bit sloppy, and should be saying "show me something that provably cannot be computed by a finite state machine of any size".

Yes, some Buddhist sects allow for complete annihilation of self. Most of the Zen sects, actually. No gods, no afterlife, just here-and-now, whichever now you happen to be considering. Reincarnation is simply the reconfiguration of you, moment by moment, springing up from the Void (or the quantum foam, if you prefer), each moment separate and distinct from the previous or the subsequent. Dogen (and Bankei and Huineng, for that matter) understood the idea of Timeless Physics very well.

I have never come across anyone who could present a coherent and intelligible definition for the word that didn't automatically render the referent non-existent.

Before we try to answer the question, we need to establish that the question is a valid one. 'How many angels can dance on the head of a pin?' is not one of the great mysteries, because the question is only meaningful in a context of specific, unjustifiable beliefs. Eliminate those beliefs and there's no more question.

Note: I'm an atheist who, like you, agrees that there's no divine plan and that, good or bad, shit happens.

That said, I think there's a hole in your argument. You're convincing when you claim that unfair things happen on Earth; you're not convincing when you claim there's no afterlife where Earthly-unfairness is addressed.

Isn't that the whole idea (and solace) of the afterlife? (Occam's Razor stops me from believing in an afterlife, but you don't delve into that in your essay.) A theist could easily agree with most of your essay but say, "Don't worry, all those Holocaust victims are happy in the next world."

The same holds for your Game of Life scenario. Let's say I build an unfair Game of Life. I construct rules that will lead to my automata suffering, run the simulation, automata DO suffer, and God doesn't appear and change my code.

Okay, but how do I know that God hasn't extracted the souls of the automata and whisked them away to heaven? Souls are the loophole. Since you can't measure them (they're not part of the natural world but are somehow connected to it), God can cure unfairness (without messing with terrestrial dice) by playing right by souls.

My guess is that, like me, you simply don't believe in souls because such a belief is an arbitrary, just-because-it-feels-good belief. My mind -- trained and somehow naturally conditioned to cling to Occam's Razor -- just won't go there.

So you don't think you could catch up? If you had been frozen somewhere between -10000 and -100 years and revived now, don't you think you could start learning what the heck it is people are doing and understand nowadays? Besides, a lot of the pre-freeze life experience would be fully applicable to the present. Everyone starts learning from the point of birth. You'd have a head start compared to those who just start out from nothing.

There are things we can meaningfully contribute to even in a Sysop universe, filled with Minds. We, after all, are minds too, which have the inherent quality of creativity - creating new, ever more elegant and intricate patterns - at whatever our level is, and self-improvement; optimization.

No, you couldn't. Someone from 8,000BC would not stand a chance if they were revived now. The compassionate thing to do, really, would be to thaw them out and bury them.

Yes, they would be worse off than children. Don't underestimate the importance of development when it comes to the brain.

Minds don't have the "inherent quality of creativity". Autistics are the obvious counterexample.

if I could create a VPOP that did not have subjective experience (or the confusion we name subjective experience)

The confusion we name subjective experience? TBH Eli, that sounds like neomystical crap. See below.

I have never come across anyone who could present a coherent and intelligible definition for the word that didn't automatically render the referent non-existent.

Qualia are neural representations of certain data. They can induce other neurological states, creating what we know as the first-person. So what? I don't see why so called reductionists quibble over this so much. They exist, just get over it and study it if you really want to.

The evidence for the advantages of cooperation is best interpreted as a lack of our ability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that so obviously involves. Companies that develop competing products to fill a niche in ignorance of each other's efforts are often the stupid waste of time that they seem. In the future, our management skills will improve.

This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? [...]

Nature excels at building large-scale cooperative structures. Multicellular organisms are one good example, and the social insects are another.

If the evidence for the superiority of competition over cooperation consists of "well, nobody's managed to get cooperation in the dominant species working so far", then it seems to me, that's a pretty feeble kind of evidence - offering only extremely tenuous support to the idea that nobody will ever get it to work.

The situation is that we have many promising examples from nature, many more promising examples from history, a large trend towards globalisation - and a theory about why cooperation is naturally favoured.

Some of your problems will be so complicated, that each trial will be undertaken by an organization as complex as a corporation or an entire nation.

Maybe - but test failures are typically not a sign that you need to bin the offending instance. Think of how programmers work. See some unit test failures? Hit undo a few times, until they go away again. Rarely do you need to recycle on the level of worms.

I find it strange how atheists always feel able to speak for God. Can you speak for your human enemies? Can you even speak for your wife, if you have one? Why would you presume to think you can say what God would or wouldn't allow?

Abe: I find it strange how atheists always feel able to speak for God.

Sometimes, they're not trying to speak for God, as they're not first assuming that an ideally intelligent God exists. Rather, they're imagining and speaking about the theist assumption that an ideally intelligent God exists, and then they carefully draw inferences which tend to end up incoherent on that grounding. However, philosophy of religion reasonably attempts coherence, and not all atheists are completely indifferent toward it.

Why, because it's a meaningful definition - and people are generally referring to something utterly meaningless? If you want me to define what people, in general, are talking about then of course I can't give a meaningful definition.

But I contend that this is meaningful, and it is what people are referring to - even if they don't know how to properly talk about it.

Imagine person A says that negative numbers are not even conceptually possible, or that arithmetic or whatever can't be performed with them. Person B contends otherwise. Person A asks how one could possibly add negative numbers, and B responds with a lecture about algebraic structures. A objects, "But people aren't generally referring to algebraic structures when they talk about maths, etc. I wasn't talking about that, I was talking about (4-7) and (-2*14) and how these things make no sense."

Well I contend that even if people don't know what they're talking about when they say "qualia this" or "qualia that" - and, in general, they're using gibberish definitions and speech - they're actually trying to talk about something close to the definition I've given.

you're using the word incorrectly

Again, if the only "correct" way to use the word is in the same manner as it is generally thought of, then of course you will never find a sensible definition because none of the sensible definitions are in common use - so you've ruled them out a priori, and are touting a tautology. But I'm not going to define what other people think they mean by a word - I'm going to define the ontology of the situation. If that's at odds with what people think they're talking about, then so what? People talk about God and think they're referring to this guy in the sky who is actually real - doesn't mean it's what's really going on, or that that's a really accurate definition of God (which would lead to the ontological argument being sound).

The problem, of course, is that qualia (or more generally, experiencing-ness) is not a concept at all (well there is a concept of experiencing-ness, but that is just the concept, not the actuality). A metaphor for experiencing-ness is the "theater of awareness" in which all concepts, sensations, and emotions appear and are witnessed. But experiencing-ness is prior to any and all concepts.

Nate Barna: Sometimes, they're not trying to speak for God, as they're not first assuming that an ideally intelligent God exists. Rather, they're imagining and speaking about the theist assumption that an ideally intelligent God exists, and then they carefully draw inferences which tend to end up incoherent on that grounding. However, philosophy of religion reasonably attempts coherence, and not all atheists are completely indifferent toward it.

It may be true that sometimes atheists carefully draw inferences from the idea of an ideally intelligent God. I have yet to see it. Eliezer doesn't seem to be at all careful when he says, "The obvious example of a horror so great that God cannot tolerate it, is death - true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense - full-blown, ontologically fundamental, the God - we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it." There is no careful inference there. There is just bald-faced assertion.

Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.

The truth is, we don't even know what an SIAI would do, let alone a truly transcendent being like God. If one is going to try to falsify a concept of God, it should at least be a concept more authoritative than the ad hoc imaginings of an atheist.

Abe: Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.

An atheist can imagine God having the thought: As your God, I don't care that you deny Me. Your denial of Me is inconsequential and unimpressive in the greater picture necessarily inaccessible to you. If this is an ad hoc imagining, then your assumption, in your question, that a being who can create minds at will doesn't flinch at their annihilation must also be ad hoc.

Following up on the sub-thread here, initiated by Recovering Irrationalist, about whether mathematical existence may be all there is, and whether we live in it.

What does that say about the title of the post, Beyond the Reach of God?

Wouldn't it imply that there are those who are indeed beyond God's reach, since even God Himself cannot change the nature of mathematics? That is, God does not really have any control over the multiverse; it exists in an invariant form independent of God's existence.

However we can also argue that there are worlds within the multiverse where something like God exists and does have control, as well as worlds where there is no such entity. (This requires understanding "control" as being compatible with the deterministic nature of the mathematical multiverse.) The question then arises as to which kind of world within the multiverse we inhabit. Are we beyond the reach of God?

A case can be made that this kind of question is poorly posed, because entities with brain structures identical to our own inhabit many places in the multiverse, hence we have to view our experiences as being a sort of superposition of all those instances. So we should ask, what fraction of our copies exist in universes controlled by a God-like entity, versus what fraction exist in universes without any such controller? At this point the traditional arguments come into play about how likely the universe we see about us is to be compatible with the ability and motivations of a God-like entity, whether such an entity would allow injustice and evil to exist, etc.

@Doug S. Read "The Gentle Seduction" by Marc Stiegler. And, if you haven't already, consider anti-depressants: I know a number of people whom they have saved from suicide.

::Googles "The Gentle Seduction"::

Yeah, that's a very beautiful story. And yes, I take antidepressants. They just change my feelings, not my beliefs. Their subjective effect on me can best be described as "Yes, my life still sucks, but I'm cheerful anyway!" If I honestly prefer retroactive non-existence even when happy, doesn't that suggest that my assessment stems from something other than a lack of happiness chemicals in my brain?

Fear not, I have no intention of committing suicide in the near future. Although I prefer the world in which I never existed, the world in which I exist and then die tomorrow is worse than the one in which I exist and continue to exist for several more years. (Specifically, my untimely death would cause great misery to several people who care about me, so I will refrain from dying before they do.)

I don't understand why you believe this thought exercise leads to despair or unhappiness. I went through this thought experiment many years ago, and the only significant impact it had on me was that I evaluate risk very differently than most people around me. I'm no less happy (or at least no more depressed) or motivated, and I experience about as much despair as a non-secular optimist: occasional brief glimpses of it which quickly evaporate as I consider my options and choices.

And, to be honest, the process of looking at existentialism and going through some of the chains of thought mentioned in this article definitely improved my young adulthood. It made it more interesting and exciting, and made me more interesting to other people.

In any case, despair is not such a bad thing. Look at Woody Allen. Is he a joyless, unproductive, fear-ridden hermit? Uh, no. But he's certainly thought through everything in this article, and accepted it, and turned it all into a source of amusement, curiosity, and intellectual stimulation.

Lots of ideas here. They only seem to work if God is primarily concerned about fairness on earth. What if God is not so concerned about our circumstance as He is about our response to circumstance? After all, He has an eternal perspective, while our perspective is limited by what we see of life. If this were true, then earth, and our existence, are like a big machine designed specifically to sort out the few good from the bad. Being raised in an Orthodox Jewish family, I'm sure you encountered countless examples in the Bible where bad stuff happened to good people. This is no great revelation of truth; it's plainly obvious, so the authors of the Bible obviously had no problem reconciling this dilemma, and countless Jews and Christians have no difficulty reconciling these facts. They're probably not all idiots or wishful thinkers, so perhaps they understand a perspective that you have not considered. Maybe God doesn't settle all accounts on your timetable, but rather His. Maybe your values (and mine) do not perfectly align with a perfect God, so who then should change? I completely understand why people spend their lives praying to God, searching for understanding, proselytizing to slammed doors. I cannot fathom why an atheist (an authentic atheist) would waste a moment of their precious short life writing endlessly on something they believe to be pure fiction. As for me, I'll keep praying.

In particular, as an Orthodox Jew he should be very familiar with Deuteronomy, Isaiah, and Jeremiah, where the scattering of the Jews and their centuries of oppression and persecution are predicted, as well as their eventual gathering after they have reached the point of thinking they would be forgotten forever and utterly destroyed, so that the survivors of those horrors are predicted to marvel at their own numbers and ask where they all came from, "for we were left alone," to paraphrase Isaiah 49:21.

I tend to resolve these issues with measure-problem hand-waving. Basically, since any possible universe exists (between quantum branching, inflationary multiverse, and simulated/purely mathematical existence), any collection of particles (such as me sitting here) exists with a practically uncountable set of futures and pasts, many of which make no sense (Boltzmann brains). The measure problem is, why is that "many" not actually "most"? The simplest answer is the anthropic one: because that kind of existence simply "doesn't count". So, there is some set of qualities of the universe as we know it that make it "count"; let's call that set "consciousness". And, personally, I think that this set includes not only the existence of optimizing agents (ourselves), but also the fact that these agents are fundamentally limited in something similar to the ways that we are. In other words, the very existence of some FAI which can keep all of your bad decisions (for any given definition of "bad") from having consequences means that "consciousness" as we know it has ended. Whatever exists on the other side of that barrier is simply incommensurable with my values here on this side. It's "game over". I can have perfect faith that my "me" will never see it completed - by definition, since then I'd no longer be a conscious "me" under my definition.

That means I am much more motivated to look for (weakly) "incremental" solutions to the problems I see with the world than for truly revolutionary ones like FAI or cryonics. (I regard the last "revolutionary" change to be the evolution of humanity itself - so "incremental" in this sense encompasses all human revolutions to date. The end of death would not be incremental in this sense.)

Sure, I can see where this is more of a justification for acting like a normal person than a rational exploration of fully coherent value space. Yet I can also argue that being meta-rational means justifying, not re-questioning, certain axioms of behavior.

Shorter me: "solving the whole world" leaves me cold, despite fun theory and all. So does ending death, or avoiding it personally. So me not signing up for cryo is perfectly rational.

While I acknowledge that this might not be the most complete and coherent possible set of values, I see no evidence that it's specifically incoherent, and it is complete enough for me. The Singularity Institute set of values may be more complete and just as non-incoherent, but I suspect that mine are operationally superior, or at least less likely to be falsified, since they attain similar coherence with less of a divergence from evolved human behaviour.

Last night I was reading through your "coming of age" articles and stopped right before this one, which neatly summarizes why I was physically terrified. I've never before experienced sheer existential terror, just from considering reality.

Are there any useful summaries of strategies to rearrange priorities and manage time to deal with the implications of this post? I get the existential terror part. We're minds in constant peril, basically floating on a tattered raft in the middle of the worst hurricane ever imagined. I'm sure only a few of the contributors here think that saying "this sucks, but oh well" is a good idea. So what do we do?

Since I've started reading LW, I have started to devote way more of my life to reading. I read for hours each day now, mostly science literature, philosophy, economics, lots of links to external things from LW. But it hardly feels like enough and every choice I make about what to read feels like a precious one indeed. I am a grad student, and I think often about the rapidly changing landscape of PhD jobs. Should I be content going to an industrial job and paying for cryonics and hoping to nudge people in my social circles to adopt more rational hygiene in their beliefs (while working on doing so myself as well)? I know no one can really answer that kind of question for me, but other people can simulate that predicament and offer advice. Is there any?

If an intelligence explosion does happen in the next few decades, why am I even spending precious minutes worrying about what skill set to train into myself for such-and-such an industry or such-and-such a career? Those types of tasks might even be the very first tasks to be subsumed by advanced technology (much the way that technology displaces legal research assistants faster than janitors). The world isn't fair. I could study advanced math and engineering and hit my career at just the moment in history when those stop being people-tasks. I could be like the Reeks and Wrecks from Vonnegut's Player Piano. This is serious beeswax here. I want to make a Bayesian decision about how to spend my time and what skill set to train into myself. It would seem like this site is among the best places to pose the question and ask for direction to sweet updatable evidence. But where is some?

Not every child needs to stare Nature in the eyes. Buckling a seatbelt, or writing a check, is not that complicated or deadly. I don't say that every rationalist should meditate on neutrality. I don't say that every rationalist should think all these unpleasant thoughts. But anyone who plans on confronting an uncalibrated challenge of instant death, must not avoid them.

Granted. Now, where are the useful, calibrated challenges? I am like a school-age child in my rationality; I can read and understand this passage about neutrality and think about it for a moment, but I cannot yet hold it in my mind as I go out into the world to do work. But I want to do work, and I want it to be useful. Is there anything I can do short of disconnecting from society and meditating on rationality until neutrality seems intuitive?

Unfortunately, this post, dated 4 October 2008, blatantly ignores the good sense of the 'Occam's Razor' one, dated 26 September 2007. http://lesswrong.com/lw/jp/occams_razor/
It is very naive to argue along the lines of "cellular automata are Turing complete, hence we can build a cellular automaton simulating anything we want to". This is just using the term "Turing complete" in the same way as the poor barbarians of the 'Occam's Razor' post use the term "Thor", viz., as a sort of totem you wave around in order to spare you the work of actually thinking things through. Well, of course you can imagine a cellular automaton simulating anything you like, as long as it isn't self-contradictory. But there lies the problem, it is very difficult to know whether some concept is self-contradictory just using natural language before you have actually gone and built it. Who is telling you that all the moral and spiritual aspects of the conditio humana aren't going to pop up in your simulation as epiphenomena, by necessity, just as they did in this universe?
That's right, you can't know until you have done the simulation. The smug "Is this world starting to sound familiar?" really cuts two ways in this case.
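For readers who want the term grounded rather than waved around: "cellular automata are Turing complete" usually points at results like Matthew Cook's proof for Rule 110. The minimal sketch below (illustrative only; it grounds the term, not the philosophical dispute) runs Rule 110 on a finite row with fixed zero boundaries:

```python
# Rule 110: each cell's next state is determined by the 3-cell neighborhood
# (left, self, right), looked up as a bit of the number 110 = 0b01101110.

RULE = 110

def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells (zero boundary)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        neighborhood = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        out.append((RULE >> neighborhood) & 1)
    return out

row = [0] * 10 + [1]   # a single live cell at the right edge
for _ in range(5):     # the pattern grows leftward as it evolves
    row = step(row)
```

The point of Cook's result is only that such a rule can, with suitable initial conditions, emulate any computation; what emerges in any particular simulation is exactly the open question the comment above raises.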

Nice. Here I present what I genuinely think is a flaw in this article, and instead of getting replies, I am just voted down "below threshold". I believe I have pointed out exactly what I disagree with and why. I would have been happy to hear people disagreeing or asking me to look at this from some other perspective. But apparently there is a penalty for violating the unwritten community rule that "Eliezer's posts are unfailingly brilliant and flawless". I have learned a lot from this website. There are sometimes very deep ideas, and intelligent debate. But I think the community is not for me, so I will let this account die and go back to lurking.

I didn't vote down your post (or even see it until just now), but it came across as a bit disdainful while being written rather confusingly. The former is going to poorly dispose people toward your message, and the latter is going to poorly dispose people toward taking the trouble to respond to it. If you try rephrasing in a clearer way, you might see more discussion.

Then maybe, instead of just downvoting, these persons should have asked him to clarify and rephrase his post. This would have actually led to an interesting discussion, while downvoting gives nobody anything. Maybe it should be possible to downvote a post only if you also reply to it.

Personally I think that this kind of voting is indeed useless and belongs on sites such as YouTube, where you can't expect a meaningful discussion in the first place. Here, if a person disagrees with you, I believe he or she should post a counterargument instead of yelling "you are wrong!" by casting a negative vote.

The problem with downvotes is that those who are downvoted rarely know that they are wrong; otherwise they would have deliberately submitted something they knew would be downvoted, in which case the downvotes would be expected and have little or no effect on the person's future behavior.

In some cases downvotes might cause a person to reflect on what they have written. But that will only happen if the person believes that downvotes are evidence that their submissions are actually faulty rather than signaling that the person who downvoted did so for various other reasons than being right.

Even if all requirements for a successful downvote are met, the person might very well not be able to figure out how exactly they are wrong from the change of a number associated with their submission. The information is simply not sufficient, which will cause the person either to keep expressing their opinion or to avoid further discussion while continuing to hold wrong beliefs.

With respect to the reputation system employed on Less Wrong, it is often argued that little information is better than no information. Yet humans can easily be overwhelmed by too much information, especially if the information is easily misjudged and provides only little feedback. Such information might only add to the overall noise.

And even if the above mentioned problems wouldn't exist, reputation systems might easily reinforce any groupthink, if only by causing those who disagree to be discouraged and those who agree to be rewarded.

If everyone were perfectly rational, a reputation system would be a valuable tool. But Less Wrong is open to everyone. Even if most of the voting behavior is currently free of bias and motivated cognition, it might not stay that way for very long.

Take for example the voting pattern when it comes to plain-English, easily digestible submissions versus highly technical posts including math. Posts in the latter category receive far fewer upvotes. The writing of technical posts is actively discouraged by this inevitable effect of a reputation system.

Worst of all, any reputation system protects itself by making those who most benefit from it defend its value.

I didn't see you complaining about the upvotes you got in other comments. You just barge in here to accuse us of groupthink if you get downvoted (never complaining about unjust upvotes), because you can't even imagine any legitimate reason that could have gotten you downvotes for a badly written and incoherent post. It seems a very common practice in the last couple of weeks -- CriticalSteel did it, sam did it, you now do it.

As for your specific comment, it was utterly muddled and confused -- it didn't even understand what the article it was responding to was about. For example, what in the original article made you think that "Who is telling you that all the moral and spiritual aspects of the conditio humana aren't going to pop up in your simulation" actually disagrees with something in it?

And on top of that you add strange inanities, like the claim that "moral and spiritual aspects" of the human condition (which for some reason you wrote in Latin, perhaps to impress us with fancy terms -- which alone would have deserved a downvote) are epiphenomenal in our universe. The very fact that we can discuss them means they affect our material world (e.g. by typing posts in this forum about them), which means they are NOT epiphenomenal.

You didn't get downvotes from me before, but you most definitely deserve them, so I'll correct this omission on both the parent and the grandparent post.

If you really dislike everyone else so much why don't you people turn this into a private mailing list where only those that are worthwhile can participate? Or make a survey a mandatory part of the registration procedure where everyone who fails some basic measure is told to go away.

Either that or you stop bitching and ignore stupid comments. Or you actually try to refine people's rationality by communicating the insights that the others miss.

The very fact that we can discuss them means they affect our material world (e.g. by typing posts in this forum about them), which means they are NOT epiphenomenal.

Have you tried Wikipedia? "In the more general use of the word a causal relationship between the phenomena is implied: the epiphenomenon is a consequence of the primary phenomenon;"

What he tried to say is that "moral and spiritual aspects" of the human condition might be an implied consequence of the initial state of a certain cellular automaton.

It displays the typical lesswrong mindset that lesswrong is the last resort of sanity and everyone else is just stupid and not even worthy of more than a downvote.

Really? I think my flaw has generally been the opposite: I try to talk to people far beyond the extent that it is meaningful. Just recently that was exemplified.

If you really dislike everyone else so much why don't you people

Who is "you people"? People who downvoted deeb without a comment? But I'm not one of them -- I downvoted him only after explaining in detail why he's being downvoted. The typical LW member? You've been on Less Wrong longer than I have, I believe, and have a much higher karma score. I'm much closer to being an outsider than you are.

Have you tried Wikipedia?

You are looking at the medical section -- when one talks about spiritual or moral aspects of the human condition, the philosophical meaning of the word is normally understood. "In philosophy of mind, epiphenomenalism is the view that mental phenomena are epiphenomena in that they can be caused by physical phenomena, but cannot cause physical phenomena," as the article you linked to says.

Perhaps you know what he tried to say, but I don't. Even if he meant what you believe him to have meant (which is still a wrong usage of the word), I still don't see how this works as a meaningful objection to the article.