When Life Is Cheap, Death Is Cheap

Hunters couldn’t see how exactly a farming life could work, nor could farmers see how exactly an industrial life could work. In both cases the new life initially seemed immoral and repugnant to those steeped in prior ways. But even though prior culture/laws typically resisted and discouraged the new way, the few groups which adopted it won so big others were eventually converted or displaced.

Carl considers my scenario of a world of near-subsistence-income ems in a software-like labor market, where millions of cheap copies are made of each expensively trained em, and those copies are later evicted from their bodies when their training becomes obsolete. Carl doesn’t see how this could work:

The Alices now know that Google will shortly evict them, the genocide of a tightly knit group of millions: will they peacefully comply with that procedure? Or will they use politics, violence and any means necessary to get capital from capital-holders so that they can continue to exist? If they seek allies, the many other ems who expect to be driven out of existence by competitive niche exclusion might be interested in cooperating with them. … In order … that biological humans could retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough that whole lineages will regularly submit to genocide, even though the overwhelming majority of the population expects the same thing to happen to it soon. But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

I see pathologically-obedient personalities neither as required for my scenario, nor as clearly leading to a totalitarian world regime.

First, taking the long view of human behavior we find that an ordinary range of human personalities have, in a supporting poor culture, accepted genocide, mass slavery, killing of unproductive slaves, killing of unproductive elderly, starvation of the poor, and vast inequalities of wealth and power not obviously justified by raw individual ability. The vast majority of these cultures were not totalitarian. Cultures have found many ways for folks to accept death when “their time has come.” When life is cheap, death is cheap as well. Of course that isn’t how our culture sees things, but being rich we can afford luxurious attitudes.

Those making body loans to ems would of course anticipate and seek to avoid expropriation after obsolescence. In cultures where ems were not slaves, body owners might have to guarantee ems whatever minimum quality of retirement ems needed to agree to a new body loan, perhaps immortality in some cheap slow-speed virtual reality. But em cultures able to avoid such guarantees, and only rarely suffering revolts, should have a substantial competitive advantage. Some non-slave ways to avoid revolts:

1. Bodies with embedded Lojack-like hardware to track and disable em bodies due for repossession.

2. Fielding new, better versions slowly over time, to discourage rebel time coordination.

3. Avoiding concentrating copies that will become obsolete at similar times in nearby hardware.

4. Preferring em copy clans trained several ways, so the clan won’t end when one training becomes obsolete.

5. Employing ems without a history of revolting, even in virtual-reality revolt-scenario sims.

6. Having other copies of the same em mind be the owners who pull the plug.

I don’t know what approach would work best, but I’ll bet something will. And these solutions don’t seem to me to obviously lead to a single totalitarian world government.

I have thought about those and other methods of em social control (I discussed #1 and #5 in my posts), and agree that they could work to create and sustain a variety of societal organizations, including the ‘Dawn’ scenario: my conclusion was that your scenario implied the existence of powerful methods of control. We may or may not disagree, after more detailed exchanges on those methods of social control, on their applicability to the creation of a narrowly-based singleton (not necessarily an unpleasantly totalitarian one, just a Bostromian singleton).

At one point you said that an approach I described was how an economically powerful Stalin might run an em project, and said, “let’s agree not to let that happen.” But if a Stalinesque project could succeed, it is unclear why we should assign sub-1% probability to the event, whatever we OB discussants might agree among ourselves. To clarify: what probability would you assign to a classified, government-run Stalinesque project with a six-month lead using em social control methods to establish a global singleton, under the control of itself and of ems with carefully chosen values that it selects?

“In both cases the new life initially seemed immoral and repugnant to those steeped in prior ways. But even though prior culture/law typically resisted and discouraged the new way the few places which adopted the new way won so big others were eventually converted or displaced.”

Historically, intertribal and interstate competition have prevented the imposition of effective global policies to slow and control the adoption of more efficient methods. But the effective number of jurisdictions is declining, and my point is that there will be a temptation for a leading power to seize its early em advantage to prevent the competitive outcome, in a way that was economically infeasible in the past. Once we clarify views on the efficacy of social control/coordination, we can talk more about the political economy of how such methods will be used.

Carl, neither the ability to repossess bodies, as we do for cars now, nor the ability to check if job candidates have a peaceful work history, as we also do now, seem remotely sufficient to induce a totalitarian world regime. You seem to have a detailed model in mind of how a world totalitarian regime arises; you need to convince us of that model if we are to believe what you see as its implications. Otherwise you sound as paranoid as those who abstractly feared that reduced internet privacy would lead to a totalitarian US regime.

Just checking the bullet that is being bit here: Robin’s point is that we will find efficient, socially accepted ways to delete 10^whatever sentient programs when they are no longer needed?

The upload/emulation scenario here seems rather dystopian well before considering something like a singleton or a totalitarian regime. Lucky cryonics customers will be scanned and uploaded, then tested for willingness to work at subsistence wages until an inevitable shutdown at the speed of hard-takeoff obsolescence; those unwilling (or with inadequate potential) can hope to be saved and stored, inactive; those willing will be copied by the million and put to work until the next batch of millions is ready to replace them. Sort of signing up to be part of a rolling digital self-genocide.

Carl Shulman

I do have a detailed model in mind, considering the political economy of emulation developers and em societies, methods of em social control, and the logistics of establishing a singleton. However, a thorough discussion of it would require a number of posts.

James Andrix

neither the ability to repossess bodies, as we do for cars now, nor

Never has an apple been so orange. I exist without a car.

Only some of your revolt-prevention methods seem to make provision for the obsolete ems to continue to operate somewhere.

I submit that by your own description of the broad scenario, we need to start worrying about friendliness.

If we talked about a super AI uploading humans, emulating them, and deleting the emulations when its purpose was complete, I would class that as a nightmare scenario. That humans have tolerated nightmare scenarios really doesn’t change that.

I don’t think the Foom is even all that relevant if we’re already arguing about whether ems will take over or if we’ll just maintain the ability to murder them.

An unfriendly social structure with powerful technology is just as bad as an unfriendly AI. I don’t know if there’s a way to make this technology hard to abuse on a technological level, and I don’t know if there’s a way to make social structures friendly before emulation becomes available.

Jonas Klemming

I have been following these last brain emulation posts with some interest and would just like to ask why both Robin and Carl seem to assume these emulations will be emulating full human brains and not specialized brains of different kinds. The latter seems more plausible to me, since that would probably increase their usefulness and also decrease the ethical problems.

luzr

Zubon:

“rolling digital self-genocide”

Possible solution to Fermi Paradox?

This brain emulation thread makes me feel pretty miserable. If the future is going to be like that, the luddites might have had a point.

It seems to me that you’re unnecessarily tying brain emulation to individual uploading. I would expect emulation to precede uploading by a significant margin. I’d imagine emulation would be a requirement to even begin work on uploading. Further, I’d expect both to start with simpler, non-mammalian nervous systems.

It’s easy to imagine the civil rights case for uploaded or even emulated humans. The lobster with a thousand subjective years of experience is another story.

luzr

What is the difference between ‘emulated’ and ‘uploaded’ humans? I must have missed something.

Carl Shulman

Jonas,

Human capabilities are closely related and interact positively with each other, and a partial brain would have degraded capabilities. Further, the reason for discussion of brain emulation (as opposed to neuromorphic AI) is because the former could be done with lesser understanding. With the knowledge to create functional sub-brains that were anywhere close to as productive as full ones, why wouldn’t we create neuromorphic AI?

Consider innovation around the point of “death” (remerging) as well as “birth” (copying). I realize we can hardly do it well for our mobile cellphone contact lists, but perhaps re-merge could be worked out for brains sourced from the same starting state. If that was a precondition for making this acceptable, then perhaps there’s motivation to work on it.

I would suggest paying more attention to the title of this post. Everyone is focused on the morality of the deaths, but not the morality of the births. For symmetry, we should view the creation of a new person, or a new copy of a person, with just as much positive feeling as we view the destruction of a person, or a copy, as a negative. Just as we would fight to save 5 people in a burning building, we should fight just as hard to allow 5 people to be born as new copies. More to the point, just as we would fight to keep ourself alive, we should fight just as hard to create a new copy of ourself.

Our instincts on this are miscalibrated because birth and death are so asymmetrical today. But in this future scenario, where births are as fast and easy as deaths, morality will have to adapt to recognize that change. Evolution will drive people, at least uploaded people, to adopt this kind of morality, as those who adopt it will be most likely to thrive in an environment where copying minds is possible.

Hal, yes, a basic point here is that the dollar value of our lives to us depends on context, including our income. So when creating lives becomes very cheap, we may well not consider the loss of a life as that big a deal.

Carl Shulman

“Everyone is focused on the morality of the deaths, but not the morality of the births.”
Hal,

Count me as an exception, my argument was about factual prediction, not an objection to the classical utilitarian ethical analysis of the situation.

Now, this is completely off-topic, but I believe there is much more to do in the solar system first. We are gathering only a minuscule fraction of the energy available, and using only a minuscule fraction of the matter available as well.

Interstellar travel sounds good, but for the next 10,000 years, even with rapid exponential growth, it is unnecessary. And, AFAIK, special relativity is still not disproved. Sure, it might seem like machines live forever, but on a 1,000,000-year voyage to the next good star, entropy might get you as well. (Another, more optimistic Fermi Paradox explanation? 🙂)

(BTW, I believe all the interstellar expansion urge was created by sci-fi authors, who definitely needed some aliens to deal with. There are not many of them in the solar system….)

I don’t know what the “future dawn” scenario is, but ems in general may decrease the probability of interstellar expansion.

The economy grows at a rate proportional to human activity. If you increase either the number of people, or the speed of thinking, you increase the rate of growth.

If you have a thousand times as many ems as people, and they each think a thousand times as fast as a person, then economic growth would be measured by the minute rather than by the year.

People will invest capital in a venture only if their capital can be expected to grow at a rate greater than in other ventures. Any spacefaring venture can’t bring in any profits to an earthbound owner in any time less than the round-trip light-travel time, which is fixed. So the more ems there are, and the faster they think, the less economic sense space travel makes. If I need a reasonable chance of a 50% return per minute, a venture that will take 10 years is going to be a hard sell.
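The compounding arithmetic behind this “hard sell” can be made explicit with a short sketch (the 50%-per-minute figure is the one used above; the minute count ignores leap years, and the numbers are purely illustrative):

```python
import math

# Illustrative required per-minute return for investors in a fast em economy
r_per_minute = 0.50

# Minutes in a 10-year space venture (ignoring leap years)
minutes = 10 * 365 * 24 * 60  # about 5.26 million minutes

# The total required growth factor is (1 + r)^minutes, which is far too
# large to hold in a float, so work with its base-10 logarithm instead.
log10_factor = minutes * math.log10(1 + r_per_minute)

print(f"{minutes:,} minutes; required growth factor ≈ 10^{log10_factor:,.0f}")
# The required factor is on the order of 10^900,000 — no physical venture
# can return anything like that, which is the sense in which fast ems
# make slow space ventures uninvestable for earthbound owners.
```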

Space ventures might make sense only if all of the investors were going on the journey and never coming back. Travel outward into unoccupied territory might make sense. But never going back to Earth. Even communication between star systems would be of little value.

billswift

There was an essay written in the early 1980s, I think in one of Pournelle’s “Endless Frontier” collections, about the party at the end of the Universe and the Bean Dip Catastrophe. The basic idea applicable here is that copies of uploads were merged. Don’t eliminate or kill copies; merge them back into the basic pattern. This also reduces the chances of obsolescence, since anything one copy learned all would eventually learn.

billswift

Another problem with uploads or AI and interstellar expansion is the combination Stross points out in his novel “Accelerando”: limited bandwidth and communication lags would make “beings” dependent on networks unlikely to travel far.

Also in Accelerando the post-humans tried to keep the transhumans (less tied to the networks) from leaving with any mass that could eventually be transformed into comptronium.

Robin, you seem not to really care about the ethics of what you are talking about, and you have been criticized on this point before – for example by James Hughes in your debate over the “crack of a future dawn” scenario. In that debate I sympathized with you: you simply didn’t anticipate your *factual* predictions being mistaken for endorsements of those outcomes, but you should have learned your lesson by now. For example, you say:

an ordinary range of human personalities have, in a supporting poor culture, accepted genocide, mass slavery, killing of unproductive slaves, killing of unproductive elderly, starvation of the poor, and vast inequalities of wealth and power not obviously justified by raw individual ability. The vast majority of these cultures were not totalitarian. Cultures have found many ways for folks to accept death when “their time has come.” When life is cheap, death is cheap as well. Of course that isn’t how our culture sees things, but being rich we can afford luxurious attitudes.

I’m sorry, but calling the prohibition of genocide and mass slavery a “luxurious attitude” is a major blunder. I call for you to retract this statement or clarify it.

I realize that you like to focus on factual prediction, but factual prediction without any notion of good outcomes and ways to bring about good outcomes is something of a lost purpose.

Roko, I shouldn’t have to disclaim that I don’t approve of genocide or slavery every time I mention them.

dreamer

To Zubon and Roko: agreed.

Robin: To make an analogy: if we are going to have a recursively-improving, eventually-godlike AI, we would like it to be friendly, not a being that would annihilate us for paperclip conversion or (not that it couldn’t find something better to do) harvest our brains for computing power. Why? Why not just let it do what it sees fit?

Your ems scenario is a scenario I would describe as an unambiguous negative singularity. Ems have to be regarded as morally equivalent to humans – in the data that is their mental structure, which is all that matters, they are the same or very similar, so how could they not? – and you have just described a hellish economically-driven perpetual holocaust.

When life is cheap, death is cheap. When life is dear, our only imperative is to ensure that it grows longer, deeper and dearer.

michael.vassar

Robin: It seems to me that you, not Carl, are the person who keeps bringing up totalitarianism. However, since you bring it up, it certainly seems that historically very little change, relative to the magnitude of the changes being discussed, has been required to set up a totalitarian regime. Also, it’s not obvious to me why you would want to focus on avoiding totalitarian regimes in particular, rather than on avoiding the whole spectrum of outcomes that we today might consider morally bankrupt. Is the idea that you consider the preservation of competition to be the requisite for moral acceptability, and consider any outcome that is competitively selected to be desirable? If so, why worry about burning cosmic commons?

Ian C.

Why does an em have to die – couldn’t it upload to the latest version brain just like a human chooses to upload? What’s the difference?

dreamer, with Hal, I consider the vast increase in population in this scenario to be a good thing. Since with Hal I consider that a birth can be as good as a death is bad, I consider this scenario on the whole a better world than ours, and better than other scenarios where copies are prevented in order to ensure the immortality of a few. But please don’t confuse this normative judgment of mine in this comment with the positive analysis in the post above.

Hal Finney: “Everyone is focused on the morality of the deaths, but not the morality of the births. For symmetry, we should view the creation of a new person, or a new copy of a person, with just as much positive feeling as we view the destruction of a person, or a copy, as a negative. Just as we would fight to save 5 people in a burning building, we should fight just as hard to allow 5 people to be born as new copies. More to the point, just as we would fight to keep ourself alive, we should fight just as hard to create a new copy of ourself.”

Hal, based on your calculus, the positive of creating 5 Hal Finney ems would easily outweigh the negative of annihilating one Robin Hanson. Robin doesn’t seem to think it’s a big deal.

Your optimization based on total sentience (or whatever value system you have selected to derive your “should”s) mismatches with my evolution-honed moral intuition.

Robin Hanson: “So when creating lives becomes very cheap, we may well not consider the loss of a life as that big a deal.”

Robin, when asked, you keep telling us that you don’t approve of genocide or slavery, so why do you suggest that loss of life may not be a big deal?

Call me backwards and selfish, but I would still prefer to preserve the sentience of myself over the creation of thousands of new sentient lifeforms. And, possessing an ounce of empathy, I project that preference onto other humans and emulation of humans, and post-human agents. Any moral system that assigns a 1:1 value comparison between an existing agent and a potential agent is in conflict with the interests of all existing agents. Why should we consider it a viable moral system?

I am uncomfortable with the way the word “interests” is used in the previous post.

The way I define my interests, for example, it is in my interests for me to be replaced with a lifeform that shares my goals and is more capable than myself of advancing those goals. (And the goals do not contain a clause about my continued existence.)

Now, since I do not trust my fellow humans to judge when such a trade is in my interests, I assert my rights under my country’s laws for the protection of my life, for I perceive my life to be an important resource toward the advancement of my goals.

james Andrix

People who never come into existence don’t anguish over their nonexistence.
Killing one person to spawn another is not a good thing just because the new one has a higher dollar value. Dollars exist to serve us; we should not exist to serve them.
To those who say it is preferable to have more sentients, I ask: what do you value them for? What is the end goal of all this? Because if killing one em to birth five is a gain, then you clearly don’t value the ems, because you would kill 25 for 50, and 50 for 100. They are each dispensable.

Do you just want them there, like your version of paperclips? Or do you want paperclips that grow and learn and laugh and fall in love? If so, then you have to realize that cutting a paperclip in half while it’s still growing and learning and laughing and falling in love is a _bad thing_.

I guess that’s what I’m saying: a paperclip cut in half is worse than no paperclip. What’s your utility function?

Carl Shulman

Robin’s position does seem to be in tension with this post: if largely selfish humans could work out a deal amongst themselves they would probably want to avoid Robin’s favored scenario.

Virge – I think many people might agree that saving two people at the expense of one person’s life is better than killing two to save one. Perhaps it would follow that you and I would agree to save our lives and let Robin die, versus the contrary. The fact that he might feel differently about it doesn’t necessarily change the abstract weight of the argument.

These considerations are not qualitatively different IMO when we start introducing the positive factors of creating new lives. That’s partly why “women and children first” has been a long-standing rule. Pregnant women in particular are often given special consideration, again in recognition of the value of new life. Once creating life is easy, there will be pressure to evolve morality towards recognizing the great value of new lives.

James Andrix – “To those who say it is preferable to have more sentients, I ask: What do you value them for?” I might reply, with Franklin, “What use is a newborn baby?” But I would mean something different from him; I would be asking to consider why we value newborns? Because we surely do, and I suppose that it is largely instinctive. Evolution has given us a morality that values new life. Now, presented with the possibility of making copies of individuals, our evolutionary instincts aren’t activated due to the novelty of the situation. But I believe that in a mature evolutionary process where such copying is possible, we will come to value new copies very highly. If someone is offered a 60% chance of getting to make a copy vs a 40% chance of being killed, evolution will pressure them to accept it, because people who accept such gambles will tend to increase their numbers. This will require that the positive value of a new copy balances the negative value of death.
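The selection argument in the paragraph above can be sketched as a simple expected-value calculation (the 60/40 split is the hypothetical offered there; the round count is arbitrary):

```python
# Hypothetical gamble: 60% chance of making a copy (two minds afterward),
# 40% chance of being killed (zero minds afterward).
p_copy, p_death = 0.60, 0.40

# Expected number of minds per gamble round for each strategy.
expected_if_accept = p_copy * 2 + p_death * 0   # 1.2 minds per round
expected_if_decline = 1.0                       # decliners merely persist

# A lineage of acceptors grows ~20% per round, so after n rounds it
# outnumbers a lineage of decliners by a factor of 1.2**n.
rounds = 50
advantage = (expected_if_accept / expected_if_decline) ** rounds

print(f"expected minds per round: accept={expected_if_accept}, decline={expected_if_decline}")
print(f"relative advantage of acceptors after {rounds} rounds: {advantage:,.0f}x")
```

This is the evolutionary pressure being described: any copy-gamble whose expected mind-count exceeds 1 will come to dominate the population, whatever our current intuitions say about the deaths involved.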

Carl, if possible people could be in on the deal, they’d prefer a chance at a short life over no life at all. In my scenario, ems who preferred to could follow a policy of only creating copies they were sure could live long, safe lives. Under the assumption of no externality, the free-market labor outcome should be Pareto optimal, and so no deal could make everyone better off.

Carl Shulman

Robin,

But possible future people can’t be in on current deals. In the linked post you said that morality was overrated, in that morality suggests we should sacrifice a lot for animals, future generations, and other fairly powerless groups. In contrast, you said, dealmaking between current individuals on the basis of their actual preferences would favor currently existing people with power over those other powerless groups.

Jordan

I tend to lean libertarian, the main problem I have with socialism being that, when you replace money with a more apt utility function (quality of life), the vast majority of people in prosperous nations are quite rich, or at least potentially so. In a sense there is a perpetual state of equalized wealth, as my quality of life isn’t an order of magnitude off from that of a rich person; hence there is no justification for additional, artificial redistribution (at least not to me or people in my income range).

The problem with uploads is that quality of life for an em (both duration of life and the richness of the upload experience) is proportional to monetary wealth. It’s one thing to be poor and unable to afford luxury items; it’s another thing when luxury is redefined as being able to live in real time, or even at all. In my opinion this shifts the weight heavily in favor of socialist arguments.

Khyre

How much of the “Future Dawn” economic analysis is relevant if the ems almost immediately become intelligent goods with no desires for salary or autonomy? Would that assumption simplify the analysis?

I ask this because it seems that a political and social landscape that would allow (a) ems to be created that are nominally free economic agents but are subject to death by bankruptcy, would also allow (b) em engineering that rapidly produces an em that is happy to work for free, has no fear of death, and is completely loyal.

Even taking a black-box approach to em engineering, I bet you could rapidly evolve an em that is a willing slave (producing something like the azi of C. J. Cherryh’s “Cyteen”, but done digitally to speed things up). Once this is done, no em with any remnant of human desire for anything other than work could compete.

Insisting on some form of legal autonomy for azi would be pointless (and cruel) – azi would immediately hand over their power of attorney to their creators (or be legally adopted by the creators, or something).

James Andrix

Hal:
Why should we let evolution be the dominant optimization process guiding our future? Why not use the morality we have now to decide that a future where poor people get erased is a future to avoid? Why shouldn’t it be avoidable?

True, to an extent evolution is unavoidable, but we can breed dogs to instinctively herd our sheep for us. Why not breed ourselves to be something more than murderous replicator-minds?

No matter how much we value newborns, we don’t take people off life support to give other people fertility treatments.

Hal: “I think many people might agree that saving two people at the expense of one person’s life is better than killing two to save one.”

Hal, knocking down a strawman does nothing.

Are you ignoring the distinction between creation of a new, previously non-existent agent (who clearly has nothing to lose), and “saving” an existing agent? My question for you deliberately contrasted creating new duplicate Hals with destroying a living Robin. I’d like to hear your answer to the question I posed.

Valuing potential lives as equivalent to current lives seems to create a utilitarian nightmare. In order to maximize the X-value of the universe (now and future) we have to accept a system that will terminate everyone as soon as we can create an army of clones that can experience X-value at a marginally increased efficiency.

To me, that suggests the X-value you’ve chosen to optimize is wrong. It’s a paperclip maximiser of sentience with no weighting on pleasure/pain.

Hal: “Pregnant women in particular are often given special consideration, again in recognition of the value of new life. Once creating life is easy, there will be pressure to evolve morality towards recognizing the great value of new lives.”

The reason we have emotions that favour saving women and children first is easily explained in genetic terms. At present death is inevitable, so new life is favoured.
When death is optional, extension of life is favoured, not devalued with respect to the new.

Creating new life is easier now than it has ever been in human history. Yet we use birth control to manage our quality of life rather than produce all the babies we could possibly create. If your utility function makes this practice immoral, then your utility function doesn’t match the bulk of humanity.

Richard Hollerith: “The way I define my interests, for example, it is in my interests for me to be replaced with a lifeform that shares my goals and is more capable than myself of advancing those goals. (And the goals do not contain a clause about my continued existence.)”

Richard, do you assign zero value to your autonomy? Do you also assign zero value to your personal enjoyment of the process of achieving your goals?

haig

To echo a prior comment, I think uploading, autonomous emulation (ems), and whole-brain emulation are all different technologies, though with some important overlaps. Bottom-up whole-brain emulation of the Blue Brain variety seems to be on a nice growth curve now, but extending higher up that curve will not automatically give you either disembodied ems or uploads. If anything, it will give us a better understanding of the cognitive architecture and algorithms of the brain, and we can use those piecemeal in a designed AI long before any uploading of humans or ems becomes possible. I hope future dialogue will make this distinction more apparent.

The knowledge necessary to create fully functioning ems will not come before figuring out at least the fundamental architecture and algorithms of the brain. As for uploading, that requires a huge technological leap both in understanding the situated brain-in-body and in fundamental nanotechnology-based scanning methods. An AGI with human-like capabilities, on the other hand, is just waiting for the right models and algorithms.

It seems most likely that the prevailing scenario would be an Eliezer-lite version of the future, where an AGI is constructed but isn’t friendly in the strong sense and isn’t recursively self-improving, based loosely on the mammalian brain. This initial AGI seems the most probable given the trends. Once this class of AGI is operational, we can use it to push the transhumanist agenda further along, perhaps culminating in an upload scenario and/or a friendly, recursively self-improving first-mover AGI with humanity’s best interests in mind.

Carl, no ems exist at all today. Anyone today who can save some capital would benefit enormously from unrestrained, relative to restrained, em growth.

Khyre, I’d rather create a real person, even with a limited lifespan, than a zombie/willing-slave without respectable desires. But I doubt creating a productive zombie can be done quickly.

haig, you are confused.

Tyrrell McAllister

I’m still trying to think about this as a conversation between Robin and Eliezer. Maybe that’s a mistake, because I’m starting to lose what small grasp I thought I had on what the shared topic is, much less what their respective positions on that topic are.

Here’s what I thought was the big picture—the ultimate point of contention: Eliezer thinks that the advent of a superhuman artificial intelligence must be preceded by a solid theory of Friendliness. Otherwise, with very high probability, we are doomed. Robin, on the other hand, (A) is unconvinced that Eliezer can justify such confidence in his predictions and (B) thinks that his (Robin’s) model of the world justifies assigning much lower probability to the prediction that we’re doomed without Friendliness.

I’ve already groused about Eliezer’s method of argument in comments to his posts. But now I’m confused by Robin’s approach, too. I’m fine with his approach to (A) above. But he seems to be spending most of his time on (B). He’s argued for (B) by giving a scenario in which we develop artificial intelligence, but friendliness isn’t required. However, Robin’s whole case for (B) now seems like a non sequitur to me.

Eliezer seems to think that Robin’s scenario is implausible. But let’s set that aside. Suppose that Robin’s scenario comes to pass exactly as he describes it. How does that make Eliezer’s doomsday scenario any less likely? Suppose that Eliezer were to concede that the advent of ems wouldn’t be an extinction-level event in and of itself. Suppose that the ems themselves are as innocuous as Robin predicts. Well, Eliezer already thinks that mere wetware humans are likely to unleash the proverbial paperclip maximizer within a century. Robin’s scenario just adds umpty-gazillion human-equivalent minds to the pursuit of superhuman AI. However likely we are to stumble on non-em-based AI, surely we are even more likely to do so once we have an army of ems helping us.

It seems to me that Eliezer’s arguments kick in once we have a realistic chance of building superhuman intelligences. Maybe the ems themselves are sufficiently human-like that we can manage whatever threat they pose. But surely their advent would only accelerate the search for non-human-based artificial super-intelligence.

After all, the “artificial” part per se isn’t the threat. The threat comes from the “super-intelligent” part (since we couldn’t outsmart it) and the non-human-based part (since we then couldn’t predict its motivations with the naïve psychology that evolution gave us).

Given my understanding of Eliezer’s expectations, Robin’s scenario, if it happened, would just be another pre-doomsday era. It would stand between us and doomsday, but not as a barrier making the doomsday less likely. Rather, it would be an enabling precursor making the doomsday even more likely.

luzr

Tyrrell:

“Maybe the ems themselves are sufficiently human-like that we can manage whatever threat they pose.”

Actually, ems being human-like would only increase my worries about friendliness. After all, we know how unfriendly humans can be when they are threatened with death.

“Khyre, I’d rather create a real person, even with a limited lifespan, than a zombie/willing-slave without respectable desires. But I doubt creating a productive zombie can be done quickly.”

Why the preference? Why would those desires not be respectable? And how much does it really differ from what you are describing as a real person?

There is some range of human desire in willingness to work, subservience, etc. Choosing to emulate someone at the far end of the bell curve still seems like choosing a real person. Or is the objection to pushing out how far that tail goes? I am not sure what my moral calculus says about the case where the desires themselves are malleable.

Setting aside the normative question, this seems like the natural result of the plan for em/upload workhorses. Those who are not on that tail will have fewer (no?) copies, because they are at a competitive disadvantage with workaholics and people who are quite happy to labor without sleep until they are turned off. If our hypothesis is existence at the edge of electronic subsistence, the options seem to be willing slaves and unwilling slaves. Those who will not work like slaves will be shut off in favor of those who will, unless there is some desirable worker characteristic that cannot co-exist with this mindset.

Virge

Tyrrell: After all, the “artificial” part per se isn’t the threat. The threat comes from the “super-intelligent” part…

That nails what I think is the crux of the disagreement between Robin and Eliezer.

Robin seems focused on emulation of humans. Even with easily mass-produced emulations, a foom is unlikely to happen until one can reliably reverse-engineer the emulated human brain and work out how to expand its capabilities. Even then, the highly coupled architecture may present some serious limits (e.g. combinatorial explosion) on what modifications can be made. Under the emulation scenario, progress towards SI does look like it would undergo a slow series of improvement steps, with every step limited by human-scale limitations.

Since Eliezer expects coded SI to come before we can create ems, he’s trying to explain why self-enhancing intelligence represents a completely different dynamic from every other change so far in human history. Everything we’ve seen so far has been limited by human wetware. Even when we mass thousands of humans onto one project, the inter-human communications problems impose limits on our capabilities. If we had a coherent, systematic general intelligence engine, with the ability to self-analyse and self-modify, then it’s very difficult to see what could limit its accelerating intelligence. Under this scenario, going foom looks inevitable.

Can someone point me to a comment or post where Robin argues either
(a) why coded AGI cannot or will not be produced by current human efforts, or
(b) why a self-improving AGI is necessarily limited or extremely slow to self-improve?

“However likely we are to stumble on non-em-based AI, surely we are even more likely to do so once we have an army of ems helping us.”

This is a good point. I’d like to hear Robin’s response to this.

@Robin: Perhaps you could consider adding a disclaimer as a footer to your posts: this would probably save you a lot of time and avoid misunderstandings.

I still think that your analysis would benefit from saying something about ethics; because after all, we are in the prediction business for a reason: namely to shape the world into desirable outcomes.

You and Carl are debating the different possible ways that a dystopian nightmare could be created, arguing the details of scenarios that we just plain want to avoid. I think that your time would be better spent by first asking “what scenarios do we want to realize” and then thinking about how to get there. Eliezer is adopting this strategy…

frelkins

@Roko

Doesn’t Robin have 2 famous ethical maxims: “try to be better humans” and “actions should be as noble as possible, but not nobler?” Aren’t these enough to cover this conversation?

bambi

Virge:
> (a) why coded AGI cannot or will not be produced by current human efforts, or
> (b) why a self-improving AGI is necessarily limited or extremely slow to self-improve?

I can’t think of a place where Robin has explained these, nor would I expect him to (though it would be interesting). It’s a question of burden of proof. If somebody makes up bizarre future scenarios we expect them to demonstrate their likelihood, not for others to convincingly prove them impossible.

For (a), it partially depends on what you mean by “current”. Since half a century of effort has produced squat, it’s not unreasonable to project some more squat, unless provided a reason not to. The credulous always latch onto today’s handwaving as a “reason” when they really want to, which leads them to consider other people unreasonably skeptical. While there is no provable reason to think that researchers will never understand intelligence enough to code it, nobody has demonstrated such understanding yet, nor even convincing progress toward the theoretical foundation on which such coding could be based.

Again, for (b), it depends what you mean by “extremely slow” — even if the millions of man-years finally produce a coded AI, how many millions of years should we expect it to take, on its own, to produce a better coded AI? Do you consider being merely millions of times more capable than human beings to be “extremely slow”? That’s what would be required for any alarming self-improvement rate to occur. As to whether it is “necessarily limited”, well, if you find it more plausible to posit “unlimited” capability for a coded AI, I guess that’s up to you.

Hal

James, the problem is that our present morality is engineered into us by an evolutionary environment which no longer exists. Why should we honor that one? Evolution does encourage us to reproduce, but it does so via the sex drive. An alternative would have made us value reproduction per se, and given us instinctive awareness that sex would lead to reproduction. But presumably that would have been too complex to engineer into our more primitive ancestors. This contingency hardly seems a sound basis for favoring the resulting set of values.

However I admit that it is hard to come up with arguments to choose one morality over another. Consistency would be desirable at a minimum. You might review the discussion around Parfit’s “Repugnant Conclusion” which to me suggests an inconsistency in failing to value new life sufficiently.

Virge, in answer to your question, although I think Robin has more to offer the world than I do, if you were to balance enough copies of me against him dying, then yes, at some point I think it would be moral to favor the copies. Whether five is enough is hard to say. But as I was trying to indicate, these kinds of dilemmas are not specific to the issue of copying. Would you save Robin’s life or that of five random people, if you had to choose one or the other? How about two people? How about ten? What if they are old and about to die? You can come up with a million variants. It is always hard for us to balance life against death. And see what you think about the Repugnant Conclusion linked in the previous paragraph.

loqi

It seems the accepted scenario here presents an artificial dichotomy between “funded” (living) and “unfunded” (dead) ems. Is this really the case? The primary thing that determines an em’s (objective) lifespan is the longevity of its storage, not necessarily the CPU time allocated to it. If processing, not storage, is the bottleneck, then all it takes is a small amount of generosity (one wealthy storage-baron) to “freeze” unfunded ems. If decent compression is applicable to storage of forked ems, this type of coverage could easily be universally practical.

But why go straight from 1 to 0? An em can be slowed down to a near-infinite degree. A 6502 pried out of an NES, given access to sufficient storage, could run an entire civilization, albeit at a tremendous slowdown.

Continual genocide certainly seems possible, but as far as I can tell, you’d need to be confident that storage demands will keep pace with computing demands to put much weight into such a belief.
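loqi’s cost argument can be put as a back-of-envelope sketch. All numbers below (the running cost, the 0.1% storage fraction, the 1000x slowdown) are illustrative assumptions for the sake of the argument, not claims about real hardware:

```python
# Sketch of loqi's point: if storage is a tiny fraction of the cost
# of CPU time, "freezing" or drastically slowing evicted ems is cheap.
# All numbers here are illustrative assumptions.

RUN_COST = 1.0            # assumed: cost of one full-speed em, per year
STORAGE_FRACTION = 0.001  # assumed: storage costs 0.1% of running cost

def yearly_cost(slowdown):
    """Yearly cost of an em run `slowdown` times slower than real time.

    Storage must be paid in full regardless of speed; CPU cost
    scales inversely with the slowdown factor.
    """
    return RUN_COST * STORAGE_FRACTION + RUN_COST / slowdown

full = yearly_cost(1)     # running at full speed
slow = yearly_cost(1000)  # loqi's massive-slowdown option
print(full, slow, full / slow)
```

Under these assumptions the slowed-down em costs roughly 500 times less per year than a full-speed one, which is loqi’s point: eviction needn’t mean going straight from 1 to 0.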

Khyre

“Khyre, I’d rather create a real person, even with a limited lifespan, than a zombie/willing-slave without respectable desires.”

Well me too, but my personal preference is irrelevant if my offspring are going to compete with more productive but less human ems.

“But I doubt creating a productive zombie can be done quickly.”

(“Zombie” has the wrong connotation – think more of a bright, enthusiastic cult member. But you did say “productive zombie”, so that’s ok.)

I’m not so sure about the long time frame. We’re not talking about understanding memory encoding or reverse-engineering anything “deep” about human intelligence; we’re talking about psychological conditioning. If you can stomach it, stretch your imagination …

Think about removing all ethical limits from experimentation into psychological conditioning, and having the ability to perform perfectly repeatable experiments.

Imagine if you could get hold of a pre-adolescent em (f**k that’s a horrible thought) – the extra plasticity might be worth the longer training time.

You hear about new discoveries in neuroanatomy just about every month from fMRI. Imagine what will be known by the time uploading is possible. Even if you don’t know exactly where all the neurons go and why, you might be able to engineer gross personality changes. You can experiment as much as you want.

Virtual psychopharmacology.

If I can think of that off the top of my head, think what would be achievable given the enormous commercial pressure to produce a willing slave. Yes, the development might be f**king horrible, but if you’re going to assume that ems can be involuntarily KILLED, I don’t think you can assume there will be any ethical restrictions on em development.

Thanks for the Repugnant Conclusion link, Hal. On first read, it amazes me to see serious philosophers employing mathematical models that are clearly unstable right at the point where they’re drawing their strong conclusions. Any of them working with welfare values on a linear scale that can take both positive and negative values must have noticed the discontinuity in their equations at zero. The tiniest change in the definition of a marginally positive quality of life can make the total welfare go from being the best of the best to being so negative that it isn’t worth considering.

It’s really not surprising that one can find paradoxes in a welfare function when the mathematics is obviously not modeling what they want it to model over the whole domain of interest. I’ll have to think about it a little more. The only paradoxes I’m seeing so far come from unrealistic modeling.
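Virge’s instability point can be shown with a toy total-utilitarian welfare function; the population and welfare numbers below are made up purely for illustration:

```python
# Illustration of Virge's instability point (numbers are made up):
# total utilitarian welfare = population * (average welfare - zero point).
# Near the threshold of "a life barely worth living", a tiny shift in
# where that zero point sits flips an enormous total's sign.

def total_welfare(population, avg_welfare, zero_point=0.0):
    # Welfare is measured relative to an assumed neutral level.
    return population * (avg_welfare - zero_point)

pop = 10**12   # a vast population of barely-happy lives
avg = 0.01     # each life judged marginally positive

print(total_welfare(pop, avg, zero_point=0.0))   # large positive total
print(total_welfare(pop, avg, zero_point=0.02))  # tiny shift: large negative
```

A shift of just 0.02 in where “a life barely worth living” sits flips the total for a trillion lives from strongly positive to strongly negative, which is exactly the discontinuity Virge describes.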

Lightwave

1. I am somewhat appalled at how easily everyone discusses the use of ems as tools which can be created and shut down at will. If ems are superior to bio-humans, maybe it’s the bio-humans that should be shut down.

2. The whole ems economy scenario strikes me as very unlikely. Eliezer somewhere said to be wary of things that are fun to argue, and I think that’s what everyone here is doing.

Robin Hanson

loqi, it is hard to imagine storage demands not being at least 0.1% of running demands, or ems not considering running 1000x slower as something akin to death.

Lightwave, can my scenario really be more fun to argue than the basement AI that suddenly takes over the world?

Hal, even the evolved morality we inherited does not entirely approve of the morality that ems would evolve; the question is how hard we’d be willing to work to change their world/morality to match ours.

frelkins, I’m pretty sure I have no famous ethical axioms.

Virge, bambi is right; I’ll assume crude trends continue until I see reasons to think otherwise.

Tyrrell and Roko, a hand-coded AI foom remains possible after ems, but the context would be different in important ways.

loqi

it is hard to imagine […] that ems wouldn’t consider running 1000x slower as something akin to death

I’m a bit suspicious of statements that begin with “It is hard to imagine” or “I can’t see how” when speculating on the non-immediate future. They convey a sense of misplaced confidence in a huge space of potential counter-examples. Whether or not it is probable, it is certainly not hard to imagine, particularly when talking about something as unconstrained as a future em’s philosophical intuitions.

Anyway, it is what it is regardless of what the ems consider it to be. It’s not total suspension, and it’s not information-theoretic death. Put yourself in the role of the evictee, with the options of termination, archival, or massive slowdown. I believe very few would choose archival, and fewer yet termination, especially once there’s any established “slow culture” to participate in.

Daniel Carrier

Why does it matter if there’s an established slow culture? From your point of view, there will be one soon.

James Andrix

Hal: “…our present morality is engineered into us by an evolutionary environment which no longer exists. Why should we honor that one?”
We don’t ‘honor’ anything. We want what we want. For exactly the same reasons I don’t want the future to give rise to a paperclip maximizer, I don’t want the future to give rise to societies that commit genocide as common practice.

Like I said before: This scenario already makes me want to look into friendliness, even without a singleton, because it is what I consider an unfriendly outcome. That the people who would exist in this outcome would be ok with it is moot, just as I don’t weigh the values of the paperclip maximizer as relevant to what I want.

James Andrix

Hal:
Rereading some of your arguments, I get the impression that you would favor many copies of you over saving Robin, because at some point the many Hals could do more good than Robin could. Is that right? This seems different from the idea that we should favor many Hals because they additively have a better quality of life than one Robin.

Tyrrell McAllister

Robin Hanson: “…a hand-coded AI foom remains possible after ems, but the context would be different in important ways.”

Is it a context that makes Friendliness of the hand-coded AI less of a concern? If so, how?