Random access memories: My time at a singularity conference

Androids, millionaires, and idealism at a summit on the singularity.

I'm sitting in the far left corner of Lincoln Center's Alice Tully Hall in New York, in a dark spot under the balcony, watching a man who is not a man.

On the brightly lit stage, the man sits comfortably in an Aeron desk chair, hair falling into his eyes as he gazes idly about the room through glasses, hands in lap. The emcee of the Global Future 2045 conference, Phil VanNedervelde, introduces him as Dr. Hiroshi Ishiguro, director of the Intelligence Robotics Laboratory in Osaka, Japan. He's a leading expert in the creation of lifelike robots. As VanNedervelde steps off stage, the man looks around at the crowd and begins to speak.

"In order to investigate humans, we need to have a test bed. I am the test bed," he says. "The professor is using myself to study the Hiroshi likeness. I am the most important research he has out... Now, let's welcome professor Ishiguro." With that, the professor himself strides onto the stage and the "man" in the chair is revealed as Ishiguro's hyperrealistic robotic doppelgänger.

Wait 30 years, and the distinction between the "man" and the "machine" might not be so easy to make. If the conference organizer, Russian multimillionaire Dmitry Itskov, has his way, robots like Ishiguro's will make us immortal—perhaps as soon as 2045.

Consciousness transfer

The conference I attended in June is the product of Itskov's 2045 Initiative, which has set itself the goal of transferring "an individual's personality to a more advanced non-biological carrier." The side effect of that ability to transfer personalities would be that one never has to die with one's body; one would become, potentially, immortal.

Itskov made the money that funds the 2045 Initiative with a blog about the Russian Internet, tarakan.ru, and an online newspaper, Dni.ru, according to a New York Times profile. This developed into a company, New Media Stars, with ties to the Russian government. Partial ownership of the company eventually made Itskov rich, but it did not prove fulfilling. Itskov, who now lives more like an ascetic monk than a tech multimillionaire, spends his life promoting the 2045 Initiative in the hopes of overcoming humanity's limited span of years.

The first step in Itskov's plan is creating android avatars that are controllable by a brain-machine interface. According to the 2045 Initiative, this will give humans the "ability to work in dangerous environments" without personal risk. The androids, which are somehow both more capable and more expendable than the average human, should appear by 2020 if all goes well.

By 2025, the medical and technological communities should work out how to make an "autonomous life support system for human brains," to save people whose bodies are "worn out." Once the brains are pickled away in their Mason jars, the goal is to recreate the brain as a kind of computer by 2030, enabling people to transfer their consciousnesses to another host. These minds that are now "substrate-independent" should eventually be transferable into new bodies with "capacities far exceeding those of ordinary humans." In sum: transferable consciousnesses into artificial bodies by 2045.

Dmitry Itskov opens his conference, touting the need for open discussion.

The 2045 Initiative's mission, described more broadly, is associated with the "technological singularity," the point at which humans achieve superintelligence aided by technology. The singularity is the Big Bang in reverse. With the Big Bang, everything shifted into existence so suddenly and radically that we have no reasonable way of figuring out what came before. Proponents of the singularity say its changes would be such a significant tipping point in how our world operates that we have no way of predicting what will come after.

We can look to the greats of science fiction for some predictions on what might come of a singularity. They almost universally foresee disaster—the Cylons of Battlestar Galactica, the replicants of Blade Runner, the circumventions of Asimov's laws of robotics. Supporters of the singularity see more upsides.

While Itskov has a general plan, the specifics of getting to consciousness transfer by 2045 remain murky. Even the question of how to achieve the first landmark goal in GF2045's timeline—affordable android avatars controllable by brain-machine interfaces by 2020—remains unanswered. Skeptical, I attended the conference hoping to see someone, anyone, indicate that even the first step of the 2045 Initiative's timeline was attainable, let alone practical.

The personal motivations of the singularity have to do with overcoming the “imperfections” of humans: we are temporary, physically restricted in capability and location, and not smart enough. Each of us dimly ambles through life and makes the planet a little worse with each pass. Singularity boosters aren’t the first sect to target immortality, but they might be the first group motivated enough to try to systematize it.

The morning of the first day of the conference, the lobby was quiet as a few dozen attendees milled around while a flute and guitar filled the room with high-brow music. I started talking to a wide-eyed gentleman who sang the praises of TA-65, an expensive "telomerase activator" that is supposed to slow the aging process. "When I'm about 35, I'm probably going to start it," he said.

The young man said he was at the conference to see longtime singularity booster Ray Kurzweil. He had not read any of Kurzweil’s books, but he watched his videos online. “You’ve heard of YouTube?” he asked.

Many supporters of the singularity movement are focused on their personal well-being. At a minimum, they want to live long enough to see the advancements that will make them smarter and better.

But the singularity also has a larger purpose to them. Refining humans into perfect living, feeling androids would bring immortality, reduce disease and suffering (as we know them from our human position, anyway), and make it possible to regulate people's needs in a way that is compatible with a planet and ecosystem that have proven, time and again, too fragile to withstand our wanton desires for power, convenience, and dominance.

Dr. James Martin predicts the various impending disaster scenarios for humanity.

Dr. James Martin, an IT consultant and early employee of IBM, opened the conference with an hour-long rundown of the problems facing the planet (like climate change) and how they are influenced in large part by an overgrown human population. Dr. Martin called this "the make or break century," the moment we could see "the birth of a global renaissance or things descend into chaos. Or both." Put more succinctly, things will change.

As Martin ran through his list of damning statistics—including the frightening number of cows and their resulting volume of gas emissions—it was hard to shake the feeling that the presentation was a massive troll of a room full of people who have at least a passing interest in living forever. We're ruining the planet because... we're living too long and reproducing too often? Obviously the solution is to advance technology to the point where no one needs to die!

But Martin pointed out that a state of consciousness housed in some abstraction of a human body might lead to huge advantages. An android doesn't need all those cows, for instance—or even a temperate planet. As I listened, it hit me: Dr. Martin was describing all of the problems that won't be problems once people aren't people anymore.

Martin's consciousness won't be one of those transferred to an android, however. Several days after speaking at the conference, Martin passed away in a swimming accident.

Casey Johnston
Casey Johnston is the former Culture Editor at Ars Technica and now does the occasional freelance story. She graduated from Columbia University with a degree in Applied Physics. Twitter: @caseyjohnston

172 Reader Comments

If I were to kill you but at the same time replace you with a copy that behaved exactly like you, could pass any test to convince everyone you know that it was you, and even thought to itself that it was you... would you still be dead?

I note you are describing the copy only in terms of functional equivalence, but by "kill" and "replace" you seem to be referring to structural equivalence (correct me if I am wrong).

a) From a functional perspective, the answer is clearly "no", I am not dead.

As an analogy, I would ask you to imagine you have installed Windows on VMware Fusion running on a Mac. Under domain/workgroup settings you set the name to be "Madestjohn", you edit a few documents in Word, and then suspend the VM instance so that the image state in RAM is copied to disk. The original location in memory is reclaimed by the host OS for use by other apps. You then resume the instance, and it is reloaded into RAM, likely in a different location. Your document edits are the same. The name is still "Madestjohn". Everything operates the same. Functionally, it is indistinguishable from the original you suspended a couple of minutes ago. Has anything been lost in this process?

b) From a physical perspective, do you now owe Microsoft $100 for a new licence?

Minds aren't the same as computer hardware. Your view *only* makes sense from an external perspective. What matters is the internal perspective.

I'm an atheist. There's no special magic inside me powering my brain. It's all physics and chemistry and science. We'll be able to reproduce that in machinery and computers one day, or just upgrade what I have now so it lasts. I want that to happen before I die, or at least before my children die. Before my parents die seems unlikely, but I can hope.

No, I don't know the answers to all the many objections. But I would rather face those problems than die.

While I agree with not wanting to die, I really disagree with the Atheism part. I have no particular religion. But I'm not "Atheist" either, because that would require me to have a belief in Atheism itself, making me no different from the people who have a belief in God or Jesus or the Bible or whatever their religion might be. You are still holding a belief that is not known. If you are Atheist, you are still religious, and Atheism is your religion of choice.

That being said, I don't think transferring consciousness will ever be possible, not in a prolong-one's-life sort of way. We'd still die; there would just be an avatar that thinks it's me or has my mannerisms and certain transferable personality traits. Machinery and computers will never reproduce what took billions of years to develop biologically; we'll never even come close. Controlling an avatar remotely via a wireless brain interface is child's play compared to the multitude of feelings, emotions, and neuropeptides that color our world and make us who we are. Every time we feel something, a new combination of drugs is released into our bloodstream, changing our perspective, changing how we think, and changing how our bodies operate.

We are a thousand or more years away from ever being able to reproduce, in a machine, the ability to feel emotion and physical pleasure. Even if we could transfer our consciousness and suddenly "wake up" in a new body, that new body would be cold and lifeless. A husk or shell. A prison for our minds. It would drive us mad and make us wish for death. If living longer means taking away what makes us human, our feelings, our honed senses... touch, smell, hot, cold, light, heavy, soft and gentle, hard or fast, adrenalized or excited, turned on and pleasured... then I don't think I'd much like that life. Those are the things I don't think anything but biology and a billion years could ever reproduce. Of course, I could be wrong, in which case I'll change my current "opinion" and all will be well.

That is the benefit of not holding beliefs such as Atheism or Christianity. I can change my mind at will, or as new information is presented or learned.

So, if we can edit out bad memories and experiences, as some hope will happen... does that mean we'll be making the same mistakes over and over and over?

No, it means that when the NSA's computers flag your comments as being somehow negative or not what the ruling party would like, they can just take you in, reprogram you to their liking, wipe the memory of the reprogramming from your brain, and send you on your way, none the wiser.

But I'm not "Atheist" either, because that would require me to have a belief in Atheism itself, making me no different from the people who have a belief in God or Jesus or the Bible or whatever their religion might be. You are still holding a belief that is not known. If you are Atheist, you are still religious, and Atheism is your religion of choice.

Off-topic and absurd nonsense.

There are mountains of evidence for a natural world vs ZERO evidence for a supernatural/magical one.

My disbelief in magic/the supernatural is in no way a religion.

Correct, Kin24's description is closer to agnosticism, the belief in a world shaped by higher forces but without any gods as humanity defines them.

I think the world may be better off if previous generations die (in their due time). Can you imagine growing up as a child, knowing that the grown-ups will always be there, and will always consider you a junior member of society, forever? Imagine the generation gap in that world.

Who says there will be children? If no one dies, the world population would skyrocket, and we'd all be doomed. As an example, I found a link that states there are approximately 356,201 births per day and 153,781 deaths per day, meaning the population increases by approximately 202,420 per day. Take away all deaths and overpopulation becomes an even bigger problem, with roughly 154,000 extra people around every single day. Millions per year. Of course, births would probably decline in such a society. There would be less need for children, less drive for them, and those who had been transferred to an artificial body might not even be capable of breeding. As more and more people turned to immortality through artificial means, there would be fewer and fewer births, until finally no one was capable of breeding and everyone lived until their artificial bodies broke down, maybe forever, or until their money ran out (companies are NOT going to be doing this for free; it will not be a right, and it will be VERY expensive). It could literally be the end of the human race.

Let's say we figure out the births-vs.-deaths problem and it becomes a non-issue. Then we have to deal with the cost issue. Who will be granted this newly extended life, and what do we do with those people? Will we be competing for jobs with them? How do you compete with a machine? What kind of power source will they run on, and what resources will they need? Will they be in competition with normal humans for sustenance? If we run out of food due to overpopulation, who gets the rations? The younger, natural humans? Or those who have already lived 1,000 or more years?

If we ever succeed at this, which I doubt, it could very well be the end of the human race as we know it.
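The birth-and-death arithmetic in the comment above is easy to check. Here is a quick sketch using the commenter's daily figures (which are the commenter's own, unverified numbers):

```python
# Back-of-the-envelope check of the commenter's population figures.
births_per_day = 356_201  # commenter's (unverified) global births per day
deaths_per_day = 153_781  # commenter's (unverified) global deaths per day

# Net growth today: births minus deaths.
net_growth_per_day = births_per_day - deaths_per_day
print(net_growth_per_day)  # 202420

# With no deaths at all, daily growth equals the full birth rate.
# The extra growth over today's rate is exactly the deaths that no longer happen.
extra_per_day = births_per_day - net_growth_per_day
print(extra_per_day)  # 153781

# Yearly total in a no-death world: roughly 130 million new people per year.
print(births_per_day * 365)  # 130013365
```

Note that the subtraction comes out to 202,420 per day, and eliminating all deaths would add the 153,781 daily deaths back on top of that, roughly 154,000 extra people per day.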

Correct, Kin24's description is closer to agnosticism, the belief in a world shaped by higher forces but without any gods as humanity defines them.

I don't adhere to that belief either. But the original poster stated he was an atheist and therefore "believed" that after death there is nothing of him that carries on. Since one has to die to find that out, it is no less a belief than Christianity, and as far as I'm concerned they are both religions that people adhere to. Personally, I don't know either way. As for the claim that there is zero evidence of supernatural things, I'd say that's complete bullshit. Things happen all around us that we can't explain, yet. The nature of human consciousness and whether or not there is life after death is one of those things. To hold any belief, one way or the other, is your religion of choice.

While I agree with not wanting to die, I really disagree with the Atheism part. I have no particular religion. But I'm not "Atheist" either, because that would require me to have a belief in Atheism itself, making me no different from the people who have a belief in God or Jesus or the Bible or whatever their religion might be. You are still holding a belief that is not known. If you are Atheist, you are still religious, and Atheism is your religion of choice.

[...]

That is the benefit of not holding beliefs such as Atheism or Christianity. I can change my mind at will, or as new information is presented or learned.

You are an idiot. I say this without reservation. Atheism is no more and no less than a rejection of the claim that deities exist. It is natural that this rejection extends to all supernatural claims. It is also an evidence-based stance which is open to changing if evidence actually appeared that any supernatural claim was true.

Do you behave as if you need to accept Jesus as your savior to have an afterlife? If you don't, then you do not hold that belief. You aren't even hedging on that belief. You can talk all you want about how you give equal weight to every imagined permutation of supernatural claims, but if you never act as if they had equal probability, you don't really consider them as possible.

You say that weird things happen all the time. Why aren't these "weird things" being documented given that nearly every human has a phone with a camera that can immediately upload images and video to the internet? Despite the ubiquity of recording equipment, the amount of evidence for ghosts, miracles, sasquatches, etc, has not increased.

If you are Atheist, you are still religious, and Atheism is your religion of choice.

It's rather impressive how incorrect that sentiment is. I've noticed that comments like this (as well as "science as a religion") tend to come from people with a very religious perspective, who try to put all world views in the context of religion. Atheism has no ritual, no belief system, no morals or ethics, and it does not seek to explain anything; it is simply the preferred nomenclature for one who rejects religious belief.

"If you are a Pacifist, you are still warlike, and Pacifism is your war of choice."
"If you are Vegetarian, you are still a carnivore, and Veggies are your meat of choice."
"If you are Dead, you are still living, and not breathing in a pine box is your life of choice."

To those researching robotics: the holy grail is cognition. Focus on making robots able to think and reason instead of trying to make them look like people. The former is A LOT more important than the latter.