Michael Graziano has an article at the Atlantic looking at the plausibility of mind copying. He doesn’t beat around the bush, going all in with the title: Why You Should Believe in the Digital Afterlife, although the actual text of the article is more nuanced, and echoes what I hear from most neuroscientists.

As a neuroscientist, my interest lies mainly in a more practical question: is it even technically possible to duplicate yourself in a computer program? The short answer is: probably, but not for a while.

He proceeds to give a basic overview of how the brain processes information, which I highly recommend reading if you’re skeptical that the mind is essentially information processing. He doesn’t shy away from noting the enormous difficulties.

To copy a person’s mind, you wouldn’t need to scan anywhere near the level of individual atoms. But you would need a scanning device that can capture what kind of neuron, what kind of synapse, how large or active of a synapse, what kind of neurotransmitter, how rapidly the neurotransmitter is being synthesized and how rapidly it can be reabsorbed. Is that impossible? No. But it starts to sound like the tech is centuries in the future rather than just around the corner.

And then there’s the largest difficulty: would the resulting software mechanism be conscious? This may always remain a metaphysical debate, even if or when minds start being uploaded, but Graziano makes this point.

But in every theory grounded in neuroscience, a computer-simulated brain would be conscious. In some mystical theories and theories that depend on a loose analogy to quantum mechanics, consciousness would be more difficult to create artificially. But as a neuroscientist, I am confident that if we ever could scan a person’s brain in detail and simulate that architecture on a computer, then the simulation would have a conscious experience. It would have the memories, personality, feelings, and intelligence of the original.

Graziano goes on to discuss the difficulties inherent in the fact that brains don’t exist in isolation, but are integrated in a tightly coupled manner with the rest of the body, including the peripheral nervous system, the glandular system, and other aspects of the body. Any successful simulation would have to deal with all of that complexity.

He is actually optimistic that computational capacity will continue to increase enough to handle a simulation of that complexity, noting that he thinks quantum computing will open up possibilities. But I don’t share his certitude on this.

The main problem is that it’s not enough simply to do the information processing that the brain does; a computer would also have to simulate the hardware. If you’ve ever run software engineered for a different hardware platform in a hardware emulation program, you’ll know that such emulation typically comes with a severe performance penalty. The partial emulation of biological neural processing achieved so far has required immense processing power.
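To get a feel for why the cost is so high, consider that even a heavily simplified neuron model has to have its state stepped forward in small time increments, and a whole-brain emulation would be doing that for tens of billions of neurons at far finer biological detail. Here’s a minimal sketch in Python of one such toy model, a leaky integrate-and-fire neuron; the model and all its parameters are purely illustrative, not anything from Graziano’s article:

```python
# Toy leaky integrate-and-fire neuron: a stand-in for the kind of per-neuron
# state a brain emulation would have to update on every time step.
# All parameters here are illustrative, not physiological claims.

def simulate_lif(n_steps, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, input_current=20.0):
    """Step a single neuron forward n_steps times; return its spike count."""
    v = v_rest
    spikes = 0
    for _ in range(n_steps):
        # Euler integration of dv/dt = (-(v - v_rest) + input_current) / tau
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:     # membrane potential crossed threshold: spike
            spikes += 1
            v = v_reset       # then reset
    return spikes

# One simulated second at 1 ms resolution, for a single neuron:
print(simulate_lif(1000))
```

Even this crude model takes a thousand update steps per simulated second per neuron. A faithful emulation would multiply that by roughly 86 billion neurons and 100 trillion synapses, each modeled in vastly more detail, which is where the immense processing requirements come from.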

Moore’s Law is usually cited as an argument that any computational capacity issue will eventually be solved. However, the actual Moore’s Law is an observation of a trend: that the number of transistors on an integrated circuit chip doubles every two years. Gordon Moore, the originator of that observation, noted early on that the trend would eventually end. Recent industry reports indicate that it is now approaching that end, with progress coming more slowly and likely to peter out within the next few years.
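The power of that doubling trend is easy to state quantitatively. A quick sketch, using the roughly 2,300 transistors of the 1971 Intel 4004 as a commonly cited baseline (the projection below is just the naive trend extended forward, not a prediction):

```python
# Naive Moore's Law projection: transistor counts doubling every two years.
# The Intel 4004 (1971, ~2,300 transistors) is used purely as an
# illustrative baseline.

def moore_projection(start_year, start_count, target_year, doubling_years=2):
    """Project a transistor count assuming an unbroken doubling trend."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# 45 years of doubling every two years is 2**22.5, a factor of ~6 million,
# which lands in the ballpark of the multi-billion-transistor chips of today:
print(round(moore_projection(1971, 2300, 2016)))
```

The point of the exercise is how extraordinary the trend has been; a few million-fold improvement is not something we can simply assume will repeat once the underlying physics runs out.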

Transistors can only get so small. Current production processes have features scaled at 14 nanometers. It’s generally recognized that silicon will reach its limit at around 5 nanometers, as quantum tunneling becomes an issue. Graphene may extend that somewhat further, but we appear to be nearing the limits of easy capacity increases in classical computers. Some researchers have managed to scale logic gates down much further, but it’s not clear how commercially viable those implementations might ever be.

Quantum computing may dramatically increase capacities for certain types of processing, though I’m not sure simulating a biological neural network would fall into that category; I’ll admit I could be underestimating the possibilities. The big problem right now is that quantum processors have to operate at near absolute zero (0 kelvin), something that is unlikely to happen in your desktop computer.

Still, there’s room for optimism. The brain itself operates at 37 degrees Celsius and has a very modest power requirement of about 25 watts. While the processing of any one neuron is very pokey by electronic standards, the brain more than makes up for it with its massively parallel architecture.

All of which indicates that we’ve likely only scratched the surface of possible information processing architectures. The end of Moore’s Law will likely force a type of innovation that simply hasn’t been necessary for several decades: looking at alternate ways of processing information. That progress will likely come in fits and starts, but there’s no reason to suspect it won’t come.

All that said, it may well eventually turn out that emulating the brain hardware (or “wetware”) will never be an effective strategy. Maybe, to have a functional copied mind, we’ll have to recreate a new substrate very similar to the original, in other words, a new biology-like brain. Doing this while imparting the information from the source mind might be profoundly difficult, but again, taking a very long view, there’s nothing that fundamentally prevents it.

This will require a very thorough understanding of the brain and mind. However, having that understanding may actually enable another strategy. We may find that the best way to copy a mind is to put it through a translation process, to, in effect, port it to a new platform in the same way that programmers sometimes port software from one hardware platform to another.

This is easier to understand if we consider what would happen if a part of the brain were damaged and we swapped it out for a technological replacement. If someone’s, say, V1 visual processing center were destroyed, and we replaced it with a computer unit that processed vision in its own way, but provided the same information to the rest of the brain that the original V1 center did, would we still have the same person? If we replaced other components as they failed, at what point would the person cease being the original? And what’s different if we do it all at the same time?

Of course, this will make skeptics even more convinced that we haven’t really copied the mind, only set up an imitation. But it seems to me that skepticism is going to come regardless, and that people will still be arguing about it long after the first successful copy is made.

That reminds me of an ancestor of mine who reportedly wouldn’t listen to those newfangled phonographs because anything other than live music wasn’t “real” music. Of course, “live” music now typically goes through an amplifier and speaker system, so there probably wouldn’t be much music he would consider real now.

Douglas Adams wrote, “I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.”

I agree very much that “we’ve likely only scratched the surface of possible information processing architectures”. Perhaps an end to Moore’s Law would provide the necessary economic incentive for pushing into radically new concepts of computing.

I haven’t read the article, but will do. In the meantime, may I ask what is likely a daft question, Mike? What is the purpose of copying a mind, and given a mind (if such a thing exists) is nothing but flux, what is it that is being copied?

The purpose? Having a version of us survive biological death. And given that space exploration has been shown to be the domain first and foremost of robots, it might enable human minds to experience that exploration, most notably interstellar exploration.

Of course, you might argue that if the physical you doesn’t survive, then what’s the point? But if I’m on my death bed, knowing that my consciousness is about to end, then having a version of me be preserved seems attractive. And subjectively, the copied me would consider itself to be me.

Given that any initial method to do this is likely to be destructive anyway, there won’t be an original me left to be unhappy about it, meaning that subjectively, it would seem like I went to sleep prior to the scanning process and woke up in whatever place the copied mind was running in.

But if we did figure out a way to do it non-destructively, maybe we could also figure out a way for the original me to receive the memories of the copied me and vice versa. If so, then original-me would feel like I was also copied-me, allowing original-me to feel like only one of my bodies was dying when the time came.

I still don’t see what it is that ‘survives our biological death’, Mike. The notion seems to presuppose some enduring instantiation of selfhood that gets transferred from one carrier system to another, but no such thing exists – not in neurophysiology, nor in any everyday introspective/subjective analysis. There seems to be the idea that a mind (whatever that connotes as an original or as a copy), once connected to a similar, but artificial, means of sentience, results in something tantamount to a continuation of the biological being upon which it supposedly is based, and which itself only imagines its existence as an enduringly instantiated self.

The concept seems to be about replicating an illusion or deception of sorts; so to take your scenario of me being on my death bed, then I’m not at all sure I would find it ‘attractive’ to have some existence seemingly preserved in memory, disposition, or what have you. Why would that be attractive in actuality, I wonder? Perhaps it might be, as it seems to be for those who feel they’ve lived past lives, and extend their illusion of continuity not just in their current life, but back into imagined previous ones. What is it that they think transmigrated – a soul? What is it that gets duplicated in the mind copy – a soul? Obviously not – so what?

Hmmm. Well, if you’re going to look at it that way, then it’s an illusion that I’m the same person tomorrow as I was yesterday. Definitely within a few years, all of the atoms in my body (except for parts of my bone structure) will have been replaced. My future self isn’t me, and I don’t owe it anything. Since a continuing self is an illusion, I might as well end it all now, or at least lead a hedonistic life with no thought for tomorrow.

Of course, I don’t, because I have a powerful instinctive urge to continue existing as long as possible. Whether the self is an illusion or not (I’m on the side of saying it’s not, but I’ll admit it depends on what we mean by “illusion”), we have an intense desire for that self to continue. Unless the transition threatens to heavily alter my personality, I would consider the uploaded or copied version to be a continuation of me.

I do agree in not seeing the point with reincarnation. If I’m not going to remember my current life, then I’d have to say that a huge part of me wouldn’t have survived. I wonder what the difference is between that and just making a clone. The clone would have all of my innate dispositions, but none of my learned ones. It seems like it would be me just as much as a reincarnated version would be. (Well, minus the karmic adjustment in station.)

Yes, well of course we are not in fact the same person tomorrow as we were yesterday. There is only an idea that we are – the mind is not the same, nor is the body [i.e. apoptosis]. I don’t find that acknowledging this leads me into nihilism or neglect, Mike, as I am alive, and life seeks its own continuance whether or not a belief is held as to a continuing self – hunger, pain, discomfort, and so on, all tend towards producing life-sustaining responses, regardless of any thoughts of, or desires for, my (supposed self’s) continuation.

I suppose it’s fair to say that if we believe in an enduring self (not merely as a social construct), and if we argue based upon its existence, even to the point of positing its virtualised or actual transmigration, then we ought to be able to demonstrate what it is, or what its referents are, or effectively repudiate the argument that asserts its non-existence. No one ever seems able to do this, instead appealing merely to intuitions, albeit that they are almost ubiquitously held. I think the elephant is still in the room – what is it that in effect transmigrates or is replicated in the mind copy?

Hariod, I may be missing something here, particularly after the long post I did recently on Damasio’s layered conceptions of self. I’m not sure how accurate his theories will turn out to be, but it seems likely to me that the self is a useful data model constructed in the brain. I could see that conceptions of a self more constant than that could be regarded as an illusion, but isn’t there something that’s experiencing that illusion?

I would agree that people’s conception of the self as a soul (or similar notions) are social constructions. But that social construction arises from people’s instinctive desire to keep some version of their self existing as long as possible. In that sense, recording the data model self and giving it existence in a new body or environment strikes me as something that can coherently be desirable.

Let’s accept that Damasio’s theory of self, or some other computational/algorithmic model, is correct – and I do, as I regard the ‘self’ as being some kind of evolved trick of the brain. That means that what’s being copied is nothing more than a function – a prismatic way of looking at phenomena, one which includes an assumed and unquestioned belief-idea that something (the ‘self and mind of me’) endures throughout the flux of phenomena, even though that ‘something’ is no more than a subroutine, or whatever, an illusion-creator.

The subjective experience will carry along a recallable history as a result of transferred memories and dispositions, but this history will not now, nor ever previously, have been experienced by any enduring entity of selfhood. Nor, for that matter, any enduring mind – I regard mind as nothing more than a convenient concept. You say “isn’t there something that’s experiencing that illusion?”, by which you suggest an enduring self or mind. My position (fwiw) is that no, nothing experiences anything behind the phenomena, as if some interior homunculus, or experiencer of experience, or thinker of thoughts, or subject apprehending objects. There is just the illuminated phenomena and its underlying system, which is forever only a loose aggregation of other phenomena, themselves being in a state of flux.

So, to answer my own question – what is it that in effect transmigrates or is replicated in the mind copy? – then it is certain patterns of memory and disposition along with the algorithm that creates the prismatic self-like view, granted, but no actual enduring self or mind, as (I say) they never once existed so as to be replicated or to transmigrate. As a pragmatist, I might say that I don’t care if it’s all some sort of (now) digital deception, I just want the belief I am immortal, that I, as I (erroneously) believe myself currently to be, will continue. On the other hand, I might ask what is it that exists now, and which has existed throughout this life, which I seek to continue in some other life? What and where are these enduring self or mind which I seek to copy?

Hariod, it feels like we agree ontologically. The only question is whether the word “self” remains productive, once we’ve concluded that it isn’t the naive version that many people hold.

My philosophy on these kinds of questions is to ask if there is still something there to be referred to, such that if we dispensed with the original phrase we’d have to come up with a new one. “Self” doesn’t refer to what people might naively consider the self, but I think it does refer to a pragmatic information model. If we dispense with the word “self”, then we’d have to come up with a new name for that model.

But all of that said, I think the main point for mind uploading or copying is that it doesn’t introduce identity issues unless your sense of identity is still caught up with things like ghostly souls. I think you’re right that it’s a function, or galaxy of functions, that is being copied.

If mind, consciousness, and the self are something more than function, more than what the brain does, then it might represent an obstacle. But I haven’t seen any evidence for anything like that, at least not yet.

Even if the mind is merely a transient information model (which I agree with you, it is), then mind copying still raises profound questions of identity. What does it mean for there to be two instances of a mind?

I stand corrected. Thanks Steve. I overstated the case that there wouldn’t be identity issues. I should have said that many of the identity issues it appears to introduce are ones we already have to deal with.

I think the issues with multiple instances of a mind being around are the practical ones. Who is married to the spouse of the original? Or obligated by contracts that the one instance made? Who controls the checking account? If one of the instances commits a crime, who is responsible? If only the one instance, what prevents people from forking just prior to committing a crime to evade responsibility?

And there’s no guarantee that instances will continue to see eye to eye on matters as they grow apart. Even if they can share memories, the order in which they experience the memories could still lead to them diverging in personality. Ann Leckie’s Ancillary series features a ruler who has cloned herself thousands of times going to war with herself because of differences between different factions among her clones.

Hariod stole a version of my question on this one, which you have answered. I was curious why the field of neuroscience had become so intrigued by the possibility of preserving human consciousness on a hard drive, but I understand the desire to continue on in some way beyond the horizon of physical life if you will. And as I don’t read a tremendous amount of neuroscience it may also be the case that reading here I encounter a disproportionate amount of references to this subset of a wider field. I read one of Kurzweil’s books about twenty years ago on this subject– the Age of Spiritual Machines I believe it was– and you will likely not be surprised to find I wasn’t enchanted with the concept.

All other issues aside, I find it difficult to make the leap to believing that a software program on an operating system any different from the human body would actually be able to provide the sensation of continuity. A bit like the way sensory deprivation tanks have a way of driving people batshit, if I accept for the sake of argument the validity of this idea, I wonder whether or not the consciousness on the far side of this transmigration wouldn’t feel quite estranged without the relationship to biology that is so integral to its purpose and function. It feels at some level like the ultimate extension of the idea “I think, therefore I am.” Which to me feels like a half existence almost, like remaining alive without the aspect of our being that is rooted in the organic beauty and sensation of Earth. Out of curiosity, how many female researchers are at the fore in this line of research?

At this point, I would prefer biological death to such a future, personally. But it raises fascinating questions about what it is to survive, why we might survive, and what qualities of life are not worth living without. Graziano’s article also brings up what is surely an important point to this, which is the challenge of making such a life extension available to all, and how this would very likely be– at least initially– a very expensive undertaking available only to a limited number of persons. I see a potential recapitulation in technological form of the dilemma of determining who the chosen people will be…

I can’t say I’m a fan of Kurzweil. When I discuss this stuff, I’m talking about developments centuries from now. It’s like discussing how we might do interstellar exploration. But Kurzweil has turned it almost into a type of religion, a rapture of the nerds. It’s not impossible that this stuff will happen within our lifetimes, but I think the chances are remote. The best we might hope for is the ability to record our minds at death with the possibility of someday being loaded somewhere.

The idea of an uploaded mind going crazy without a body is a commonly expressed concern. On the one hand, I agree that an uploaded mind, to feel complete, would need a body of some sort, even if only a virtual one. And I’m pretty sure no one wants the hideously disembodied feel of a sensory deprivation tank. Hopefully no one will attempt to upload a human mind until some kind of body or virtual body can be provided.

But I think there’s a lot more range in this than is commonly assumed. Patients with spinal cord injuries, or who suffer debilitating nervous system diseases, often lose all sensation and control of their body, and while that’s certainly not a happy state to be in, most of them retain their mental faculties. In other words, the human mind is more robust than people often assume.

And if I were one of those patients trapped in my own body, and perceived that uploaded minds seemed happy, I would probably volunteer to become one. Certainly I’d have few qualms if I were on my death bed and had little else to lose.

Making it available to all, avoiding a have / have-not situation, is going to be a major problem. I could see a major push just to get people’s minds recorded somewhere, even if they couldn’t be re-instantiated for a while. But even then, there will be a generation where a lot of people die knowing that a mechanism exists to preserve them, but that just isn’t available to them in time.

Is the mind synonymous with the self? I can change my mind multiple times each day, depending on new data or a ‘change of heart’, while the self seems unaffected. Obviously, the homunculus that observes everything and talks to himself does not care about changes to the self, except on a philosophical level. On deeper reflection, my mini me has to admit that I am not the same self. I feel differently about all aspects of consciousness compared to 66 years ago, when I was three. I know that I am the same person in an evolving body, but others would disagree vehemently. The truth is that my self, me, my body changes every nanosecond of every day, from the moment of conception on. Through it all, I am the same individual defined by my genomic signature and place in space-time.

This feeling that my mini me is the essence of me is an illusion, apparently shared by most people. (I would bet that a psychological investigation would reveal significant variations in the descriptions of individual internal observers. Eg strong versus weak; nature of primary concerns, content, etc)

I suspect that before we try to upload the self, most of the scientific effort will be toward copying and duplicating the minds of animals. I foresee two major difficulties. First, while we will be able to describe the structures of animal minds and their relata, we will be unable to correlate these with content; this would be especially difficult with humans. Secondly, it will become increasingly difficult to justify the continuing destruction of minds just for the purpose of understanding them, especially as it becomes more and more clear, as I suspect it will, that animals’ minds are very similar to ours.

The complexities of the structures involved should not be underestimated. As I am confronted with the evidence, my mini me feels inchoate, unable to describe the ineffable reality. I would recommend a visit to the website of Jonathan Lieff MD, psychiatrist and mathematician, to get a sense of the challenges. (jonlieffmd.com)

“I can change my mind multiple times each day, depending on new data or a ‘change of heart’, while the self seems unaffected.”
A lot depends on how we define terms like “mind”, “self”, etc. The common usage of these terms often shifts depending on context. The problem is that these concepts seem to refer to a collection of systems. Loss of any one component is often not seen as destroying the whole, although if enough is lost it can be. And of course, some areas, such as the brainstem, are crucial but not sufficient.

On animals, I understand your concerns, but I suspect it will happen anyway. We have few qualms about subjecting mice and rats to pretty brutal testing procedures in the name of science. A lot of what we know about mammalian brains comes from selectively damaging portions of their brains and observing the effects.

I personally hate the idea of these kinds of tests, but I also know a lot of science has been made possible by them, including a lot of information that has helped in the fight against human diseases and afflictions. I don’t doubt that the first uploads will be of creatures like C. elegans worms, and then up the chain until we get to humans. But by the time the first human is uploaded, we’ll likely have tons of mice, rats, and other mammals scampering about in robot bodies or virtual environments.

Thanks for the link to Jon Lieff! I hadn’t heard of him before. I’ll check him out.