A common trope in cyberpunk-ish sci-fi is the ability to upload a human mind to some storage device or AI. This of course grants practical immortality, but it is also the main prerequisite for tropes like teleportation, cloning, etc.

So my question is: how close are we to being able to make restorable backups of a human brain? What scientific or engineering breakthroughs are required?

For the sake of this question, the goal is:

To transfer the complete memories, feelings and "consciousness" (whatever that is) of a human being to an external storage device (electronic or biological) of some kind.

To be able to interact with the backup, directly or after a "restore" to another human body.

It should be impossible to tell the original and the backup apart without seeing their physical form.

The question has been asked on Quora but without any detailed answers.

I'm tagging this "science-based", but hard-science answers are preferred.

$\begingroup$I would argue we are much closer to making this happen biologically, i.e. arbitrarily long life. That doesn't mean we are anywhere close to either. I also believe this question gets asked here once a month$\endgroup$
– Raditz_35 Aug 31 '17 at 7:45


$\begingroup$Ps: let me explain: we already have the machine and the person in there. I always wonder why one would first construct a machine and then figure out how to upload, if one can avoid those steps. That's like trying to make a cake by first constructing a fusion reactor to generate heat$\endgroup$
– Raditz_35 Aug 31 '17 at 7:56


$\begingroup$@Raditz_35 I suppose future tech could go either way, but even if we could prolong the life span of the human brain (and body), that would still not protect an individual mind from catastrophic failure (like a bullet to the head). If you feel this is a duplicate, please post some links and VTC. I did not find the same question with a quick search, nor do I recall one, but that might just be poor search skills.$\endgroup$
– Guran Aug 31 '17 at 8:31

9 Answers

We are still far from being able to correctly interpret a neural system and simulate it on a computer of some kind.

Our understanding of brain (and, more generally, nervous-system) physiology is a mixed bag; I will try to sketch the state of the art as I understand it.

We have a fairly good understanding of how neurons work, down to the molecular level, at least for local interactions.

We have several "models" that capture (at different levels of precision) neuron functionality, abstracting it completely from the actual chemical processes powering it. These "models" are powerful and useful enough to be used in real-life A.I.s solving computationally "hard" problems (e.g., weather forecasting).
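
To give a flavour of how radical that abstraction is, a single "neuron" in such models can be nothing more than a weighted sum pushed through a squashing function. This is a generic textbook sketch, not any specific published model:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One abstract "neuron": a weighted sum of its inputs passed
    through a sigmoid activation. All the chemistry is gone; only
    the input-to-output mapping survives."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)

# Two inputs with arbitrary, purely illustrative weights:
print(artificial_neuron([0.5, 1.0], [0.8, -0.4], 0.1))
```

Everything interesting in such a model lives in the weights, which is exactly why "reading out the weights" of a biological network is the crux of any backup scheme.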

We are not sure we have understood all the implications of "systemic" neurotransmitters (the ones in the bloodstream).

Systemic neurotransmitters seem to play a key role in the "reward system" and thus in all network training and memorization.

We have a general map of the connections between neurons, both in the brain and in the spinal cord.

We have no detailed connection map.

There is still no consensus on whether current neuron models actually contain enough information to faithfully replicate the functionality of a natural neural network (NNN) in a simulated neural network (SNN).

We have little to no understanding of the processes leading to the formation of new synapses (neuron connections).

We have little to no understanding of the processes leading to the modification of synapses.

The above two points mean we are really far from a meaningful breakthrough on learning and memorization processes.

A detailed mapping of neuron interconnections is impossible right now, but is conceivable in the relatively near future.

Computing a map of connection "weights" for each neuron is a problem several orders of magnitude more complex, but still theoretically doable.

As said, there is no consensus on whether, and to what extent, these parameters unequivocally characterize the whole network.

Assuming all this data actually defines the brain sufficiently, there remains the problem of fully simulating this NNN with a suitable SNN (computationally intractable to date, essentially due to the high degree of parallelism needed).
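
A back-of-envelope calculation gives a feel for the scale involved. The neuron and synapse counts below are common ballpark figures; the bytes-per-synapse and update-rate values are purely illustrative assumptions:

```python
# Naive sizing of a "full" brain simulation (illustrative numbers only).
NEURONS = 86e9               # common ballpark for the human brain
SYNAPSES_PER_NEURON = 1e4    # ballpark average
BYTES_PER_SYNAPSE = 8        # assume one 8-byte weight per synapse
UPDATES_PER_SECOND = 1e3     # assume every synapse updated at ~1 kHz

synapses = NEURONS * SYNAPSES_PER_NEURON                  # ~8.6e14 synapses
weight_storage_pb = synapses * BYTES_PER_SYNAPSE / 1e15   # petabytes of weights
updates_per_sec = synapses * UPDATES_PER_SECOND           # synapse updates/s

print(f"~{weight_storage_pb:.0f} PB just for weights, "
      f"~{updates_per_sec:.0e} synapse updates per second")
```

Even with these generous simplifications the numbers land in the petabyte and near-exascale range, which is why the sheer parallelism, not any single operation, is the bottleneck.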

If and when the above problems are solved, it would be possible to have a simulated consciousness responding (given the necessary "peripherals") "as if" the original human were still alive. A few problems would remain:

as said, there is still no consensus on whether the model I'm outlining captures enough of the neurons' complexity to replicate behavior faithfully.

these models completely ignore "systemic" influences.

these models do not include any of the proposed (and undemonstrated) "local" inter-neuron "resonances".

in any case, it would be a "frozen" consciousness, with no way to learn anything long-term (short-term memories would be possible).

There are studies on how to model neuron interaction at a higher level, but AFAIK they are not applicable for "backup" purposes.

I see no way to restore any kind of "backup" onto a biological brain in any foreseeable future, as:

We have no understanding of how to grow neurons with a specific connection pattern (neurons grow new connections during learning, reaching several hundred thousand connections for a single neuron).

We have very little chance of "restoring" the behavior of even a single neuron, because signal sensitivity is encoded in membrane structures that will be very hard to duplicate.

We would need to do the above on almost a hundred billion neurons (not counting the sympathetic system, the spinal cord, and the myriad nerve ganglia we have almost everywhere; note: recent studies indicate these are not irrelevant to our behavior).

Some of the "global" states of the brain are controlled by "systemic" neurotransmitters floating in the bloodstream. At least some of these are generated by systems very far from the brain (cardiac ganglia, adrenal glands, intestine, etc.).

$\begingroup$A biological restore is not necessary per this question. A self-conscious AI with the full memories and personality of an original human is sufficient.$\endgroup$
– Guran Sep 1 '17 at 6:46

$\begingroup$I am the OP ;) and the question is "directly, or after a restore".$\endgroup$
– Guran Sep 1 '17 at 7:27

$\begingroup$@Guran: oops, I missed that. I will update my answer to include the current state of the art on a "functional backup" which, I can tell you, is not very near, but neither deemed "impossible". It will take some time ;)$\endgroup$
– ZioByte Sep 1 '17 at 7:54

$\begingroup$Congrats, those bullet points made yours the preferred answer, though many are similar.$\endgroup$
– Guran Sep 4 '17 at 6:24

First of all, we still don't know how memories, feelings and consciousness are encoded in our brain. We have some ideas about which brain areas are devoted to certain tasks (a sort of black-box model), but we still lack the finer detail. If we make the (somewhat poor) analogy with a computer, we know which part is the RAM, which the ROM, and which the GPU, but we have no clue how data are processed and stored there. Now, imagine how ridiculous somebody would appear approaching a hard disk with just paper and pen to copy its contents, and you have a hint of our standpoint when it comes to "copying brains".

The first breakthrough would then be to understand this fine level of detail. By knowing how the information is encoded, we can read it (continuing with the computer analogy: once we know that bits are encoded as magnetic fields on the substrate, we can arrange a magnetic head to read them). Once this is known, we can then move on to trying to replicate it. Once we are able to replicate it, copying and pasting becomes possible.

$\begingroup$Plus we'd have to figure out how to read the state of a brain, and (still more difficult) how to write one.$\endgroup$
– ths Aug 31 '17 at 9:51

$\begingroup$In other words, there are substantial scientific breakthroughs required (figure out how a "consciousness" is represented in physical form) before we can even begin to tackle the engineering challenges (how to read and store a state of mind). This answer could be improved by some actual reference to what our current knowledge is.$\endgroup$
– Guran Aug 31 '17 at 10:08


$\begingroup$@Guran: The point he is making is that our "current knowledge" is non-existent. We can recognize the hardware, but we have absolutely no idea of how data is processed within it.$\endgroup$
– nzaman Aug 31 '17 at 11:38

$\begingroup$...which is quite fascinating, since such tech generally comes before interstellar travel in most sci-fi. (in my mind at least)$\endgroup$
– Guran Aug 31 '17 at 13:45


$\begingroup$We can already decode the brain's activity to a point though, for an example see: technologyreview.com/s/604332/… where they used AI to read, from brain activity, what the user was seeing.$\endgroup$
– MKII Aug 31 '17 at 14:11

First of all, we do not have the faintest idea of how the brain works at all, nor a good way of even looking at it.
This profound non-understanding, coupled with very modest means of observing what's going on, has led to ill-advised beliefs such as the notion that we're only using 10% of our brain.

There's EEG, which is basically a couple of wiggly lines in which scientists try to find patterns, but, being honest, it's close to reading tea leaves. The problem is that what the EEG shows is a summation of interfering microscopic impulses along a multitude of vectors, under the (probably wrong) assumption/simplification that cranial structures and/or skin conductivity behave exactly the way you think and do not influence the outcome. Put differently, you're looking at some pretty patterns, but there's no way you can truthfully make much of them in the sense of "reading a mind" or even "copying a personality". But even assuming you could read someone's mind that way, this would likely still not let you extract knowledge that isn't being accessed, or copy a personality from those electric impulses.

There's the UCSD experiment where test subjects managed to "dial" numbers on a cell phone by means of thought patterns (although the researchers were, in my opinion, cheating, because all they really measured was the brain's response to a flicker pattern the subjects looked at!). Well, awesome. The brain reacts to external stimuli; that's big news. Now tell me how to upload your mind to your cellphone.

And there's MRI, which gives even prettier patterns than EEG, in color and in 3D. You can show that certain areas of the brain light up when certain things are done or when certain external stimuli appear. While this is impressive, it's approximately the same thing Mengele did 70 years earlier, only less invasive and at slightly higher resolution.
The gap from "we see these areas light up" to "copy a personality" is approximately the gap between discovering that dropped objects fall to the ground and mounting a manned mission to Mars.

We do not know exactly how the brain stores (or processes) information. Yes, we do have some educated guesses, but we don't really know exactly. When looking at not only what a simple honey bee is capable of remembering, but also how capable it is at path planning and rather non-trivial mechanical tasks, I'm stunned by how on earth nature manages to fit all that into a brain the size of a pinhead. How large is your brain again? Good luck decoding that.

We do not even know how much information a human brain can store, but we do know that the amount is huge, and we know that information is stored in a non-obvious way which one could consider a kind of "interlinked lossy compression with forward and backward error correction". Something like that. Memories are not just data; they are data that has been filtered, weighted, validated and connected to other, sometimes unrelated data in a non-obvious way, with massive holes that are filled from other data, or sometimes interpolated with what seems plausible to the brain, and with no way of telling the difference (guess why witnesses are such a pain in the ass). So far, we cannot even remotely guess how this works at all. We can only tell it must be something of that kind from observing what people remember (and sometimes what they think they remember). Some personality-defining memories/abilities (say, playing an instrument) are in addition supported by dedicated hardware (if you want to call the cerebellum that). Which, of course, you would need to somehow copy too.

We don't know whether personality has anything to do with stored information either, or where personality comes from, for that matter. Is it defined by your experience? Genetic? Given by God? Hardwired by your dendrites? Stored chemically? We have no idea. We can only tell from observation, with reasonable certainty, that it's probably not any one of the previously mentioned things alone.
Experiments that might give an answer would take decades and would be highly unethical to the point of being prohibited (e.g. raise clones in different environments, observe them for 20-25 years, then cut their brains into slices).

If we knew all of the above, we still wouldn't know how to map all of this to a digital format that a computer can store, let alone build a computer large enough to do the job, or how to "transfer back" the mind, once copied and stored. While it might, in principle, be feasible one day to copy the "data" from the human via some "scan thingie", the brain simply isn't built to receive a new mind like this. There's no "input" plug of sorts.

$\begingroup$Mapping to a digital format is not a problem. We can currently map all the information we have. The question could be the available memory, but if we have some big cloud like Google's, then it can be much bigger than the information that a brain holds.$\endgroup$
– keiv.fly Aug 31 '17 at 17:00

$\begingroup$@keiv.fly: Doubtful. While for example synapses obviously operate on quantifiable information (molecules) they are by all practical means fuzzy analog devices. Sure, in theory you can store the type of every molecule in a human body, if only given enough memory. But that's not realistic. At around 200g/mol on the average for neurotransmitters, if each molecule took but a single bit of storage, that'd already be 2^80 bytes for a snapshot of one brain's synapses... ugh... if memory capacity doesn't bite you, then bandwidth...$\endgroup$
– Damon Aug 31 '17 at 17:15

As others have said already, this is still in the non-foreseeable future. The main reason is very simple: we have no clue how consciousness actually works.

Science has made a lot of progress explaining the most basic building blocks of nerves and brains. We have also accumulated a lot of knowledge of finer details, but here our understanding of how things actually happen is already beginning to get vague. Take, for example, object recognition and object persistence in visual perception: we know a lot about them, and another lot is under active research because we haven't figured it out yet. And that's a low-level building block of the entire visual system.

And as any IT person knows, backup is just one half of the process; restore is the other. AFAIK, nobody has the slightest clue how to manufacture a brain, or even a simulation of a brain sufficiently advanced to allow a human consciousness to run.

We are so far away from this that any estimate of when we'll be there is pure speculation.

$\begingroup$Most people are actually able to manufacture a brain that a human consciousness can run on. My parents did it. Yours probably did too.$\endgroup$
– Michael Aug 31 '17 at 17:46


$\begingroup$"I know how people make more people, but it requires that you have two people to start with, which I hope someone got fired over" -Glados$\endgroup$
– sdrawkcabdear Aug 31 '17 at 23:06

$\begingroup$@Michael I know it was a joke, but here's the thing: to manufacture something and to grow something are not the same thing. We might, in fact, be able to grow brains in Petri dishes much sooner than we can build them in factories - but growing something doesn't tell us how, exactly, it works.$\endgroup$
– Tom Sep 2 '17 at 6:14

$\begingroup$@Tom You are right, but explaining the joke is like dissecting a frog. Of interest only to academics and kills the frog in the process.$\endgroup$
– Michael Sep 4 '17 at 17:08

First, I'd like to mention that exactly this mechanism is used in the Commonwealth Saga by P. Hamilton. I suggest reading through those books if you'd enjoy seeing how such technology affects society in everyday life. There are probably dozens of other examples, though.

Now, regarding the question: we are far away from such tech.

We don't know how exactly our brain works. Currently, we're able to create neural networks with 150+ billion neurons, and yet we're not even close to simulating the human brain. As AlexP pointed out below, it's not quite correct to compare artificial neural networks (ANNs) with the human brain, because they serve different purposes. But the history of such networks begins with modelling human neurons, so let's at least say we tried to simulate the brain.

We don't know how exactly information is stored inside our brain. In simple words, we think that patterns of active neural chains form our memories, but I haven't heard of anyone ever replicating a "memory" as such. There is also a holographic theory: we assume that each piece of information is spread all over the brain. If we remove some part of it, we can still restore the entire memory from the rest of the brain, at the cost of "resolution" - just like with holographic pictures. Generic information like reflexes might be stored this way. I mean, there are different views on how it works, and no one has ever claimed their approach to be 100% correct.

Even restoring memory is almost impossible when the cause is not a psychological factor (like an intuitive defence mechanism). Here we see scientists restore part of a memory in mice, but as far as I understand it, the effect doesn't last long after the procedure is over. So technically, if we want to restore memory via medical intervention, we're in trouble.

So, in conclusion, we are not ready to introduce separate memory storage compatible with the human brain. Because memories are not binary, I suspect we would need to simulate neurons to properly handle them outside of our heads. Of course, that is my personal opinion.

$\begingroup$Also, ZioByte and ths in the comment above made a good point - we don't know how to upload data directly into the brain.$\endgroup$
– user2851843 Aug 31 '17 at 11:12


$\begingroup$"We're able to create neural networks with 150+ billions of neurons": An actual biological network of actual biological neurons, and an artificial neural network as used in computing are vastly different things. They are only vaguely similar to each other, in that they are both made up of interconnected simple computing nodes. It is not expected that an artificial neural network will somehow duplicate the operation of a biological brain.$\endgroup$
– AlexP Aug 31 '17 at 11:15

$\begingroup$Agree, though the history of such networks started exactly by trying to replicate neurons. Later, people discovered the cons and pros of ANNs, enhanced the latter using various techniques, and applied all this in different fields like visual object recognition, classification and so on. So yeah, no one expects AlphaGo or Deep Dream to become as smart as a human in everything.$\endgroup$
– user2851843 Aug 31 '17 at 11:21

$\begingroup$A little tip: you need to use an "@" in front of a username to notify them. Only the OP of a post is normally notified of a comment and one person can be notified per comment. It even autocompletes. So to notify Alex of your response you would need to write @AlexP (like I just did) somewhere in your comment.$\endgroup$
– Secespitus Aug 31 '17 at 11:41

It is impossible to know how far away we are from eternal consciousness. We are far enough away from being able to do this that we don't even know enough to figure out how far away we are! Most agree it won't happen soon (i.e. not in the next 25 years), but nobody can say whether we're 100 years away or 100,000,000 years away.

We don't even have a solid scientific definition of consciousness, though there are some interesting ones being floated which rely on information theory. It's not even clear whether or not the idea of "preserving consciousness" is meaningful without also preserving the entire environment (i.e. copying the universe).

Also unclear is whether one can copy a consciousness without also copying the death-creating features that are present in our cells. It may not be possible to unlink what we call "consciousness" from the natural cycle of life and death. The thing that we may unlink from this cycle may not even meet our current definitions for consciousness.

I highly recommend reading about the philosophical problem of the Ship of Theseus. It is a thought experiment regarding identity which dates back at least 2000 years, and there is still no solid consensus on how to resolve it. We would certainly need to have solved this multi-millennia-old problem before we could accomplish what you seek.

One possibility that I haven't seen considered in the existing answers: rather than just "far away", it might be fundamentally impossible.

First, a short anecdote: some time in the 90s I ran a primitive Markov bot (perhaps even too primitive to be called a Markov bot, since it didn't really have weighted probabilities) on IRC. For readers not familiar with the concept, the general idea was that it would build a corpus of short chains of words from things other people in the chat wrote, and randomly assemble them into sometimes-meaningful sentences. Anyway, at initial setup most of the output was nonsense; after a few weeks it was producing lots of amusement; and after a few more weeks the output was again nonsense. The problem was that both the presence of "useful associations" and the absence of "too many associations" were necessary to get something other than garbage out.
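
For readers who want to see how little machinery such a bot needs, here is a minimal sketch in the same spirit (generic and reconstructed from memory of the idea; the original bot is long gone):

```python
import random
from collections import defaultdict

def build_chains(text, order=2):
    """Map each `order`-word prefix to the list of words observed after it."""
    words = text.split()
    chains = defaultdict(list)
    for i in range(len(words) - order):
        chains[tuple(words[i:i + order])].append(words[i + order])
    return chains

def babble(chains, length=15, seed=None):
    """Start from a random prefix and repeatedly pick a random continuation."""
    rng = random.Random(seed)
    order = len(next(iter(chains)))
    out = list(rng.choice(list(chains)))
    for _ in range(length):
        followers = chains.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this prefix was never continued
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
print(babble(build_chains(corpus)))
```

The failure mode the anecdote describes corresponds to the follower lists growing so large that every prefix can continue with almost anything, at which point the output degenerates back into noise.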

So, back to human or human-like consciousness: it's possible that the entire neural model of our consciousness has fundamental limits on "how much experience" it can accumulate without becoming overloaded and starting to produce less-useful and eventually all-garbage output. If there is such a limit, it might or might not scale with the size of the neural network. This kind of limit seems plausible in terms of how our experience of consciousness changes with age or with accumulation of more and more fields of knowledge, gradually transforming from vivid memories of specific things and events to more of a "digest memory". What happens if you keep throwing more experience into that without bound? Does it break down entirely into dementia? Is dementia entirely a matter of physical/physiological faults in the nervous system, or also partly a computational state? Can a human (or augmented/digitized) mind progress successfully to further and further states of being able to work with thousands or millions of years of experiences?

Fortunately, these are all amazingly fun questions from a sci-fi writing/worldbuilding perspective; if anything, the fact that they're open supports the view that we're at best far, far away from achieving what the OP asked about.

Yes, it is possible, but the person cannot be put into a computer directly.

In order for eternal life to exist, we first must invent sentient Artificial Intelligence. Once sentient A.I. exists, we may be able to program personality traits into it. Once we have the same personality as whoever wants to be posthumanised, we then have to collect their memories. This is simple, as we could just have the person in question give their life story in as much detail as possible, along with all of the knowledge they have acquired. So once we have an A.I. programmed with their personality, we program their memories into it. Once the A.I. is programmed and the customer is satisfied, we must then kill the customer. This is to prevent any problems that would be caused by there being two of the same personality. An android is then built that looks exactly like the person, their digital clone is put into it, and then we send the clone out. It then lives its life as the original.

Yes, I do know that this idea is not the actual person, but it is the closest way I can think of that is realistic. The problem with "cloud saving" memories is that the brain and computers are radically different from each other. If you would like to research this topic further, the concept is called whole brain emulation.

TL;DR -- I can believe the upload might be possible in the far future, but I don't think download is feasible

This is difficult, difficult. Let's say our scanning technology is awesome; we can map every neuron-to-neuron connection and (if necessary) the current charge-state of each. Cool, we upload Jim to our computer.

Oh dear ... we forgot the epicellular data -- the brain is awash in hormones and chemicals of various sorts. Okay, with v2 we got this measured and modeled too. So now we have a copy of ol' Jimbo. Oh dear ... we forgot that there is a steady stream of input coming into the brain from all the nerves. Computer-Jim quickly goes insane from total sensory deprivation. Time for v3.

Okay, now we're cool. Jim and Computer-Jim amuse themselves with endless games of "Jinx! You owe me a soda!" Till finally Computer-Jim sobers a bit and says, "So when they restore back into my head, will it hurt?" Jim goes, "No, I'm the one with the body, dummy!" CJ: "Nuh-uh." J: "Yeah-huh" CJ and J together: "Jerk. ... Jinx! You owe me a soda!"

Here's where it gets really hard. The transfer is not just information; we have to physically wind the neurons into place, adjust the chemical soup of the cerebrospinal fluid, and make sure all the right neurons are firing. Assuming you've found a volunteer body, or a ... "volunteer" body, this is a tough row to hoe.

But cool. Restored-Jim immediately goes into convulsions. Oh dear ... could it be that the lower levels of his brain aren't interfacing properly with the signals coming out of the spinal cord? Restored-Jim's body has different muscle mass. A different amount of skin. Might be the opposite sex. The signals coming out from his cerebellum are going to the wrong muscles (including the heart, whoops) or using the wrong amount of force. There might not even be a good mapping of spinal-cord nerves to cerebellum nerves.

But let's say we overcome this somehow. Restored-Jim feels awful in his new body. Nothing works right. He doesn't fit his own self-image. "Didn't I use to be Caucasian? Is Turkish really me? Do you have something in an early Aztec physique?"

While watching this, Computer-Jim -- we didn't turn him off, right? -- is saying "This is no fair! Why did you send him to the new body and not me?" Doc: "Uh..." Computer-Doc: "Jeez you idiot! Why did I let you have the body this week anyway?"

$\begingroup$I enjoyed your writing (not my DV), but you did not really answer the question. These are the implications of such a technology; it doesn't address how far away the means are.$\endgroup$
– Guran Sep 1 '17 at 6:40

$\begingroup$Dear @Guran ... I may have sailed off on a tangent a bit there, I'll admit it. ;D That said, the point I was trying to make is that I don't think the "restore" is even possible, unless you have what is essentially full matter-manipulation control down to the atomic level. The upload seems at least theoretically possible, though I can't really put a timeframe on it. Even so, I'd love to see a story where the restore almost works, and the people get loopier and loopier after every time through the cycle...$\endgroup$
– akaioi Sep 1 '17 at 15:19

$\begingroup$Yeah, there are a number of stories still to be written around this topic. The reason I asked, naturally.$\endgroup$
– Guran Sep 4 '17 at 5:45