Cognitive Prosthetics and Mind Uploading

I am on record (in this old episode of Spacetime Mind, where we talk to Eric Schwitzgebel) as being somewhat of a skeptic about mind uploading and artificial consciousness generally (especially for a priori reasons), but I also think this is largely an empirical matter (see this old draft of a paper that I never developed). So even though I am willing to be convinced, I still have some non-minimal credence in the biological nature of consciousness and the mind generally, though in all honesty it is not as non-minimal as it used to be.

Those who are optimistic about mind uploading have often appealed to partial uploading as a practical way of making the case convincing. This point is made especially clearly by David Chalmers in his paper ‘The Singularity: A Philosophical Analysis’ (a selection of which is reprinted as ‘Mind Uploading: A Philosophical Analysis’):

At the very least, it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness. And gradual extensions to full uploading will convince most people that these systems are conscious as well. Of course it remains at least a logical possibility that this process will gradually or suddenly turn everyone into zombies. But once we are confronted with partial uploads, that hypothesis will seem akin to the hypothesis that people of different ethnicities or genders are zombies.

What is partial uploading? Uploading in general is never very well defined (that I know of), but it is often taken to involve in some way producing a functional isomorph of the human brain. Partial uploading, then, would be the partial production of a functional isomorph of the human brain. In particular, we would have to reproduce the function of the relevant neuron(s).

At this point we are not really able to do any kind of uploading as Chalmers and others describe it, but there are people who seem to be doing things that look a bit like partial uploading. First one might think of cochlear implants. What we can do now is impressive, but it doesn’t look like uploading in any significant way. We have computers analyze incoming sound waves and then stimulate the auditory nerves in (what we hope are) appropriate ways. Even leaving aside the fact that subjects seem to report a phenomenological difference, and leaving aside how useful this is for a certain kind of auditory deficit, it is not clear that the computational device plays any role in constituting the conscious experience, or in being part of the subject’s mind. It looks to me like these devices are akin to fancy glasses. They causally interact with the systems that produce consciousness but do not show that the mind can be replaced by a silicon computer.

What we can do now is fundamentally limited by our lack of understanding of what all of the neural activity ‘means’, but even so there is impressive and suggestive evidence that something like a prosthetic hippocampus is possible. Researchers record from an intact hippocampus (in rats) while the animal performs a memory task, and then have a computer model analyze the recordings and predict what the output of the hippocampus would have been. When compared to the actual output of hippocampal cells the predictions are pretty good, and the hope is that they can then use this model to stimulate post-hippocampal neurons as they would have been stimulated if the hippocampus were intact. This has been done as proof of principle in rats (not in real time), and now in monkeys, in real time, and in the prefrontal cortex as well!
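The logic of this record-then-predict-then-stimulate approach can be sketched in code. What follows is only a toy illustration of the general idea, assuming a simple linear input-output model trained by gradient descent; the function names and the tiny made-up firing-rate data are my own, not the actual multi-input multi-output model used in the real experiments.

```python
# Toy sketch of a "prosthetic" region: learn the intact region's
# input->output mapping from recordings, then, when the region is
# offline, predict its output from upstream activity alone.
# All names and numbers here are illustrative assumptions.

def fit_weights(input_rates, output_rates, lr=0.01, epochs=2000):
    """Learn weights mapping input firing rates to one output cell's
    rate, by plain stochastic gradient descent on squared error."""
    n = len(input_rates[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(input_rates, output_rates):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            for i in range(n):
                w[i] -= lr * err * x[i]
    return w

def predict_stimulation(w, x):
    """Given current upstream activity x, predict what the (bypassed)
    region's output would have been; in the real system this value
    would drive the stimulating electrode."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Training phase: input/output pairs recorded from the intact circuit.
recorded_inputs  = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
recorded_outputs = [2.0, 3.0, 5.0]   # here output = 2*x0 + 3*x1

w = fit_weights(recorded_inputs, recorded_outputs)

# Prosthetic phase: the region is offline, but we can still predict
# its output from upstream activity and stimulate downstream neurons.
print(round(predict_stimulation(w, [1.0, 0.0]), 1))  # ≈ 2.0
```

The point of the sketch is just that nothing in the loop depends on the predictor being made of neurons: once the mapping is learned, silicon can stand in for the bypassed tissue.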

The monkey work was really interesting. The animal performed a task that involved viewing a picture and then waiting through a delay period. After the delay the animal is shown many pictures and has to pick out the one it saw before (this is one version of a delayed match-to-sample task). While the monkeys were doing this, the researchers recorded the activity of cells in the prefrontal cortex (specifically in layers 2/3 and 5). When they introduced into the region a drug known to impair performance on this kind of task, the animal’s performance became very poor (as expected). But if they stimulated the animal’s brain in the way their computer program predicted the deactivated region would have responded (specifically, they stimulated the layer 5 neurons, via the same electrodes they had previously used to record, in the way the model predicted those neurons would have been driven by layers 2/3), the animal’s performance returned to almost normal! Theodore Berger describes this as something like ‘putting the memory into memory’ for the animal. He then shows that if you do this with an animal that has an intact brain, it does better than it did before. So this can be used to enhance the performance of a neurotypical brain!
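The structure of the experiment can be made vivid with a toy simulation of the delayed match-to-sample logic: encode a sample, erase the trace to model the drug, and reinstate it to model the model-driven stimulation. Everything here is an illustrative assumption of mine; the "memory" is just a stored variable, not a model of prefrontal activity.

```python
# Toy simulation of the delayed match-to-sample experiment described
# above: intact animals remember the sample, drugged animals guess,
# and "prosthetic stimulation" reinstates the memory trace.
import random

def run_trial(impaired, prosthetic, rng):
    choices = ["A", "B", "C", "D"]
    sample = rng.choice(choices)
    memory = sample                  # encoding while viewing the picture
    if impaired:
        memory = None                # drug knocks out the trace
        if prosthetic:
            memory = sample          # model-driven stimulation restores it
    # choice phase: pick the remembered item, or guess at random
    pick = memory if memory is not None else rng.choice(choices)
    return pick == sample

def accuracy(impaired, prosthetic, trials=1000, seed=0):
    rng = random.Random(seed)
    hits = sum(run_trial(impaired, prosthetic, rng) for _ in range(trials))
    return hits / trials

print(accuracy(impaired=False, prosthetic=False))  # intact: 1.0
print(accuracy(impaired=True,  prosthetic=False))  # drugged: ~0.25 (chance)
print(accuracy(impaired=True,  prosthetic=True))   # stimulated: 1.0
```

The philosophically loaded question, of course, is whether the reinstated variable counts as the animal's memory or merely as a causal prompt to the biological system that does the remembering.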

They say they are doing human trials, but I haven’t heard anything about that. Even so, this is impressive: they used the technique successfully in rats for long-term memory in the hippocampus, and then also in monkeys for working memory in the prefrontal cortex. In both cases they seem to get the same result. It starts to look hard to deny that the computer is ‘forming’ the memory and transmitting it for storage, so something cognitive has been uploaded. Those sympathetic to the biological view will have to say that this is more like the cochlear implant case: we have a system causally interacting with the brain, but it is the biological brain that stores the memory, recalls it, and is responsible for any phenomenology or conscious experiences. It seems to me that they have to predict that in humans there will be a difference in the phenomenology that stands out to the subject (due to the silicon not being a functional isomorph), but if we get the same pattern of results for working memory in humans, are we heading towards Chalmers’ acceptance scenario?