Cognitive Sciences Stack Exchange is a question and answer site for practitioners, researchers, and students in cognitive science, psychology, neuroscience, and psychiatry.

There have been multiple articles and videos circulating on the Internet claiming that optogenetics has made it possible to have perfect input/output to the brain from a computer. This is obviously false or someone would be making a lot of money selling this.

Consequently, I'm curious as to why this is false. Why does having the ability to fire and read the firing information off of individual neurons not imply that a seamless human-computer interface is possible? I'm assuming that this has something to do with the limits of the processing power of our brain and the limits of neuro-plasticity, but my experience in this domain is limited.

If this question is too broad (and I think it is), it can be reframed as a request for references. What current research is being done on this problem? What are the current approaches? Who is working on it? Are there any publications that nicely describe the progress in this (hypothetical) domain?

I am by no means an expert in optogenetics, but I have worked with photolytic uncaging of neurotransmitters in vivo. When the Wired article says "That is to say, they insert the new gene into every neuron in that area, indiscriminately. But because of the promoter, the gene will only turn on in one type of neuron. All the other neurons will ignore it.", they are referring to what may be dozens to hundreds of cell bodies and thousands of fibers of passage (white matter tracts) in that cubic millimeter.
– Chuck Sherrington, Feb 27 '14 at 3:51

To guarantee that you hit upon a transfected neuron and have no light leakage to the surrounding (transfected) cells would be quite a feat, and I don't think that the field is ready for that level of specificity quite yet. It's a great question, btw, just saying that some of it is probably more hyped up than anything that's actually possible right now.
– Chuck Sherrington, Feb 27 '14 at 3:53

For the record, we have a pretty amazing, evolutionarily honed brain-computer interface going on already, with this eyes/hands combo thing. Anything hacked together using optogenetics would still be thousands of years behind, surely? Disclaimer: all I know about optogenetics I learned from a single TED talk. Disclaimer: I know this is in no way helpful. Sorry.
– Eoin, Feb 27 '14 at 16:00

1 Answer

has made it possible to have perfect input/output to the brain from a computer

Perfect? Definitely not: even the optogenetics of a single square millimetre of cortex, in a mouse let's say, is extremely complex. As Chuck mentions, many neurons/synapses may be activated by a single laser, and current technologies allow only a few different laser frequencies to be used simultaneously. For 'perfect input/output' it is easy to imagine a system in which one would need thousands of different lasers operating at different frequencies to individually control/read thousands of neurons. (This blog post by Mark Baxter is a nice summary of the issue of hype with optogenetics.)

Furthermore, this assumption completely ignores synapses, glial cells, dendritic branches, etc., which may be important for computation in the brain and might therefore need to be considered in any I/O system.

Why does having the ability to fire and read the firing information off of individual neurons not imply that a seamless human-computer interface is possible?

You are assuming that everything the brain does is encoded in the firing rates of individual neurons. This is a massive assumption, and a lot of evidence runs to the contrary. For example, there is evidence suggesting that the visual system does not have time to encode information in firing rates, and it has been postulated that the 'time to first spike' is what matters in this domain (the first paragraph of the introduction has a nice summary of the reasoning/evidence behind this hypothesis). Furthermore, as I mentioned above, firing rates are only a small part of what the brain does, and just one of many different ways the brain may encode/decode information (we do not know which; see sections 1.5, 1.6 and 1.7 for different methods the brain may employ).
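To illustrate why the choice of coding scheme matters for any read-out, here is a toy Python sketch (all numbers invented, not from any real recording) contrasting a rate decoder, which has to count spikes over a long window, with a time-to-first-spike decoder, which has its answer as soon as the first spike arrives:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_train(rate_hz, first_spike_ms, duration_ms=200):
    """Toy spike train at roughly `rate_hz`, whose first spike lands at `first_spike_ms`."""
    n = rng.poisson(rate_hz * duration_ms / 1000.0)
    times = np.sort(rng.uniform(first_spike_ms, duration_ms, size=n))
    return np.concatenate(([first_spike_ms], times))

# Hypothetical stimuli: A evokes a high rate and a short latency,
# B a low rate and a long latency.
train_a = spike_train(rate_hz=80, first_spike_ms=10)
train_b = spike_train(rate_hz=20, first_spike_ms=40)

# Rate decoder: must count spikes over the full 200 ms window.
rate_a = len(train_a) / 0.2  # spikes per second
rate_b = len(train_b) / 0.2

# Time-to-first-spike decoder: only needs the first spike time.
latency_a = train_a[0]  # 10 ms
latency_b = train_b[0]  # 40 ms

print(f"rate decoder:    A={rate_a:.0f} Hz vs B={rate_b:.0f} Hz (needs the full 200 ms)")
print(f"latency decoder: A={latency_a:.0f} ms vs B={latency_b:.0f} ms (done after ~40 ms)")
```

The point of the sketch is only that a 'perfect' interface would have to know which of these (or the many other candidate codes) a given circuit actually uses before it could read or write anything meaningful.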

Finally, one point that currently limits optogenetics is 'depth'. Light transmission falls off quickly through neural tissue: intensity is reduced by 50% after 100 $\mu$m, and by 90% after 1 mm. If one wanted an input/output system connected to a computer to control/read a part of the brain 1 cm below the surface, one would (currently) have to chop out the neural tissue in the way -- this is a big problem!
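To get a feel for how fast this falloff compounds, here is a back-of-the-envelope Python sketch that assumes simple exponential (Beer-Lambert style) decay, taking the 50%-per-100 $\mu$m figure above as the halving depth. Real tissue scatters as well as absorbs, so this is only an order-of-magnitude model, but it makes clear why stimulating anything 1 cm down is hopeless with surface illumination:

```python
# Assumed halving depth, taken from the ~50% transmission per 100 um figure.
HALF_DEPTH_UM = 100.0

def transmission(depth_um):
    """Fraction of surface light intensity remaining at `depth_um`, assuming
    pure exponential decay with a fixed halving depth."""
    return 0.5 ** (depth_um / HALF_DEPTH_UM)

for depth in (100, 500, 1_000, 10_000):  # 10,000 um = 1 cm
    print(f"{depth:>6} um: {transmission(depth):.2e} of surface intensity")
```

Under this (pessimistic) model essentially nothing survives to 1 cm, which is why deep targets currently require implanted fibers rather than illumination from the surface.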