Diane and her guest will discuss the new neuroscience of connecting brains with machines, and how it will change our lives. Award-winning neuroscientist Miguel Nicolelis' work with primates has uncovered a new and controversial method for capturing brain function. It is paving the way for a cure for Parkinson's disease, new ways of treating paralysis, and ways of using brain waves to control everything from transportation to manufacturing.

Nicolelis is best known for his work with the rhesus monkey Aurora, who was able to play a video game using a robot arm directed only by her thoughts. You can read more about the research over at Think Artificial, and check out the Duke video below.

I was really struck in the Diane Rehm interview (among many other things!) by the central role of neurofeedback in his approach, and by how he defines it differently from most of what one hears about "neurofeedback." Rather than a hokey approach (getting in tune with your brain, or something like that), Nicolelis places neurofeedback at the center of brain-machine interfaces. Basically, information fed back from the environment is central to how brains learn and function.

“We realized that the feedback that you send from these devices back to the brain is so important; the issue of how you handle feedback to retrain the central nervous system has acquired tremendous new interest.”

In one sense, this is not surprising. Neurofeedback in Nicolelis' sense is simply the complement to neuroplasticity. If you accept plasticity, then you also have to accept that neurofeedback (information from the environment, and the subsequent learning and adaptation based on that information) is a basic component of how brains evolved. That is as radical an idea as plasticity itself, since it forces us to focus on how feedback happens, and how the brain uses feedback, rather than on a more passive idea of the brain as simply "plastic." This is active plasticity. Built plasticity. And Nicolelis' research is showing that the brain already works in this way, at least in the experimental work he has done.
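As a loose analogy for this feedback-driven view of plasticity (and emphatically not a model of how neurons actually do it), the idea can be sketched as an error-correcting update: "plasticity" is the capacity of the weights to change, and "neurofeedback" is the environmental error signal that drives the change. Every name and number here is made up for illustration.

```python
# A minimal analogy, not a brain model: feedback from the environment
# (the error signal) is what actually drives the plastic change.

def adapt(weight, input_signal, target, learning_rate=0.1):
    """One delta-rule step: nudge the weight so the output moves
    toward the target, using the environmental error as feedback."""
    output = weight * input_signal
    error = target - output          # feedback from the environment
    return weight + learning_rate * error * input_signal

# Repeated feedback gradually tunes the weight toward the target.
w = 0.0
for _ in range(50):
    w = adapt(w, input_signal=1.0, target=2.0)
print(round(w, 2))
```

Without the error term, the weight would never move: plasticity alone is just the *capacity* to change, while feedback is what directs the change, which is the distinction the paragraph above is drawing.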

Brain–machine interfaces (BMIs) establish direct communication between the brain and artificial actuators. As such, they hold considerable promise for restoring mobility and communication in patients suffering from severe body paralysis. To achieve this end, future BMIs must also provide a means for delivering sensory signals from the actuators back to the brain. Prosthetic sensation is needed so that neuroprostheses can be better perceived and controlled. Here we show that a direct intracortical input can be added to a BMI to instruct rhesus monkeys in choosing the direction of reaching movements generated by the BMI. Somatosensory instructions were provided to two monkeys operating the BMI using either: (a) vibrotactile stimulation of the monkey’s hands or (b) multi-channel intracortical microstimulation (ICMS) delivered to the primary somatosensory cortex (S1) in one monkey and posterior parietal cortex (PP) in the other.

Stimulus delivery was contingent on the position of the computer cursor: the monkey placed it in the center of the screen to receive machine–brain recursive input. After 2 weeks of training, the same level of proficiency in utilizing somatosensory information was achieved with ICMS of S1 as with the stimulus delivered to the hand skin. ICMS of PP was not effective. These results indicate that direct, bi-directional communication between the brain and neuroprosthetic devices can be achieved through the combination of chronic multi-electrode recording and microstimulation of S1. We propose that in the future, bidirectional BMIs incorporating ICMS may become an effective paradigm for sensorizing neuroprosthetic devices.
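To make the closed-loop contingency concrete, here is a toy sketch of the structure described in the abstract: a decoded cursor position gates delivery of a feedback stimulus when the cursor is held near the center. This is purely illustrative; the decoder, function names, and all values are hypothetical, not the authors' actual methods.

```python
# Toy sketch of a bidirectional loop: decode neural activity into a
# cursor position, then deliver feedback stimulation contingent on
# that position. All names and numbers are hypothetical.

def decode_cursor(neural_rates, weights):
    """Toy linear decoder: map firing rates to a 1-D cursor position."""
    return sum(r * w for r, w in zip(neural_rates, weights))

def feedback_stimulus(cursor_pos, center=0.0, tolerance=0.1):
    """Deliver feedback only while the cursor is held near the center,
    standing in for the ICMS (or vibrotactile) delivery rule."""
    return "stimulate" if abs(cursor_pos - center) <= tolerance else "none"

# One pass through the loop with made-up firing rates and weights.
rates = [0.1, -0.1, 0.05]
weights = [0.5, 0.3, 1.0]
pos = decode_cursor(rates, weights)
print(round(pos, 3), feedback_stimulus(pos))
```

The point of the sketch is the direction of information flow: the brain drives the cursor through the decoder, and the machine closes the loop by sending a stimulus back whose delivery depends on what the brain just did.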