Posted by timothy on Friday January 01, 2010 @06:01AM
from the think-I-better-not-wear-one dept.

An anonymous reader points to this explanation of a brain-machine interface for real-time synthetic speech production, which has been successfully tested in a 26-year-old patient. From the article: "Signals collected from an electrode in the speech motor cortex are amplified and sent wirelessly across the scalp as FM radio signals. The Neuralynx System amplifies, converts, and sorts the signals. The neural decoder then translates the signals into speech commands for the speech synthesizer."
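For the curious, the chain described in the summary (collect signals → sort spikes → decode → drive the synthesizer) can be sketched roughly like this. Everything here - channel counts, weights, formant targets - is made up for illustration, not taken from the paper:

```python
# Hypothetical sketch of the decode chain: sorted spike counts from two
# channels go through a linear decoder to produce formant frequencies
# (F1, F2) for a speech synthesizer. All numbers are illustrative.

def sort_spikes(raw_samples, threshold=3.0):
    """Crude spike detection: count samples exceeding a threshold."""
    return sum(1 for s in raw_samples if s > threshold)

def decode_formants(spike_counts, weights, bias):
    """Linear map from per-channel spike counts to (F1, F2) in Hz."""
    f1 = bias[0] + sum(w * c for w, c in zip(weights[0], spike_counts))
    f2 = bias[1] + sum(w * c for w, c in zip(weights[1], spike_counts))
    return f1, f2

# Two illustrative channels; weights and biases chosen arbitrarily.
counts = [sort_spikes([0.1, 4.2, 5.0]), sort_spikes([3.5, 0.2, 0.3])]
f1, f2 = decode_formants(counts, weights=[[30, 10], [80, 40]], bias=[300, 900])
print(f1, f2)  # formant targets handed to the synthesizer: 370 1100
```

The real system obviously does far more (calibration, filtering, continuous decoding), but the shape of the pipeline is the same.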

I'm DAMN sure that this would F***ING F***ING F*** THIS IDEA give me at least a F*** WHAT A STUPID IDEA NO NO NO Tourette syndrome or something GAWD DAMNIT like it. BLOODY HELL DAMN MUST BE POLITE SIGN OFF

They don't even have to be words. They could be the thought pattern you get when you see a particular image (or face).

Then you tell the computer to associate it with the picture/video the computer sees in its camera (shared with you for augmented reality), or recorded audio clip. So the next time you tell the computer, "<command mode start>fetch <thought pattern><go><command mode end>", the computer will fetch the relevant object.
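A toy version of that associate-then-fetch idea, with the "thought pattern" reduced to an opaque string ID. Entirely hypothetical - in a real system the ID would come out of the neural decoder:

```python
# Toy sketch of binding a decoded "thought pattern" to an object, then
# fetching it in command mode. All names here are hypothetical.

class Associator:
    def __init__(self):
        self.bindings = {}

    def associate(self, pattern_id, obj):
        """Bind a decoded pattern ID to a picture, video, or audio clip."""
        self.bindings[pattern_id] = obj

    def fetch(self, pattern_id):
        """The '<command mode start>fetch <pattern><go>' lookup."""
        return self.bindings.get(pattern_id, "<unknown pattern>")

a = Associator()
a.associate("pattern_42", "photo_of_alice.jpg")
print(a.fetch("pattern_42"))  # -> photo_of_alice.jpg
```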

Frontal Lobes [headinjury.com]
Located right under the forehead (anterior), the frontal lobes are involved in tracking and sense of self. Additionally, they're involved in:
Arousal and initiation, as well as consciousness of the environment.
Reaction to self and environment.
Executive functioning and judgment.
Emotional response and stability.
Language usage.
Personality.
Word associations and meaning.
Memory for habits and motor activity.

Translating your speech thoughts into speech with this machine requires implanting electrodes in your brain, wearing a large device stuck to your scalp, and then actually speaking (though it only reads your brain). If you do all that, the government can read your thoughts. Though they could read those speech thoughts with a microphone for a lot less money, and without your help in going through all that surgery.

They keep referring to the patient in the test as a 'volunteer' but also state that he was "paralyzed except for slow vertical movement of the eyes." So he what? Signed the release forms by slowly looking up and down? I am guessing they mean volunteer as in 'his guardian(s) "volunteered" him'.


[irony]Why those bastards! And to think, they could have preserved the poor guy's rights and left him in his locked-in state, unable to communicate. That's the way God obviously intended him to be, and they had no right to play God for him. No doubt the poor guy's first 'words', since he would have recognized his rights were violated, would be "unplug me".[/irony]

There has been controversy. There are claims that his amanuensis did much more than simply transcribe. I saw a squib on it somewhere. No link for that part of the story. But Google is your friend if you're interested.

That seems an unnecessary piece of anti-science paranoia. The people doing the experiment are not the white-coated demons of science fiction. Even if they were as amoral as you suggest, it would still be practical of them to get the patient's permission before starting an experiment that took over three years to set up.

On the radio recently, I heard about the difficulties the doctors had with an even more extreme 'locked-in' case that had no eye movement. They got the patient to communicate one bit at a time by imagining tasting milk or lemon juice for minutes at a time. This caused the patient's saliva to change pH. This was not simply "think lemons if it is ok to operate", followed by "oh, bother, best of three?" - they had to establish that the intelligence was present, understanding what was being said, and replying in a reliable manner.
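The "replying in a reliable manner" part is the interesting bit: you repeat each yes/no question over several trials and only accept a clear majority. A rough sketch, with the agreement threshold picked arbitrarily:

```python
# Sketch of establishing a reliable one-bit channel as described above:
# repeat each yes/no question over several trials and accept an answer
# only on a clear majority. The 80% threshold is an arbitrary choice.

def reliable_answer(responses, min_agreement=0.8):
    """responses: booleans from repeated trials of one question."""
    yes = sum(responses)
    agreement = max(yes, len(responses) - yes) / len(responses)
    if agreement < min_agreement:
        return None  # not reliable enough - "oh, bother, best of three?"
    return yes > len(responses) / 2

print(reliable_answer([True, True, True, False, True]))   # -> True
print(reliable_answer([True, False, True, False, True]))  # -> None
```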

If you understand what I'm saying to you, please look up. (Patient looks up) To verify your ability to make choices in this manner, I will ask you a few questions.
If you want to indicate a YES answer, look up. Look down to indicate NO.
Do you understand this method of communication? (Patient looks up)
Would you like us to set fire to your genitals with a propane torch? (Patient looks down)
Would you like this young nurse to rub her naked breasts a

Because those last two words are Welsh, not English. You could also argue that é is a vowel letter because of words like café, but of course these words are French in origin, even though they show up in English usage.

Exactly. People tend to forget that the analysis of a natural language necessarily comes after the language itself, and often mistake the descriptive system for a prescriptive one. Classification of language parts is useful, since it's easier to learn and remember things that are structured, but there will always be outliers and unsettled areas. The differences among described language systems highlight this.

No, I don't think you can. Café is a French word that English-speaking people use. I'm just glad English does not have an entire separate alphabet to write non-English words in (see http://en.wikipedia.org/wiki/Katakana [wikipedia.org] ).

Imagine the implications for people with cerebral palsy or paralysis of similar nature. I would always cringe when I watched someone who had been severely limited in their motor functions and could not speak, but with the help of an unconventional system, could communicate. They would stare at letters on a placard, and would spell out (at a rate worse than texting!) each word letter by letter. Or they would attach a rod to the forehead of the person and have them peck at a screen, again, typing out each word letter by letter. I get frustrated enough texting with one hand--these people have amazing patience.

There is a movie, based on a book based on a true story, called The Diving Bell and the Butterfly [wikipedia.org], where this man gets into an accident and was thought to be in a vegetative state, but actually was fully conscious and aware of everything around him. This is called locked-in syndrome [wikipedia.org] and it is scary to even imagine. He ended up being able to communicate with the outside world by BLINKING. And even blinking was difficult for him, since he only had control of one eyelid. The nurse would slowly speak out letters in order of the most frequently used (in this case, he was French, so the letters were ordered by their frequency in French words) and he would blink to indicate that this was the correct letter. Needless to say, this was a very long and tedious process. But, as a testament to the perseverance of the human spirit, he actually wrote a book sharing his experiences of being in this state.

Imagine the freedom he would have experienced at being able to talk again.
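The frequency-ordered scanning trick can actually be quantified: the expected number of letters the helper reads per selection drops when the ordering matches letter frequencies. A toy sketch with made-up frequencies for a five-letter subset, not real French (or English) statistics:

```python
# Toy sketch of partner-assisted scanning: the helper reads letters in
# order and the patient blinks at the right one. Frequencies below are
# rough illustrative values, not real language statistics.

freq = {"e": 0.15, "t": 0.10, "a": 0.08, "o": 0.07, "n": 0.07}
order = sorted(freq, key=freq.get, reverse=True)  # most frequent first

def expected_scans(order, freq):
    """Average number of letters read aloud before a selection."""
    total = sum(freq.values())
    return sum((i + 1) * freq[c] / total for i, c in enumerate(order))

print(round(expected_scans(order, freq), 2))         # frequency order
print(round(expected_scans(sorted(freq), freq), 2))  # alphabetical order
```

Even on this tiny alphabet the frequency ordering beats alphabetical; over a full alphabet and a whole book, the savings are enormous.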

I'm interested in the results of this technology as applied to stammering and similar speech disorders - these are not physical but psychological issues, and appear to be mostly confined to the vocal cords; stammerers can type just fine. This might help us isolate exactly where the breakdown between mind and voice is happening.

Neither of those (I believe the second two links are showing the same interface) are at all close to what the GP is looking for. The first is closer, in the same way that a 6-foot tall man is closer to passing clouds than a 5-foot tall man. I'm not sure exactly what the mechanics are behind the system in the first video, and the author's site isn't very informative, but it looks like it might be symbol-based; that is, he thinks "D3" hard enough and the system picks up on it.

All I've ever wanted from brain-interface computing is the ability to 'think' music into some format where I can play it back again. Are we getting close to that yet?

In what sense? Single-note control of a virtual synthesizer (external control of a real synth, like MIDI) is possible using the technique here, applied to motor control of a finger. Multiple notes are easily done once cross-channel signal interference is eliminated (the fingers talk to each other in the cortex). Add another channel, and you can control a bank of instrument selection, again like MIDI. But all you've done is replace your hands with hardware. This is a damn expensive process as well as requir
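For the single-note case, the decoder output really can be as dumb as a MIDI Note On/Off byte pair per finger channel. A sketch (the finger-to-note mapping is arbitrary):

```python
# Sketch of the "replace your hands with hardware" point: a decoded
# finger-channel event becomes a standard MIDI Note On / Note Off pair.
# The finger-to-note mapping below is arbitrary.

def note_on(note, velocity=100, channel=0):
    """Channel Voice message: status 0x9n, then note and velocity."""
    return bytes([0x90 | channel, note, velocity])

def note_off(note, channel=0):
    """Status 0x8n, then note, velocity 0."""
    return bytes([0x80 | channel, note, 0])

finger_to_note = {0: 60, 1: 62, 2: 64}  # fingers -> C4, D4, E4

# Decoder reports "finger 2 fired" -> emit E4.
msg = note_on(finger_to_note[2])
print(msg.hex())  # -> "904064"
```

These three bytes are exactly what you would push out a MIDI interface to a hardware synth.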

Ah, every time something like this comes up I keep hoping that we've found some auditory buffer that both memories and the ear feed into, in a format that would be easy to parse. I think we're seeing the first glimmers of doing that with the visual cortex? I'm not sure how similar 'remembered' music is to 'entirely imagined out of thin air on the spot' music, but I imagine that either way, it would be a great boon for composers...probably do for them what the Internet did for journalists. Whether that's

How long or how many real or faked terrorist attacks will it take until such an electrode is mandatory and thoughts are registered and stored in central databases? Call me paranoid, but if something like this is technically possible it will be done. Of course, if not to prevent terrorist attacks then to protect our children. So the first to get this electrode will be sex offenders. The usual way to soften resistance against the removal of civil rights.
True, the first versions now are still very primitive a

Most implant approaches use electrodes shoved in from the outside intending them to work immediately. That invasive technique leaves the person open to infection, and the neurons contacted tend to die fairly quickly, requiring yet another round of more of the same. This approach takes a long time, but eliminates the chance of infection (after the obviously necessary implantation) and lets neurons grow into and around the electrodes, so none of them producing signal are likely to die off soon, allowing long term contact and communication.

I'm sure there will be improvements on this, but this looks to me to be the first really viable direct neural signal collection technique.

"Five years ago, when the volunteer was 21 years old, the scientists implanted an electrode near the boundary between the speech-related premotor and primary motor cortex (specifically, the left ventral premotor cortex). Neurites began growing into the electrode and, in three or four months, the neurites produced signaling patterns on the electrode wires that have been maintained indefinitely.

Three years after implantation, the researchers began testing the brain-machine interface for real-time synthetic speech production. The system is “telemetric” - it requires no wires or connectors passing through the skin, eliminating the risk of infection. Instead, the electrode amplifies and converts neural signals into frequency modulated (FM) radio signals. These signals are wirelessly transmitted across the scalp to two coils, which are attached to the volunteer’s head using a water-soluble paste. The coils act as receiving antenna for the RF signals. The implanted electrode is powered by an induction power supply via a power coil, which is also attached to the head."

Rather than risk killing off speech-center neurons in the implant process, they instead implant the electrodes in the pathway through which the speech center communicates outbound. Previous attempts by others went directly for the primary processing centers. This small change shows remarkable foresight. I'd call this the first true hack in neural interfacing.

The only point of clarification I'd add is to say "through the scalp" instead of "across"; the latter more often implies a lateral vector. And the only point I'd request: if only the scalp needs to be traversed, is the transmitter between the skull and scalp? It appears so, but it isn't stated as such in the paper (the PLoS article's URL is at the bottom of TFA). In any case, the FM transmission through the scalp does away with all the permanent jacks and sockets that SF and Hollywood have always used to signify brain/machine interfacing. With this one implementation, the future image of neural interfacing becomes something like a hair net with buttons sewn into it (we already have EEGs like this). Someone call Larry Niven. Wireheads will be buttonheads.

A future hack will almost certainly be to collect the signal wires running from the scalp to a second transmitter operating between the person and the machine. This will eliminate the direct connection and allow movement, including ambulatory data collection and processing. That not only makes possible testing in realistic situations, but also neural control of machine mediated locomotion for the paralyzed, without being restricted to the length of a cable. An obvious inclusion here would be a transmitter at the machine with receiver on the person, running the signals into the relevant muscle groups. This will also take some power induction that may be greater than the FM systems being used can handle. And are we not on the verge of getting wireless power induction for operating such devices, the same technology intended to refresh batteries and even run laptops?

A bit farther in the future will be a switch from spike analysis of neural firing to time/frequency analysis of synchronized activity, such as EEGs examine. The former requires computation that's commonly available. The latter requires continuous wavelet analysis that s
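For anyone curious what the time/frequency alternative looks like, here is a naive continuous wavelet transform with a Morlet-style wavelet. Purely illustrative - the parameter choices (width 5, 256 Hz sampling) are arbitrary, and real EEG pipelines use optimized implementations:

```python
import numpy as np

# Naive continuous wavelet transform: convolve the signal with a complex
# sinusoid under a Gaussian envelope at each analysis frequency, then
# take the squared magnitude as power over time.

def morlet(t, f, width=5.0):
    """Complex sinusoid at f Hz under a Gaussian envelope."""
    s = width / (2 * np.pi * f)
    return np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))

def cwt_power(signal, fs, freqs):
    """Power per analysis frequency over time: shape (n_freqs, n_samples)."""
    t = np.arange(-1, 1, 1 / fs)
    return np.array([np.abs(np.convolve(signal, morlet(t, f), mode="same")) ** 2
                     for f in freqs])

fs = 256
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)   # a pure 10 Hz "alpha-like" rhythm
power = cwt_power(sig, fs, freqs=[6, 10, 20])
print(power.shape)                 # (3, 512)
# For a 10 Hz input, the 10 Hz row carries most of the power.
```

Doing this continuously, across many channels and frequencies, is where the computational cost comes from.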

Actually I think James Schmitz may have coined the term "wirehead" in "The Telzey Toy" (January 1971, Analog Science Fiction and Science Fact), sometimes reprinted as "Ti's Toys." But I could be wrong.

I always figured one could probably blast high-frequency IR through several layers of skin, to solve the "wire/socket" problem. Skin is reasonably transparent to a fairly wide range of IR, and UV too (for the melanin-deficient among us).

I wonder if this machine could somehow be used for military applications. If you'd make it accurate enough and hook it up to some transmitter device, you could use it for perfectly silent communications - sure, you have handsigns for that already, but words can be a little bit more precise at times...

There is a better device for that now: a much more advanced throat mic. http://focus.ti.com/pr/docs/preldetail.tsp?sectionId=594&prelId=sc08029

It records the signals that you actually send to the voice box and outputs them. It is less invasive and can be used in a VVOX setup to remote-control objects without worrying that someone else will overhear the commands or try to shout commands for you.

Not necessarily telepathy. But if it is possible to implant electrodes into the speech centre of the brain, is it necessary to be trained to use them? Or can the electrodes be implanted so that the system automatically transmits speech as radio waves? That way it would be possible to implant this system into every person. Using the infrastructure already present for cell phones, it would be easily possible to record and store every single conversation of every person. The wet dream of all governments.

The study is led by Frank Guenther of the Department of Cognitive and Neural Systems and the Sargent College of Health and Rehabilitation Sciences at Boston University, as well as the Division of Health Science and Technology at Harvard University-Massachusetts Institute of Technology. The research team includes collaborators from Neural Signals, Inc., in Duluth, Georgia; StatsANC LLC in Buenos Aires, Argentina; the Georgia Tech Research Institute in Marietta, Georgia; the Gwinnett Medical Center in Lawrenc

Basically she has bulbar-onset ALS. Her main symptom is that she can't talk at all anymore. You'd think just writing things down would be an appropriate substitute, until you actually have to try it as your sole method of communication.

A brain interface is only as good as the number of unique states it can detect. In this case, it's only a handful (4, I think). So when the summary says "speech" it means "a number of vowel sounds." This guy isn't able to play Wheel of Fortune, but he could buy a vowel - and that's about it.

Still, promising technology for sure. It just has a long way to go before fully synthesized, meaningful speech.