jd writes "In a major breakthrough, neurologists are reporting that they can decipher neurological impulses into speech with 80% accuracy. A paralyzed man who is incapable of speech has had electrodes implanted in his brain which detect the electrical pulses relating to speech. These signals are then fed into computers which convert the pulses into signals suitable for speech synthesis. As a biotech marvel, this is astonishing. Depending on the rate of development, it is possible to imagine Professor Hawking migrating to this, as it would be immune to any further loss of body movement and would vastly accelerate his ability to talk. On the flip-side, direct brain I/O is also a major step towards William Gibson's Neuromancer and other cyberpunk dark futures."

My wife was in a massive car accident a decade ago. She was in a coma for a month and suffered brain injuries, a collapsed lung, a shattered arm, a cracked eye socket, a multiply-broken jaw, etc. She had been a National Merit Scholarship winner before the accident; her parents were told that, if she survived, she'd likely never walk much or be able to look after herself again.

As it happened, she was sufficiently beaten up at the time that she had no concept of how bad her injuries were. She got out of the wheelchair simply because it frustrated her. She went back to working part time simply because she didn't realize she wasn't supposed to be able to. By the time she comprehended what had happened, she'd improved enough that impossible goals like "become a personal trainer" weren't quite so impossible. We taught her to read again (yes, even that got messed up) and even managed to get her back into school - initially only able to pull a 2.0 average, but she improved each semester.

In her case, she had an amazing recovery. Yet she, herself, says, "If I'm ever like that again, turn me off." She didn't realize how hurt she was and got lucky with recovering before she did. Understanding now, she has absolutely no desire to try that fight again. She'd rather just call it a day.

So, sadly, there's a real likelihood that his first words, upon realizing he can finally communicate, after years of being unable to and stuck in a totally paralyzed body, will be, "Kill me." Probably not ideal to have the family in the room for.

And yes, that entire story was just so I could "drop" that I have a wife in a slashdot post. Cunning, huh?

As a slashdot reader, it goes without saying that I love to revel in the latest tech, but stories like this one prove that people like you and your wife are the true inspirations in the world.
All the tech and science is wasted if it can't benefit people with "real lives" like yours.
Like tjstork said:
"A story like yours deserves to be told, and demands that we listen."
Those who don't listen cut themselves off from reality and lose out on more than they can dream of.
-papvf

Up to a certain point I agree. Blanket party, I've had. Shipmates dicking with my fold-n-stow in the boot barracks earned me Marching Party. Two marching parties would have led to "Short Tour", but then the jerks (some among us recruits) figured out I was harmless, and they left me alone. While others claimed Marching Party was hell (PT with a 14-lb rifle, at night, during sleep time, from about 2200-0000), I considered it exercise, and I made it just fine. By considering it exercise my mind dissuaded me from

The BBC article is pretty light on detail, and the New Scientist one is subscribers only, but there is more stuff here [eurekalert.org].

They have hooked up to 41 neurons and:

For now, the team is focusing on the building blocks of words. In a series of experiments over the last few years, Ramsey has imagined saying three vowel sounds: "oh", "ee" and "oo". By watching his brain activity, the researchers have been able to identify distinct patterns associated with the different sounds. Although the data is still being analysed, they believe that they can correctly identify the sound Ramsey is imagining around 80 per cent of the time.
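For intuition, the quoted setup (41 recorded neurons, three vowel classes, ~80% identification) can be sketched as a toy pattern classifier. Everything below - the firing rates, the noise level, the nearest-centroid rule - is my own made-up illustration, not the researchers' actual analysis:

```python
# Toy sketch (not the actual pipeline): classify simulated 41-channel
# firing-rate vectors into three imagined vowels with a nearest-centroid rule.
import math
import random

random.seed(0)
VOWELS = ["oh", "ee", "oo"]
CHANNELS = 41  # neurons recorded, per the article

# Hypothetical mean firing rate per channel for each vowel (made-up numbers).
centroids = {v: [random.uniform(5, 50) for _ in range(CHANNELS)] for v in VOWELS}

def simulate_trial(vowel, noise=12.0):
    """One noisy recording of the subject attempting a vowel."""
    return [m + random.gauss(0, noise) for m in centroids[vowel]]

def classify(trial):
    """Pick the vowel whose centroid is closest in Euclidean distance."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(trial, centroids[v])))
    return min(VOWELS, key=dist)

# Estimate accuracy over many simulated trials.
trials = [(v, simulate_trial(v)) for v in VOWELS for _ in range(200)]
accuracy = sum(classify(t) == v for v, t in trials) / len(trials)
print(f"accuracy: {accuracy:.0%}")
```

With this mild simulated noise the toy scores far above 80%; real neural recordings are vastly messier, which is presumably where the missing 20% goes.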

Thanks, that's quite helpful. I could find no details about this on my own, lacking a New Scientist subscription.
He isn't "imagining" these sounds - he's trying to produce them. I suspect they've tapped into the motor cortex, where one of the last stages of motor processing occurs. They're not tapping into "speech" centers - it's simply a motor area associated with articulatory muscles.
Not that it isn't impressive, but it's not a step towards mind-reading or better computer-human interfaces unless you suffer from this kind of paralysis.

This sounds great, but considering how cochlear implants have worked out, this scares me a bit. I know someone who has a defective cochlear implant and it is causing her a lot of problems. Worse than the fact that her restored hearing sounds like a computer and that the implant is failing is the prospect of another operation to fix it. However much this technology could be of benefit, I would much rather avoid the implants altogether.

You shouldn't be scared about cochlear implants. The cochlea is just a biological transducer (converts pressure changes to electrical pulses). It is complex, but it is not a "decision-making" part of the brain. It serves as an input to the brain. So, a cochlear implant replaces the biological transducer with an electronic one. It works well but of course it will be improved.
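The transducer idea above can be sketched roughly: split incoming sound into frequency bands and drive one electrode per band with that band's energy, standing in for the cochlea's own frequency-to-place mapping. The band edges, sample rate, and naive DFT below are my own illustrative choices, not any real implant's signal processing:

```python
# Toy illustration of the cochlea-as-transducer idea: frequency bands in,
# one "electrode drive" level out per band.
import cmath
import math

SAMPLE_RATE = 8000
BANDS = [(100, 500), (500, 1000), (1000, 2000), (2000, 4000)]  # Hz, per electrode

def band_energies(samples):
    """Naive DFT, then sum magnitudes per band -> one drive level per electrode."""
    n = len(samples)
    spectrum = [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                        for i, s in enumerate(samples)))
                for k in range(n // 2)]
    hz_per_bin = SAMPLE_RATE / n
    return [sum(spectrum[int(lo / hz_per_bin):int(hi / hz_per_bin)])
            for lo, hi in BANDS]

# A pure 1500 Hz tone should mostly drive the 1000-2000 Hz electrode.
tone = [math.sin(2 * math.pi * 1500 * t / SAMPLE_RATE) for t in range(256)]
drives = band_energies(tone)
print([round(d, 1) for d in drives])
```

Real implants use more sophisticated filter banks and envelope extraction, but the principle - sound in, per-band electrical stimulation out, no "decision-making" involved - is the same.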

"With the new exoskeleton, Stephen will be able to safely handle radioactive isotopes in the high-radiation area of the new supercollider particle accelerator. And his new robo-arms are capable of ripping open enemy tanks like they were nutshells,"

Upgrade Stephen Hawking into the Six Million Dollar Man, to the point that there are two Chuck Norrises in the world. Asymptotically, of course. Because the only thing better than Chuck Norris is a geek Chuck Norris, I guess.

Two desires:
1. To restore Stephen Hawking's physical body to its former fully-functional form.
2. To turn Stephen Hawking into a mobile, indestructible cyborg of incomprehensible power.

It was the movie RoboCop that's driving it all. No one really cares about that one disabled genius. The public just thinks most geniuses are mad scientists anyway and are just waiting for an evil one to "invent" or experiment with making RoboCop.

Although the data is still being analysed, researchers at Boston University believe they can correctly identify the sound Mr Ramsay's brain is imagining some 80% of the time.

In the next few weeks, a computer will start the task of translating his thoughts into sounds.

"We hope it will be a breakthrough," says Joe Wright of Neural Signals, which has helped develop the technology.

While this is indeed promising, and I hope that this 'unlocks' this poor fellow, this 'unlocking' has not happened yet. Hopefully, when they are able to decipher these signals, he's not saying, "Kill me" over and over again.

While this is indeed promising, and I hope that this 'unlocks' this poor fellow, this 'unlocking' has not happened yet.

It's a little unclear from the BBC article, but going from their research posters [speechprosthesis.org], they have in fact tested the translation already, using a data set compiled from neural recordings made while having the subject try to produce different phoneme sounds. However, this analysis was done "offline," not in real-time. I think what they're referring to doing in the "next few weeks" is getting the

I believe Antonio Damasio addresses this question in one of his books. Apparently, a fortunate side-effect of this condition is it impairs the part of your brain that would normally find this horrific and intolerable and leaves you with a weird sense of acceptance and well-being (IIRC). Otherwise, I guess you just blink a lot and hope they keep the feeding tube hooked up.

Apparently, a fortunate side-effect of this condition is it impairs the part of your brain that would normally find this horrific and intolerable and leaves you with a weird sense of acceptance and well-being

Really? I hope so, but that just seems like too much of a coincidence -- like something the caregivers tell themselves so they don't have to deal with the horror of the situation.

Apparently, a fortunate side-effect of this condition is it impairs the part of your brain that would normally find this horrific and intolerable and leaves you with a weird sense of acceptance and well-being (IIRC).

and not a 'techno-biological' failure. The future's darkness comes from a tyrannical plutocracy which misuses the technology, which could have just as easily been used to save mankind. It is in fact an outgrowth of current economics and politics, not technology. Please, get your stories straight.

But your kind of reasoning could also be turned inside out, e.g.: "Mr. Gibson's dark future is a technological failure and not an economic/political one. That nasty future comes from a tyrannical group of technologists who misuse the social system."

What I want to say is that technology and politics/economics are all creatures of humans. It's just as misleading to blame "economics" and "politics" instead of the people misusing the system (who are basically all of us) as it is to blame a particular technology for

Tyranny has been around since before the stone age. What has technology got to do with it, other than increasing the tyrant-to-subject ratio? The desire to oppress is inherently a human social one. Some will claim (neocons, for instance) that we can use tyranny to make things better, but it doesn't work that way. Technology, on the other hand, is much more legitimately separable from human motivation (there are a variety of motivations that can lead to most technologies). Moreover, unlike tyranny, we have a chance of using a given technology only (or at least predominantly) for good. Technology is a double-edged sword, in part because it and its fruits are actually tools, not motivations unto themselves.

Remember a few years ago, when we could control wheelchairs with 90% accuracy from electromagnetic transducers outside the skull? Now the external sensors are gone and we have a breakthrough with 80% accurate speech synthesis from internal sensors. I wonder when the wheelchair one is going to become a product.

I have to be really skeptical when I see this kind of report. Research has suggested that the way the brain functions to produce speech is not like typing out words into a computer. Things are probably not grouped by the similarities in their letters or pronunciation. They are most likely stored by a particular hierarchy that may or may not vary widely across individuals, depending mainly on environment. Noise also becomes a huge issue; having the electrodes inside the brain cuts down on that problem, but

How would it be able to differentiate between "out loud" voice and private thoughts? This could be really embarrassing for users. Imagine if a secretary (or nurse) walks by when you're in the middle of speaking or dictating a letter:

Dear sir, I am writing wow nice tits and she has a great ass too uh oh wedding ring in order to ask if you would be interested in our new product line of neural-input word processors.

I am actually curious about this. How many of you talk in your head? I have noticed that I haven't done it frequently in a few years, these days the thoughts mainly just "happen". It seems to me as if the thoughts "happen" anyway (in an instant), but people talk to themselves to mull them over or just to pass the time. How many of you talk inside your heads, and how often?

...when I read about advances in neural-electrical interfacing, I hope for a quick solution to the problem of blindness. I have so many friends who would be even more creative and productive if only they could see.

My mother is becoming blind, too, and it's breaking my heart to see her like that. I hope an affordable implantable camera, interfaced to the vision centers, will come in the near future. Nothing fancy, just B&W at low resolution with no greyscale, would do miracles.

IAANS (I am a neuropsychology student). The issue with treating blindness is that the occipital lobe of the brain (and other areas) needs appropriate input at certain ages in order to develop typically. If your friends acquired blindness in adolescence or adulthood, then it would be fairly simple (the preceding is a lie) to hook up a camera to their optic nerve, much like we do with cochlear implants. The neurons have learned how to deal with visual input earlier, and now are just kicking around relaxing and w

For those curious, this speech prosthesis research was presented in a number of posters at the Society for Neuroscience (SfN) conference a couple weeks ago. Their six SfN posters can be found on their website here, covering topics like the circuitry they developed, Bayesian signal analysis, and so forth:

I still can't scan a 50-page document and OCR it without spending hours cleaning it up afterwards. Nor can voice recognition software really understand or interpret what I say and lay it out with correct punctuation on paper.

Those are two comparatively basic tasks I would expect to be perfected at some point, and until they are, I take all these great human-machine interface "breakthroughs" with huge grains of salt.

One has to wonder who is doing the work. Is the paralyzed man adapting to the computer, or is the computer learning the brain signals? Either way, it's good work, but I would bet that the way to perfect this type of technology is to "teach" the human to control his neurological impulses. I doubt the technology is directly eavesdropping on his speech.

How do they know they're accurately converting the signals to sound, if they're basing this off a man who has no ability to speak?

Many people who are unable to speak are able to communicate in some other way (usually some form of gesture, whether sign language, nodding, blinking, whatever). It doesn't take much to be able to indicate "right" or "wrong".

Many people who are unable to speak are able to communicate in some other way (usually some form of gesture, whether sign language, nodding, blinking, whatever). It doesn't take much to be able to indicate "right" or "wrong".

Remember, it's only 80% accurate. It may be more like "rigm!" or "prong!"

I'm guessing the 80% comes from the fact that this is an issue of the linear separability of signals. It's generally hard to get reliable sensitivity/specificity measures over this that anyone is going to take seriously.

They could probably get up to 90%, but from experience I would say that at that sensitivity the rate of false positives likely starts to climb very steeply. It's better to stop at 80%, at least when something is in the early stages.

This is just guessing, of course - I have no knowledge of their research - but going from my own work on non-linearly-separable sets, I'd say this is what's happening.
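The false-positive guess above can be illustrated with a textbook signal-detection toy: two overlapping Gaussian score distributions a fixed distance apart. Pushing the hit rate from 80% to 90% moves the decision threshold into the fat part of the noise distribution, so false positives grow much faster than sensitivity. The separation value and the whole setup are my own assumption, nothing from the actual study:

```python
# Toy ROC sketch: raising sensitivity from 80% to 90% on two overlapping
# Gaussian score distributions roughly doubles the false-positive rate.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

SEPARATION = 2.0  # assumed distance between class means (d'), in std units

def false_positive_rate(sensitivity):
    # Find (by bisection) the threshold achieving the requested hit rate
    # on the signal class; 1 - phi(t - SEPARATION) decreases in t.
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 1 - phi(mid - SEPARATION) > sensitivity:
            lo = mid
        else:
            hi = mid
    threshold = (lo + hi) / 2
    # ...then see how often pure noise crosses that same threshold.
    return 1 - phi(threshold)

for sens in (0.80, 0.90):
    print(f"sensitivity {sens:.0%} -> false positives {false_positive_rate(sens):.1%}")
```

With these assumed numbers, going from 80% to 90% sensitivity roughly doubles the false-positive rate, which is the shape of the tradeoff the parent is describing.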

So what you're saying is that when the machine correctly identifies 80% of the signals, it recognizes that the other 20% are garbage and ignores them, whereas at 90% it (falsely) recognizes the other 10% as correct as well?

This is great. Now all we have to do is reverse the fucker so it figures out 80% garbage and 20% signal. Then we attach it to congress critters, lawyers, and RIAA stooges. Now we don't have to listen to their shit at all anymore.

So your main source of skepticism is something that you, as (I'm assuming like me) a layman, thought up a solution to in 5 seconds?

Yeah it's a short article, what's your point? You want the exact methodology they used to get that number (which if we took literally only has one significant digit), you'd have to read whatever paper they publish. "Ask them to say X, compare to what the computer says" seems a pretty reasonable assumption of how they did it though.

The article says the man is 'locked in', which means that he not only cannot speak, but he has no voluntary movement whatsoever, even blinking eyelids.

There was an article recently in New Scientist about this. One problem doctors studying this field have is that since it is an experimental treatment, they need consent of the patient, and how can they get consent if the patient can't communicate?

With some locked-in patients, they are able to respond based on the acidity of their saliva. They are told to imagine either the taste of lemons (for yes) or of milk (for no), and their saliva sympathetically adjusts to their thoughts. Then their saliva is measured. See more here: http://www.mindhacks.com/blog/2007/08/locked_in_with_the_b.html [mindhacks.com]

Sad to say it, but I suspect the first thing the patient will say is "kill me".

I've taught a few people who couldn't speak how to work their voices. In one case, she would talk a little like Boomhauer from King of the Hill. "Daddy, mumble mumble me mumble mumble juice mumble mumble counter?" Once she got used to the feedback and the system, she would fill in the mumbled parts with the correct conjunctions. Perhaps that's how the 80% is getting in there. The gene

No, they weren't. I hope you are not spreading that tired old, and completely disproved, myth that the US was founded by Christians, or on "Christian values"?

Huh? The founding fathers were predominantly Christians in their private and public lives. Judeo-Christian values were at the core and often demonstrated at "federal" and state levels of government. What they did disapprove of was government favoring any particular church or religion. Therefore they wrote in a very neutral manner, such as "... the separat

The majority of Western values do not trace their roots to any of the Middle Eastern religions. They come from other places, such as Greek philosophers.

In fact, the philosophical foundations of the US are in many ways opposite to the so-called Christian values. Cruel and unusual punishment, for example, is condoned--actually commanded--by the Christian god. Slavery, and the belief that all men are NOT created equal, is a common theme in the Bible.

The statesmen/philosophers who founded this country may have been Christian, but the documents they wrote to found this country were quite the opposite.

On the flip-side, direct brain I/O is also a major step towards William Gibson's Neuromancer and other cyberpunk dark futures."

As we move toward a better understanding of the brain as a biochemical machine, we become better able to manipulate it through various methods. As we do that, we run into the ethical dilemma of doing so. But if we accept that we are only a complex machine, then is there really any concept of "human rights", or is it just a social construct that may be revoked at any time?

It's not duplicity of thought. You just lack understanding. One does not need a creator to imagine a human spirit. In fact, the idea of a creator adds nothing to the idea of the spirit. It just marks an artificial stopping point in the quest for answers: What did it? Creator did it! What made Creator? Don't go there! Dumb. Eastern religions have a better word for it: suchness. That is just so, as it is. The idea of spirit relates more to the idea that things are more than the sum of their parts (due to the i

Would a device like this work on someone who doesn't know how to speak English, or better yet, a baby that speaks no language at all?

The answer is "Yes" (but not the way you intended) and "No."

It would work for a non-English speaker IFF that speaker was trying to speak his native language; what they've detected is the brain's intention to produce a SOUND; so, by extension, the interpretation is producing a phonetic representation of the sounds in the person's head.

It isn't interpreting the concept of the sound (someone isn't thinking of a cat and the word "cat" is produced). It should be possible for someone speaking any language (including a made-up one) to use this system.

For a baby (who has no word associated with the object), it wouldn't provide any use... unless your conjecture is that a baby doesn't speak because the muscles in her throat aren't strong enough to form words, but her brain knows what sounds would be made. Then... sure, it would work. 8)

Would a device like this work on someone who doesn't know how to speak English, or better yet, a baby that speaks no language at all? If so, then we just invented the universal translator. Live long and prosper, trekkies.

Yes, it certainly would. The device works by directly picking up the intent of the subject in a global, individual-neutral format. That intent is then translated into English by dictionary lookup and standard text-to-speech software. It would be a trivial matter to substitute any other language.

> This kind of research obviously would lead to, a few years down the road, a type of electronic telepathy.

Yes, think of the progression:

- Improve detection to the point it can accurately detect thought-sounds
- Instead of translating the sounds into audible sounds, transmit them wirelessly (transmitting)
- Implant a wireless receiver that injects sound-signals into the brain (receiving)
- AI spontaneously emerges and takes over the subject's brain, becoming the first of our neural-implant overlords!

I don't think this thing actually reads your internal thoughts. You have to learn to send the right impulses to it, just as a baby has to learn to send the right impulses to its mouth and vocal cords.