
narramissic writes "Honda has released a video of experiments showing a person wearing a large hemispheric scanner on his head and controlling Honda's Asimo robot by visualizing movement. Back in 2006, Honda and ATR researchers managed to get a robotic hand to move by analyzing brain activity using a large MRI scanner. This latest work uses EEG to measure the electrical activity in a person's brain and blood flow within the brain using near-infrared spectroscopy (NIRS) to produce data that is then interpreted into control information. While both the EEG and NIRS techniques are established, the analyzing process for the data is new. Honda said the system uses statistical processing of the complex information to distinguish brain activities with high precision without any physical motion."
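Honda hasn't published the details of its "statistical processing," so as a rough illustration only: one of the simplest statistical ways to sort combined EEG + NIRS feature vectors into a handful of imagined-action classes is a nearest-centroid rule trained on labelled trials. Everything below (feature count, class count, the data itself) is made up for the sketch.

```python
# Toy sketch -- NOT Honda's actual method, which isn't described in TFA:
# classify a combined EEG + NIRS feature vector into one of four
# imagined-movement classes using a nearest-centroid rule.
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, N_FEATURES, N_TRAIN = 4, 8, 50  # 4 actions; feature count is hypothetical

# Synthetic training data: each class clusters around its own centroid.
true_centroids = rng.normal(size=(N_CLASSES, N_FEATURES)) * 3
X = np.concatenate([c + rng.normal(size=(N_TRAIN, N_FEATURES))
                    for c in true_centroids])
y = np.repeat(np.arange(N_CLASSES), N_TRAIN)

# "Training" = estimate one centroid per class from the labelled trials.
centroids = np.stack([X[y == k].mean(axis=0) for k in range(N_CLASSES)])

def classify(feature_vec):
    """Return the index of the nearest class centroid."""
    dists = np.linalg.norm(centroids - feature_vec, axis=1)
    return int(np.argmin(dists))

# A fresh trial drawn near class 2's cluster should come back labelled 2.
trial = true_centroids[2] + rng.normal(size=N_FEATURES) * 0.5
print(classify(trial))
```

The real system presumably does something far more sophisticated with noisy, high-dimensional time-series data, but the shape of the problem (map brain-signal features to one of N discrete commands) is the same.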

Controlling something (a robot or UAV) will only get easier and easier. The problem I see is getting the input side working for total control. Combining this with the technology being developed to help the blind and deaf would be the next great leap.

House M.D. did this in the latest episode (yesterday's) to some extent.

The patient was paralyzed ("locked in syndrome") and could only communicate by blinking, then lost that ability, so they used an EEG and trained ("think 'up'") the patient to move the cursor on the screen.

Either way, I'm not really impressed, as I'm sure this has been going on for about as long as the EEG has been around -- the better part of a century now -- and NIRS for a few decades.

I wouldn't be impressed if I were you, either. Obviously you are one of the great minds in human-computer interface work and have a list of accomplishments that dwarfs this minor stepping stone of an achievement.

Anyone who has seen Angelic Layer will know what I'm talking about: a guy invents a direct thought-control system for medical applications, but in order to get it funded, develops it into a game. Makes sense -- go for a very large market to reduce the cost of the technology and rapidly improve it, then use that to push the more limited medical applications.

Heinlein's character Sergeant Zim (I think?) made the argument that men in the field were still the most superior weapon, and the only ones capable of stealing small objects out from under the enemy's noses, taking captives, and adapting to complex objectives.

No no. You still have men-sized (roughly) robots who are controlled mentally by men in La-Z-Boy armchairs. You don't need to put the person in the middle of the field, just the robot, since all its motions mimic the person's will.

Entangled photons: impossible to jam or eavesdrop upon, and as nearly instantaneous as we can measure.

Oh, snap! Someone got told!

Kidding, but it's a good idea. But we probably don't need anything more than what WiMax can deliver -- we're not transferring GB of information, just a continuous stream of data. Not too much different from an MMO, and the graphics need not be as good. You'll want technical feedback of your robot's status (health bar?) as well as eye-in-the-sky recon, but since you'll probably b

Waldos were first used to handle hot things inside nuclear power plants. Since then they've expanded their capabilities and uses. But until now they haven't been mentally controlled. (And I'm not certain that this counts...but it's sure getting a lot closer.)

The generic term is telefactor. Waldo is the name given to tele-operated hands. (And as I recall that's how "Waldo" in the story "Waldo" used the items.)

Why on earth would these gal-danged scientists create a brain interface for robot control?! The fools have already ensured that the robots will take over our society and force us into slavery. Do they have to make it so easy for them that the robots can just control us directly via our brains?! Are they trying to destroy mankind? These scientists have gone mad!

DRM in your brain: if you get a song stuck in your head that you didn't pay royalties for... Not to mention the political implications. No more need for torture when you have a Romulan brain probe that can download the information you want straight out of a person's brain. Or such technology could be used by a dictator to control the population by punishing disloyal thoughts.

Everything in Asimo is a slightly fancier, much better-funded, and better-publicized version of things that have already been done before. But most people see Asimo and are amazed, and they become instant fanbois of it, because it's the first time they've seen such-and-such technology.

..is that this is Honda, a name-brand major manufacturer in a position to mass-produce things that actually work and are affordable. Sure, random Joe Nerd YouTuber or a university project of your choice might come up with something spiffy, but when Honda does it, there's at least some hope you might get one, one day.

Sweet! Ars was just covering a story about using carbon nanotubes for artificial muscles, and now we have the neural interface controls we need too. If only ITER would hurry up and get us to the point of developing compact fusion reactors, we'd be all set to go.

An immediate application, of course, is prosthetics for lost or damaged limbs.

But if it works out as described -- where you can direct additional stuff, without interfering with your normal actions, just by imagining what you want done and having the device do it -- it could be used to control a robot helper or ADDITIONAL artificial limbs.

How many times have people wanted extra hands while soldering, welding, assembling models or appliances, building houses, repairing cars,...?

It is interesting to me that this is conveyed as a "robot" controlled by a human interface. It always seemed to me that the field of robotics generally tended to slide towards autonomy, not control by human interface. I would be more likely to dub this an achievement for a cybernetics or biomedical field than a robotics field. Of course, since few people seem to agree on what makes a robot a robot and whether autonomy is a requirement or not, I suppose I could just be picking at words...

I concur, and this is something that's been increasingly annoying me over the last few years (get off my lawn!). People seem to refer to anything with more than a couple of degrees of freedom as a "robot" even when it's blatantly remote controlled by a human operator. Robot Wars is a prime offender - they're not robots, they're glorified friggin' RC cars. I always thought the best entry would be one that jammed the others' radios while roaming around autonomously with an angle grinder.

I was gonna build something like this with some statistical stuff too... Only for computer games. But I worked out it was too expensive, and too dangerous. Looks like Honda beat me and my fantasy hobby project to it. Lol!;P

The example video shows discrimination of 4 available discrete actions. The eventual goal would presumably be to discriminate tens, hundreds, or thousands of actions, if not smoothly varying parameters of action.

There are two main ways to go about this:

1. Train the algorithms processing the brain signals.
2. Train the brain signals.

The best approach is probably to do both in concert using real-time feedback to the user about how the algorithm is currently interpreting the signal. The user can then learn (exp
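To make the co-adaptation idea concrete, here's a toy sketch (all names and numbers invented, not from Honda's system): the decoder shows the user its current prediction each trial, and when the user confirms what they intended, the decoder nudges that class's centroid toward the observed signal. Meanwhile the user's signal can drift as they learn, and the decoder tracks it.

```python
# Sketch of "train both in concert": an online decoder that adapts its
# class centroid toward each user-confirmed trial, while the user sees
# the current prediction as real-time feedback. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_CLASSES, N_FEATURES = 4, 8
centroids = rng.normal(size=(N_CLASSES, N_FEATURES))  # rough initial guesses
ETA = 0.2  # learning rate for the online centroid update

def predict(x):
    """Decoder's current guess: nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def feedback_update(x, intended_class):
    """User confirms what they intended; decoder adapts toward that trial."""
    centroids[intended_class] += ETA * (x - centroids[intended_class])

# Simulated session: the user repeatedly imagines "action 3"; their
# signal is noisy, but the decoder's centroid converges onto it.
target = np.full(N_FEATURES, 2.0)
for _ in range(40):
    x = target + rng.normal(size=N_FEATURES) * 0.3
    shown = predict(x)        # what the user would see on screen
    feedback_update(x, 3)     # user labels the trial as "action 3"

print(predict(target))
```

The feedback loop is the key part: because the user sees `shown` every trial, they can also adjust how they imagine the action, so both sides of the interface are learning at once.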