In their experiments, the researchers first showed users wearing the mind-control headsets a series of known images and numbers to measure what a moment of recognition looked like in their EEG data. They relied on the P300 response, an electrical spike that typically appears about 300 milliseconds after a stimulus the subject recognizes.
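As a rough illustration of the idea (not the researchers’ actual pipeline), P300 detection usually amounts to averaging many stimulus-locked EEG epochs and comparing the mean amplitude in a window around 300 ms. A minimal sketch, with an assumed 256 Hz sampling rate and synthetic data standing in for real recordings:

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def p300_score(epochs, t_min=0.25, t_max=0.5):
    """Mean amplitude in the P300 window (~250-500 ms post-stimulus),
    averaged across epochs first to suppress background EEG noise."""
    erp = epochs.mean(axis=0)            # average event-related potential
    lo, hi = int(t_min * FS), int(t_max * FS)
    return erp[lo:hi].mean()

# Synthetic illustration: "target" epochs carry a small positive bump
# around 300 ms; "non-target" epochs are noise only.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS                   # 1-second epochs
bump = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
targets = rng.normal(0, 2, (40, FS)) + bump
nontargets = rng.normal(0, 2, (40, FS))

print(p300_score(targets) > p300_score(nontargets))  # True: bump detected
```

The single-trial signal is buried in noise; it is the averaging over repeated presentations that makes the spike stand out.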

Then they showed the subjects a series of test images and numbers and looked for those same signals. In a collection of unknown faces, for instance, they found a significant spike in the EEG data for a picture of Barack Obama that revealed the test subjects’ recognition of the president’s face.

A P300 event-related potential elicited as a brain response to a target stimulus (in this experiment the non-target stimuli were pictures of unknown faces, while the target stimulus was the picture showing President Obama) (credit: Ivan Martinovic et al.)

When shown a collection of locations on maps, one of which was their home, the headset-wearers’ brains emitted tell-tale hints that allowed the experimenters to determine their home’s general location with 60% accuracy on the first try among ten choices.

And when the subjects were asked to memorize a four-digit PIN and then shown a series of random numbers, the researchers found they could guess which of those random numbers was the first digit in the PIN with about 30% accuracy on the first try. That’s far from a home run, but significantly better than the 10% success rate of a random guess among ten digits.
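In effect, the PIN attack ranks the flashed digits by the strength of the P300 response each one evokes and guesses the strongest. A toy sketch of that final step (the scores here are invented for illustration, not real data):

```python
import numpy as np

def guess_first_digit(scores):
    """Given one P300 score per digit 0-9 (e.g. mean ERP amplitude
    when that digit was flashed), the best first guess is the argmax."""
    return int(np.argmax(scores))

# Toy scores: digit 7 (the "true" first PIN digit) evokes the
# largest response; the rest are background-level.
scores = [0.1, 0.3, 0.2, 0.1, 0.4, 0.2, 0.1, 1.8, 0.3, 0.2]
print(guess_first_digit(scores))  # 7
```

With ten candidates, chance is 10%, so the reported ~30% hit rate means the ranking is doing real work even through noisy single-session EEG.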

For home and business use this just creates a future market for a smart filter that wouldn’t allow certain signals in or out. It would be a direct descendant of today’s firewalls. As far as law enforcement and interrogation go, pragmatically it’s not really any different from an improved lie detector. It’s not exactly a gateway technology to, say, William Gibson’s black ice, but it is interesting, certainly.

IMHO, this has nothing to do with empathy but with memory activation, so it should work, even for lobotomized patients!

But the system has to be calibrated, hence the importance of a thorough study with convicts.

Besides, the protocol has to be very strict, as typically one gets only one chance at eliciting the expected response (you can’t present the same face twice, even in different pictures).

There are problems with direct vs. indirect recognition too. I can recognize Obama even though I have never been face to face with him. So people could “recognize” a victim they never met, purely from the memory of an article carrying that person’s picture.

Lawyers will delight in attempts to deconstruct the “evidence” gained from the machine.

This is definitely significant: with time, much like speech recognition is honed by usage, an exploiter could develop algorithms and AI that triangulate and mine information out of the subject’s brain, even via subconscious routes. Major concern! That could render the tech legally unregulatable.

As with anything, there comes risk. What excites me about the mindwear tech, however, is the potential for training our brains: the potential to gain much greater control over our thoughts, attention, and concentration, perhaps even to the point where the P300 won’t be relevant. I’m almost drooling when I think of what a generation of 16-24-year-olds will accomplish when mindwear is used to control RTS e-sports, for example.

Biofeedback is basically already doing this, although the tech is still young. I very much agree with you about the potential for cultivating mental discipline; I think most people will be shocked at how the world feels with better control over their own minds and perceptions. On the other end of the spectrum, scary things are possible with this tech. It’s probably about time people began designing mental firewalls.

Good, and scary, point. Which also means defensive tech will be employed, forcing nearly everyone to seriously consider use of this tech whether they want to or not. The near future will be truly strange. We take the sovereignty of the mind for granted, and that sovereignty will end.

Yes, probably the main point is not to use a mind-controlled device on a computer with Internet access (or an otherwise compromised one), since a hacker could gather EEG responses that correlate with text, images, and video data to acquire private information.

Ummm… the way it works is they show you an image and then read your reaction to it. In order for this technology to become ‘exploitable’, there would need to be some way for the exploiters to ‘beam’ an image to you, force you to look at it, and remotely sense the changes in your brain. I think we can rest easy for a while yet…

1. I’m not aware of any technology for remote P300 detection in a non-shielded environment, but it could exist in the black-budget world. 2. Merely thinking of a number does not leak useful information; detecting the PIN requires matching the subject’s P300 responses against target stimuli (the same or a closely matching PIN in this case), which would take a long time and require the subject to be in the same location and agree to a test procedure involving seeing or hearing a long series of numbers. See http://www.scribd.com/doc/102968008/On-the-Feasibility-of-Side-Channel-Attacks-with-Brain-Computer-Interfaces for details.

It can be much more subtle and varied. Imagine a kind of Derren Brown AI, or a ‘Dune’-type ‘mentat’ AI, analysing the captured data and also compelling the gamer to navigate the game (via reward mechanisms) in ways that facilitate that analysis and data capture, even developing its own analytical skill with experience. The P300 signal occurs in response to stimuli that stand out from the usual contents of a scene (Wikipedia: P300, oddball paradigm). Such an AI or analysing app could collect, collate, extrapolate, and test each clue and conclusion as it goes along. E.g., your PC password is DaVinci, and you pass by an innocuous Mona Lisa picture as you are playing ‘Doom’ with the Emotiv headset…
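The clue-accumulation idea above can be sketched very crudely: each probe stimulus the attacker slips into the game is tied to one or more candidate secrets, and a P300-like response to the probe nudges those candidates’ scores up. A toy sketch (the candidates, probes, and scoring weights are all invented for illustration):

```python
from collections import defaultdict

def accumulate(observations):
    """Rank candidate secrets from (candidates_probed, responded) pairs.

    Each observation says which candidates a probe stimulus pointed at,
    and whether a P300-like response was detected for that probe.
    """
    scores = defaultdict(float)
    for candidates, responded in observations:
        for c in candidates:
            scores[c] += 1.0 if responded else -0.5
    return max(scores, key=scores.get)

obs = [
    ({"DaVinci", "Rembrandt"}, True),   # Mona Lisa picture -> response
    ({"Rembrandt"}, False),             # Night Watch picture -> no response
    ({"DaVinci"}, True),                # Vitruvian Man -> response
]
print(accumulate(obs))  # DaVinci
```

The point is that no single probe gives the secret away; it is the patient cross-referencing of many weak clues over long play sessions that does the mining.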

I was thinking something similar, particularly with the AI angle. Keeping a PIN safe is a mere trifle when one considers what advertisers could do with this. If you were using such a device to surf the net, a series of images should be enough to get positive/negative emotional feedback from the user. An AI could change the images and gradually build up a database on how to ‘push your buttons’ quite literally.

Turning the internet into a truly enthralling experience would not be that hard at that point. A user could literally find himself ‘hijacked’ and unable to stop. Picture an endless series of ‘Wow, that’s so cool!’ moments. To a lesser degree (much less), Amazon does this with its User Recommendations database. Often I wind up purchasing more than one book simply because the system knows which books to recommend to me based upon my viewing selection.

I’m sure an expanded system would be able to take a wider variety of things into account and develop detailed profiles of users for this purpose. Anything you had a strong emotional response to would be useful, not just to get you to buy stuff but to persuade you to vote a certain way, for example, or to psychologically prime you for a future event. You could perhaps instill mass hysteria or panic or, hey, even euphoria (changes the whole meaning of terms like ‘Olympic build-up media coverage’).

The part that really concerns me is that, if done cleverly, there is no reason why a company or hacker (or rogue AI) using such a system to mine the public at large would not be able to remain anonymous. It would merely be gathering data from your EEG and perhaps surreptitiously inserting test photos into web sites. How would you know? Your end experience would just feel like you really enjoyed using the internet.

Personally, I think that if a company invents the first AI (oh, say… Google), they probably would not tell anyone. Why would they? They’d use it to their advantage as long as possible. Google is thoroughly plugged into everything we do on the internet. They would be in a prime position to exploit such technology to its fullest. Imagine an AI with cult-leader-style charisma, only you didn’t realise you were dealing with an AI. More like: ‘Man, the new Google Chrome browser is totally amazing!’