Bionic Eyes: Sensors that use a lookout's brainwaves to spot trouble

The CT2WS camera system being tested at the Yuma Proving Grounds in Arizona.

DARPA

As good as surveillance technology has gotten at some tasks, computers still frequently fail when it comes to telling the difference between a threat and a tumbleweed. As the Department of Homeland Security found out with its failed effort to build a "virtual fence" along stretches of the US border, automated sensors can generate a very high rate of false alarms because they are unable to distinguish cars and people from animals. In Israel, animals interfering with sensors have forced the military to string electrified barbed wire just to keep wild boars from triggering alarms.

But depending on people alone to do the watching isn't the answer either. Even with the help of cameras and portable radar systems such as the Cerberus sensor towers deployed by the US military in Afghanistan, nearly half of the potential threats slip by—mostly because of the limits of human vision and fatigue associated with constant scanning of the screen or the horizon.

The Defense Advanced Research Projects Agency (DARPA) set out to find an answer to this problem in 2008 when it launched the Cognitive Technology Threat Warning System (CT2WS) program, an effort to magnify the abilities of a human lookout to achieve the perfect early warning system for soldiers in the field. Now, that program has completed testing of the product of its research: a sensor system that uses the operator's brain activity as a filter.

The B-Alert x24 wireless EEG "cap" from Advanced Brain Monitoring. A similar device is used to track the brainwaves of CT2WS operators.

Advanced Brain Monitoring

Developed by a team of researchers from HRL Laboratories, Quantum Applied Science and Research, Advanced Brain Monitoring, and the University of California San Diego, the CT2WS system uses a combination of a 120-megapixel wide-field digital video camera, image processing software, and an electroencephalogram (EEG) "cap" that is worn by the operator. Scanning a 120-degree arc with its digital camera, the system presents up to 10 images per second to the sensor operator, monitoring for a specific type of brain activity—the P-300 brainwave, which is associated with the brain processing images and sounds. Even with those short glimpses, the human brain can perceive things like motion and shapes that would trigger a cognitive response.

The spikes in brain activity detected by the system don't represent something the operator would necessarily be aware of. The brain filters out a lot of this information before it reaches the level of consciousness, so many of these spikes are effectively ignored, in that a person doesn't follow up on everything their own visual system thinks is interesting. The system is basically an automated way of flagging every image that the visual system thinks may contain something different, and ensuring that the operator becomes aware of it.
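DARPA and HRL haven't published the system's internals, but the control flow the article describes (rapid serial presentation of image chips, with an EEG classifier deciding which ones evoked a response) is simple enough to sketch. In the Python below, every name (`next_chip`, `eeg_window_score`, the threshold) is a hypothetical stand-in for illustration, not the actual CT2WS interface:

```python
"""Minimal sketch of the flagging loop described above. All names here
(next_chip, eeg_window_score, P300_THRESHOLD) are hypothetical stand-ins;
the real CT2WS interfaces have not been published."""
import random

PRESENTATION_HZ = 10   # article: up to 10 images per second
P300_THRESHOLD = 0.7   # hypothetical cutoff for the EEG classifier score

def next_chip(i):
    """Stand-in for one sub-image cut from the 120-degree camera scan."""
    return f"chip-{i}"

def eeg_window_score(chip):
    """Stand-in for the EEG classifier: a P-300 likelihood score for the
    window of brain activity recorded after the chip was displayed."""
    return random.random()

def rsvp_loop(n_chips=600):
    """Present chips at up to PRESENTATION_HZ and flag every one that
    evoked a P-300-like response, whether or not the operator
    consciously noticed anything."""
    review_queue = []
    for i in range(n_chips):
        chip = next_chip(i)
        # In the real system the chip is shown briefly while the
        # operator's EEG is recorded; here we score it directly.
        if eeg_window_score(chip) > P300_THRESHOLD:
            review_queue.append(chip)  # surface for conscious review
    return review_queue

if __name__ == "__main__":
    flagged = rsvp_loop()
    print(f"{len(flagged)} of 600 chips flagged for operator review")
```

The key design point is that the classifier keys on the involuntary P-300 response, so flagging happens even when the operator's conscious attention has already moved on.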

The advantage of using human feedback is that the system can detect "threats that are context-specific and operator specific," Dr. Deepak Khosla, senior scientist in HRL’s Information System Sciences Laboratory and program manager for CT2WS, said in a statement about the program. While the image processing algorithms might not classify a bird flying off or moving vegetation as a threat, a human in the loop might respond otherwise, because those could be signs of some other activity.

Over the past four months, DARPA's team tested the CT2WS system in desert, tropical, and "open" terrain. Without the operator wired in, the CT2WS system's own cognitive visual processing algorithms resulted in 810 false alarms per hour, based on 2,304 "target events" per hour. But when a human was wired into the system with the EEG cap, that error rate plummeted to five false alarms per hour, and the system successfully identified 91 percent of the "real" threats introduced in the test. When a commercial portable radar was added to the system, it achieved a 100 percent detection rate.
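For a sense of scale, those figures are worth working through. A back-of-the-envelope reading of the reported numbers (not DARPA's own methodology):

```python
# Figures reported from DARPA's field tests, as cited above.
events_per_hour = 2304     # "target events" presented per hour
fa_algorithms_only = 810   # false alarms/hour, vision algorithms alone
fa_with_operator = 5       # false alarms/hour, EEG-capped operator in the loop
detection_rate = 0.91      # fraction of real threats identified

print(f"False-alarm reduction: {fa_algorithms_only / fa_with_operator:.0f}x")
print(f"Algorithms alone: {fa_algorithms_only / events_per_hour:.1%} of events were false alarms")
print(f"With operator:    {fa_with_operator / events_per_hour:.2%} of events were false alarms")
# Prints a 162x reduction: from 35.2% of events down to 0.22%.
```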

the system presents up to 10 images per second to the sensor operator, monitoring for a specific type of brain activity—the P-300 brainwave, which is associated with the brain processing images and sounds. Even with those short glimpses, the human brain can perceive things like motion and shapes that would trigger a cognitive response.

I cringe thinking about the experience of the operator in this rig. It's like the designers said "we have no way to design a chip for that, so let's just wire in a human." Did I miss something, or is there something dark and frightening about this development?

I was thinking the same thing. The computer wants my processing power now. It's not as if it's just enhancing our abilities; that would be different from how it's using our brain power for its own data processing.

Just think if your job required you to put on an enhanced version of one of those devices, say in 10 years' time, that can detect you screwing off, daydreaming, thinking of ways to kill your boss undetected, or lying about why you were sick. And that's just what I thought of off the top of my head... it could be used to power their computers' analytical processors in the server. Now you're basically a Borg.

the system presents up to 10 images per second to the sensor operator, monitoring for a specific type of brain activity—the P-300 brainwave, which is associated with the brain processing images and sounds. Even with those short glimpses, the human brain can perceive things like motion and shapes that would trigger a cognitive response.

I cringe thinking about the experience of the operator in this rig. It's like the designers said "we have no way to design a chip for that, so let's just wire in a human." Did I miss something, or is there something dark and frightening about this development?

It's not a human actually wired in, just a human with sensors attached to or touching their head. You can buy brainwave-sensor things as toys these days (there was a levitating ball that used "the force" and a brainwave detector, I recall, and someone made/makes something similar as a computer interface).

So basically the cam hardware looks for anything that is "moving," then passes that by the human operator's unconscious to see if there is a spike. If there is, it will do its best to bring that to the attention of the operator, or the operator's CO?

Soon enough we will replace the human with visual learning algorithms, but that time is probably years away. It will be interesting to see when the AI scientists and engineers catch up and can replace a human's ability to visually recognize another human.

Just think if your job required you to put on an enhanced version of one of those devices, say in 10 years' time, that can detect you screwing off, daydreaming, thinking of ways to kill your boss undetected, or lying about why you were sick. And that's just what I thought of off the top of my head... it could be used to power their computers' analytical processors in the server. Now you're basically a Borg.

Anyone in management fool enough not to know full well that it's impossible to get 100% productivity out of employees 100% of the time commits some career-limiting error sooner rather than later.

Just think if your job required you to put on an enhanced version of one of those devices, say in 10 years' time, that can detect you screwing off, daydreaming, thinking of ways to kill your boss undetected, or lying about why you were sick. And that's just what I thought of off the top of my head... it could be used to power their computers' analytical processors in the server. Now you're basically a Borg.

Anyone in management fool enough not to know full well that it's impossible to get 100% productivity out of employees 100% of the time commits some career-limiting error sooner rather than later.

Sure, sucks for them. But when their career-limiting move gets them stuck as your manager, it sucks for you too.

So basically the cam hardware looks for anything that is "moving," then passes that by the human operator's unconscious to see if there is a spike. If there is, it will do its best to bring that to the attention of the operator, or the operator's CO?

No, they have the computer and human both looking at the same camera feed. The computer is analyzing the image, and analyzing the human's subconscious image processing activity. If the computer detects a threat, or if the human subconscious detects something unusual, it raises an alert to make the human pay attention to it, so they can determine whether action needs to be taken.
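In effect, that's a simple OR fusion of two detectors. A minimal sketch, with invented names and thresholds, just to make the logic concrete:

```python
def should_alert(cv_score: float, p300_score: float,
                 cv_threshold: float = 0.8, eeg_threshold: float = 0.7) -> bool:
    """OR-fusion of the two channels described above: raise an alert if
    either the computer-vision pipeline or the operator's subconscious
    (P-300) response flags the frame. Names and thresholds are hypothetical."""
    return cv_score > cv_threshold or p300_score > eeg_threshold

# e.g. a weak computer-vision score but a strong P-300 response still alerts:
assert should_alert(cv_score=0.3, p300_score=0.9)
```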

Just think if your job required you to put on an enhanced version of one of those devices, say in 10 years' time, that can detect you screwing off, daydreaming, thinking of ways to kill your boss undetected, or lying about why you were sick. And that's just what I thought of off the top of my head... it could be used to power their computers' analytical processors in the server. Now you're basically a Borg.

Anyone in management fool enough not to know full well that it's impossible to get 100% productivity out of employees 100% of the time commits some career-limiting error sooner rather than later.

Sure, sucks for them. But when their career-limiting move gets them stuck as your manager, it sucks for you too.

Then it may just be time to move on.

Honestly, deploying this kind of tech en masse in the workplace as suggested would just provide confirmation of what's already widely known - that humans can't give 100% any appreciable percentage of the time. Trying to improve upon this through coercive monitoring alone would only hurt organizational performance.

It is spooky. There's a potential for a "controlling" feedback loop: if the image being fed to the viewer is manipulated before it's displayed, then the human in the loop becomes more than a semi-passive wetware module. Becoming Borg indeed. I assume that thermal and radar sightings can be overlaid on the visual scene.

Now put the system in something mobile with satellite comm links. Put the operators in cubicles in Nevada. Eventually they'll embed neural connections in the operators. Do that long enough and you'll have at least two classes of citizens: normals and operators/gamers. Society will start to get much weirder.

Don't humans have relatively poor vision? What about wiring up cats for night vision? Or something more hawk-eyed... like a hawk. Their eyes and visual cortex might be capable of processing higher-definition images at a greater frame rate too. And maybe some scent hounds or gazelles downwind of the patrol zone to monitor their olfactory and auditory spikes. Maybe some animal-ethics issues here, but perhaps less danger than those already involved in bomb sniffing. So best to stick to scent hounds and shelve the gazelles.

Don't humans have relatively poor vision? What about wiring up cats for night vision? Or something more hawk-eyed... like a hawk. Their eyes and visual cortex might be capable of processing higher-definition images at a greater frame rate too. And maybe some scent hounds or gazelles downwind of the patrol zone to monitor their olfactory and auditory spikes. Maybe some animal-ethics issues here, but perhaps less danger than those already involved in bomb sniffing. So best to stick to scent hounds and shelve the gazelles.

But a cat (or a computer) won't know that a flock of birds suddenly leaving a bunch of bushes on the horizon probably means there's something hiding in those bushes. That's one of those visual things that a computer won't recognize as being odd, but a person will.

But a cat (or a computer) won't know that a flock of birds suddenly leaving a bunch of bushes on the horizon probably means there's something hiding in those bushes. That's one of those visual things that a computer won't recognize as being odd, but a person will.

Agreed, and the cat would naturally alert on anything small and delicious-looking - those birds, a mouse, lizard, laser-pointer dot - and would probably completely ignore the rifle barrel that is poking out of that odd-looking bush over there, on that little rise... come to think of it, I don't think there was a bush over there befo<BANG/>

Right now the computer is using the human as a peripheral, which is cool/scary in its own way, and the human flags what was so interesting about the pic. Next, automated eye tracking, so the computer knows immediately what was so damn interesting. Finally, train a neural network with that data (long testing phase).

At that point, you can cut the 'operator' (read: wetware peripheral) out of the process until you need to retrain the neural network.

So basically the cam hardware looks for anything that is "moving," then passes that by the human operator's unconscious to see if there is a spike. If there is, it will do its best to bring that to the attention of the operator, or the operator's CO?

No, you got it backwards.

The human visual system is detecting motion recorded by the camera, which in turn is causing spikes in brain activity. Some of those spikes the human naturally ignores, based on many factors. This machine makes sure he doesn't ignore a single one. It is like cutting out your brain's natural filtering ability and making you ultra-aware of everything.

Which makes me wonder how long the cognitive part of the brain can cope with so much sensory overload before getting fucked up or making the operator fuck up.

There seem to be two conflicting explanations of how the system works: the article mentions that the system relies on P-300 brainwaves to detect that there is a visual difference worthy of algorithmic analysis, then starts talking about the fact that a human operator will use context (a flock of birds taking off together) to determine whether the perceived motion is relevant or not.

If the system relies on the brain as an image-diffing mechanism, then there can't be any contextual analysis of the event, since that relies on more than just visual analysis and requires some higher-level brain operations (though not necessarily entirely conscious ones).

Also, it's a nice system until we are able to algorithmically emulate the part of the process delegated to an operator, but it's a shame that this stuff gets developed at the army's instigation when there are so many useful civilian applications that would make it worthy of public or private investment.

I appreciate that the army testing this will allow the corresponding technology to enter the civilian realm, but I can't help thinking we have our priorities wrong, and that a society that financed that kind of research for civilian use *first* and then for military purposes would produce very different results than ours currently does.

I wonder if there are examples in history of such a reversal? If anyone knows, please comment; that would definitely be a nice sociological/historical/economic research topic.

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.