Information about Mind Reading

Mind Reading -- 60 Minutes, CBS News video, June 28, 2009, 4:50 PM
Neuroscience has learned so much about how we think and the brain activity linked to certain thoughts that it is now possible - on a very basic scale - to read a person's mind. Lesley Stahl reports.
How Technology May Soon "Read" Your Mind
Read more: http://www.cbsnews.com/stories/1998/07/08/60minutes/main4694713.sht...

Mind-machine interfaces can read your mind, and the science is improving. Devices scan the brain and read brain waves with electroencephalography, or EEG, then use a computer to convert thoughts into action. Some mind-reading research has recorded electrical activity generated by the firing of nerve cells in the brain by placing electrodes directly in the brain. These studies could lead to brain implants that would move a prosthetic arm or other assistive devices controlled by a brain-computer interface.
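The EEG-to-action step these interfaces perform can be pictured with a toy example. The sketch below is purely illustrative, not any lab's actual pipeline: the function names, the 256 Hz sampling rate, the alpha-band rule and the threshold are all assumptions. It extracts the power of the 8-12 Hz alpha band from a raw signal and maps it to a command:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Signal power inside a frequency band, computed from the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum() / len(signal)

def classify_intent(signal, fs=256.0, threshold=1.0):
    """Toy rule: strong alpha (8-12 Hz) activity -> 'rest',
    suppressed alpha -> 'move'. Real BCIs train far richer models."""
    alpha = band_power(signal, fs, 8.0, 12.0)
    return "rest" if alpha > threshold else "move"
```

A real interface would filter out artifacts, combine many electrode channels, and learn per-user decision boundaries, but band-power features like this are a standard starting point in EEG work.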

Most preventive screening looks for explosives or metals that pose a threat. But a new system called MALINTENT turns the old school approach on its head. This Orwellian-sounding machine detects the person — not the device — set to wreak havoc and terror.

MALINTENT, the brainchild of the cutting-edge Human Factors division in Homeland Security's directorate for Science and Technology, searches your body for non-verbal cues that predict whether you mean harm to your fellow passengers.

It has a series of sensors and imagers that read your body temperature, heart rate and respiration for unconscious tells invisible to the naked eye — signals terrorists and criminals may display in advance of an attack.

But this is no polygraph test. Subjects do not get hooked up or strapped down for a careful reading; those sensors do all the work without any actual physical contact. It's like an X-ray for bad intentions.

Currently, all the sensors and equipment are packaged inside a mobile screening laboratory about the size of a trailer or large truck bed, and just last week, Homeland Security put it to a field test in Maryland, scanning 144 mostly unwitting human subjects.

While I'd love to give you the full scoop on the unusual experiment, testing is ongoing and full disclosure would compromise future tests.


But what I can tell you is that the test subjects were average Joes living in the D.C. area who thought they were attending something like a technology expo; in order for the experiment to work effectively and to get the testing subjects to buy in, the cover story had to be convincing.

While the 144 test subjects thought they were merely passing through an entrance way, they actually passed through a series of sensors that screened them for bad intentions.

Homeland Security also selected a group of 23 attendees to be civilian "accomplices" in their test. They were each given a "disruptive device" to carry through the portal — and, unlike the other attendees, were conscious that they were on a mission.

In order to conduct these tests on human subjects, DHS had to meet rigorous safety standards to ensure the screening would not cause any physical or emotional harm.

So here's how it works. When the sensors identify that something is off, they transmit warning data to analysts, who decide whether to flag passengers for further questioning. The next step is micro-facial scanning: measuring minute muscle movements in the face for clues to mood and intention.

Homeland Security has developed a system to recognize, define and measure seven primary emotions and emotional cues that are reflected in contractions of facial muscles. MALINTENT identifies these emotions and relays the information back to a security screener almost in real-time.

This whole security array — the scanners and screeners who make up the mobile lab — is called "Future Attribute Screening Technology" — or FAST — because it is designed to get passengers through security in two to four minutes, and often faster.

If you're rushed or stressed, you may send out signals of anxiety, but FAST isn't fooled. It's already good enough to tell the difference between a harried traveler and a terrorist. Even if you sweat heavily by nature, FAST won't mistake you for a baddie.

"If you focus on looking at the person, you don't have to worry about detecting the device itself," said Bob Burns, MALINTENT's project leader. And while there are devices out there that look at individual cues, a comprehensive screening device like this has never before been put together.

While FAST's batting average is classified, Undersecretary for Science and Technology Adm. Jay Cohen declared the experiment a "home run."

As cold and inhuman as the electric eye may be, DHS says scanners are unbiased and nonjudgmental. "It does not predict who you are and make a judgment, it only provides an assessment in situations," said Burns. "It analyzes you against baseline stats when you walk in the door, it measures reactions and variations when you approach and go through the portal."
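The baseline-and-deviation logic Burns describes can be sketched in a few lines. This is only an illustration of the stated idea; the vital-sign names, the 20 percent tolerance, and the two-stage outcome are invented here, not DHS parameters:

```python
def screen_passenger(vitals, baseline, tolerance=0.20):
    """Toy FAST-style check: flag any vital sign that deviates from the
    subject's own at-the-door baseline by more than `tolerance` (20%).
    A flagged subject is referred to the micro-facial-scanning stage."""
    flagged = [name for name, value in vitals.items()
               if abs(value - baseline[name]) / baseline[name] > tolerance]
    return ("refer to facial scan", flagged) if flagged else ("clear", [])
```

Comparing each traveler against their own baseline, rather than a population average, is what would let such a system tolerate people who are naturally sweaty or harried, as the article claims.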

But the testing — and the device itself — are not without their problems. This invasive scanner, which catalogues your vital signs for non-medical reasons, seems like an uninvited doctor's exam and raises many privacy issues.

But DHS says this is not Big Brother. Once you are through the FAST portal, your scrutiny is over and records aren't kept. "Your data is dumped," said Burns. "The information is not maintained — it doesn't track who you are."

DHS is now planning an even wider array of screening technology, including an eye scanner next year and pheromone-reading technology by 2010.

The team will also be adding equipment that reads body movements, called "illustrative and emblem cues." According to Burns, this is achievable because people "move in reaction to what they are thinking, more or less based on the context of the situation."

FAST may also incorporate biological, radiological and explosive detection, but for now the primary focus is on identifying and isolating potential human threats.

And because FAST is a mobile screening laboratory, it could be set up at entrances to stadiums, malls and in airports, making it ever more difficult for terrorists to live and work among us.

Burns noted his team's goal is to "restore a sense of freedom." Once MALINTENT is rolled out in airports, it could give us a future where we can once again wander onto planes with super-sized cosmetics and all the bottles of water we can carry — and most importantly without that sense of foreboding that has haunted Americans since Sept. 11.

Allison Barrie, a security and terrorism consultant with the Commission for National Security in the 21st Century, is FOX News' security columnist.


New non-invasive sensor can detect brainwaves remotely
24 October 2002
http://www.sussex.ac.uk/press_office/media/media260.shtml
Scientists have developed a remarkable sensor that can record brainwaves without the need for electrodes to be inserted into the brain or even placed on the scalp.

Conventional electroencephalograms (EEGs) monitor electrical activity in the brain with electrodes placed either on the scalp (involving hair removal and skin abrasion) or inserted directly into the brain with needles. Now a non-invasive form of EEG has been devised by Professor Terry Clark and his colleagues in the Centre for Physical Electronics at the University of Sussex.

Instead of measuring charge flow through an electrode (with attendant distortions, in the case of scalp electrodes) the new system measures electric fields remotely, an advance made possible by new developments in sensor technology. Professor Clark says: "It's a new age as far as sensing the electrical dynamics of the body is concerned."

The Sussex researchers believe their new sensor will instigate major advances in the collection and display of electrical information from the brain, especially in the study of drowsiness and the human-machine interface.

"The possibilities for the future are boundless," says Professor Clark. "The advantages offered by these sensors compared with the currently used contact electrodes may act to stimulate new developments in multichannel EEG monitoring and in real-time electrical imaging of the brain."

"By picking up brain signals non-invasively, we could find ourselves controlling machinery with our thoughts alone: a marriage of mind and machine."

The same group of scientists has already made remote-sensing ECG units as well, which can detect heartbeats with no connections at all.

http://www.prisonplanet.com/scientists-read-minds-with-infrared-sca...
Scientists Read Minds With Infrared Scan: Optical Brain Imaging Decodes Preference With 80 Percent Accuracy
ScienceDaily (Feb. 11, 2009) — Researchers at Canada's largest children's rehabilitation hospital have developed a technique that uses infrared light brain imaging to decode preference – with the goal of ultimately opening the world of choice to children who can't speak or move.

In a study published this month in The Journal of Neural Engineering, Bloorview scientists demonstrate the ability to decode a person's preference for one of two drinks with 80 per cent accuracy by measuring the intensity of near-infrared light absorbed in brain tissue.

"This is the first system that decodes preference naturally from spontaneous thoughts," says Sheena Luu, the University of Toronto PhD student in biomedical engineering who led the study under the supervision of Tom Chau, Canada Research Chair in pediatric rehab engineering.

Most brain-computer interfaces designed to read thoughts require training. For example, in order to indicate yes to a question, the person needs to do an unrelated mental task – such as singing a song in their head.

The nine adults in Luu's study received no training. Prior to the study they rated eight drinks on a scale of one to five.

Wearing a headband fitted with fibre-optics that emit light into the pre-frontal cortex of the brain, they were shown two drinks on a computer monitor, one after the other, and asked to make a mental decision about which they liked more. "When your brain is active, the oxygen in your blood increases and depending on the concentration, it absorbs more or less light," Luu says. "In some people, their brains are more active when they don't like something, and in some people they're more active when they do like something."

After teaching the computer to recognize the unique pattern of brain activity associated with preference for each subject, the researchers accurately predicted which drink the participants liked best 80 per cent of the time.
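The "teach the computer, then predict" loop can be pictured with a minimal classifier. The sketch below is a stand-in for illustration only: the study's actual features and model are not described here, and the nearest-centroid rule and per-trial feature vectors are my assumptions.

```python
import numpy as np

class PreferenceDecoder:
    """Per-subject toy decoder: average the feature vectors (e.g. light
    absorption per optode) recorded for 'liked' and 'disliked' trials,
    then label a new trial by its nearest class centroid."""

    def fit(self, trials, labels):
        trials = np.asarray(trials, dtype=float)
        labels = np.asarray(labels)
        self.centroids_ = {label: trials[labels == label].mean(axis=0)
                           for label in np.unique(labels)}
        return self

    def predict(self, trial):
        trial = np.asarray(trial, dtype=float)
        return min(self.centroids_,
                   key=lambda c: np.linalg.norm(trial - self.centroids_[c]))
```

The key point the sketch captures is that the decoder is fitted separately for each subject, which is why people whose brains are more active for disliked drinks can be decoded just as well as the reverse.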

"Preference is the basis for everyday decisions," Luu says. When children with disabilities can't speak or gesture to control their environment, they may develop a learned helplessness that impedes development.

In future, Luu envisions creating a portable, near-infrared sensor that rests on the forehead and relies on wireless technology, opening up the world of choice to children who can't speak or move.

Her work is part of Tom Chau's body-talk research, which aims to give children who are "locked in" by disability a way to express themselves through subtle body processes like breathing pattern, heart rate and brain activity.

Luu notes that the brain is too complex to ever allow decoding of a person's random thoughts. "However, if we limit the context – limit the question and available answers, as we have with predicting preference – then mind-reading becomes possible."

Bloorview Kids Rehab is Canada's largest children's rehabilitation hospital, fully affiliated with the University of Toronto. Visit http://www.bloorview.ca

Neuroscience institutes in Sweden are developing a new era of globally accessible mind-reading and mind-control technologies.
"Synthetic telepathy" is a two-way communication channel between a computer and the brain (a brain interface). Synthetic telepathy can be used for telemetric health monitoring and health care, as well as a government tool for mind control.
Neuroscientists in many countries say they are afraid of these new technologies and are asking for an ethics debate before it is too late. They point out that the risk of abuse is great.

The Kingdom of Sweden has an agreement with the Government of the United States of America to co-operate on developing products for "homeland security systems" [http://www.regeringen.se/content/1/c6/08/04/93/90057dc9.pdf].
Synthetic telepathy is part of a medical imaging technology being developed under the name of "homeland security systems".

Sweden's role in "homeland security systems" is obvious; it is due to Sweden's superior knowledge of nanotechnology, advanced radar technology and optical sensor technology.
Sweden also has traditions and knowledge from developing and manufacturing weapons, for example through the companies Saab and Bofors.

I myself have worked for FOI, the Swedish Defence Research Agency [www.foi.se], in the department of defence and security, systems and technology.
I worked with complex technical systems for today's requirements in both the military and civilian spheres.
When I was informed about synthetic telepathy and the Stockholm Brain Institute, I decided for ethical reasons to withdraw from all government projects and instead share this information to stop the victims from suffering.

Microwave imaging technology.
Sweden uses its leading technology in a ground-to-satellite "computer-microwave-sensor-camera"
(the camera is better known for monitoring climate change).
The "computer microwave sensor camera" is for monitoring "mobile objects" and is equipped with advanced biometric body and face identification software and a GPS function for locating the object.
For indoor environments the camera uses an "indoor radar system" [www2.foi.se/rapp/foir1668.pdf][www.sige.ita.br/VIII_SIGE/GE/GE012.pdf]

The camera is the start-up tool for identification and for giving the computer a vocabulary with which to open a dialogue with the object.
The initialising process is to acoustically record (bug) the object's spoken language while the camera takes pictures of the brain patterns.
After the initialising process the computer switches to an expert system. [en.wikipedia.org/wiki/expert_system]
This software mathematically calculates and learns the rest of the object's thought patterns.
In a further step even neurons can be identified, with a low-energy system called "signal stakeout".

The "computer microwave sensor camera" has a function similar to fMRI (functional magnetic resonance imaging) cameras.
The camera is an active multi-sensor camera with a mix of light and sound frequencies (the frequencies are patented).
The camera is the active part of a radar system.
The system uses several passive optical sensors as receivers.
The system beams an amplifying signal at the object's head (or whichever other part of the body is to be imaged).
The beam frequency penetrates buildings and walls of concrete, and its frequency range also allows it to extract pictures of what is behind them.

The system creates a totally new range of possibilities for long-distance imaging/monitoring of patterns.
It can be used hand-held, fixed in place to monitor a specific area, mounted in an unmanned vessel, or linked via a short-distance satellite.
This tool can also bug all kinds of buildings, cars, flights, etc.
With other software it can be used as an indoor radar system to measure and make drawings of all types of buildings.

The system penetrates human skin and tissue to read brain patterns without causing any injury.
The sensor camera's antenna is adjusted close to the object.
It learns and records patterns of thought by giving them a resonance similar to that of the vocal cords and the patterns used in speech.
The recorded patterns are then converted to a digital format (1/0).
The system can "speak" to the object with synthetic voices that only the specific addressed target/object can hear.
The microwave camera is developed by GigaHertz Centrum [www.chalmers.se].

The "access to brain" technology can be used in medical health care for measuring oxygen in the brain's blood flow; sensors can detect a hazard and send an alarm before a stroke occurs.
It can also be used as a guidance system, e.g. for blind people.
The same microwave imaging technology can, with alternative software, identify oxygen-flow patterns and teach computers to use the information for mind reading.
This new technology is capable of detecting and mapping even human feelings.
"Synthetic telepathy" of course includes advanced software for lie detection, already used in Sweden outside the judicial system.

During the development of "synthetic telepathy", the Stockholm Brain Institute (Professor Anders Lanser, head of the department of computational biology and neurocomputing [www.nada.kth.se/~ala/], and Professor Martin Ingvar [Martin.Ingvar@ki.se]) has taken the opportunity to record the brain activity, thoughts and speech of several "victims" over many years (people used as guinea pigs without their permission).
The tool for this mapping is the IBM supercomputer Blue Gene, equipped with an enormous RAM capacity.

The torture methods used in "synthetic telepathy".
Victims are supposed to suffer a mental breakdown and be placed in psychiatric care for a psychiatric illness related to "hearing voices".
Inside the health care system, neuroscientists continue to register the effects of different types of drugs by measuring changes in mental status through speech and thoughts, as the medicated patient tries to get rid of the voices in their head (voices created by synthetic telepathy).
Mental status includes aggressiveness, social behaviour, sleeping habits, etc.
This allows AstraZeneca (as part of SBI) to develop a new generation of psychoactive drugs.
In my opinion this is advanced torture.
Swedish neuroscientists are, through the use of synthetic telepathy, breaking human rights law.

Swedish universities have for many years been working with part of this information channel, described as an imaging project for a low-energy system in telemetric health care.
Sweden's health care system is already known as one of the best in the world; with this new technology, expenses will be lowered and it will be a great export product for Sweden.

Politics of "synthetic telepathy".
The U.S. government has tried to force the Swedish government and citizens to accept the FRA law. [http://en.wikipedia.org/wiki/FRA-law] Synthetic telepathy is a dangerous government weapon when combined with the FRA law.
Sweden, with Telia Sonera, is probably used as a node for the U.S. government to control the flow of information in and out of Russia.
The Swedish FRA law is a joint venture and makes Sweden part of the global surveillance net Echelon. [http://en.wikipedia.org/wiki/ECHELON]

Global access in all environments is the goal for "synthetic telepathy" in its new shape.
During the development of synthetic telepathy, several thousand people in banking, business, politics, the police, etc. have been "bugged" by talking with the "workshops" (victims/guinea pigs) whose thoughts have been read.
Businessmen in Norway, China, Russia and Japan have for the last couple of years been "bugged" in the same way.
Swedish neuroscientists have thereby drastically injured Sweden's position as a neutral country.
Does the Swedish ministry of defence know?

This all means that, under all circumstances, Sweden's prime minister Mr. Fredrik Reinfeldt and Sweden's minister of foreign affairs Mr. Carl Bildt are responsible for the bugging of thousands of people all over the world through synthetic telepathy.
Are they aware of that?

The Pentagon's Defense Advanced Research Projects Agency has tapped Northrop Grumman to develop binoculars that will tap the subconscious mind. The Cognitive Technology Threat Warning System program, informally called "Luke's Binoculars," combines advanced optics with electro-encephalogram electrodes that can, DARPA believes, be used to alert the wearer to a threat before the conscious mind has processed the information.

While they were considering a number of technologies for neural detection, it appears DARPA has settled on EEG. "HORNET will utilize a custom helmet equipped with electro-encephalogram electrodes placed on the scalp to record the user's continuous electrical brain activity," says Northrop. "The operator's neural responses to the presence or absence of potential threats will train the system's algorithms, which will continue to be refined over time so that the warfighter is always presented with items of relevance to his mission."

I described this project in an article last year for Wired News, when DARPA was still sifting through the concepts. The idea was to incorporate technology that detects neural patterns that indicate a possible threat, on the premise that the subconscious mind detects threats faster than the conscious mind realizes (essentially, the binoculars bypass inhibitory reflexes). Some scientists I spoke with thought the project was interesting -- particularly the advancements in optics and imaging -- but that neural detection technology wasn't nearly developed enough for the schedule DARPA anticipated (the agency wants to field the binoculars in just a few short years).
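A crude stand-in for the "subconscious threat signature" idea is the P300 event-related potential: average the EEG around each image presentation and look for a positive deflection roughly 300 ms after stimulus onset. The sketch below is illustrative only; the time window, sampling rate, and amplitude threshold are invented, not DARPA's or Northrop's.

```python
import numpy as np

def p300_score(epochs, fs=256.0):
    """Average EEG epochs time-locked to image onsets, then return the
    mean amplitude in the 250-450 ms window where a P300 would appear."""
    average = np.mean(epochs, axis=0)
    start, end = int(0.25 * fs), int(0.45 * fs)
    return float(average[start:end].mean())

def flag_threat(epochs, threshold=2.0):
    """Flag the viewed image as a possible threat if the averaged
    post-stimulus response exceeds an (invented) amplitude threshold."""
    return p300_score(epochs) > threshold
```

Averaging over repeated presentations is what makes this kind of detector workable at all: single-trial EEG is so noisy that any fielded system would need heavy per-operator training, which is exactly what the Northrop quote describes.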

The most common-sense challenge raised about these binoculars by one scientist I spoke with was the whole issue of bypassing inhibitory reactions. Is that really something you want a soldier to do? It'll be interesting to see what the first soldiers to use these binoculars have to say about them.

Discernible patterns in brain activity then signalled where they were, they wrote in the journal Current Biology.

Neurons in the hippocampus, also known as "place cells", activate when we move around to tell us where we are.

The team, based at University College London, then used specialised scanning equipment which measures changes in blood flow in the brain.

This allowed them to examine the activity of these cells as the participants - all young men with experience of playing videogames - moved around the virtual reality environment. The data was then passed through a computer.

"We asked whether we could see any interesting patterns in the neural activity that could tell us what the participants were thinking, or in this case where they were," said Professor Eleanor Maguire.

Are you lying?

"Surprisingly, just by looking at the brain data we could predict exactly where they were in the virtual reality environment. In other words we could 'read' their spatial memories."

"By looking at activity over tens of thousands of neurons, we can see that there must be a functional structure - a pattern - to how these memories are encoded."

But they stressed that the prospect of genuinely reading someone's most intimate thoughts - or working out if they were lying - was still a long way off.

Their participants were all willing subjects who consented to the training and monitoring taking place.

"It would be very easy not to co-operate, and then it wouldn't work," said Demis Hassabis, who developed the computer programme to read the data. "These kinds of scenarios would require a great technological leap."

Participants were asked to navigate between virtual reality rooms

It is brain diseases such as Alzheimer's which could stand to benefit from such research.

"Understanding how we learn and store memories could aid our understanding of conditions in which memory is compromised and potentially help patients in the rehabilitation process," said Professor Maguire.

Professor Clive Ballard, director of research at the Alzheimer's Society, said: "This exciting development will boost our understanding of the hippocampus, a key area affected in Alzheimer's disease and the most important part of the brain for memory.

"Learning more about how the brain works could help us work out which types of nerve cells are lost in Alzheimer's."

Rebecca Wood, of the Alzheimer's Research Trust, said the research was "fascinating".

She said: "Understanding how memories are formed may help researchers discover how this process goes wrong in diseases like Alzheimer's."

By Barry Michael
DARPA is At Least 8-10 years ahead of Anybody at the university laboratory level, and even people like Raytheon, SRI, and the RAND Corporation are only given parts of DARPA's projects to work on, not the entire development project. So even Independent laboratory research remains Far behind DARPA's developments.

What I Am telling you is that the Neural-computer interface is Completely perfected at this moment as a base or "Core Technology", and has Already been deployed for a number of years to monitor international gates at airports as well as our land crossings to Canada and Mexico.

There are a number of programs in place, some military and some intelligence-community related, to locate people who exhibit high levels of brain-wave "intensity" or "amplitude". The program presently in use not only locates these persons, but allows constant neural monitoring, at remote distances, of every nuance of their thought.

What is being reported in articles like the one I am commenting on is a deliberate attempt on the part of Someone to disperse information about the "accepted" or "popular" level of understanding of a much less advanced version of the research that has actually Already taken place at DARPA's level.

Do you Really think for a moment that a bunch of independent researchers doing what this article describes are involved in anything other than Rediscovering what DARPA has Already discovered, And developed, And divested to the Defense Department, And which the Defense Department has Already deployed?

In some cases this deployment has taken place into the Same R&D facilities which helped DARPA do the Initial research. They are currently acting as extensions of DARPA's neuroscience research program, exploring facets of the neural-computer interface that DARPA does Not have Time to experiment with itself, because it is busy with Other projects incorporating the neural-computer interface as a base technology, such as a mentally controlled heads-up display: One that will appear wherever the pilot is looking, and only display those instruments or combinations of instruments that he "Thinks" he needs at any given moment.

Go to DARPA's web site, call up the search engine, type in "Transcranial Electromagnetic Stimulation", and you will be taken to an explanation of DARPA's "Advanced Learning" project, designed to utilise the neural-computer interface to "Stimulate" the learning centers of the brain to a High degree of Utility, beyond a normal individual's ability to learn.

Intended for the military and/or the intelligence community, to teach people fluency in foreign languages and in advanced technical systems operation in far less time than previous learning cycles. This article is At Least 8-10 Years behind what is happening right Now. As far as "New Technologies", or even "Emergent Technologies", go, DARPA might even be as much as 10-12 years ahead of non-DARPA researchers.

They are in many cases working on "Merging" new technologies into "Emergent Hybrid Technologies".
Get with it. The future is now. This article is from the past.

The landmark study by British researchers demonstrates that powerful imaging technology is increasingly able to extract our innermost thoughts.

The feat prompted the team to call for an ethical debate on how brain imaging may be used in the future, and what safeguards can be put in place to protect people's privacy.

The study was part of an investigation aimed at learning how memories are created, stored and recalled in a part of the brain called the hippocampus.

By understanding the processes at work in the brain, scientists at the Wellcome Trust Centre for Neuroimaging at University College London hope to get a better grasp of how Alzheimer's disease and strokes can destroy our memories, and find ways to rehabilitate patients.

In the study, volunteers donned a virtual reality headset and were asked to make their way between four locations in a virtual building. Throughout the task, their brain activity was monitored using a technique called functional magnetic resonance imaging (fMRI).

Eleanor Maguire and Demis Hassabis then used a computer program to look for patterns in the volunteers' brain activity as they stood on virtual rugs in the four different locations. They found that particular collections of brain cells encoded the person's location in the virtual world, and they were able to use this to predict where each volunteer was standing.

"Remarkably, using this technology, we found we could accurately predict the position of an individual within this virtual environment, solely from the pattern of activity in their hippocampus," said Maguire.

"We could predict what memories a person was recalling, in this case the memory for their location in space," she added.

The study overturns neuroscientists' assumption that memories of our surroundings are encoded in the brain in an unpredictable way. The latest research suggests that this is not the case, and that the information is stored in our neurons in a very structured way that can be picked up by scanners.

The scientists could not tell where somebody was from a single brain scan. Instead, they had to perform several scans of volunteers in each location. Only afterwards were they able to find differences in brain activity that betrayed the person's location.
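The train-on-several-scans, then-classify procedure can be pictured as pattern matching across voxels. The minimal sketch below uses correlation-based nearest-neighbour matching as a stand-in (the study used its own classifier, and the voxel vectors in the example are invented):

```python
import numpy as np

def predict_location(training_scans, training_locations, new_scan):
    """Label a new voxel-activity pattern with the location of the most
    similar training scan, using Pearson correlation as similarity."""
    new_scan = np.asarray(new_scan, dtype=float)
    best_location, best_r = None, -np.inf
    for scan, location in zip(training_scans, training_locations):
        r = np.corrcoef(new_scan, np.asarray(scan, dtype=float))[0, 1]
        if r > best_r:
            best_location, best_r = location, r
    return best_location
```

The sketch makes the article's caveat concrete: without several labelled training scans per location for that particular person, there is nothing to match a new scan against, so a single uncooperative scan reveals nothing.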

"We can rest easy in terms of issues surrounding mind reading. This requires the person to be cooperative, and to train the algorithms we use many instances of a particular memory," said Maguire. "It's not that we can put someone in a brain scanner and suddenly read their thoughts. It's quite an involved process that's at a very early stage."

Though preliminary, the research raises questions about what may be possible with brain scanners in the future. Future advances in technology may make it possible to tell whether a person has ever been in a particular place, which could have enormous implications for the judicial system. Information from brain scans has already been used in court in India to help judge whether defendants are telling the truth or not.

Demis Hassabis, who co-authored the study in the journal Current Biology, said: "The current techniques are a long way away from being able to do those kinds of things, though in the future maybe that will become more possible. Maybe we're about 10 years away from doing that."

"It might be useful to start having those kinds of ethical discussions in the near future in preparation of that," he added.

Previous work in rats identified the hippocampus as a region of the brain that stores spatial memories. But experiments that measured the activity of handfuls of neurons in the animals' brains suggested there was no predictable pattern in how those memories were stored.

The discovery that spatial memories are encoded in a predictable way in our brains will give scientists confidence that other memories might be readable using brain scanners.

In the near term, Maguire said the research will shed light on some of the most debilitating neurodegenerative diseases of old age. "By using techniques like this we're learning more and more about how memories are laid down. If we can understand the processes involved in how you form and store and recollect memories, we can begin to understand how these pathological processes erode memories, and much further down the line, how we might help patients in a rehabilitation context," she said.

In a previous study, Maguire used brain scans to show that a brain region at the rear of the hippocampus known to be involved in learning directions and locations is enlarged in London taxi drivers.

http://www.wired.com/dangerroom/2007/05/my_story_in_tod/
Binoculars that Tap the Brain
By Sharon Weinberger May 1, 2007 | 8:24 am | Categories: Army and Marines, DarpaWatch, Gadgets and Gear
We wrote in the DANGER ROOM a few weeks ago about DARPA’s plans to develop some fancy new binoculars that would tap the brain’s "neural signatures" to alert soldiers of danger. Today, I follow up with a longer story for Wired News about how these binoculars would work:

In a new effort dubbed "Luke’s Binoculars" — after the high-tech binoculars Luke Skywalker uses in Star Wars — the Defense Advanced Research Projects Agency is setting out to create its own version of this science-fiction hardware. And while the Pentagon’s R&D arm often focuses on technologies 20 years out, this new effort is dramatically different — Darpa says it expects to have prototypes in the hands of soldiers in three years.

The agency claims no scientific breakthrough is needed on the project — formally called the Cognitive Technology Threat Warning System. Instead, Darpa hopes to integrate technologies that have been simmering in laboratories for years, ranging from flat-field, wide-angle optics, to the use of advanced electroencephalograms, or EEGs, to rapidly recognize brainwave signatures.
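To give a rough sense of how a system might "rapidly recognize brainwave signatures," here is a minimal sketch of the classic epoch-averaging step used in EEG research. Everything here is illustrative, invented for this example, and not DARPA's actual design: real systems use many electrode channels and trained classifiers rather than a single amplitude threshold.

```python
# Minimal sketch of EEG "neural signature" detection by epoch averaging.
# All numbers are invented; this is an illustration, not a real system.

def average_epochs(epochs):
    """Average time-locked EEG epochs sample by sample so the
    event-related signal adds up while background noise cancels."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(e[i] for e in epochs) / n for i in range(length)]

def signature_present(avg, window, threshold):
    """Flag a signature if the averaged deflection inside the
    expected latency window exceeds a threshold (in microvolts)."""
    start, end = window
    return max(avg[start:end]) >= threshold

# Three toy single-channel epochs with a deflection around samples 3-5.
epochs = [
    [0.1, 0.2, 0.1, 2.9, 3.1, 0.2, 0.1],
    [0.0, 0.1, 0.3, 3.2, 2.8, 0.1, 0.0],
    [0.2, 0.0, 0.2, 3.0, 3.0, 0.3, 0.1],
]
avg = average_epochs(epochs)
print(signature_present(avg, window=(3, 6), threshold=2.5))  # True
```

Averaging is what makes the faint event-related response stand out: any one epoch is dominated by noise, but the deflection is time-locked to the stimulus and survives the average.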

Defense attorneys are for the first time submitting a controversial neurological lie-detection test as evidence in U.S. court.

In an upcoming juvenile-sex-abuse case in San Diego, the defense is hoping to get an fMRI scan, which shows brain activity based on oxygen levels, admitted to prove the abuse didn’t happen.

The technology is used widely in brain research, but hasn’t been fully tested as a lie-detection method. To be admitted into California court, any technique has to be generally accepted within the scientific community.

The company that did the brain scan, No Lie MRI, claims its test is over 90 percent accurate, but some scientists and lawyers are skeptical.

"The studies so far have been very interesting. I think they deserve further research. But the technology is very new, with very little research support, and no studies done in realistic situations," Hank Greely, the head of the Center for Law and the Biosciences at Stanford, wrote in an e-mail to Wired.com.

Lie detection has tantalized lawyers since before the polygraph was invented in 1921, but the accuracy of the tests has always been in question. Greely noted that American courts and scientists have "85 years of experience with the polygraph" and a wealth of papers that have tried to describe its accuracy. Yet they aren’t generally admissible in court, except in New Mexico.

Other attempts to spot deception using different brain signals continue, such as the EEG-based technique developed in India, where it has been used as evidence in court. And last year, attorneys tried to use fMRI evidence for chronic pain in a worker’s compensation claim, but the case was settled out of court. The San Diego case will be the first time fMRI lie-detection evidence, if admitted, is used in a U.S. court.

According to Emily Murphy, a behavioral neuroscientist at the Stanford Center for Law and the Biosciences who first reported on the fMRI evidence, the case is a child protection hearing to determine if the minor should stay in the home of the custodial parent accused of sexual abuse.

Apparently, the accused parent hired No Lie MRI, headquartered in San Diego with a testing facility in Tarzana, California, to do a brain scan. The company’s report says fMRI tests show the defendant’s claim of innocence is not a lie.

The company declined to be interviewed for this story, but its founder and CEO, Joel Huizenga, spoke to Wired.com in September about the technology.

"This is the first time in human history that anybody has been able to tell if someone else is lying," he said.

Though the company’s scientific board includes fMRI experts such as Christos Davatzikos, a radiologist at the University of Pennsylvania, some outside scientists and bioethicists question the reliability of the tests.

"Having studied all the published papers on fMRI-based lie detection, I personally wouldn’t put any weight on it in any individual case. We just don’t know enough about its accuracy in realistic situations," Greely said.

Laboratory studies using fMRI, which measures blood-oxygen levels in the brain, have suggested that when someone lies, the brain sends more blood to the ventrolateral area of the prefrontal cortex. In a very small number of studies, researchers have identified lying in study subjects with accuracy ranging from 76 percent to over 90 percent (pdf). But some scientists and lawyers like Greely doubt that those results will prove replicable outside the lab setting, and others say it just isn’t ready yet.
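To make the accuracy figures concrete, here is a toy sketch of the kind of threshold classification those laboratory studies describe: trials with higher prefrontal activation are scored as deception, and accuracy is the fraction of trials scored correctly. The activation values, labels, and threshold below are all invented for illustration; they are not data from the published studies.

```python
# Toy illustration of threshold-based fMRI lie classification.
# All trial data and the threshold are invented for this example.

def classify(activation, threshold=1.0):
    """Label a trial 'lie' if the prefrontal BOLD signal change
    (in arbitrary percent-signal-change units) exceeds the threshold."""
    return "lie" if activation > threshold else "truth"

def accuracy(trials, threshold=1.0):
    """Fraction of (activation, true_label) pairs classified correctly."""
    correct = sum(1 for a, label in trials if classify(a, threshold) == label)
    return correct / len(trials)

# Invented trials: (activation, ground-truth label).
trials = [(1.8, "lie"), (1.4, "lie"), (0.6, "truth"),
          (0.9, "truth"), (1.2, "lie"), (0.7, "truth"),
          (1.1, "truth"), (1.6, "lie"), (0.5, "truth"), (1.3, "lie")]
print(accuracy(trials))  # 0.9
```

The skeptics' point maps directly onto this sketch: a threshold tuned on compliant lab subjects may sit in the wrong place for a defendant who has rehearsed a lie, moved in the scanner, or deliberately varied their mental state.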

"It’s certainly something that is going to evolve and continue to get better and at some point, it will be ready for prime time. I’m just not sure it’s really there right now," said John Vanmeter, a neurologist at Georgetown’s Center for Functional and Molecular Imaging. "On the other hand, maybe it’s good that it’s going to start getting tested in the court system. It’s really been just a theoretical thing until now."

No Lie MRI licensed its technology from psychiatrist Daniel Langleben of the University of Pennsylvania. Langleben, like the company, declined to be interviewed for this article, but offered a recent editorial he co-authored in the Journal of the American Academy of Psychiatry and the Law on the "future of forensic functional brain imaging."

From the editorial, it’s clear that Langleben is a bit uneasy that his work has been commercially applied. He draws a clear distinction between "deception researchers" like himself and "the merchants of fMRI-based lie detection" and describes the "uneasy alliances between this industry and academia, brokered by university technology-commercialization departments."

Langleben has pushed for large-scale trials to determine the efficacy of fMRI-based deception-spotting. But in an interview conducted in late 2007, he doubted whether No Lie MRI and its competitor, Cephos, had the resources to conduct the type of trials he wants.

"We need to run clinical trials with 200 to 300 people, so we can say, ‘This is the accuracy of this test,’" Langleben told Wired.com. "But only two or three companies are trying to develop the technology. Do those companies have deep pockets? No. Do clinical trials cost a lot? Yes."

In September, Huizenga said the company was trying to get a grant for a study on a large group of people. "To date there really has been no study that has tried to optimize fMRI for lie detection," he said.

But even if the science behind a technology isn’t fully established, Brooklyn Law School’s Edward Cheng, who studies scientific evidence in legal proceedings, said it might still be appropriate to use it in the courtroom.

"Technology doesn’t necessarily have to be bulletproof before it can come in, in court," Cheng.

He questioned whether society’s traditional methods of lie detection, that is to say, inspection by human beings, are any more reliable than the new technology.

"It’s not clear whether or not a somewhat reliable but foolproof fMRI machine is any worse than having a jury look at a witness," Cheng said. "It’s always important to think about what the baseline is. If you want the status quo, fine, but in this case, the status quo might not be all that good."

But the question of whether fMRI can in fact be even "somewhat reliable" remains open.

Ed Vul, an fMRI researcher at the Kanwisher Lab at MIT, said that it was simply too easy for a suspect to make fMRI data of any type unusable.

"I don’t think it can be either reliable or practical. It is very easy to corrupt fMRI data," Vul said. "The biggest difficulty is that it’s very easy to make fMRI data unusable by moving a little, holding your breath, or even thinking about a bunch of random stuff."

A trained defendant might even be able to introduce bias into the fMRI data. In comparison with traditional lie-detection methods, fMRI appears more susceptible to gaming.

"So far as I can tell, there are many more reliable ways to corrupt data from an MRI machine than a classic polygraph machine," Vul said.

Elizabeth Phelps, a neuroscientist at New York University, agreed there is little evidence that fMRI is more reliable than previous lie-detection methods.

"When you build a model based on people in the laboratory, it may or may not be that applicable to someone who has practiced their lie over and over, or someone who has been accused of something," Phelps said. "I don’t think that we have any standard of evidence that this data is going to be reliable in the way that the courts should be admitting."

All these theoretical considerations will be put to the test for the first time in a San Diego courtroom soon. Stanford’s Murphy reported that the admissibility of the evidence in this particular case could rest on which scientific experts are allowed to comment on the evidence.

"The defense plans to claim fMRI-based lie detection (or “truth verification”) is accurate and generally accepted within the relevant scientific community in part by narrowly defining the relevant community as only those who research and develop fMRI-based lie detection," she wrote.

Murphy says that the relevant scientific community should be much larger, including a broader swath of neuroscientists, statisticians, and memory experts.

If the broader scientific community is included in the fact-finding, Greely doesn’t expect the evidence to be admitted.

"In a case where the issues were fully explored with good expert witnesses on both sides, it is very hard for me to believe that a judge would admit the results of fMRI-based lie detection today," Greely said.

But that’s not to say that lie-detection won’t eventually find a place in the courts, as the science and ethics of brain scanning solidify.

Researchers at Washington University have developed ways of tying humans and computers together.
By Liz Stoever
ST. LOUIS POST-DISPATCH
07/06/2009

It sounds like something from a science fiction movie: Sensors are surgically inserted in the brain to understand what you're thinking. Machines that can speak, move or process information — based on the fleeting thoughts in a person's imagination.

But it's not completely fictional. The technology is out there. A researcher in Wisconsin recently announced the ability to "think" updates onto the Twitter website. Locally, researchers at Washington University have developed even deeper ways of tying humans and computers together.

For Eric Leuthardt, 36, a neurologist at Washington University Medical School, it's about taking our relationship with computers to the next level.

"The idea is to basically connect people with devices and machines through their thoughts directly," he said.
Leuthardt's latest research involves giving computers the ability to understand speech imagined in the mind.

The research is a component of "Brain-Computer Interface Technology," which decodes brainwaves in a certain part of the brain. Computers are then programmed to understand those signals and perform an action accordingly.

But so far, only signals for imagined actions have been decoded. Moving on to decoding speech will make communication from the mind to a computer easier.

Ultimately, Leuthardt said, the technology will better connect humans and machines. For those with disabilities, it will connect them more closely to the world.

"They can have a cognitive version of a mouse click that they could not otherwise control with their arms or their hands," he said.

Leuthardt began BCI technology related research in 2002. Just two years later, Leuthardt along with Daniel Moran, a biological engineer at Washington University, used the technology to develop video games that can be played with the mind.

Players control the game by imagining an action. For example, imagining the movement of the left hand may mean moving left, whereas imagining the movement of the tongue may mean to move up.
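The control scheme described above amounts to a lookup from a decoded imagined movement to a game command. Here is a minimal sketch of that mapping layer; the decoder is stubbed out, and the label names and actions are hypothetical rather than Leuthardt and Moran's actual scheme, where the real work is classifying electrode signals.

```python
# Sketch of a BCI game's mapping layer: a (stubbed) decoder labels a
# brain-signal epoch as an imagined movement, and a lookup table turns
# that label into a game action. Labels and actions are hypothetical.

ACTION_MAP = {
    "imagine_left_hand": "move_left",
    "imagine_right_hand": "move_right",
    "imagine_tongue": "move_up",
    "rest": "hold",
}

def decode(epoch):
    """Stub decoder: a real BCI classifies electrode signals; here
    the epoch dictionary already carries its decoded label."""
    return epoch["label"]

def control(epoch):
    """Translate one decoded epoch into a game command, defaulting
    to 'hold' when the decoder's output is unrecognized."""
    return ACTION_MAP.get(decode(epoch), "hold")

print(control({"label": "imagine_tongue"}))  # move_up
print(control({"label": "garbled"}))         # hold
```

Defaulting to "hold" on an unrecognized label is the safe choice for a control interface: an uncertain decode should do nothing rather than something wrong.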

The Space Invaders video game has been tested on only 15 to 20 people so far because the sensors that read the brain signals must be placed on or in the brain surgically. Children with epilepsy are given the chance to participate because they already have similar electrodes implanted to pinpoint the electrical activity behind their seizures.

Leuthardt is not alone. He is one of many researchers nationwide who are all collaborating to make this mind-reading technology available for home use in five to 10 years.

Since the introduction of BCI technology in the late 1980s, researchers have been thinking of practical applications for it, though they are still in the process of making them available.

Research that will give people better control of prosthetic limbs is being conducted at Washington University's Computer Engineering Department by professor Bill Smart, who is researching the brain signals that predict finger movement.

"We're looking for electrical activity that predicts more relatively clean signals," he said.

Smart said this application of BCI technology would be available in about 10 years and would allow people with prosthetic arms to better grasp items.

A biomedical engineering doctoral student at the University of Wisconsin-Madison, Adam Wilson, 28, has already adapted the existing BCI technology to allow people to use Twitter without their hands. Justin Williams, a UW-Madison assistant professor of biomedical engineering, was involved in the project and was Wilson's adviser.

It took Wilson one day to combine the BCI technology with the simple interface for Twitter, a social networking website that features 140-character updates from users.

"I just brought two existing technologies together in an exciting way," he said. Updates are online at twitter.com/uwbci. Wilson said he decided to combine the two so people would know that the technology is out there.

The Twitter application, however, is less invasive than what Washington U. researchers are using. Unlike the implanted brain sensors, the ones Wilson used are less sensitive to brain signals, but can be placed atop the head like a swim cap. Those sensors are being tested by five to 10 people with disabilities at home, allowing them to tell their caretakers that the room is too hot, for example, Wilson said.

Researchers are still attacking some obstacles in their research. Leuthardt said they needed to make the implant in the brain much smaller. Researchers also have to figure out how to teach caretakers to use the system and get the cost down. The equipment used in labs currently costs tens of thousands of dollars, Wilson said.
"BCIs are very sophisticated systems," Wilson said. "They still need to be simplified to work in a home setting."

The researchers say that the variation of radio signals in a wireless network can reveal the movements of people behind closed doors or even a wall.

Joey Wilson and Neal Patwari, from the University of Utah, have used the principle of variance-based radio tomographic imaging. The system works by measuring interference between the nodes of wireless devices. If someone passes through that field, the device registers a change in received signal strength and feeds that information back to a computer.

The system can currently only see about three feet through a wall, and is so far only capable of sensing motion. At this stage, it is not sophisticated enough to generate an actual image of what lies beyond the wall, but the research team is confident that this feature could be developed in time.

The researchers said the technology could be used in search and rescue operations, with emergency teams using the same radio technology used by Wi-Fi networks to build a web of sensors around a disaster site, revealing the location of victims and survivors.

"We envision a building imaging scenario similar to the following. Emergency responders, military forces, or police arrive at a scene where entry into a building is potentially dangerous. They deploy radio sensors around (and potentially on top of) the building area, either by throwing or launching them, or dropping them while moving around the building," wrote Wilson and Patwari on the arXiv science forum.

"The nodes immediately form a network and self-localise, perhaps using information about the size and shape of the building from a database (eg Google maps) and some known-location coordinates (eg using GPS). Then, nodes begin to transmit, making signal strength measurements on links which cross the building or area of interest. The received signal strength measurements of each link are transmitted back to a base station and used to estimate the positions of moving people and objects within the building."

"so you see the car horns/nosy from neighbours part of the cause and harassment
i don't ...I see it as the effect/symptom/consequence being assaulted by electronic harassment/mind control technology and part of the psychological impact…"

We have reason to be afraid of the Jesuits, who own and control the Vatican from behind the scenes. They appear to be the real new world order crime cabal. The Jesuit order was founded in 1534. Pope Clement VIX abolished the Jesuit order in 1773. The Jesuits then needed a new organization to hide behind so they created the Illuminati as a front. The Illuminati is simply a distraction to take the focus of attention away from the activities of the Jesuits. The Rothschilds did not come into power…See More

"so what you saying is even thou you going such extreme situation ,trumatic ...messes with your mind and body under the inflence of it ....it has no psylogical impact on YOU and it does not effect/alter how you function in socety or your…"