Scan a brain, read a mind?

Top row: Images of faces. Bottom row: Reconstructed images based on brain activity, as shown by Alan Cowen and colleagues.

STORY HIGHLIGHTS

The biggest limitation on "mind reading" is brain measurement

Thought signals could drive wheelchairs, artificial limbs

Mathematical models are important for brain decoding

(CNN) -- What we write online may be intercepted, filtered and publicized, but we'd like to think that the thoughts and images in our heads are totally private.

For better or worse, science may change that. Over the last few years, researchers have made significant strides in decoding our thoughts based on brain activity.

The technology is still at a very early stage of development. But given what we can already do, it's not a huge leap to imagine that one day we could read the words of people's internal streams of thought, said Jack Gallant, a prominent neuroscientist at the University of California, Berkeley.

"I think decoding the little person in your brain -- we could do that today if we had a good enough method of measuring your brain activity," Gallant said.

Gallant predicts that in 50 years, thought-reading will be commonplace. We'll be wearing "Google Hats," he envisions, that are continuously decoding our thoughts. Such a wonder-cap might transmit and even translate our thoughts into foreign languages.

But Dr. Josef Parvizi, a Stanford University neurologist who also studies the relationship between brain and mind, is much more skeptical.

This story is part of CNN's Inside the Brain series.

"In order to really read thoughts with methods that are noninvasive, we have a long way to go," he said. "I think it is unwise and simply false to give the general public the impression that we are about to be able to read minds."

What you need to read thoughts

There are several limitations on "mind reading" directly from the brain, Gallant said. You need good mathematical models of brain function and high-speed computing. But the biggest challenge right now is measuring brain activity.

Scientists can measure electrical activity with EEG (electroencephalography) and changes in blood oxygen use with fMRI (functional magnetic resonance imaging). But these are really crude measurements of what's happening inside the brain.

EEG is a two-dimensional, limited signal from the brain. And fMRI is like measuring the total electricity usage in your office at specific times to figure out what's going on at everyone's desk, Gallant said. That wouldn't tell you what any particular person is working on; it's just a rough overall description of changes.

"The most optimistic estimates are that you can recover one one-millionth of the information that's available in the brain at any given point in time," Gallant said. "It's probably smaller than that. So, where we are today is just measuring a pale shadow of what you could potentially measure, if you had a better measurement technology."

Meanwhile, Parvizi's lab explores the brain with a completely different technique: electrodes implanted in the brains of patients with severe epilepsy, which record neural activity directly at the brain's surface.

His group wants to know the specific functions of different brain areas so when surgeons cut out parts responsible for seizures, they know what to avoid. This method, however, has so far not extracted the actual content of thoughts and memories, and may not be generalizable to non-epileptic patients.

What we can do now

Despite these limitations of brain activity measurement, scientists have already been able to achieve remarkable results.

In this illustration from Jack Gallant's lab, brain regions shown in similar colors respond similarly to movie clips.

For instance, using fMRI scans, scientists can reconstruct a face that a person is viewing, as reported in a March 2014 study in the journal Neuroimage. The study was led by Alan Cowen, then an undergraduate at Yale University, who now studies with Gallant in graduate school.

Researchers analyzed how subjects responded to 300 faces while undergoing fMRI scans, creating a statistical "library" of the way the brain reacts to facial images. They then used a computer algorithm to generate a mathematical description of the faces based on brain activity patterns.

Next, researchers scanned the six participants again while they viewed a new set of faces. By comparing the fMRI data from the 300 faces to the new scans, scientists were able to digitally draw the second set of faces that the participants saw, based only on brain activity.

The computer-reconstructed faces were not exact, but people could identify them, and a computer comparing the pixel information between the reconstructions and the originals matched them correctly 60% to 70% of the time.
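The pipeline described above -- learn a mapping from brain responses to face descriptions, then identify new faces by comparing reconstructions with the originals -- can be sketched with synthetic data. Everything in this sketch (the dimensions, the linear-response assumption, ridge regression, and correlation-based matching) is an illustrative stand-in, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 300 training faces, 500 fMRI voxels,
# and a 50-dimensional "face feature" description of each image.
n_train, n_voxels, n_features = 300, 500, 50

# Assume voxel responses are a noisy linear function of face features
# (a common simplifying assumption in fMRI decoding demos).
true_encoding = rng.normal(size=(n_features, n_voxels))
train_faces = rng.normal(size=(n_train, n_features))
train_brain = train_faces @ true_encoding + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit a ridge-regression decoder mapping brain activity -> face features.
lam = 1.0
W = np.linalg.solve(
    train_brain.T @ train_brain + lam * np.eye(n_voxels),
    train_brain.T @ train_faces,
)

# New faces: reconstruct their feature vectors from brain activity alone.
test_faces = rng.normal(size=(10, n_features))
test_brain = test_faces @ true_encoding + 0.1 * rng.normal(size=(10, n_voxels))
reconstructed = test_brain @ W

def identify(recon, candidates):
    # Pick the candidate face whose features correlate best
    # with the reconstruction.
    corrs = [np.corrcoef(recon, c)[0, 1] for c in candidates]
    return int(np.argmax(corrs))

matches = sum(identify(r, test_faces) == i for i, r in enumerate(reconstructed))
print(f"{matches}/10 reconstructions matched to the correct face")
```

With clean synthetic data the matching rate is near perfect; with real fMRI noise and far fewer usable voxels, accuracies like the study's 60% to 70% are more typical.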

Marvin Chun, professor of psychology at Yale who co-authored the study, said it could have applications for studying disorders where perception of faces is impaired, such as prosopagnosia and autism.

"We're very excited about it, because any increasing ability to read out activity from the brain and map it onto something useful like faces is going to have very broad usage scientifically," Chun said.

This research was inspired by earlier studies in which Gallant's group determined which photographs people were viewing based on fMRI scans. Gallant and colleagues have also demonstrated this with videos; their 2011 study in Current Biology used fMRI and computational models to reconstruct movie clips that people viewed.

Even dreams may be knowable. Scientists led by Tomoyasu Horikawa at ATR Computational Neuroscience Laboratories, Kyoto, published a report last year in the journal Science suggesting it is possible to decode dreams based on brain activity in slumbering subjects, although this is also early stage research.

Such feats have a certain magical quality. But they still involve large, bulky machinery that can capture only a small slice of our conscious experience.

Scientists are also looking at how two brains can communicate with each other. A group at the University of Washington demonstrated last year that by sending brain signals over the Internet, one scientist could control another scientist's hand motion. But the recipient of the signal was not actively interpreting it; true two-way brain communication has been achieved in mice but not yet in humans.

Neither Gallant nor Parvizi is primarily interested in decoding thoughts. Their fundamental goals involve understanding how the brain does what it does.

Nonetheless, their research has generated a lot of interest, and also hype about mind reading that is concerning to Parvizi.

"I don't think it serves science well, and I don't think it makes the general public appreciate how difficult it is to really understand the operation of the human brain," Parvizi said.

Beyond the novelty of "thinking" an e-mail, there are other important applications to this line of research. Thought-directed wheelchairs, artificial limbs and other assistance devices would be a huge benefit to people with paralysis and other disabilities. Scientists are making strides in this area in small studies.

Gallant's group is working on modeling how the brain responds to and represents language.

Chun is working on studying attention, looking at what happens when people's minds are wandering out of "the zone" of experience.

Then there's the problem of memory, which Parvizi is working on: how the brain retrieves memories of the past.

"We can accurately decode that the patient is retrieving memories but we cannot decipher the memory content," he said.

The issue of mind reading brings up important ethical and public policy questions about privacy. Who can have access to your thoughts, and can you choose to keep certain things to yourself, or will even your strangest dreams be readily accessible? How will we control the use of mind-reading devices?

The actual technology may be far off, but Gallant insisted, "We need to start thinking about this now."