All posts for the day January 6th, 2015

allstaractivist note: I’m sorry to inform everyone that the technology “they” allow us to see is already obsolete. It has long been speculated that many technological advances in physics, the biological sciences, computer science, neuroscience, genomics, and other fields are kept hidden from the general public for various reasons: financial gain, oppression, trade secrets, competitive advantage, national security, whatever. The bottom line is that what we think of as current technology is, on average, roughly 100 years behind what actually exists. Many things that have been invented are completely unknown to us. Do I really need to elaborate on the applications for covert surveillance and spying that reading your private thoughts implies? Horrifically, the “Thought Police” do exist.

The first people to see completely unknown tech are usually soldiers on the battlefield. It was reported by allied ground forces in Iraq (the first Gulf “war”) that Iraqi troops immediately surrendered after witnessing vehicle-mounted plasma weapons (directed-energy weapons, or DEWs) emit “bolts of lightning” that instantly shrank tanks to one fifth of their original size. Plasma is many times hotter than the surface of the Sun and instantly dehydrates anything it contacts. There was also a video circulating on YouTube, posted by Islamic forces, showing an American soldier wearing an “invisibility suit”: you could see the blurred form of a man running and climbing into a tank. Years later, a couple of UC Berkeley researchers announced they had developed a fabric that bends light around itself, giving the appearance of invisibility.

A good rule of thumb is this: when you see a splashy promotion of some fantastic new emerging technology, it is probably far more advanced than what is presented. The “new” technology may even be a smoke screen used to explain away the evidence of real classified tech already in use, such as weather modification. I hate to burst anyone’s bubble, but that is my job as a Christian. I’ve been following emerging tech ever since I was a teen, and I remember many scientists announcing breakthroughs and discoveries over the years, only to fade into obscurity shortly thereafter. What do you think happened to them? Acquired? Recruited? Assassinated? That has been the case with many Iranian nuclear scientists. Think about it: why would they tell us everything they have? What would be the advantage for them?

Well, not so fast. Whenever I read these papers and talk to the scientists, I end up feeling conflicted. What they’ve done, so far anyway, really doesn’t live up to what most people have in mind when they think about mind reading. Then again, the stuff they actually can do is pretty amazing. And they’re getting better at it, little by little.

In pop culture, mind reading usually looks something like this: Somebody wears a goofy-looking cap with lots of wires and blinking lights while guys in white lab coats huddle around a monitor in another room to watch the movie that’s playing out in the person’s head, complete with cringe-inducing internal monologue.

We are not there yet.

“We can decode mental states to a degree,” said John-Dylan Haynes, a cognitive neuroscientist at Charité-Universitätsmedizin Berlin. “But we are far from a universal mind reading machine. For that you would need to be able to (a) take an arbitrary person, (b) decode arbitrary mental states and (c) do so without long calibration.”

“To me, mind reading is where something is wholly subjective and private, and I can’t tell from what you’re doing or looking at what your mental state is,” said Frank Tong, a neuroscientist at Vanderbilt University. Tong draws a distinction between that kind of mind reading and what he calls brain reading, which essentially involves using brain scans to figure out what’s on someone’s mind in situations where you could accomplish the same thing by simply looking over their shoulder or waiting a few seconds to see what they do next.

Most of the research to date falls in this second category.

Here’s a recent example. A team led by Marvin Chun, a cognitive neuroscientist at Yale, published a study last month in which they used brain scans to reconstruct pictures of faces that the subjects had been looking at during the scan. On one hand, whoa. Using machines and computers, they produced something like a printout of what people saw (see below). On the other hand, the researchers carefully controlled what the subjects saw.

In a recent study, scientists developed an algorithm that reconstructs images of faces (bottom row) based on fMRI scans done while people viewed the originals (top row). Image: Courtesy of Alan Cowen and Marvin Chun

In broad strokes, here’s what the Yale researchers did. They created mathematical descriptions of 300 images of faces. All were portraits, shot from the same angle. Then they did fMRI scans on six people to record the pattern of brain activity elicited by each of those 300 faces. Next, they fed those patterns of brain activity into a statistical matching algorithm they’d developed to serve as a kind of translator. After it’s been “trained” on lots of examples, the translator can look at a pattern of brain activity and predict the image that produced it.
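The train-then-translate logic can be illustrated with a toy sketch. The study’s actual algorithm is more sophisticated and works on real fMRI data and mathematical face descriptions; as a hypothetical stand-in, a nearest-neighbor matcher over simulated “brain patterns” captures the idea of learning from example pairs and then predicting features for a new scan. The linear voxel model, dimensions, and noise level below are all invented for illustration:

```python
import math
import random

random.seed(0)

def correlation(a, b):
    # Pearson correlation between two activity patterns
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def decode(pattern, training, k=3):
    # "Translator": average the face features paired with the k
    # training patterns most similar to the new pattern
    ranked = sorted(training, key=lambda pv: correlation(pattern, pv[0]),
                    reverse=True)
    nearest = [feats for _, feats in ranked[:k]]
    return [sum(f[i] for f in nearest) / k for i in range(len(nearest[0]))]

# Toy world: each "face" is a 4-number feature vector, and each of 20
# "voxels" responds as a fixed linear function of those features.
n_feat, n_vox = 4, 20
weights = [[random.gauss(0, 1) for _ in range(n_feat)] for _ in range(n_vox)]

def scan(features):
    # Simulated fMRI pattern: linear response plus measurement noise
    return [sum(w * f for w, f in zip(row, features)) + random.gauss(0, 0.1)
            for row in weights]

# "Train" on 50 face/scan pairs, then decode the scan of a new face
training = []
for _ in range(50):
    feats = [random.gauss(0, 1) for _ in range(n_feat)]
    training.append((scan(feats), feats))

new_face = [random.gauss(0, 1) for _ in range(n_feat)]
guess = decode(scan(new_face), training)
```

The point of the sketch is only the workflow: many (pattern, face) pairs go in, a predictor comes out, and the predictor never sees the new face directly, only its brain pattern.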

To test the translator, the researchers re-scanned the same subjects as they looked at 30 new faces that weren’t in the original set of 300, and the computer created a reconstruction of each face it thought the person saw. To find out how good these reconstructions were, the researchers recruited 261 people via Amazon’s Mechanical Turk and had them match up reconstructed images with originals. They got about 60 to 70 percent right, Chun says: better than chance, but far from perfect.
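The “better than chance” benchmark is easy to make concrete. Assuming, hypothetically, a two-alternative setup, where a rater decides whether a reconstruction is closer to its true original or to an unrelated distractor, chance is 50 percent, and accuracy can be scored like this (all images here are made-up pixel vectors):

```python
import random

random.seed(1)

def more_similar(recon, original, distractor):
    # Forced choice: is the reconstruction closer (squared distance)
    # to the true original than to a distractor face?
    d_true = sum((r - o) ** 2 for r, o in zip(recon, original))
    d_dist = sum((r - d) ** 2 for r, d in zip(recon, distractor))
    return d_true < d_dist

def forced_choice_accuracy(pairs, distractors):
    # Fraction of trials where the rater matches recon to original
    hits = sum(more_similar(r, o, d) for (r, o), d in zip(pairs, distractors))
    return hits / len(pairs)

# Toy data: 200 "original" images (16 pixels each), reconstructions
# that are noisy copies of them, and unrelated distractor images
originals = [[random.gauss(0, 1) for _ in range(16)] for _ in range(200)]
recons = [[x + random.gauss(0, 0.5) for x in img] for img in originals]
distractors = [[random.gauss(0, 1) for _ in range(16)] for _ in range(200)]

accuracy = forced_choice_accuracy(list(zip(recons, originals)), distractors)
```

A useless reconstruction would score around 0.5 under this setup; noisy-but-informative reconstructions score above that, which is the sense in which 60 to 70 percent is better than chance but far from perfect.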

“In a sense I think it is a form of mind reading, because the computer algorithm doesn’t know what the person saw,” Chun said. The scans and the algorithms can’t yet reconstruct any old image that pops up in someone’s mind, though. “The step that has to happen next is doing this with imagined faces or faces recalled from memory,” Chun said. “Then it would be true mind reading.”

It’s an important distinction. Most “mind-reading” studies so far have focused on the here and now. “Vision is by far the easiest thing to work with,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. Gallant’s lab has done some of the most eye-catching work in this area, including a 2011 study that used fMRI scans to decode video imagery as people watched clips cut from Hollywood films (see video below). That’s partly because scientists have a relatively good idea of how visual information is represented in the brain (compared to, say, abstract thought), and partly because it’s easy to control what someone sees during an experiment.

The next step up in degree of difficulty, in Gallant’s view, and perhaps a baby step closer to the pop-culture concept of mind reading, is predicting what people are about to do. Scientists have had some success in doing this, at least in simple circumstances. In a 2008 study, for example, Haynes and colleagues asked people to arbitrarily press a button with either their left or right hand. The subjects’ brain activity gave their choice away seconds before they became conscious of it themselves.
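A decoder for a binary choice like this can be sketched, under invented assumptions, as a simple nearest-mean classifier: average the brain patterns recorded before left-hand and right-hand presses, then assign a new pattern to whichever average it sits closer to. (Haynes’s actual analysis used more sophisticated pattern classifiers; the 10-voxel layout and noise level here are made up.)

```python
import random

random.seed(2)

def make_pattern(choice, noise=0.3):
    # Hypothetical 10-voxel pattern: "L" trials activate the first
    # five voxels, "R" trials the last five, plus scanner noise
    base = [1.0] * 5 + [0.0] * 5 if choice == "L" else [0.0] * 5 + [1.0] * 5
    return [v + random.gauss(0, noise) for v in base]

def class_means(train):
    # Average activity pattern for each choice
    means = {}
    for label in ("L", "R"):
        pats = [p for p, lab in train if lab == label]
        means[label] = [sum(col) / len(pats) for col in zip(*pats)]
    return means

def predict(pattern, means):
    # Assign the pattern to the nearer class average
    def dist(label):
        return sum((p - v) ** 2 for p, v in zip(pattern, means[label]))
    return min(means, key=dist)

train = [(make_pattern(c), c) for c in ("L", "R") * 20]
test = [(make_pattern(c), c) for c in ("L", "R") * 10]
means = class_means(train)
accuracy = sum(predict(p, means) == c for p, c in test) / len(test)
```

With cleanly separated toy patterns the classifier is nearly perfect; real pre-movement fMRI signals are far weaker, which is why the published decoding accuracies in such studies are modest, even if reliably above chance.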

An even bigger step towards “true mind reading,” as Chun says, would be decoding mental images, the kind of pictures you see in your mind’s eye as you’re staring off into space, or falling asleep.

Last year, Yukiyasu Kamitani and colleagues at the Advanced Telecommunications Research Institute International in Kyoto, Japan used fMRI scans to determine what objects people were dreaming about as they fell asleep (they confirmed this by waking them up and asking them, hundreds of times over the course of the study). The dream decoding was pretty rudimentary though. They could tell someone was dreaming about a car, for instance, but not what kind of car.

“The ultimate device for decoding mental imagery would be a device for decoding your internal monologue,” said Gallant. “It would be like talking to Siri except without even talking.” In some ways that might be easier to pull off than it sounds, Gallant says. “Language is a statistically constrained signal, the number of words you could think is a lot less than the number of images you could see.”
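Gallant’s point about language being a statistically constrained signal can be made concrete with a toy decoder: combine noisy per-word “neural evidence” with a bigram language prior and pick the sequence that maximizes their product. Every word and probability below is invented purely for illustration:

```python
from itertools import product

# Toy vocabulary, per-word "neural evidence" for a 3-word thought,
# and a bigram language prior -- all numbers are hypothetical
vocab = ["i", "want", "wander", "water"]
likelihoods = [
    {"i": 0.9, "want": 0.03, "wander": 0.03, "water": 0.04},
    {"i": 0.05, "want": 0.4, "wander": 0.5, "water": 0.05},  # ambiguous step
    {"i": 0.05, "want": 0.05, "wander": 0.1, "water": 0.8},
]
bigram = {
    ("i", "want"): 0.5, ("i", "wander"): 0.05,
    ("want", "water"): 0.6, ("wander", "water"): 0.3,
}

def decode_sequence(likelihoods, bigram, vocab):
    # Brute-force over all word sequences, scoring each by neural
    # evidence times the bigram prior; unseen bigrams get a tiny floor
    best, best_score = None, -1.0
    for seq in product(vocab, repeat=len(likelihoods)):
        score = likelihoods[0][seq[0]]
        for t in range(1, len(seq)):
            score *= bigram.get((seq[t - 1], seq[t]), 1e-6) * likelihoods[t][seq[t]]
        if score > best_score:
            best, best_score = seq, score
    return list(best)
```

On the evidence alone, the ambiguous second word would be decoded as “wander”; the language prior tips the whole sequence to the far more probable “i want water”. That is the sense in which a constrained signal is easier to decode than an unconstrained stream of images.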

If decoding what people see and what they’re just about to do next is where the field is now, and decoding mental imagery is what’s on the horizon, Gallant says there’s yet another type of decoding that’s more like the distant frontier: decoding old memories. “If I ask you to picture your first grade teacher, you might be able to recall his or her name and call up a pretty rough mental image,” he said. “That information is buried in your brain, but you probably hadn’t thought about it for years.” Scientists don’t understand how old memories are encoded in our brains well enough to decode them, but some day they might.

If you’re starting to feel a panic attack coming on, take a deep breath and relax. Scientists are very, very far from being able to dredge up those best-forgotten memories from your grade school days (or worse, junior high). They don’t even want to.

The technology is still pretty limited. And there are other obstacles, such as individual differences. “Different people’s brains code information slightly differently, so you need to learn how a specific individual codes their mental states,” said Haynes. “There is only limited transfer from person to person.” Moreover, at least for the foreseeable future, brain decoding will require full cooperation (not to mention considerable patience) on the part of the subject.

The researchers doing this work are more interested in the scientific question of how the brain encodes things like perception, memory, and emotion than in the mad scientist pursuit of decoding people’s thoughts; it’s just that they’re two sides of the same coin. As scientists get better at one, they get better at the other.

As they do, their work will inevitably raise some tricky social and ethical questions.

Gallant worries about the implications for mental privacy, even though he thinks any truly worrying thought-stealing technology is still decades away. “I tend to be a pretty paranoid person,” he said. “As a scientist I’m not sure what to do other than to tell people we need to start thinking about this because somewhere down the road we’re going to be able to do it really well.”

In the meantime, we should keep our expectations in check, and keep a cautious eye on the future.

[Images: plasma weapon; the UC Berkeley invisibility suit developers; an invisibility suit reportedly in use on the battlefields of Iraq]

My other blog: Justice for Jacqueline and Janessa Greig
