Saturday, 10 March 2012

Mind Reading Machines

UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, the researchers suggest, this process could allow you to record and reconstruct your own dreams on a computer screen.

I just can't believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—"this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds."

Indeed, it's mindblowing. I'm simultaneously excited and terrified. This is how it works:

They used three different subjects for the experiments—incidentally, all were members of the research team, because the procedure requires lying inside a functional Magnetic Resonance Imaging (fMRI) machine for hours at a time. The subjects watched two different sets of Hollywood movie trailers while the fMRI system recorded blood flow through the visual cortex of their brains.

The readings were fed into a computer program that divided them into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information in the movies to specific patterns of brain activity. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the activity in the brain.
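As a rough illustration, this "learning" step can be sketched as fitting an encoding model that maps movie features to each voxel's response. Everything below is a made-up miniature—synthetic features, synthetic responses, and a plain ridge regression standing in for the study's far more sophisticated motion-energy model:

```python
import numpy as np

# Hypothetical sketch: learn a linear "encoding model" mapping movie features
# (stand-ins for shape/motion descriptors) to the fMRI response of each voxel.
rng = np.random.default_rng(0)
n_seconds, n_features, n_voxels = 600, 50, 200

X_train = rng.normal(size=(n_seconds, n_features))   # movie features per second
W_true = rng.normal(size=(n_features, n_voxels))     # unknown "ground truth"
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_seconds, n_voxels))

def fit_encoding_model(X, Y, alpha=1.0):
    """Ridge regression: one weight vector per voxel."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W = fit_encoding_model(X_train, Y_train)             # (n_features, n_voxels)

def predict_voxels(X, W):
    """Predict the brain activity a given movie segment should evoke."""
    return X @ W

Y_pred = predict_voxels(X_train, W)
print(Y_pred.shape)
```

Once trained, the model runs in the forward direction—features in, predicted brain activity out—which is exactly what makes the reconstruction step described below possible.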

An 18-million-second picture palette

After this training phase, a second group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, predicting the brain activity each clip would evoke. From all these videos, the software picked the one hundred clips whose predicted brain activity most closely matched the activity recorded while the subject watched, combining them into one final movie. Although the resulting video is low resolution and blurry, it clearly matches the actual clips watched by the subjects.

Think of those 18 million seconds of random video as a painter's color palette. A painter sees a red rose in real life and tries to reproduce its color using the different reds available on his palette, combining them to match what he's seeing. The software is the painter, and the 18 million seconds of random video are its color palette. It analyzes how the brain reacts to certain stimuli, compares that to the brain reactions evoked by the 18-million-second palette, and picks the clips that most closely match. Then it combines those clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of video are not what the subject was watching. They are random pieces used only to compose the reconstructed image.
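The palette step can be sketched in miniature. This hypothetical snippet scores every candidate clip by the correlation between its predicted brain activity and the activity actually recorded, keeps the best hundred, and averages their frames into one blurry reconstruction—all sizes and data here are invented stand-ins for the real 18-million-second library:

```python
import numpy as np

# Hypothetical sketch of the "palette" reconstruction step.
rng = np.random.default_rng(1)
n_clips, n_voxels = 5000, 200          # tiny stand-in for 18M seconds of video
frame_shape = (16, 16)                 # toy grayscale frames

predicted = rng.normal(size=(n_clips, n_voxels))    # predicted response per clip
frames = rng.random(size=(n_clips, *frame_shape))   # one frame per clip
observed = predicted[42] + 0.1 * rng.normal(size=n_voxels)  # recorded activity

def reconstruct(observed, predicted, frames, k=100):
    # correlation between observed activity and each clip's predicted activity
    p = predicted - predicted.mean(axis=1, keepdims=True)
    o = observed - observed.mean()
    scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o))
    best = np.argsort(scores)[-k:]             # the k best-matching clips
    return frames[best].mean(axis=0), best     # blurry average, like the output

image, best = reconstruct(observed, predicted, frames)
print(image.shape)
```

Averaging a hundred roughly-matching clips is also why the published reconstructions look blurry: each clip contributes a slightly different picture, and only the features they share survive the average.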

Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.

In this other video you can see how this process worked for the three experimental subjects. In the top left square you can see the movie the subjects were watching while they were in the fMRI machine. Right below it you can see the movie "extracted" from their brain activity. It shows that this technique gives consistent results independent of what's being watched—or who's watching. The three rows of clips next to the left column show the random movies that the computer program used to reconstruct the visual information.

Right now, the resulting quality is not good, but the potential is enormous. Lead research author—and one of the lab guinea pigs—Shinji Nishimoto thinks this is the first step toward tapping directly into what our brain sees and imagines:

Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.

The brain recorders of the future

Imagine that. Capturing your visual memories, your dreams, the wild ramblings of your imagination into a video that you and others can watch with your own eyes.

This is the first time in history that we have been able to decode brain activity and reconstruct motion pictures on a computer screen. The path that this research opens boggles the mind. It reminds me of Brainstorm, the cult movie in which a group of scientists led by Christopher Walken develops a machine capable of recording the five senses of a human being and then playing them back into the brain itself.

This new development brings us closer to that goal, which, I have no doubt, will happen at some point. Given the exponential increase in computing power and our understanding of human biology, I think this will arrive sooner than most mortals expect. Perhaps one day you will be able to go to sleep wearing a flexible band labeled Sony Dreamcam around your skull. [UC Berkeley]

'Mind-reading machine' can convert thoughts into speech

A mind reading machine is a step closer to reality after scientists discovered a way of translating people's thoughts into words.

Researchers have been able to translate brain signals into speech using sensors attached to the surface of the brain for the first time.

The breakthrough, which is up to 90 per cent accurate, offers a way for paralysed patients who cannot speak to communicate, and could eventually lead to machines able to read anyone's thoughts.

"We were beside ourselves with excitement when it started working," said Professor Bradley Greger, a bioengineer at Utah University who led the team of researchers.

"It was just one of the moments when everything came together.

"We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralysed patients who cannot now speak.

"I would call it brain reading and we hope that in two or three years it will be available for use for paralysed patients."

The experimental breakthrough came when the team attached two button-sized grids of 16 tiny electrodes each to the speech centres of the brain of an epileptic patient. The sensors sat on the surface of the brain; the patient had had part of his skull removed for another operation to treat his condition.

Using the electrodes, the scientists recorded brain signals in a computer as the patient repeatedly read each of 10 words that might be useful to a paralysed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Then they got him to repeat the words to the computer, and it was able to match the brain signals for each word 76 per cent to 90 per cent of the time. The computer picked up the patient's brain waves as he talked and did not use any voice recognition software.
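The matching described above can be sketched as template classification: average the electrode recordings from repeated readings of each word into a per-word template, then label a new recording by its nearest template. Everything here—channel counts, signal patterns, noise levels—is an invented toy; the study's actual signal processing is assumed to be far richer:

```python
import numpy as np

# Hypothetical sketch: nearest-template classification of the study's 10 words
# from simulated electrode-grid patterns.
rng = np.random.default_rng(2)
WORDS = ["yes", "no", "hot", "cold", "hungry",
         "thirsty", "hello", "goodbye", "more", "less"]
n_channels = 32          # two 16-electrode grids
n_repeats = 20           # repeated readings of each word

# Pretend each word evokes a characteristic (noisy) pattern across electrodes.
true_patterns = {w: rng.normal(size=n_channels) for w in WORDS}
recordings = {w: true_patterns[w] + 0.5 * rng.normal(size=(n_repeats, n_channels))
              for w in WORDS}

# Average the repeats into one template per word.
templates = {w: r.mean(axis=0) for w, r in recordings.items()}

def classify(signal, templates):
    """Return the word whose template is closest to the recorded signal."""
    return min(templates, key=lambda w: np.linalg.norm(signal - templates[w]))

trial = true_patterns["hello"] + 0.5 * rng.normal(size=n_channels)
print(classify(trial, templates))
```

With only ten candidate words the templates are easy to tell apart, which is consistent with the reported 76–90 per cent accuracy; scaling to a full vocabulary is the hard part the researchers say still needs work.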

Because just thinking a word – and not saying it – is thought to produce the same brain signals, Prof Greger and his team believe that they will soon be able to build a translation device and voice box that repeats the word you are thinking.

What is more, the brains of people who are paralysed are often healthy and produce the same signals as those of able-bodied people – it is just that injury blocks the signals from reaching the muscles.

The researchers said the method needs improvement, but could lead in a few years to clinical trials on paralysed people who cannot speak due to so-called "locked-in" syndrome.

“This is proof of concept,” Prof Greger said. “We’ve proven these signals can tell you what the person is saying well above chance.

"But we need to be able to do more words with more accuracy before it is something a patient really might find useful.”

People who eventually could benefit from a wireless device that converts thoughts into computer-spoken words include those paralysed by stroke, disease and injury, Prof Greger said.

People who are now “locked in” often communicate with any movement they can make – blinking an eye or moving a hand slightly – to arduously pick letters or words from a list.

The new device would allow them freedom to speak on their own.

"Even if we can just get them 30 or 40 words that could really give them so much better quality of life," said Prof Greger.

“It doesn’t mean the problem is completely solved and we can all go home. It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate.”

The study, published in the Journal of Neural Engineering, used a new kind of nonpenetrating microelectrode that sits on the brain without poking into it.

The first was attached to the face motor cortex, which controls facial movement and is on the top left hand side of the brain.

The second was attached to Wernicke's area, a region just above the left ear that acts as a sort of language translator for the brain.

Because the microelectrodes do not penetrate brain matter, they are considered safe to place on speech areas of the brain – something that cannot be done with penetrating electrodes that have been used in experimental devices to help paralysed people control a computer cursor or an artificial arm.

The researchers were most accurate – 85 per cent – in distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex.

They were less accurate – 76 per cent – when using signals from Wernicke’s area.

This material is published under Creative Commons Fair Use Copyright (unless an individual item is declared otherwise by copyright holder) – reproduction for non-profit use is permitted & encouraged, if you give attribution to the work & author - and please include a (preferably active) link to the original along with this notice. Feel free to make non-commercial hard (printed) or software copies or mirror sites - you never know how long something will stay glued to the web – but remember attribution! If you like what you see, please send a tiny donation or leave a comment – and thanks for reading this far…

Notes on new emerging paradigms from the NEXUS New Times Magazine Founder R. Ayana, who lives in a remote Australian rainforest (and is no longer involved with the magazine) - Catching drops from the deluge in a paper cup since 1984.
