sciencehabit writes "Scientists have developed an algorithm that converts simple grayscale images into musical soundscapes. Even people blind from birth can use the technology to 'see' their surroundings and navigate around a room. Equally intriguing, the part of the subject's brain responsible for vision was active during these tasks, suggesting our thinking about how the brain works may be wrong. Instead of a 'vision center' of the brain, for example, we may actually have a region that helps us 'see', whether that input comes from sight or sound."
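The summary doesn't describe the algorithm, but image-to-sound substitution schemes of this kind (like Meijer's vOICe) typically scan a grayscale image left to right, mapping row position to pitch and pixel brightness to loudness. Here's a rough sketch of that general idea — my own toy version, not the researchers' actual algorithm:

```python
import numpy as np

def image_to_soundscape(img, duration=1.0, rate=22050, f_lo=200.0, f_hi=4000.0):
    """Sketch of a vOICe-style mapping (a guess at the general scheme, not
    the authors' exact algorithm): scan a grayscale image left to right,
    turning each column into a short chord where row position sets pitch
    (top row = highest) and pixel brightness sets loudness."""
    h, w = img.shape
    col_len = int(duration * rate / w)           # samples per column
    t = np.arange(col_len) / rate
    freqs = np.geomspace(f_hi, f_lo, h)          # top row -> highest pitch
    out = np.zeros(w * col_len)
    for x in range(w):
        col = img[:, x].astype(float) / 255.0    # brightness -> amplitude
        chord = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out[x * col_len:(x + 1) * col_len] = chord
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out       # normalize to [-1, 1]
```

With this mapping, a bright diagonal line from top-left to bottom-right comes out as a tone sweeping downward in pitch over the duration of the scan — the kind of regularity a listener can learn to decode.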

I am not sure if "grayscale" is the most useful information to a blind person. A few years ago I tried on some ultra-sound goggles designed for the blind. The cool thing about it was that, with practice, by listening to the pulses, I could not only tell how close an object was, but also how rigid or dense it was. A pillow would sound very different from a rock. Just by listening, I could "look" at two soda cans on the shelf, and tell which one was empty. Of course, it gave no information about color.
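For what it's worth, the distance part of those goggles is simple physics: range follows from the round-trip time of an ultrasound ping, and (as the poster noticed) a soft target like a pillow returns a weaker, duller echo than a rock. A toy model of the ranging step — my own illustration, not the actual device's firmware:

```python
# Pulse-echo ranging: distance = speed_of_sound * round_trip_time / 2.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(round_trip_s):
    """Distance to the reflector, from the echo's round-trip time in seconds."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# A ping that returns after 5.8 ms bounced off something about 1 m away:
# echo_distance(0.0058) -> ~0.995 m
```

The material information (rigid vs. soft, full vs. empty can) would come from the echo's amplitude and spectral shape rather than its timing, which this sketch doesn't model.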

Except that video doesn't prove anything of that sort. It merely shows that the participant has learned to recognize specific sound patterns. For all we know from the video, he may have been told the following:

First sound pattern playing

Interviewer: This is John. He is bald and has black eyes and a goatee.

Second sound pattern playing

Interviewer: This is Lisa....

...

...as a result of which, the participant is able to recognize the pattern and recite what he has learned.

Vision works in the following way. Take a stereoscopic picture. The two images give you depth information. You can use edge detection algorithms to determine what pixels belong together as an object (segmentation) and reconstruct a cardboard-cutout view of the world. From each 2D cutout figure, the brain finds the closest matching known 3D object and constructs an internal 3D representation of the scene with information consisting of two things: the object, and its orientation relative to the person (distance, scale, rotation).
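The depth step of that pipeline fits in one line: for a rectified stereo pair, depth = focal_length * baseline / disparity. A minimal sketch (the camera parameters below are made up for illustration):

```python
# Stereo triangulation: a point that lands `disparity_px` pixels apart in the
# two rectified views is at depth f * B / d. Larger disparity = closer object.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Triangulated depth in metres for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# disparity of 42 px with a 6 cm baseline and f = 700 px -> depth = 1.0 m
```

Segmentation and 3D model matching are the hard parts, of course — this is just the geometric piece that gives the "distance" in the poster's (distance, scale, rotation) triple.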

I get that, I work with 3D point clouds and stereo and RGBD sensors every day.

Now, you could replace the stereoscopic picture with the sound input. Then the brain makes a closest match between the type of sound and known objects. Another method is to place an electrode grid on the tongue, and a similar form of vision becomes possible.
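That "closest match" step is basically nearest-neighbour classification. Purely as an illustration (the feature vectors and object names here are invented): extract some features from the incoming sound pattern and pick the stored prototype it is nearest to.

```python
import math

# Toy version of the "closest match" idea: each known object has a stored
# feature prototype; an incoming pattern is labelled with the nearest one.
PROTOTYPES = {
    "chair":  [0.9, 0.2, 0.4],
    "person": [0.3, 0.8, 0.6],
    "wall":   [0.1, 0.1, 0.9],
}

def closest_object(features):
    """Return the name of the prototype nearest (Euclidean) to `features`."""
    return min(PROTOTYPES, key=lambda name: math.dist(features, PROTOTYPES[name]))

# closest_object([0.85, 0.25, 0.35]) -> "chair"
```

Whether the brain literally does anything like this is exactly what the grandparent post is questioning, so treat this as a model of the claim, not evidence for it.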

I get that too. That is the idea and the claim, anyway. The point of my post was: the video posted with the ScienceMag article [sciencemag.org] does not prove that this is what happens! The task shown could have been accomplished by memorizing the sounds and the descriptions given by the interviewer. That's still fairly impressive. But from the video, we do not know whether the participant actually does what the authors, and you, say he does.

My daughter is legally blind. She has rod monochromatism, often called achromatopsia (http://www.achromatopsia.info/), and doesn't see any colour (only greyscale).
She seems to think greyscale is quite useful. Now obviously colour would also be useful, but greyscale allows you to 'see' most things. In some states in the USA, people with achromatopsia can drive using bioptic glasses (http://www.biopticdrivingusa.com/achromatopsia/). Of course the rest of the world sees this and thinks 'only in America'.
Anyway, this is interesting, if not entirely new. Other students at my daughter's school (for the visually impaired) also learn echolocation - this is from my daughter's school: http://www.abc.net.au/btn/stor... [abc.net.au] :-)

I'm glad to see a post that positively promotes development in science. In my opinion, "science" has stagnated lately, with "scientists" taking hard-line approaches: they are no longer thinking outside the box, and they force everyone to think inside the box or be ostracized and labeled "stupid". I agree that there is more likely a part of the brain that helps us "see". I believe the data used by that brain center can differ and still produce sight-like results: 1) you see with your eyes the events happening in front of you; 2) you see with your mind when you recall a sequence of events, a situation, or even a dream.

Who says science has stagnated? I look around and I'm seeing rapid advances in science and technology and it's just getting more rapid. The only thing stagnating is people's ability to keep up and comprehend what's really happening.

I think wearing laser range finders around, and feeling pressure on your body that varies with how close you are to objects, could convey even more information. I could be wrong, but I would like some feedback on this reddit post [reddit.com] I made a day or two ago. Can't hurt to discuss this stuff.
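The mapping the poster describes is easy to sketch (all parameters here are invented for illustration): turn each range reading into a haptic intensity that is strongest when an object is close and fades to nothing beyond some cutoff.

```python
# Range reading -> haptic intensity, with a simple linear falloff.
def pressure_level(distance_m, max_range_m=3.0):
    """Map a laser range reading (metres) to a 0..1 pressure intensity."""
    if distance_m <= 0:
        return 1.0          # touching: full pressure
    if distance_m >= max_range_m:
        return 0.0          # beyond the cutoff: nothing
    return 1.0 - distance_m / max_range_m

# pressure_level(0.75) -> 0.75 ; pressure_level(3.0) -> 0.0
```

A real device would probably want a non-linear falloff (more resolution up close), but the linear version shows the idea.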

I have a friend who is working on his PhD in experimental psychology at the moment, and we had a discussion about this 4 or so years ago. Apparently this is the new paradigm of thinking in the neuroscience world, or has been on the verge of becoming the new paradigm for some time now. The brain is just a pattern recognition engine: in much the same way that we are able to wire a monkey's brain to a robotic arm, and over time have that brain adapt to control this new appendage, the brain will take any signal...

I saw an episode of the documentary show Extraordinary People on Ben Underwood, and it was incredible what he was capable of, including riding a bicycle and playing basketball using echolocation. This was without any computer program or aid. Unfortunately, he died in 2009. As far as I know, only a few people possess this skill, but it can apparently be taught. The world expert is probably Daniel Kish. http://en.wikipedia.org/wiki/H... [wikipedia.org]

But they're wrong in a more important way: we've believed for years that the visual cortex is actually a visualization center! It just happens that when we're awake and looking with our eyes, the visualization is constrained by sensory inputs. Sensory (and even association) cortices are basically simulators that contain our best models of the world (what we expect the world to be like, based on prior experience), and the parts of those models that are active are dynamically constrained in real time by sensory...

No different from watching a TV show with lots of flashing lights. My womenfolk can no longer watch some of those talent shows because the program makers completely overdid the lighting effects - they had spotlight patterns moving up and down behind the singer, spinning patterns all over the floor and the spotlight patterns moving sideways to each side of the stage.

Instead of a 'vision center' of the brain, for example, we may actually have a region that helps us 'see', whether that input comes from sight or sound."

How about instead of a 'vision center' you call it a 'spatial awareness center'? Then it fits the bill no matter where that information is coming from, because vision can be broken down into "this is over here" and "that is over there". Now it would be interesting to know if the 'vision center' is active in blind people who use echolocation; that w...