What’s the News: Humans are eerily good at sifting the visual wheat from the chaff—just think of our penchant for word searches, Easter egg hunts, and lushly animated first-person shooters.

But how good are we really? To test the limits of these abilities, in a recent study neuroscientists gave subjects extremely difficult, high-speed Where’s Waldo-type search tasks studded with red herrings. But again and again, subjects found what they were looking for, leading the team to report that humans operate at a near-optimal level when it comes to visual searches—a skill that likely came in handy in our evolutionary history.

How the Heck:

For a fraction of a second, a cluster of short lines randomly colored gray or black and set at various angles, called “distracters,” flashed before subjects’ eyes. Half of the time, a single line whose orientation didn’t change across images was hidden among them, and subjects indicated whether this target had appeared.

Even with the images whipping by at high speed and the complicating effect of color, subjects still detected the target at close to the best possible success rate, a benchmark defined by probability theory that takes into account how heavily an observer should weight each piece of available information. “An optimal observer weights more reliable pieces of sensory evidence more heavily when making a perceptual judgment,” the researchers write in the paper. “For example, when two noisy sensory cues about a single underlying stimulus have to be combined, an optimal observer assigns higher weight to the cue that, on that trial, is most reliable.” In this study, the angle of the lines was a reliable characteristic (noticing it helped subjects determine whether the target line was present), while color was not.
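The reliability weighting the researchers describe can be illustrated with a toy example. The sketch below is my own illustration, not the model from the paper (the function name and numbers are invented); it combines two noisy cues about one stimulus by inverse-variance weighting, the textbook-optimal rule when the noise is Gaussian:

```python
def combine_cues(cue_a, var_a, cue_b, var_b):
    """Optimally combine two noisy cues about a single stimulus:
    each cue is weighted by its reliability (1 / variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    return w_a * cue_a + w_b * cue_b

# Cue A (variance 1.0) is more reliable than cue B (variance 4.0),
# so it gets 80% of the weight and the estimate lands closer to it.
estimate = combine_cues(cue_a=10.0, var_a=1.0, cue_b=20.0, var_b=4.0)
print(estimate)  # 12.0
```

Because cue A’s variance is a quarter of cue B’s, it receives four times the weight, pulling the combined estimate toward the more reliable cue rather than simply averaging the two.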

“We found that even in this complex task, people came close to being optimal in detecting the target,” the lead researcher said in a press release. “That means that humans can in a split second integrate information across space while taking into account the reliability of that information. That is important in our daily lives.”

The team thinks people use groups of networked neurons to perform this breathtakingly quick analysis, and they built a model neural network to show how it could happen.

What’s the Context: The team is interested in whether humans, on the neural level, use a strategy called Bayesian inference to figure out whether a target object is present. They incorporated information that wasn’t a reliable indicator of the target’s presence—color—into the tests to see how people dealt with it, a key factor in Bayesian inference.

The Future Holds: The next step is to up the difficulty of the test and see at what level this preternatural ability to see the target fails. This will give scientists more clues about how visual perception operates on the neural level.

Interesting stuff, but are we any better than other animals with similar visual systems?

http://neuro.bcm.edu/malab/ Wei Ji Ma

Just to clarify: the participants in our experiments made plenty of mistakes. Being optimal doesn’t mean being perfect – this is a common misunderstanding. It means using the best possible strategy given the noisy information available to you. The best possible strategy is to evaluate on each trial the probability that the target object is present (this probability is different on each trial). If it is larger than 50%, you respond “yes.” If humans were not able to accurately and instantly compute this probability for each new screen, they would not behave optimally. We find that they do under a wide range of conditions.
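The decision rule described above (compute the probability that the target is present, answer “yes” when it exceeds 50%) can be sketched for a simplified version of the task. This is an illustrative sketch, not the model from the paper: it assumes Gaussian orientation noise, distracter orientations uniform over 180°, and a target equally likely at any location; all parameter values are invented.

```python
import math

def p_target_present(measurements, target_ori, sigma, prior=0.5):
    """Posterior probability that a target line of orientation target_ori
    (degrees) is present among the noisy orientation measurements.
    Assumes Gaussian measurement noise (std sigma) and distracter
    orientations drawn uniformly from 180 degrees."""
    uniform = 1.0 / 180.0  # density of a distracter orientation

    def gauss(x):
        z = (x - target_ori) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

    # The target is equally likely at each location, so average the
    # likelihood ratios "target here" vs. "distracter here".
    lr = sum(gauss(m) / uniform for m in measurements) / len(measurements)
    odds = lr * prior / (1 - prior)
    return odds / (1 + odds)

def respond(measurements, target_ori, sigma):
    """The optimal yes/no rule: say 'yes' when the posterior exceeds 50%."""
    return "yes" if p_target_present(measurements, target_ori, sigma) > 0.5 else "no"
```

A measurement landing right on the target orientation drives the posterior well above 50%, while measurements far from it drive it toward zero. Note that the observer never answers with certainty, only with the best possible guess given the noise.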

Veronique Greenwood

@Wei Ji Ma, thanks for chiming in! Yes, I knew what you meant by “optimal”–your paper explained it well–but I’ve just tweaked the text in response to your comment. Hope it’s clearer now.

Ryan

“Optimal” seems a suboptimal word to use for this article. After reading Wei Ji Ma’s comment, I still don’t understand how using the word “optimal” makes sense. How is this even a task that can be defined in terms of an “optimal” result? It seems to me there is no limit to the potential ability to identify visual patterns. The complexity of a pattern can increase ad infinitum. A better visual recognition system will be able to parse out more complex patterns with more fidelity than one that isn’t so good. Please rebut if I’m not right here. I was under the impression that the point of having a big/well-working brain was that more patterns (visual or otherwise) can be logged and accessed for better prediction/identification as brain size/effectiveness/efficiency increases.

Ryan

Additionally, having good eyes that can send clear signals to the brain is also a huge factor. How could you call someone with myopia or far-sightedness close to optimal at recognizing visual patterns when their eyes aren’t a good enough sensor to provide the brain with clear information? Even someone with “good” vision has nowhere near the ability to pick out a mouse in a field like an owl does. Also, haven’t we known for a long time that a healthy brain whittles its circuits toward an “optimal” pattern for completing a given task?

Paul

How may knowing this researched fact affect our lives?

http://neuro.bcm.edu/malab/ Shaiyan

@Ryan, The “optimal” in optimal inference does not refer to a human’s absolute ability to perform in this task, only to how well he/she can incorporate the observations he/she has received. Imagine I ask you to play a game of Plinko. Except, because I am a neuroscientist, I have twisted the goal of the game to make it a bit more interesting. Rather than ask you to drop disks into the Plinko board, I drop them all from one particular slot at the top before you arrive. I call you in, and your job is to determine which location I dropped them from using their ending locations at the bottom.
Now, due to the inherent randomness in how the disks fall, you will never be able to tell exactly where I dropped them from. You will be able to make a good guess, however, because Plinko disks tend to end up more or less in a binomial distribution centered around the location I dropped them from. If you are aware of this distribution (or I train you to learn it), there is a unique, mathematically well defined way to determine the most likely location from which I dropped the disks.
Due to the noise in how the disks land, you will never be perfect in this task. There is, however, a best way to do this task, and that is the solution I told you about. If you know how the disk locations are determined (the binomial distribution), the best you can do is to use optimal inference. This is what the authors think the brain is doing: it represents the distribution from which the objects you see are drawn (a binomial distribution in this example), combines this distribution with the observations (disk locations), and makes the best possible guess it can.
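The Plinko example can be made concrete. The sketch below is an illustration under the comment’s assumptions, with invented parameters: a symmetric random walk over 10 peg rows stands in for the disk physics, so landing spots follow a binomial distribution, and the drop slot is recovered by maximum likelihood.

```python
import math
import random

N_ROWS = 10  # peg rows; each row bumps the disk one step left or right

def drop_disk(start_slot):
    """Simulate one disk: a symmetric random walk from the drop slot."""
    return start_slot + sum(random.choice((-1, 1)) for _ in range(N_ROWS))

def likelihood(start_slot, final_pos):
    """Binomial probability that a disk dropped at start_slot lands at final_pos."""
    shift = final_pos - start_slot + N_ROWS
    if shift % 2:               # parity mismatch: unreachable position
        return 0.0
    rights = shift // 2         # number of rightward bumps
    if not 0 <= rights <= N_ROWS:
        return 0.0
    return math.comb(N_ROWS, rights) * 0.5 ** N_ROWS

def best_guess(final_positions, candidate_slots):
    """Maximum-likelihood drop slot (log-likelihoods avoid underflow)."""
    def log_like(start):
        total = 0.0
        for pos in final_positions:
            p = likelihood(start, pos)
            if p == 0.0:
                return float("-inf")
            total += math.log(p)
        return total
    return max(candidate_slots, key=log_like)

random.seed(0)
drops = [drop_disk(start_slot=5) for _ in range(200)]
print(best_guess(drops, range(0, 11)))  # almost always recovers slot 5
```

Even with 200 disks the guess is only probably right, never guaranteed: that gap between “best possible” and “perfect” is exactly the distinction being drawn here.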

Now to specifically answer your questions:

(1) Imagine I asked you to play this game, but rather than letting you see exactly where the disks are, I allowed you to look at them through foggy air and for only a fraction of a second. Now, not only is there randomness in how the disks actually land, but due to the external noise, there is also noise in where you think the disks are. I would not expect you (or any other creature) to do as well in this task as with good viewing conditions. Despite this handicap, if you know how the fog affects your vision (how it affects the distribution of light hitting your eyes), there is still a best, or optimal, way of doing the task. As long as you know the distributions from which your observations come, this method is well defined.

(2) It is true that if you could simulate the exact physics of how the disks bounced around (and I don’t think anyone can), you could get a nearly perfect estimate, but there will always be some noise in your simulation and you will have to make an estimate at some point. With a bigger brain you could possibly represent the distribution of final disk locations better than with a binomial distribution. The experiment mentioned in this article, however, controls very closely the distributions from which the images on the screen are drawn, so that the optimal strategy can be exactly determined. Having a bigger brain would not necessarily help in the task, since all it takes to do the optimal computation is knowing the distribution of the images and the distribution of noise in the observations. The experiment does indeed find that humans, regardless of brain size or visual ability, are able to do these computations (or something very close to them) in order to make an optimal response.

@Paul, There are theoretical as well as practical implications of human optimality in perception. One goal of neuroscience is to understand how the brain is able to perform the complicated computations required to drive day-to-day behavior. If we observe a creature to behave optimally, we have a good clue that it employs certain specific computations with its brain, and we can peer inside the skull with neurophysiological techniques to see if this is in fact the case. Also, if we observe optimality in vision, there is good motivation to test other senses for the same behavior. Practically, this research has several applications. Neurology and psychiatry, for example, would benefit from a better understanding of brain function. It could be that strokes or lesions in specific parts of the brain cause deficits in such optimal computation, and therefore doctors could predict the effects of certain injuries or diagnose problems by testing the deficits in optimal behavior. Several psychiatric disorders have also been shown to cause specific, observable effects on visual perception, and beg for a better explanation of mechanism.
Artificial vision also stands to benefit from this research. Understanding how humans see the world will help the design of image recognition software. Humans have the amazing ability to understand the content of complicated, noisy pictures viewed very briefly, a task in which computer algorithms still require heavy development. It would be easy for you or me to look at a picture and identify what parts of the picture belong to what object. Even the most complex computer models, however, have large problems when dealing with mildly complicated images (imagine a dog seen through the gaps in a fence). With a better model one could write software to, for example, automatically tag images online with their contents or more accurately detect tumors in CT scans.