“Pay attention!” That is often very good advice, but sometimes the advice is hard to obey. The brain’s limited attentional resources can be overwhelmed when attention has to be distributed among multiple objects. And the challenge is even greater when the objects are moving. For example, imagine that you’re driving on Route 128 at rush hour. You must attend not only to your own car’s path, but also to the whims and surprising behaviors of the cars all around you. Working in my lab, Heather Sternshein and Yigal Agam, two PhD students in the Neuroscience Program, developed a novel electroencephalographic (EEG) technique to study how selective attention is apportioned in a task that can be described as “Route 128 on Steroids.” We were especially interested in the neural correlates of failures of attention, the kind of failure that, on the road, might have serious consequences.

Subjects in our experiment watched as ten identical black discs moved about randomly on a computer display for eight seconds. The hard part was to keep track, the entire time, of particular pre-designated target discs: either three, four, or five of them. Because all ten moving discs were identical, there were no physical features to distinguish target from non-target discs. At the end of the eight seconds, all discs came to a standstill, and the subject tried to identify the discs that he or she had been tracking. The task required attentive tracking of a subset of identical moving objects, something even more challenging than navigating Route 128 at rush hour.

Every once in a while during the eight-second tracking period, one of the ten discs flashed brightly for 100 msec. Sometimes the flashed disc was a target disc, that is, one the subject was trying to track; sometimes it was a non-target disc, that is, one the subject could ignore. The flash evoked a response in the subject’s brain, and our EEG system picked up that response from the subject’s scalp. Knowing that the evoked response would be larger if the flash were delivered to an object that was being attended, we used responses to targets and non-targets as an index of how attention was distributed among the multiple moving objects. We focused our analysis on electrodes located over the occipital and parietal lobes, toward the brain’s posterior.

As expected, the relative sizes of responses to the two kinds of stimuli differed: on average, flashes on target discs evoked larger responses than flashes on non-target discs. This difference confirmed that, on average, subjects were paying more attention to targets than to non-targets. But as the number of discs that had to be tracked increased, from three to four to five, subjects found the task increasingly hard, and they made more errors when they had to identify the discs that they had been trying to track. The EEG revealed the neural correlate of these failures of attention: the difference between evoked responses to flashed targets and flashed non-targets decreased as the number of targets increased. This shrinking difference between the two sets of neural responses could explain the systematic increase in errors as the number of targets increased. As additional items must be tracked, it becomes harder for subjects to apportion attentional resources in a way that preserves a sufficient advantage for targets over non-targets. As a result, subjects make more errors, mistaking non-targets for targets.
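The logic of this difference-wave analysis can be sketched in a few lines of Python. What follows is an illustrative simulation with made-up numbers, not our analysis code: the evoked-response shape, the noise level, and the attentional “gains” assigned to each tracking load are all assumptions chosen only to mimic the qualitative pattern (a target advantage that shrinks as load grows).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical canonical evoked-response shape: a Gaussian bump.
n_trials, n_samples = 100, 250
t = np.arange(n_samples)
erp_shape = np.exp(-((t - 100) ** 2) / 500)

def evoked_amplitude(trials):
    """Average single-trial responses into an ERP and return its peak amplitude."""
    return np.mean(trials, axis=0).max()

def attention_index(target_trials, nontarget_trials):
    """Evoked response to flashed targets minus that to flashed non-targets.
    A larger difference means a larger attentional advantage for targets."""
    return evoked_amplitude(target_trials) - evoked_amplitude(nontarget_trials)

def simulate(gain):
    """Simulate noisy trials whose evoked response is scaled by attentional gain."""
    noise = rng.normal(0.0, 0.5, (n_trials, n_samples))
    return gain * erp_shape + noise

# Assumed gains: as tracking load rises from 3 to 5 targets, the per-target
# gain shrinks toward the non-target baseline (gain = 1), so the target
# vs. non-target difference wave shrinks too.
for load, target_gain in [(3, 2.0), (4, 1.5), (5, 1.1)]:
    idx = attention_index(simulate(target_gain), simulate(1.0))
    print(f"{load} targets: attention index = {idx:.2f}")
```

Running the sketch prints a difference score that declines with tracking load, the same qualitative signature the EEG showed.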

We plan to adapt this basic experimental strategy to study the neural basis of attention in various groups whose performance on our task is likely to be abnormal: older adults (who show impaired behavioral performance) and habitual video-game players (who show far better than normal performance). Yigal Agam is now at MGH’s Martinos Center; Heather Sternshein is in the Department of Neurobiology, Harvard University.

The Center of Excellence for Learning in Education, Science and Technology (CELEST) is holding a workshop on its cross-functional initiative, Dynamic Coding in Neural Signals, at Boston University (677 Beacon Street, Room B02) on July 29, 2011, from 1:00 to 5:45 pm. The workshop is free and open to the public. Invited speakers will give talks from 1:00 to 4:15, including presentations by Don Katz (“Perceptual processing via coherent sequences of ensemble states”) and Paul Miller (“Stochastic transitions between discrete states in models of taste processing and decision-making”). A student and postdoc poster session will follow, with ample opportunity for discussion between presenters and workshop attendees.

CELEST is a joint venture of scientists at four Boston-area universities, including Brandeis, and is sponsored by the National Science Foundation. Robert Sekuler, Louis and Frances Salvage Professor of Psychology at Brandeis, is a co-Principal Investigator, and Biology and Neuroscience faculty Gina Turrigiano and Paul Miller are also involved in the center.

Have you ever heard the phrase ‘the eyes are a window to the soul’? New research from Dr. Robert Sekuler’s Vision Lab suggests that the eyes may be a window to the brain as well. In an article published in this month’s issue of the Journal of Vision, Neuroscience grad student Jessica Maryott (PhD ’09) and Psychology grad student Abigail Noyce showed that as participants learn, their eye movements change in a way that lets scientists investigate how that learning takes place, specifically in response to unexpected events.

Participants in the study watched as a disk moved on a computer screen in a zig-zag path; they then reproduced its trajectory from memory. Each path was repeated several times, allowing the researchers to examine the learning process as participants became familiar with the pattern and more accurate at reproducing it. Researchers also measured participants’ eye movements as they watched the disk move, and examined learning-related changes in those as well. The results suggest that eye movements reflect the participant’s level of learning by actually predicting where the disk will be going next.

Sometimes, after several repetitions, part of the disk’s path changed without warning to the participant, reversing to go in the opposite direction (a 180-degree change, shown in the green trace in the figure). This caused participants to make a prediction error: the actual motion of the disk no longer matched the pattern they had learned, and their eyes moved in the direction of the expected movement (positive velocity) until they were able to correct the error (the point where the green trace reverses velocity and drops below 0 in the figure). When the pattern appeared again after such a prediction error, participants’ eye movements showed that the error had produced fast ‘one-shot’ learning: participants now expected to see the new version of the path (shown in the blue trace, which goes in the new expected direction, 180 degrees from the old, and thus shows negative velocity). The researchers concluded that unexpected events (like the induced error) have high salience for learning. These results suggest that humans have a cognitive system that monitors how well sensory input matches predictions and responds to errors with sudden, strong learning about the new situation.
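The logic of the green and blue traces can be caricatured in a short toy model. This is a hedged sketch, not the study’s analysis code: the function names, the fixed correction lag, and the unit-velocity values are all invented for illustration.

```python
def velocity_trace(expected_dir, actual_dir, correction_at=5, n=10):
    """Toy anticipatory eye velocity: the eyes start out moving in the
    EXPECTED direction, and only correct to the actual direction after a lag
    (the point where the trace reverses sign)."""
    return [float(expected_dir) if i < correction_at else float(actual_dir)
            for i in range(n)]

def one_shot_update(expected_dir, trace, actual_dir):
    """If any part of the trace disagrees in sign with the actual motion,
    a prediction error occurred; adopt the new direction immediately
    ('one-shot' learning)."""
    error = any(v * actual_dir < 0 for v in trace)
    return actual_dir if error else expected_dir

expected, actual = +1, -1                    # path segment reversed: 180 degrees
green = velocity_trace(expected, actual)     # error trial: starts +1, corrects to -1
expected = one_shot_update(expected, green, actual)
blue = velocity_trace(expected, actual)      # next repetition: all -1 from the start

print(green)
print(blue)
```

A single mismatched trial is enough to flip the model’s expectation, mirroring the sudden, strong learning the eye-movement data revealed.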

A recent paper in NeuroImage by Brandeis Neuroscience Ph.D. program alumnus Yigal Agam, Professor Robert Sekuler, and coworkers asks when the brain first registers that something has been seen before. To identify the earliest neural signs of recognition memory, they used event-related potentials collected from human observers engaged in a visual short-term memory task. Their results point to an initial feed-forward interaction that underlies comparisons between what is currently being seen and what has been stored in memory. The locus of these earliest recognition-related potentials is consistent with the idea that visual areas of the brain contribute to temporary storage of visual information for use in ongoing tasks. This study provides a first look at the early neural activity that supports the processing of visual information during short-term memory.
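The kind of comparison the paper describes, a single feed-forward pass matching the current stimulus against a stored trace, can be caricatured in a few lines. This is a hedged illustration, not the authors’ model: the feature vectors, the similarity measure, and the threshold are all invented for the example.

```python
import math

def similarity(seen, stored):
    """Cosine similarity between the current stimulus and the stored memory trace."""
    dot = sum(a * b for a, b in zip(seen, stored))
    norm = math.sqrt(sum(a * a for a in seen)) * math.sqrt(sum(b * b for b in stored))
    return dot / norm

def recognize(seen, stored, threshold=0.9):
    """Feed-forward comparison: one pass, no iteration. The stimulus is
    declared 'old' as soon as its similarity to the stored trace is high."""
    return similarity(seen, stored) >= threshold

stored = [0.2, 0.9, 0.4]                      # hypothetical memory trace
print(recognize([0.2, 0.9, 0.4], stored))     # same item: recognized (True)
print(recognize([0.9, 0.1, 0.1], stored))     # novel item: not recognized (False)
```

The point of the caricature is only that recognition can fall out of a single comparison between input and store, without any iterative search, which is the flavor of the feed-forward interaction the ERP data suggest.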