Rachel Wu - Welcome

Jeff Elman - Opening Statements

There are extensive literatures on attention and on learning in both human and non-human animals, but surprisingly little research has addressed how the two interact. This separation is misleading, even as a first approximation, because attention enables learning and learning guides attention in an information-rich environment. This fundamental interaction between attention and learning is most evident in human infants, who must actively select what to learn when facing intractable levels of informational complexity. I will present studies from two streams of research showing both sides of the interaction in infants, as well as in adults placed in infant-like learning situations (i.e., without many explicit task demands). Studying the attention and learning strategies of infants, arguably the best learners, not only helps us understand them, but also provides insights into lifelong learning.

Selective attention has been subject to intensive investigation, and we have made significant progress in elucidating its neural mechanisms. However, most of these investigations focus on attentional modulations of sensory perception, and less so on the goal of attention. Why do we direct attention in the context of a task, and how do we decide when, and to what, to attend? Working in the system of spatial attention and eye movement control, I propose that attention (and specifically, eye movements) is an information-seeking mechanism whose goal is to select accurate predictors and reduce uncertainty for subsequent actions (e.g., looking at the traffic light at an intersection). This implies that oculomotor decisions are guided by a metric of the validity (or accuracy) of sensory cues. I discuss new results from our laboratory regarding the coding of cue validity in the lateral intraparietal area and its relation to reward and uncertainty in a given task.

Nathaniel Daw - Multiple learning systems, automaticity and attention

Nathaniel Daw
New York University

Much evidence supports the idea that the brain contains multiple dissociable systems for decision making. We have argued that this dissociation can be quantitatively understood in terms of two distinct computational algorithms for learning to choose advantageous actions, known as model-based and model-free reinforcement learning (RL). But since this theory mainly originates in the computational neuroscience and behavioral psychology of animal learning, it is less clear how this dichotomy relates to other seemingly related notions of automatic vs. deliberative control from human cognitive neuroscience. It is also unclear how the brain arbitrates between the two controllers. I report experiments suggesting that model-free learning indeed dominates when automaticity is expected -- e.g., under dual-task interference and following acute stress -- while model-based learning is strongest when subjects engage more attentive cognitive control.
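The model-free/model-based distinction the abstract refers to can be illustrated with a minimal sketch (not from the talk itself): a model-free learner caches action values directly from experienced rewards, while a model-based learner consults an explicit model of outcomes to evaluate actions. The toy environment, parameter values, and function names below are all illustrative assumptions.

```python
# Illustrative sketch of model-free vs. model-based action evaluation
# on a one-state, two-action toy problem. Values are assumptions.

ALPHA = 0.5  # learning rate for the model-free update
REWARD = {("s0", "left"): 0.0, ("s0", "right"): 1.0}  # true payoffs

def model_free_update(q, state, action, reward):
    """Cache action values incrementally from sampled reward (TD-style)."""
    key = (state, action)
    q[key] = q.get(key, 0.0) + ALPHA * (reward - q.get(key, 0.0))
    return q

def model_based_value(model, state, action):
    """Evaluate an action by consulting a learned outcome model."""
    return model.get((state, action), 0.0)

# Model-free agent: learns only from repeated direct experience.
q = {}
for _ in range(20):
    for action in ("left", "right"):
        q = model_free_update(q, "s0", action, REWARD[("s0", action)])

# Model-based agent: plans with an explicit model of the task.
model = dict(REWARD)

best_mf = max(("left", "right"), key=lambda a: q[("s0", a)])
best_mb = max(("left", "right"), key=lambda a: model_based_value(model, "s0", a))
print(best_mf, best_mb)  # both settle on "right"
```

The point of the dichotomy is not the final choice (both agents converge here) but how they respond to change: if the payoffs in the model are revised, the model-based agent adjusts immediately, whereas the model-free agent must re-experience rewards to overwrite its cached values, which is why it behaves more "automatically."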

Aaron Seitz - How attention and reinforcement guide learning

Aaron Seitz
University of California, Riverside

Both attention and reinforcement are known to guide learning; however, it is difficult to disentangle their individual roles in the learning process. Here, I present a series of studies using the framework of task-irrelevant learning, in which we dissociate reinforcement and directed attentional signals by examining how attention and reinforcement related to one task guide learning on a secondary, independent task. I suggest that reinforcement up-regulates learning nonspecifically (benefiting even task-irrelevant stimuli), whereas directed attention is selective and regulates stimulus signals according to behavioral goals (i.e., it up-regulates stimuli of interest and down-regulates distracting stimuli). This simple dissociation between directed attention and reinforcement both fits within existing literatures and explains otherwise counterintuitive findings.