EyeLink Eye Tracking Publications Library

All 7000+ peer-reviewed EyeLink eye tracking publications (up to 2018) are listed below, in alphabetical order by first author. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye tracking paper, please email us!

Do the target-distractor and distractor-distractor similarity relationships known to exist for simple stimuli extend to real-world objects, and are these effects expressed in search guidance or target verification? Parts of photorealistic distractors were replaced with target parts to create four levels of target-distractor similarity under heterogeneous and homogeneous conditions. We found that increasing target-distractor similarity and decreasing distractor-distractor similarity impaired search guidance and target verification, but that target-distractor similarity and heterogeneity/homogeneity interacted only in measures of guidance; distractor homogeneity lessens effects of target-distractor similarity by causing gaze to fixate the target sooner, not by speeding target detection following its fixation.
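
As an editorial illustration of how "guidance" and "verification" measures of this kind are typically derived from fixation data, here is a minimal Python sketch. It is not the authors' analysis code; the fixation record format and the rectangular target region are assumptions.

```python
# Split search RT into "guidance" (time until gaze first lands on the target)
# and "verification" (time from that first target fixation to the button press).
# `fixations` is assumed to be a list of (start_ms, end_ms, x, y) tuples and
# `target_box` an (x0, y0, x1, y1) rectangle in screen coordinates.

def in_box(x, y, box):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def split_rt(fixations, target_box, response_time_ms):
    """Return (guidance_ms, verification_ms), or None if the target was never fixated."""
    for start_ms, end_ms, x, y in fixations:
        if in_box(x, y, target_box):
            return start_ms, response_time_ms - start_ms
    return None
```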

@article{Alexander2014,
title = {Are summary statistics enough? Evidence for the importance of shape in guiding visual search},
author = {Robert G Alexander and Joseph Schmidt and Gregory J Zelinsky},
doi = {10.1080/13506285.2014.890989},
year = {2014},
date = {2014-01-01},
journal = {Visual Cognition},
volume = {22},
number = {3},
pages = {595--609},
abstract = {Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically-matched visualizations of the same targets. Four-object search displays included one statistically-matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically-matched objects were tested: One that maintained global shape and one that did not. Differences in guidance were found between targets and statistically-matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting an extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
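
For readers unfamiliar with gaze-contingent display changes, the sketch below illustrates the core logic of swapping the statistically matched object for the unaltered original once gaze leaves the fixation region, i.e. during the first eye movement. It is a toolkit-agnostic illustration only; `get_gaze`, `show`, and the two display variants are hypothetical placeholders, not an actual EyeLink or experiment-toolkit API.

```python
# Saccade-contingent display change: swap the altered object for the original
# as soon as gaze leaves the central fixation region, so the change completes
# while the first saccade is in flight.
import math
import time

FIX_X, FIX_Y, FIX_RADIUS = 512, 384, 40   # central fixation region (pixels), assumed

def run_trial(get_gaze, show, display_with_synth, display_with_original, timeout_s=5.0):
    show(display_with_synth)                       # statistically matched object on screen
    t0 = time.time()
    while time.time() - t0 < timeout_s:
        x, y = get_gaze()                          # newest gaze sample (placeholder)
        if math.hypot(x - FIX_X, y - FIX_Y) > FIX_RADIUS:
            show(display_with_original)            # swap in the unaltered object mid-saccade
            return True
    return False
```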

@article{Alexander2018,
title = {Occluded information is restored at preview but not during visual search},
author = {Robert G Alexander and Gregory J Zelinsky},
doi = {10.1167/18.11.4},
year = {2018},
date = {2018-01-01},
journal = {Journal of Vision},
volume = {18},
number = {11},
pages = {1--16},
abstract = {Objects often appear with some amount of occlusion. We fill in missing information using local shape features even before attending to those objects—a process called amodal completion. Here we explore the possibility that knowledge about common realistic objects can be used to "restore" missing information even in cases where amodal completion is not expected. We systematically varied whether visual search targets were occluded or not, both at preview and in search displays. Button-press responses were longest when the preview was unoccluded and the target was occluded in the search display. This pattern is consistent with a target-verification process that uses the features visible at preview but does not restore missing information in the search display. However, visual search guidance was weakest whenever the target was occluded in the search display, regardless of whether it was occluded at preview. This pattern suggests that information missing during the preview was restored and used to guide search, thereby resulting in a feature mismatch and poor guidance. If this process were preattentive, as with amodal completion, we should have found roughly equivalent search guidance across all conditions because the target would always be unoccluded or restored, resulting in no mismatch. We conclude that realistic objects are restored behind occluders during search target preview, even in situations not prone to amodal completion, and this restoration does not occur preattentively during search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alink2012,
title = {Auditory motion direction encoding in auditory cortex and high-level visual cortex},
author = {Arjen Alink and Felix Euler and Nikolaus Kriegeskorte and Wolf Singer and Axel Kohler},
doi = {10.1002/hbm.21263},
year = {2012},
date = {2012-01-01},
journal = {Human Brain Mapping},
volume = {33},
number = {4},
pages = {969--978},
abstract = {The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
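
A minimal sketch of a spherical-searchlight decoding analysis of the kind described above, for illustration only; the data shapes, radius, classifier, and cross-validation settings are assumptions, not the authors' pipeline.

```python
# For every voxel, collect the voxels within a small radius and ask how well a
# linear classifier can decode motion direction from that local response pattern.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight(data, labels, coords, radius=3.0):
    """data: (n_trials, n_voxels) response patterns; coords: (n_voxels, 3) voxel
    coordinates; labels: (n_trials,) motion direction (e.g., 0 = left, 1 = right)."""
    accuracy = np.zeros(coords.shape[0])
    for v in range(coords.shape[0]):
        dist = np.linalg.norm(coords - coords[v], axis=1)
        sphere = np.where(dist <= radius)[0]               # local neighbourhood
        scores = cross_val_score(LinearSVC(dual=False), data[:, sphere], labels, cv=5)
        accuracy[v] = scores.mean()
    return accuracy                                         # chance is 0.5 for two directions
```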

@article{Alizadeh2018a,
title = {Caudal Intraparietal Sulcus and three-dimensional vision: A combined functional magnetic resonance imaging and single-cell study},
author = {Amir Mohammad Alizadeh and Ilse {Van Dromme} and Bram Ernst Verhoef and Peter Janssen},
doi = {10.1016/j.neuroimage.2017.10.045},
year = {2018},
date = {2018-01-01},
journal = {NeuroImage},
volume = {166},
pages = {46--59},
publisher = {Elsevier Ltd},
abstract = {The cortical network processing three-dimensional (3D) object structure defined by binocular disparity spans both the ventral and dorsal visual streams. However, very little is known about the neural representation of 3D structure at intermediate levels of the visual hierarchy. Here, we investigated the neural selectivity for 3D surfaces in the macaque Posterior Intraparietal area (PIP) in the medial bank of the caudal intraparietal sulcus (IPS). We first identified a region sensitive to depth-structure information in the medial bank of the caudal IPS using functional Magnetic Resonance Imaging (fMRI), and then recorded single-cell activity within this fMRI activation in the same animals. Most PIP neurons were selective for the 3D orientation of planar surfaces (first-order disparity) at very short latencies, whereas a very small fraction of PIP neurons were selective for curved surfaces (second-order disparity). A linear support vector machine classifier could reliably identify the direction of the disparity gradient in planar and curved surfaces based on the responses of a population of disparity-selective PIP neurons. These results provide the first detailed account of the neuronal properties in area PIP, which occupies an intermediate position in the hierarchy of visual areas involved in processing depth structure from disparity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
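
A minimal sketch of linear support vector machine decoding from a pseudo-population of recorded neurons, of the kind used to identify the direction of the disparity gradient; the synthetic data and array shapes below are assumptions for illustration, not the authors' code.

```python
# Decode a binary stimulus label from trials-by-neurons firing rates with a
# cross-validated linear SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 60
rates = rng.poisson(10, size=(n_trials, n_neurons)).astype(float)   # spikes/s per trial
gradient_direction = rng.integers(0, 2, size=n_trials)              # hypothetical labels

decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(decoder, rates, gradient_direction, cv=10).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")   # ~0.5 for this random data
```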

@article{Alizadeh2018b,
title = {Single-cell responses to three-dimensional structure in a functionally defined patch in macaque area TEO},
author = {Amir-Mohammad Alizadeh and Ilse C {Van Dromme} and Peter Janssen},
doi = {10.1152/jn.00198.2018},
year = {2018},
date = {2018-01-01},
journal = {Journal of Neurophysiology},
volume = {120},
number = {6},
pages = {2806--2818},
abstract = {Both dorsal and ventral visual pathways harbor several areas sensitive to gradients of binocular disparity (i.e., higher-order disparity). Although a wealth of information exists about disparity processing in early visual (V1, V2, and V3) and end-stage areas, TE in the ventral stream, and the anterior intraparietal area (AIP) in the dorsal stream, little is known about midlevel area TEO in the ventral pathway. We recorded single-unit responses to disparity-defined curved stimuli in a functional magnetic resonance imaging (fMRI) activation elicited by curved surfaces compared with flat surfaces in the macaque area TEO. This fMRI activation contained a small proportion of disparity-selective neurons, with very few of them second-order disparity selective. Overall, this population of TEO neurons did not preserve its three-dimensional structure selectivity across positions in depth, indicating a lack of higher-order disparity selectivity, but showed stronger responses to flat surfaces than to curved surfaces, as predicted by the fMRI experiment. The receptive fields of the responsive TEO cells were relatively small and generally foveal. A linear support vector machine classifier showed that this population of disparity-selective TEO neurons contains reliable information about the sign of curvature and the position in depth of the stimulus.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Allard2014,
title = {Age-related differences in neural recruitment during the use of cognitive reappraisal and selective attention as emotion regulation strategies},
author = {Eric S Allard and Elizabeth A Kensinger},
doi = {10.3389/fpsyg.2014.00296},
year = {2014},
date = {2014-01-01},
journal = {Frontiers in Psychology},
volume = {5},
pages = {1--10},
abstract = {The present study examined age differences in the timing and neural recruitment within lateral and medial PFC while younger and older adults hedonically regulated their responses to unpleasant film clips. When analyses focused on activity during the emotional peak of the film clip (the most emotionally salient portion of the film), several age differences emerged. When comparing regulation to passive viewing (combined effects of selective attention and reappraisal), younger adults showed greater regulation-related activity in lateral PFC (DLPFC, VLPFC, OFC) and medial PFC (ACC), while older adults showed greater activation within a region of DLPFC. When assessing distinct effects of the regulation conditions, an ANOVA revealed a significant Age x Regulation Condition interaction within bilateral DLPFC and ACC; older adults but not young adults showed greater recruitment within these regions for reappraisal than selective attention. When examining activity at the onset of the film clip and at its emotional peak, the timing of reappraisal-related activity within VLPFC differed between age groups: younger adults showed greater activity at film onset while older adults showed heightened activity during the peak. Our results suggest that older adults rely more heavily on PFC recruitment when engaging cognitively demanding reappraisal strategies while PFC-mediated regulation might not be as task-specific for younger adults. Older adults' greater reliance on cognitive control processing during emotion regulation may also be reflected in the time needed to implement these strategies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Allard2014a,
title = {Age-related differences in functional connectivity during cognitive emotion regulation},
author = {Eric S Allard and Elizabeth A Kensinger},
doi = {10.1093/geronb/gbu108},
year = {2014},
date = {2014-01-01},
journal = {Journals of Gerontology - Series B Psychological Sciences and Social Sciences},
volume = {69},
number = {6},
pages = {852--860},
abstract = {OBJECTIVES: Successful emotion regulation partly depends on our capacity to modulate emotional responses through the use of cognitive strategies. Age may affect the strategies employed most often; thus, we examined younger and older adults' neural network connectivity when employing two different strategies: cognitive reappraisal and selective attention. METHOD: The current study used psychophysiological interaction analyses to examine functional connectivity with a region of anterior cingulate cortex (ACC) because it is a core part of an emotion regulation network showing relative structural preservation with age. RESULTS: Functional connectivity between ACC and prefrontal cortex (PFC) was greater for reappraisal relative to selective attention and passive viewing conditions for both age groups. For younger adults, ACC was more strongly connected with lateral dorsolateral PFC, ventrolateral PFC, dorsomedial PFC, and posterior cingulate regions during reappraisal. For older adults, stronger connectivity during reappraisal was observed primarily in ventromedial PFC and orbitofrontal cortex. DISCUSSION: Our results suggest that although young and older adults engage PFC networks during regulation, and particularly during reappraisal, the regions within these networks might differ. Additionally, these results clarify that, despite prior evidence for age-related declines in the structure and function of those regions, older adults are able to recruit ACC and PFC regions as part of a coherent network during emotion regulation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
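
A simplified sketch of the core of a psychophysiological interaction (PPI) regressor: the interaction term is the mean-centred seed time course multiplied by the mean-centred task regressor, entered into a voxel-wise regression alongside the main effects. Real pipelines add haemodynamic deconvolution and nuisance regressors; the data layout below is an assumption, not the study's implementation.

```python
# Estimate the PPI (connectivity-by-condition) effect for one voxel.
import numpy as np

def ppi_beta(voxel_ts, seed_ts, task_regressor):
    """All inputs are 1-D arrays with one value per fMRI volume."""
    seed = seed_ts - seed_ts.mean()
    task = task_regressor - task_regressor.mean()
    ppi = seed * task                                   # interaction regressor
    X = np.column_stack([np.ones_like(seed), seed, task, ppi])
    betas, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return betas[3]                                     # PPI effect for this voxel
```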

Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states.

@article{Allen2018,
title = {EEG signatures of dynamic functional network connectivity states},
author = {E A Allen and E Damaraju and T Eichele and L Wu and V D Calhoun},
doi = {10.1007/s10548-017-0546-2},
year = {2018},
date = {2018-01-01},
journal = {Brain Topography},
volume = {31},
number = {1},
pages = {101--116},
publisher = {Springer US},
abstract = {The human brain operates by dynamically modulating different neural populations to enable goal directed behavior. The synchrony or lack thereof between different brain regions is thought to correspond to observed functional connectivity dynamics in resting state brain imaging data. In a large sample of healthy human adult subjects and utilizing a sliding windowed correlation method on functional imaging data, earlier we demonstrated the presence of seven distinct functional connectivity states/patterns between different brain networks that reliably occur across time and subjects. Whether these connectivity states correspond to meaningful electrophysiological signatures was not clear. In this study, using a dataset with concurrent EEG and resting state functional imaging data acquired during eyes open and eyes closed states, we demonstrate the replicability of previous findings in an independent sample, and identify EEG spectral signatures associated with these functional network connectivity changes. Eyes open and eyes closed conditions show common and different connectivity patterns that are associated with distinct EEG spectral signatures. Certain connectivity states are more prevalent in the eyes open case and some occur only in eyes closed state. Both conditions exhibit a state of increased thalamo-cortical anticorrelation associated with reduced EEG spectral alpha power and increased delta and theta power possibly reflecting drowsiness. This state occurs more frequently in the eyes closed state. In summary, we find a link between dynamic connectivity in fMRI data and concurrently collected EEG data, including a large effect of vigilance on functional connectivity. As demonstrated with EEG and fMRI, the stationarity of connectivity cannot be assumed, even for relatively short periods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
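
A minimal sketch of the sliding-window approach to dynamic functional network connectivity, with windowed correlation matrices clustered into recurring "states"; the window length, step, and number of states are illustrative assumptions rather than the study's exact parameters, and the published pipeline includes additional preprocessing such as group ICA.

```python
# Windowed correlation matrices are vectorised (upper triangle) and clustered
# with k-means into recurring connectivity states.
import numpy as np
from sklearn.cluster import KMeans

def dynamic_states(ts, window=44, step=1, n_states=7):
    """ts: (n_timepoints, n_networks) network time courses."""
    n_t, n_net = ts.shape
    iu = np.triu_indices(n_net, k=1)
    windows = []
    for start in range(0, n_t - window + 1, step):
        c = np.corrcoef(ts[start:start + window].T)
        windows.append(c[iu])                          # connectivity vector for this window
    windows = np.array(windows)
    labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(windows)
    return windows, labels                             # one state label per window
```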

@article{Allman2010,
title = {Effect of D-amphetamine on inhibition and motor planning as a function of baseline performance.},
author = {Ava-Ann Allman and Chawki Benkelfat and France Durand and Igor Sibon and Alain Dagher and Marco Leyton and Glen B Baker and Gillian A O'Driscoll},
doi = {10.1007/s00213-010-1912-x},
year = {2010},
date = {2010-01-01},
journal = {Psychopharmacology},
volume = {211},
number = {4},
pages = {423--33},
abstract = {RATIONALE: Baseline performance has been reported to predict dopamine (DA) effects on working memory, following an inverted-U pattern. This pattern may hold true for other executive functions that are DA-sensitive. OBJECTIVES: The objective of this study is to investigate the effect of d-amphetamine, an indirect DA agonist, on two other putatively DA-sensitive executive functions, inhibition and motor planning, as a function of baseline performance. METHODS: Participants with no prior stimulant exposure participated in a double-blind crossover study of a single dose of 0.3 mg/kg, p.o. of d-amphetamine and placebo. Participants were divided into high and low groups, based on their performance on the antisaccade and predictive saccade tasks on the baseline day. Executive functions, mood states, heart rate and blood pressure were assessed before (T0) and after drug administration, at 1.5 (T1), 2.5 (T2) and 3.5 h (T3) post-drug. RESULTS: Antisaccade errors decreased with d-amphetamine irrespective of baseline performance (p = 0.025). For antisaccade latency, participants who generated short-latency antisaccades at baseline had longer latencies on d-amphetamine than placebo, while those with long-latency antisaccades at baseline had shorter latencies on d-amphetamine than placebo (drug x group},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Allman2012,
title = {Effects of methylphenidate on basic and higher-order oculomotor functions},
author = {Ava-Ann Allman and Ulrich Ettinger and Ridha Joober and Gillian A O'Driscoll},
doi = {10.1177/0269881112446531},
year = {2012},
date = {2012-01-01},
journal = {Journal of Psychopharmacology},
volume = {26},
number = {11},
pages = {1471--1479},
abstract = {Eye movements are sensitive indicators of pharmacological effects on sensorimotor and cognitive processing. Methylphenidate (MPH) is one of the most prescribed medications in psychiatry. It is increasingly used as a cognitive enhancer by healthy individuals. However, little is known of its effect on healthy cognition. Here we used oculomotor tests to evaluate the effects of MPH on basic oculomotor and executive functions. Twenty-nine males were given 20 mg of MPH orally in a double-blind placebo-controlled crossover design. Participants performed visually-guided saccades, sinusoidal smooth pursuit, predictive saccades and antisaccades one hour post-capsule administration. Heart rate and blood pressure were assessed prior to capsule administration, and again before and after task performance. Visually-guided saccade latency decreased with MPH (p < 0.004). Smooth pursuit gain increased on MPH (p < 0.001) and number of saccades during pursuit decreased (p < 0.001). Proportion of predictive saccades increased on MPH (p < 0.004), specifically in conditions with predictable timing. Peak velocity of predictive saccades increased with MPH (p < 0.01). Antisaccade errors and latency were unaffected. Physiological variables were also unaffected. The effects on visually-guided saccade latency and peak velocity are consistent with MPH effects on dopamine in basal ganglia. The improvements in predictive saccade conditions and smooth pursuit suggest effects on timing functions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
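
One common way to compute sinusoidal smooth pursuit gain is shown in the rough sketch below; it is not necessarily the authors' exact procedure, and the sampling rate and saccade velocity threshold are assumptions.

```python
# Differentiate eye position, discard saccadic samples with a velocity threshold,
# then take the best-fitting ratio of eye velocity to target velocity.
import numpy as np

def pursuit_gain(eye_pos_deg, target_freq_hz, target_amp_deg, fs=1000, sacc_thresh=50.0):
    t = np.arange(eye_pos_deg.size) / fs
    eye_vel = np.gradient(eye_pos_deg, 1.0 / fs)                 # deg/s
    target_vel = (2 * np.pi * target_freq_hz * target_amp_deg
                  * np.cos(2 * np.pi * target_freq_hz * t))      # deg/s
    keep = np.abs(eye_vel) < sacc_thresh                         # crude desaccading
    # least-squares scaling of target velocity onto the desaccaded eye velocity
    gain = np.dot(eye_vel[keep], target_vel[keep]) / np.dot(target_vel[keep], target_vel[keep])
    return gain                                                  # ~1.0 = perfect pursuit
```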

@article{Alotaibi2017,
title = {Cultural differences in attention: Eye movement evidence from a comparative visual search task},
author = {Albandri Alotaibi and Geoffrey Underwood and Alastair D Smith},
doi = {10.1016/j.concog.2017.09.002},
year = {2017},
date = {2017-01-01},
journal = {Consciousness and Cognition},
volume = {55},
pages = {254--265},
abstract = {Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
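
One common way to quantify scan-path similarity is a string-edit distance over area-of-interest sequences; the abstract does not specify the algorithm used in this study, so the sketch below is purely illustrative.

```python
# Code each fixation by the area of interest (AOI) it lands in and compute a
# normalised Levenshtein similarity between two AOI sequences.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(seq1, seq2):
    """seq1, seq2: strings of AOI labels, e.g. 'ABBCD' and 'ABCCD'; 1.0 = identical."""
    if not seq1 and not seq2:
        return 1.0
    return 1.0 - levenshtein(seq1, seq2) / max(len(seq1), len(seq2))
```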

@article{Alsadoon2015,
title = {Textual input enhancement for vowel blindness: A study with Arabic ESL learners},
author = {Reem Alsadoon and Trude Heift},
doi = {10.1111/modl.12188},
year = {2015},
date = {2015-01-01},
journal = {The Modern Language Journal},
volume = {99},
number = {1},
pages = {57--79},
abstract = {This study explores the impact of textual input enhancement on the noticing and intake of English vowels by Arabic L2 learners of English. Arabic L1 speakers are known to experience vowel blindness, commonly defined as a difficulty in the textual decoding and encoding of English vowels due to an insufficient decoding of the word form. Thirty beginner ESL learners participated in a training study during which the experimental group received textual input enhancement on English vowels. Students completed a pretest and an immediate and delayed posttest. An eye-tracker recorded students' eye fixations during the treatment phase. Results indicate that vowel blindness was significantly reduced for the experimental group who received vowel training in the form of textual input enhancement. This might be due to a longer focus on the target words as suggested by our eye-tracking data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alsius2016,
title = {High visual resolution matters in audiovisual speech perception, but only for some},
author = {Agn{è}s Alsius and Rachel V Wayne and Martin Paré and Kevin G Munhall},
doi = {10.3758/s13414-016-1109-4},
year = {2016},
date = {2016-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {78},
number = {5},
pages = {1472--1487},
publisher = {Attention, Perception, & Psychophysics},
abstract = {The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
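
A rough sketch of low-pass spatial-frequency filtering of a grayscale frame with a Gaussian blur; the cutoff-to-sigma conversion and the half-amplitude criterion are assumptions for illustration, not the study's exact filtering procedure.

```python
# Low-pass filter an image so that spatial frequencies above the cutoff are
# strongly attenuated (amplitude falls to ~50% at the cutoff).
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_frame(frame, cutoff_cycles_per_image):
    """frame: 2-D grayscale image array; cutoff expressed in cycles per image width."""
    width_px = frame.shape[1]
    sigma_px = width_px * np.sqrt(np.log(2) / 2.0) / (np.pi * cutoff_cycles_per_image)
    return gaussian_filter(frame.astype(float), sigma=sigma_px)
```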

@article{Altmann1999,
title = {Incremental interpretation at verbs: Restricting the domain of subsequent reference},
author = {Gerry T M Altmann and Yuki Kamide},
doi = {10.1016/S0010-0277(99)00059-1},
year = {1999},
date = {1999-01-01},
journal = {Cognition},
volume = {73},
number = {3},
pages = {247--264},
abstract = {Participants' eye movements were recorded as they inspected a semi-realistic visual scene showing a boy, a cake, and various distractor objects. Whilst viewing this scene, they heard sentences such as 'the boy will move the cake' or 'the boy will eat the cake'. The cake was the only edible object portrayed in the scene. In each of two experiments, the onset of saccadic eye movements to the target object (the cake) was significantly later in the move condition than in the eat condition; saccades to the target were launched after the onset of the spoken word cake in the move condition, but before its onset in the eat condition. The results suggest that information at the verb can be used to restrict the domain within the context to which subsequent reference will be made by the (as yet unencountered) post-verbal grammatical object. The data support a hypothesis in which sentence processing is driven by the predictive relationships between verbs, their syntactic arguments, and the real-world contexts in which they occur.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Altmann2004,
title = {Shape saliency modulates contextual processing in the human lateral occipital complex},
author = {Christian F Altmann and Arne Deubelius and Zoe Kourtzi},
doi = {10.1162/089892904970825},
year = {2004},
date = {2004-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {16},
number = {5},
pages = {794--804},
abstract = {Visual context influences our perception of target objects in natural scenes. However, little is known about the analysis of context information and its role in shape perception in the human brain. We investigated whether the human lateral occipital complex (LOC), known to be involved in the visual analysis of shapes, also processes information about the context of shapes within cluttered scenes. We employed an fMRI adaptation paradigm in which fMRI responses are lower for two identical than for two different stimuli presented consecutively. The stimuli consisted of closed target contours defined by aligned Gabor elements embedded in a background of randomly oriented Gabors. We measured fMRI adaptation in the LOC across changes in the context of the target shapes by manipulating the position and orientation of the background elements. No adaptation was observed across context changes when the background elements were presented in the same plane as the target elements. However, adaptation was observed when the grouping of the target elements was enhanced in a bottom-up (i.e., grouping by disparity or motion) or top-down (i.e., shape priming) manner and thus the saliency of the target shape increased. These findings suggest that the LOC processes information not only about shapes, but also about their context. This processing of context information in the LOC is modulated by figure-ground segmentation and grouping processes. That is, neural populations in the LOC encode context information when relevant to the perception of target shapes, but represent salient targets independent of context changes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Altmann2004a,
title = {Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm'},
author = {Gerry T M Altmann},
doi = {10.1016/j.cognition.2004.02.005},
year = {2004},
date = {2004-01-01},
journal = {Cognition},
volume = {93},
number = {2},
pages = {B79--87},
abstract = {The 'visual world paradigm' typically involves presenting participants with a visual scene and recording eye movements as they either hear an instruction to manipulate objects in the scene or as they listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding visual scene had been displayed and then removed. For a scene depicting a man, a woman, a cake, and a newspaper, the eyes were subsequently directed, during 'eat' in 'the man will eat the cake', towards where the cake had previously been located even though the screen had been blank for over 2 s. The rapidity of these movements mirrored the anticipatory eye movements observed in previous studies [Cognition 73 (1999) 247; J. Mem. Lang. 49 (2003) 133]. Thus, anticipatory eye movements are not dependent on a concurrent visual scene, but are dependent on a mental record of the scene that is independent of whether the visual scene is still present.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Altmann2007,
title = {The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing},
author = {Gerry T M Altmann and Yuki Kamide},
doi = {10.1016/j.jml.2006.12.004},
year = {2007},
date = {2007-01-01},
journal = {Journal of Memory and Language},
volume = {57},
number = {4},
pages = {502--518},
abstract = {Two experiments explored the representational basis for anticipatory eye movements. Participants heard 'the man will drink ...' or 'the man has drunk ...' (Experiment 1) or 'the man will drink all of ...' or 'the man has drunk all of ...' (Experiment 2). They viewed a concurrent scene depicting a full glass of beer and an empty wine glass (amongst other things). There were more saccades towards the empty wine glass in the past tensed conditions than in the future tense conditions; the converse pattern obtained for looks towards the full glass of beer. We argue that these anticipatory eye movements reflect sensitivity to objects' affordances, and develop an account of the linkage between language processing and visual attention that can account not only for looks towards named objects, but also for those cases (including anticipatory eye movements) where attention is directed towards objects that are not being named.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Altmann2009,
title = {Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation},
author = {Gerry T M Altmann and Yuki Kamide},
doi = {10.1016/j.cognition.2008.12.005},
year = {2009},
date = {2009-01-01},
journal = {Cognition},
volume = {111},
number = {1},
pages = {55--71},
publisher = {Elsevier B.V.},
abstract = {Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either 'The woman will put the glass on the table' or 'The woman is too lazy to put the glass on the table'. Subsequently, with the scene unchanged, participants heard that the woman 'will pick up the bottle, and pour the wine carefully into the glass.' Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after 'pour' (anticipating the glass) and at 'glass' reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either 'The woman will put the glass on the table' or 'The woman is too lazy to put the glass on the table'. Subsequently, with the scene unchanged, participants heard that the woman 'will pick up the bottle, and pour the wine carefully into the glass.' Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after 'pour' (anticipating the glass) and at 'glass' reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).

@article{Altmann2011,
title = {Language can mediate eye movement control within 100 milliseconds, regardless of whether there is anything to move the eyes to},
author = {Gerry T M Altmann},
doi = {10.1016/j.actpsy.2010.09.009},
year = {2011},
date = {2011-01-01},
journal = {Acta Psychologica},
volume = {137},
number = {2},
pages = {190--200},
publisher = {Elsevier B.V.},
abstract = {The delay between the signal to move the eyes, and the execution of the corresponding eye movement, is variable, and skewed, with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man ...' or 'the girl ...', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The delay between the signal to move the eyes, and the execution of the corresponding eye movement, is variable, and skewed, with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man ...' or 'the girl ...', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word.

@article{Altschuler2014,
title = {The effort to close the gap: Tracking the development of illusory contour processing from childhood to adulthood with high-density electrical mapping},
author = {Ted S Altschuler and Sophie Molholm and John S Butler and Manuel R Mercier and Alice B Brandwein and John J Foxe},
doi = {10.1016/j.neuroimage.2013.12.029},
year = {2014},
date = {2014-01-01},
journal = {NeuroImage},
volume = {90},
pages = {360--373},
abstract = {The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230 and 400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230 and 400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent.

@article{Amano2012,
title = {Human neural responses involved in spatial pooling of locally ambiguous motion signals},
author = {Kaoru Amano and Tsunehiro Takeda and Tomoki Haji and Masahiko Terao and Kazushi Maruya and Kenji Matsumoto and Ikuya Murakami and Shin'ya Nishida},
doi = {10.1152/jn.00821.2011},
year = {2012},
date = {2012-01-01},
journal = {Journal of Neurophysiology},
volume = {107},
number = {12},
pages = {3493--3508},
abstract = {Early visual motion signals are local and one-dimensional (1-D). For specification of global two-dimensional (2-D) motion vectors, the visual system should appropriately integrate these signals across orientation and space. Previous neurophysiological studies have suggested that this integration process consists of two computational steps (estimation of local 2-D motion vectors, followed by their spatial pooling), both being identified in the area MT. Psychophysical findings, however, suggest that under certain stimulus conditions, the human visual system can also compute mathematically correct global motion vectors from direct pooling of spatially distributed 1-D motion signals. To study the neural mechanisms responsible for this novel 1-D motion pooling, we conducted human magnetoencephalography (MEG) and functional MRI experiments using a global motion stimulus comprising multiple moving Gabors (global-Gabor motion). In the first experiment, we measured MEG and blood oxygen level-dependent responses while changing motion coherence of global-Gabor motion. In the second experiment, we investigated cortical responses correlated with direction-selective adaptation to the global 2-D motion, not to local 1-D motions. We found that human MT complex (hMT+) responses show both coherence dependency and direction selectivity to global motion based on 1-D pooling. The results provide the first evidence that hMT+ is the locus of 1-D motion pooling, as well as that of conventional 2-D motion pooling.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Early visual motion signals are local and one-dimensional (1-D). For specification of global two-dimensional (2-D) motion vectors, the visual system should appropriately integrate these signals across orientation and space. Previous neurophysiological studies have suggested that this integration process consists of two computational steps (estimation of local 2-D motion vectors, followed by their spatial pooling), both being identified in the area MT. Psychophysical findings, however, suggest that under certain stimulus conditions, the human visual system can also compute mathematically correct global motion vectors from direct pooling of spatially distributed 1-D motion signals. To study the neural mechanisms responsible for this novel 1-D motion pooling, we conducted human magnetoencephalography (MEG) and functional MRI experiments using a global motion stimulus comprising multiple moving Gabors (global-Gabor motion). In the first experiment, we measured MEG and blood oxygen level-dependent responses while changing motion coherence of global-Gabor motion. In the second experiment, we investigated cortical responses correlated with direction-selective adaptation to the global 2-D motion, not to local 1-D motions. We found that human MT complex (hMT+) responses show both coherence dependency and direction selectivity to global motion based on 1-D pooling. The results provide the first evidence that hMT+ is the locus of 1-D motion pooling, as well as that of conventional 2-D motion pooling.

@article{Amemori2012,
title = {Localized microstimulation of primate pregenual cingulate cortex induces negative decision-making},
author = {Ken Ichi Amemori and Ann M Graybiel},
doi = {10.1038/nn.3088},
year = {2012},
date = {2012-01-01},
journal = {Nature Neuroscience},
volume = {15},
number = {5},
pages = {776--785},
abstract = {The pregenual anterior cingulate cortex (pACC) has been implicated in human anxiety disorders and depression, but the circuit-level mechanisms underlying these disorders are unclear. In healthy individuals, the pACC is involved in cost-benefit evaluation. We developed a macaque version of an approach-avoidance decision task used to evaluate anxiety and depression in humans and, with multi-electrode recording and cortical microstimulation, we probed pACC function as monkeys performed this task. We found that the macaque pACC has an opponent process-like organization of neurons representing motivationally positive and negative subjective value. Spatial distribution of these two neuronal populations overlapped in the pACC, except in one subzone, where neurons with negative coding were more numerous. Notably, microstimulation in this subzone, but not elsewhere in the pACC, increased negative decision-making, and this negative biasing was blocked by anti-anxiety drug treatment. This cortical zone could be critical for regulating negative emotional valence and anxiety in decision-making.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The pregenual anterior cingulate cortex (pACC) has been implicated in human anxiety disorders and depression, but the circuit-level mechanisms underlying these disorders are unclear. In healthy individuals, the pACC is involved in cost-benefit evaluation. We developed a macaque version of an approach-avoidance decision task used to evaluate anxiety and depression in humans and, with multi-electrode recording and cortical microstimulation, we probed pACC function as monkeys performed this task. We found that the macaque pACC has an opponent process-like organization of neurons representing motivationally positive and negative subjective value. Spatial distribution of these two neuronal populations overlapped in the pACC, except in one subzone, where neurons with negative coding were more numerous. Notably, microstimulation in this subzone, but not elsewhere in the pACC, increased negative decision-making, and this negative biasing was blocked by anti-anxiety drug treatment. This cortical zone could be critical for regulating negative emotional valence and anxiety in decision-making.

@article{Amemori2015a,
title = {Motivation and affective judgments differentially recruit neurons in the primate dorsolateral prefrontal and anterior cingulate cortex},
author = {Ken Ichi Amemori and Satoko Amemori and A M Graybiel},
doi = {10.1523/JNEUROSCI.1731-14.2015},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neuroscience},
volume = {35},
number = {5},
pages = {1939--1953},
abstract = {The judgment of whether to accept or to reject an offer is determined by positive and negative affect related to the offer, but affect also induces motivational responses. Rewarding and aversive cues influence the firing rates of many neurons in primate prefrontal and cingulate neocortical regions, but it still is unclear whether neurons in these regions are related to affective judgment or to motivation. To address this issue, we recorded simultaneously the neuronal spike activities of single units in the dorsolateral prefrontal cortex (dlPFC) and the anterior cingulate cortex (ACC) of macaque monkeys as they performed approach–avoidance (Ap–Av) and approach–approach (Ap–Ap) decision-making tasks that can behaviorally dissociate affective judgment and motivation. Notably, neurons having activity correlated with motivational condition could be distinguished from neurons having activity related to affective judgment, especially in the Ap–Av task. Although many neurons in both regions exhibited similar, selective patterns of task-related activity, we found a larger proportion of neurons activated in low motivational conditions in the dlPFC than in the ACC, and the onset of this activity was significantly earlier in the dlPFC than in the ACC. Furthermore, the temporal onsets of affective judgment represented by neuronal activities were significantly slower in the low motivational conditions than in the other conditions. These findings suggest that motivation and affective judgment both recruit dlPFC and ACC neurons but with differential degrees of involvement and timing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The judgment of whether to accept or to reject an offer is determined by positive and negative affect related to the offer, but affect also induces motivational responses. Rewarding and aversive cues influence the firing rates of many neurons in primate prefrontal and cingulate neocortical regions, but it still is unclear whether neurons in these regions are related to affective judgment or to motivation. To address this issue, we recorded simultaneously the neuronal spike activities of single units in the dorsolateral prefrontal cortex (dlPFC) and the anterior cingulate cortex (ACC) of macaque monkeys as they performed approach–avoidance (Ap–Av) and approach–approach (Ap–Ap) decision-making tasks that can behaviorally dissociate affective judgment and motivation. Notably, neurons having activity correlated with motivational condition could be distinguished from neurons having activity related to affective judgment, especially in the Ap–Av task. Although many neurons in both regions exhibited similar, selective patterns of task-related activity, we found a larger proportion of neurons activated in low motivational conditions in the dlPFC than in the ACC, and the onset of this activity was significantly earlier in the dlPFC than in the ACC. Furthermore, the temporal onsets of affective judgment represented by neuronal activities were significantly slower in the low motivational conditions than in the other conditions. These findings suggest that motivation and affective judgment both recruit dlPFC and ACC neurons but with differential degrees of involvement and timing.

@article{Amemori2015b,
title = {A non-invasive head-holding device for chronic neural recordings in awake behaving monkeys},
author = {Satoko Amemori and Ken ichi Amemori and Margaret L Cantor and Ann M Graybiel},
doi = {10.1016/j.jneumeth.2014.11.006},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neuroscience Methods},
volume = {240},
pages = {154--160},
publisher = {Elsevier B.V.},
abstract = {Background: We have developed a novel head-holding device for behaving non-human primates that affords stability suitable for reliable chronic electrophysiological recording experiments. The device is completely non-invasive, and thus avoids the risk of infection and other complications that can occur with the use of conventional, surgically implanted head-fixation devices. New method: The device consists of a novel non-invasive head mold and bar clamp holder, and is customized to the shape of each monkey's head. The head-holding device that we introduce, combined with our recording system and reflection-based eye-tracking system, allows for chronic behavioral experiments and single-electrode or multi-electrode recording, as well as manipulation of brain activity. Results and comparison with existing methods: With electrodes implanted chronically in multiple brain regions, we could record neural activity from cortical and subcortical structures with stability equal to that recorded with conventional head-post fixation. Consistent with the non-invasive nature of the device, we could record neural signals for more than two years with a single implant. Importantly, the monkeys were able to hold stable eye fixation positions while held by this device, demonstrating the possibility of analyzing eye movement data with only the gentle restraint imposed by the non-invasive head-holding device. Conclusions: We show that the head-holding device introduced here can be extended to the head holding of smaller animals, and note that it could readily be adapted for magnetic resonance brain imaging over extended periods of time.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Background: We have developed a novel head-holding device for behaving non-human primates that affords stability suitable for reliable chronic electrophysiological recording experiments. The device is completely non-invasive, and thus avoids the risk of infection and other complications that can occur with the use of conventional, surgically implanted head-fixation devices. New method: The device consists of a novel non-invasive head mold and bar clamp holder, and is customized to the shape of each monkey's head. The head-holding device that we introduce, combined with our recording system and reflection-based eye-tracking system, allows for chronic behavioral experiments and single-electrode or multi-electrode recording, as well as manipulation of brain activity. Results and comparison with existing methods: With electrodes implanted chronically in multiple brain regions, we could record neural activity from cortical and subcortical structures with stability equal to that recorded with conventional head-post fixation. Consistent with the non-invasive nature of the device, we could record neural signals for more than two years with a single implant. Importantly, the monkeys were able to hold stable eye fixation positions while held by this device, demonstrating the possibility of analyzing eye movement data with only the gentle restraint imposed by the non-invasive head-holding device. Conclusions: We show that the head-holding device introduced here can be extended to the head holding of smaller animals, and note that it could readily be adapted for magnetic resonance brain imaging over extended periods of time.

@article{Amemori2018,
title = {Striatal microstimulation induces persistent and repetitive negative decision-making predicted by striatal beta-band oscillation},
author = {Ken ichi Amemori and Satoko Amemori and Daniel J Gibson and Ann M Graybiel},
doi = {10.1016/j.neuron.2018.07.022},
year = {2018},
date = {2018-01-01},
journal = {Neuron},
volume = {99},
number = {4},
pages = {829--841},
publisher = {Elsevier Inc.},
abstract = {Persistent thoughts inducing irrationally pessimistic and repetitive decisions are often symptoms of mood and anxiety disorders. Regional neural hyper-activities have been associated with these disorders, but it remains unclear whether there is a specific brain region causally involved in these persistent valuations. Here, we identified potential sources of such persistent states by microstimulating the striatum of macaques performing a task by which we could quantitatively estimate their subjective pessimistic states using their choices to accept or reject conflicting offers. We found that this microstimulation induced irrationally repetitive choices with negative evaluations. Local field potentials recorded in the same microstimulation sessions exhibited modulations of beta-band oscillatory activity that paralleled the persistent negative states influencing repetitive decisions. These findings demonstrate that local striatal zones can causally affect subjective states influencing persistent negative valuation and that abnormal beta-band oscillations can be associated with persistency in valuation accompanied by an anxiety-like state.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Persistent thoughts inducing irrationally pessimistic and repetitive decisions are often symptoms of mood and anxiety disorders. Regional neural hyper-activities have been associated with these disorders, but it remains unclear whether there is a specific brain region causally involved in these persistent valuations. Here, we identified potential sources of such persistent states by microstimulating the striatum of macaques performing a task by which we could quantitatively estimate their subjective pessimistic states using their choices to accept or reject conflicting offers. We found that this microstimulation induced irrationally repetitive choices with negative evaluations. Local field potentials recorded in the same microstimulation sessions exhibited modulations of beta-band oscillatory activity that paralleled the persistent negative states influencing repetitive decisions. These findings demonstrate that local striatal zones can causally affect subjective states influencing persistent negative valuation and that abnormal beta-band oscillations can be associated with persistency in valuation accompanied by an anxiety-like state.

@article{Amenta2015,
title = {The fruitless effort of growing a fruitless tree: Early morpho-orthographic and morpho-semantic effects in sentence reading},
author = {Simona Amenta and Marco Marelli and Davide Crepaldi},
doi = {10.1037/xlm0000104},
year = {2015},
date = {2015-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {41},
number = {5},
pages = {1587--1596},
abstract = {In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, that is, that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way according to the sentence context to which they belong. This way, each target word was embedded in a sentence eliciting either its transparent or opaque interpretation. We analyzed whether the effect of stem frequency changes according to whether the (very same) word is read as a genuine derivation (transparent context) versus as a pseudoderived word (opaque context). Analysis of the first fixation durations revealed a stem-word frequency effect in both opaque and transparent contexts, thus showing that stems were accessed whether or not they contributed to word meaning, that is, word decomposition is indeed blind to semantics. However, while the stem-word frequency effect was facilitatory in the transparent context, it was inhibitory in the opaque context, thus showing an early involvement of semantic representations. This pattern of data is revealed by words with short suffixes. These results indicate that derived and pseudoderived words are segmented into their constituent morphemes also in natural reading; however, this blind-to-semantics process activates morpheme representations that are semantically connoted.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, that is, that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way according to the sentence context to which they belong. This way, each target word was embedded in a sentence eliciting either its transparent or opaque interpretation. We analyzed whether the effect of stem frequency changes according to whether the (very same) word is read as a genuine derivation (transparent context) versus as a pseudoderived word (opaque context). Analysis of the first fixation durations revealed a stem-word frequency effect in both opaque and transparent contexts, thus showing that stems were accessed whether or not they contributed to word meaning, that is, word decomposition is indeed blind to semantics. However, while the stem-word frequency effect was facilitatory in the transparent context, it was inhibitory in the opaque context, thus showing an early involvement of semantic representations. This pattern of data is revealed by words with short suffixes. These results indicate that derived and pseudoderived words are segmented into their constituent morphemes also in natural reading; however, this blind-to-semantics process activates morpheme representations that are semantically connoted.

@article{Ameqrane2014,
title = {Implicit and explicit timing in oculomotor control},
author = {Ilhame Ameqrane and Pierre Pouget and Nicolas Wattiez and Roger Carpenter and Marcus Missal},
doi = {10.1371/journal.pone.0093958},
year = {2014},
date = {2014-01-01},
journal = {PLoS ONE},
volume = {9},
number = {4},
pages = {1--11},
abstract = {The passage of time can be estimated either explicitly, e.g. before leaving home in the morning, or implicitly, e.g. when catching a flying ball. In the present study, the latency of saccadic eye movements was used to evaluate differences between implicit and explicit timing. Humans were required to make a saccade between a central and a peripheral position on a computer screen. The delay between the extinction of a central target and the appearance of an eccentric target was the independent variable that could take one out of four different values (400, 900, 1400 or 1900 ms). In target trials, the delay period lasted for one of the four durations randomly. At the end of the delay, a saccade was initiated by the appearance of an eccentric target. Cue&target trials were similar to target trials but the duration of the delay was visually cued. In probe trials, the duration of the upcoming delay was cued, but there was no eccentric target and subjects had to internally generate a saccade at the estimated end of the delay. In target and cue&target trials, the mean and variance of latency distributions decreased as delay duration increased. In cue&target trials latencies were shorter. In probe trials, the variance increased with increasing delay duration and scalar variability was observed. The major differences in saccadic latency distributions were observed between visually-guided (target and cue&target trials) and internally-generated saccades (probe trials). In target and cue&target trials the timing of the response was implicit. In probe trials, the timing of the response was internally-generated and explicitly based on the duration of the visual cue. Scalar timing was observed only during probe trials. This study supports the hypothesis that there is no ubiquitous timing system in the brain but independent timing processes active depending on task demands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The passage of time can be estimated either explicitly, e.g. before leaving home in the morning, or implicitly, e.g. when catching a flying ball. In the present study, the latency of saccadic eye movements was used to evaluate differences between implicit and explicit timing. Humans were required to make a saccade between a central and a peripheral position on a computer screen. The delay between the extinction of a central target and the appearance of an eccentric target was the independent variable that could take one out of four different values (400, 900, 1400 or 1900 ms). In target trials, the delay period lasted for one of the four durations randomly. At the end of the delay, a saccade was initiated by the appearance of an eccentric target. Cue&target trials were similar to target trials but the duration of the delay was visually cued. In probe trials, the duration of the upcoming delay was cued, but there was no eccentric target and subjects had to internally generate a saccade at the estimated end of the delay. In target and cue&target trials, the mean and variance of latency distributions decreased as delay duration increased. In cue&target trials latencies were shorter. In probe trials, the variance increased with increasing delay duration and scalar variability was observed. The major differences in saccadic latency distributions were observed between visually-guided (target and cue&target trials) and internally-generated saccades (probe trials). In target and cue&target trials the timing of the response was implicit. In probe trials, the timing of the response was internally-generated and explicitly based on the duration of the visual cue. Scalar timing was observed only during probe trials. This study supports the hypothesis that there is no ubiquitous timing system in the brain but independent timing processes active depending on task demands.
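
The scalar variability reported here for internally generated (probe) saccades refers to the standard deviation of response times growing roughly in proportion to their mean, i.e. a constant coefficient of variation across delay durations. A minimal Python sketch of that check, using simulated latencies rather than the study's data, might look like this:

```python
# Illustrative sketch (not the authors' analysis code): test for the scalar
# property - a roughly constant coefficient of variation (SD/mean) of
# response times across delay durations - on hypothetical latency data.
import numpy as np

rng = np.random.default_rng(0)
delays_ms = [400, 900, 1400, 1900]          # delay durations used in the study

# Hypothetical per-trial latencies for internally generated ("probe") saccades,
# simulated so that SD grows in proportion to the mean (the scalar property).
latencies = {d: rng.normal(loc=d, scale=0.15 * d, size=200) for d in delays_ms}

for d, lat in latencies.items():
    cv = lat.std(ddof=1) / lat.mean()       # coefficient of variation
    print(f"delay {d:>4} ms: mean = {lat.mean():7.1f} ms, CV = {cv:.3f}")

# A roughly constant CV across delays is the signature of scalar timing;
# in the visually guided trials the latency variance instead decreased as
# the delay grew longer.
```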

@article{Amit2017,
title = {Temporal dynamics of saccades explained by a self-paced process},
author = {Roy Amit and Dekel Abeles and Izhar Bar-Gad and Shlomit Yuval-Greenberg},
doi = {10.1038/s41598-017-00881-7},
year = {2017},
date = {2017-12-01},
journal = {Scientific Reports},
volume = {7},
number = {1},
pages = {1--15},
publisher = {Springer US},
abstract = {Sensory organs are thought to sample the environment rhythmically, thereby providing periodic perceptual input. Whisking and sniffing are governed by oscillators which impose rhythms on the motor-control of sensory acquisition and consequently on sensory input. Saccadic eye movements are the main visual sampling mechanism in primates, and were suggested to constitute part of such a rhythmic exploration system. In this study we characterized saccadic rhythmicity, and examined whether it is consistent with an autonomous oscillatory generator or with self-paced generation. Eye movements were tracked while observers were either free-viewing a movie or fixating a static stimulus. We inspected the temporal dynamics of exploratory and fixational saccades and quantified their first-order and high-order dependencies. Data were analyzed using methods derived from spike-train analysis, and tested against mathematical models and simulations. The findings show that saccade timings are explained by first-order dependencies, specifically by their refractory period. Saccade-timings are inconsistent with an autonomous pace-maker but are consistent with a “self-paced” generator, where each saccade is a link in a chain of neural processes that depend on the outcome of the saccade itself. We propose a mathematical model parsimoniously capturing various facets of saccade-timings, and suggest a possible neural mechanism producing the observed dynamics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sensory organs are thought to sample the environment rhythmically, thereby providing periodic perceptual input. Whisking and sniffing are governed by oscillators which impose rhythms on the motor-control of sensory acquisition and consequently on sensory input. Saccadic eye movements are the main visual sampling mechanism in primates, and were suggested to constitute part of such a rhythmic exploration system. In this study we characterized saccadic rhythmicity, and examined whether it is consistent with an autonomous oscillatory generator or with self-paced generation. Eye movements were tracked while observers were either free-viewing a movie or fixating a static stimulus. We inspected the temporal dynamics of exploratory and fixational saccades and quantified their first-order and high-order dependencies. Data were analyzed using methods derived from spike-train analysis, and tested against mathematical models and simulations. The findings show that saccade timings are explained by first-order dependencies, specifically by their refractory period. Saccade-timings are inconsistent with an autonomous pace-maker but are consistent with a “self-paced” generator, where each saccade is a link in a chain of neural processes that depend on the outcome of the saccade itself. We propose a mathematical model parsimoniously capturing various facets of saccade-timings, and suggest a possible neural mechanism producing the observed dynamics.
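
The spike-train-style analysis described above treats saccade onsets as point events. As a hedged illustration (simulated onset times, not the recorded gaze data or the authors' code), the first-order quantities involved are the inter-saccade intervals, a refractory-period estimate, and the lag-1 serial correlation between successive intervals:

```python
# Illustrative sketch: treat saccade onset times like a spike train and
# compute inter-saccade intervals (ISIs), a crude refractory-period estimate,
# and the lag-1 serial correlation that would reveal first-order dependencies.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical saccade onset times (seconds): exponential waiting times
# plus a fixed 0.15 s refractory period after each saccade.
isis = 0.15 + rng.exponential(scale=0.2, size=500)
onsets = np.cumsum(isis)

intervals = np.diff(onsets)                           # ISIs recovered from onsets
refractory_estimate = intervals.min()                 # shortest observed ISI
lag1_corr = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]

print(f"median ISI: {np.median(intervals) * 1000:.0f} ms")
print(f"shortest ISI (refractory estimate): {refractory_estimate * 1000:.0f} ms")
print(f"lag-1 serial correlation: {lag1_corr:+.3f}")

# A near-zero lag-1 correlation combined with a hard lower bound on the ISIs
# is what a renewal-like, "self-paced" generator with a refractory period
# predicts, as opposed to the periodic structure an autonomous pacemaker
# would impose.
```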

@article{Amlot2003,
title = {Multimodal visual–somatosensory integration in saccade generation},
author = {Richard Amlôt and Robin Walker and Jon Driver and Charles Spence},
doi = {10.1016/S0028-3932(02)00139-2},
year = {2003},
date = {2003-01-01},
journal = {Neuropsychologia},
volume = {41},
number = {1},
pages = {1--15},
abstract = {Neurophysiological studies have demonstrated multisensory interaction effects in the neural structures involved in saccade generation when visual, auditory or somatosensory stimuli are presented bimodally. Visual–auditory interaction effects have been demonstrated in numerous behavioural studies of saccades but little is known about interaction effects involving somatosensory stimuli. The present study examined visual–somatosensory interaction effects on saccade generation using a multisensory paradigm, whereby task-irrelevant distractors appeared spatially-coincident with, or remote from the designated saccade target. Somatosensory distractors reduced the latency of saccades when presented before the visual target and the greatest facilitation effect was observed with spatially-coincident stimuli. Visual distractors spatially-coincident with a somatosensory target reduced latency (and increased peak velocity) when presented before and after the target. Visual distractors contralateral to somatosensory targets increased saccade latency and produced high error rates of saccades made to the distractor. The high error rates and latency modulation with visual distractors is consistent with a bias for visual stimuli in the saccadic system. In the visual target condition, saccade latency was modulated by a somatosensory distractor that was entirely task-irrelevant and this effect was always greatest with spatially-coincident distractors. The multisensory distractor effects are discussed in terms of saccades being programmed to the non-target modality, the early triggering of a non-spatial saccade 'when' signal, and multisensory neuronal enhancement effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Neurophysiological studies have demonstrated multisensory interaction effects in the neural structures involved in saccade generation when visual, auditory or somatosensory stimuli are presented bimodally. Visual–auditory interaction effects have been demonstrated in numerous behavioural studies of saccades but little is known about interaction effects involving somatosensory stimuli. The present study examined visual–somatosensory interaction effects on saccade generation using a multisensory paradigm, whereby task-irrelevant distractors appeared spatially-coincident with, or remote from the designated saccade target. Somatosensory distractors reduced the latency of saccades when presented before the visual target and the greatest facilitation effect was observed with spatially-coincident stimuli. Visual distractors spatially-coincident with a somatosensory target reduced latency (and increased peak velocity) when presented before and after the target. Visual distractors contralateral to somatosensory targets increased saccade latency and produced high error rates of saccades made to the distractor. The high error rates and latency modulation with visual distractors is consistent with a bias for visual stimuli in the saccadic system. In the visual target condition, saccade latency was modulated by a somatosensory distractor that was entirely task-irrelevant and this effect was always greatest with spatially-coincident distractors. The multisensory distractor effects are discussed in terms of saccades being programmed to the non-target modality, the early triggering of a non-spatial saccade 'when' signal, and multisensory neuronal enhancement effects.

@article{Amlot2006,
title = {Are somatosensory saccades voluntary or reflexive?},
author = {Richard Amlôt and Robin Walker},
doi = {10.1007/s00221-005-0116-9},
year = {2006},
date = {2006-01-01},
journal = {Experimental Brain Research},
volume = {168},
number = {4},
pages = {557--565},
abstract = {The present study examines whether the distinction between voluntary (endogenous) and reflexive (stimulus-elicited) saccades made in the visual modality can be applied to the somatosensory modality. The behavioural characteristics of putative reflexive pro-saccades and voluntary anti-saccades made to visual and somatosensory stimuli were examined. Both visual and somatosensory pro-saccades had much shorter latency than voluntary anti-saccades made in the direction opposite to a peripheral stimulus. Furthermore, erroneous pro-saccades were made towards both visual and somatosensory stimuli on approximately 11-13% of anti-saccade trials. The observed difference in pro- and anti-saccade latency and the presence of pro-saccade errors in the anti-saccade task indicates that a somatosensory stimulus can elicit a form of reflexive saccade comparable to pro-saccades made in the visual modality. It is proposed that a peripheral somatosensory stimulus can elicit a form of reflexive saccade and that somatosensory saccades do not depend exclusively on higher level endogenous control processes for their generation. However, a comparison of the underlying latency distributions and of peak-velocity profiles of saccades made to visual and somatosensory stimuli showed that this distinction may be less clearly defined for the somatosensory modality and that modality-specific differences (such as differences in neural conduction rates) in the underlying oculomotor structures involved in saccade target selection also need to be considered. It is further suggested that a broader conceptualisation of saccades and saccade programming beyond the simple voluntary and reflexive dichotomy, that takes into account the control processes involved in saccade generation for both modalities, may be required.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The present study examines whether the distinction between voluntary (endogenous) and reflexive (stimulus-elicited) saccades made in the visual modality can be applied to the somatosensory modality. The behavioural characteristics of putative reflexive pro-saccades and voluntary anti-saccades made to visual and somatosensory stimuli were examined. Both visual and somatosensory pro-saccades had much shorter latency than voluntary anti-saccades made in the direction opposite to a peripheral stimulus. Furthermore, erroneous pro-saccades were made towards both visual and somatosensory stimuli on approximately 11-13% of anti-saccade trials. The observed difference in pro- and anti-saccade latency and the presence of pro-saccade errors in the anti-saccade task indicates that a somatosensory stimulus can elicit a form of reflexive saccade comparable to pro-saccades made in the visual modality. It is proposed that a peripheral somatosensory stimulus can elicit a form of reflexive saccade and that somatosensory saccades do not depend exclusively on higher level endogenous control processes for their generation. However, a comparison of the underlying latency distributions and of peak-velocity profiles of saccades made to visual and somatosensory stimuli showed that this distinction may be less clearly defined for the somatosensory modality and that modality-specific differences (such as differences in neural conduction rates) in the underlying oculomotor structures involved in saccade target selection also need to be considered. It is further suggested that a broader conceptualisation of saccades and saccade programming beyond the simple voluntary and reflexive dichotomy, that takes into account the control processes involved in saccade generation for both modalities, may be required.

@article{Amor2016,
title = {Persistence in eye movement during visual search},
author = {Tatiana A Amor and Saulo D S Reis and Daniel Campos and Hans J Herrmann and José S Andrade},
doi = {10.1038/srep20815},
year = {2016},
date = {2016-01-01},
journal = {Scientific Reports},
volume = {6},
pages = {1--12},
publisher = {Nature Publishing Group},
abstract = {As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent , which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent , which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
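
The monofractal result for fixational movements rests on detrended fluctuation analysis (DFA), whose scaling exponent approximates the Hurst exponent for noise-like series. The following is a minimal, assumption-laden sketch of first-order DFA (not the authors' MF-DFA pipeline), applied to white noise purely for illustration:

```python
# Minimal DFA sketch: estimate a Hurst-like scaling exponent from a 1-D
# series such as per-step gaze displacement magnitudes. Window sizes and
# the linear detrending order are illustrative choices.
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128, 256)):
    """Return the slope of log F(s) vs log s (~ Hurst exponent for fGn-like data)."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())             # integrated, mean-removed series
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        sq_res = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)          # linear detrend in each window
            sq_res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(sq_res)))   # fluctuation function F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.normal(size=4096)                     # uncorrelated series -> exponent near 0.5
print(f"DFA exponent for white noise: {dfa_exponent(white):.2f}")

# An exponent above 0.5 on a real fixational-displacement series would indicate
# the long-range positive correlations (statistical persistence) described above.
```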

@article{Amor2017,
title = {Influence of scene structure and content on visual search strategies},
author = {Tatiana A Amor and Mirko Luković and Hans J Herrmann and José S Andrade},
doi = {10.1098/rsif.2017.0406},
year = {2017},
date = {2017-01-01},
journal = {Journal of the Royal Society Interface},
volume = {14},
number = {132},
abstract = {When searching for a target within an image, our brain can adopt different strategies, but which one does it choose? This question can be answered by tracking the motion of the eye while it executes the task. Following many individuals performing various search tasks, we distinguish between two competing strategies. Motivated by these findings, we introduce a model that captures the interplay of the search strategies and allows us to create artificial eye-tracking trajectories, which could be compared with the experimental ones. Identifying the model parameters allows us to quantify the strategy employed in terms of ensemble averages, characterizing each experimental cohort. In this way, we can discern with high sensitivity the relation between the visual landscape and the average strategy, disclosing how small variations in the image induce changes in the strategy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

When searching for a target within an image, our brain can adopt different strategies, but which one does it choose? This question can be answered by tracking the motion of the eye while it executes the task. Following many individuals performing various search tasks, we distinguish between two competing strategies. Motivated by these findings, we introduce a model that captures the interplay of the search strategies and allows us to create artificial eye-tracking trajectories, which could be compared with the experimental ones. Identifying the model parameters allows us to quantify the strategy employed in terms of ensemble averages, characterizing each experimental cohort. In this way, we can discern with high sensitivity the relation between the visual landscape and the average strategy, disclosing how small variations in the image induce changes in the strategy.

@article{Amoruso2017,
title = {Variability in functional brain networks predicts expertise during action observation},
author = {Lucía Amoruso and Agustín Ibáñez and Bruno Fonseca and Sebastián Gadea and Lucas Sedeño and Mariano Sigman and Adolfo M García and Ricardo Fraiman and Daniel Fraiman},
doi = {10.1016/j.neuroimage.2016.09.041},
year = {2017},
date = {2017-01-01},
journal = {NeuroImage},
volume = {146},
pages = {690--700},
publisher = {Elsevier},
abstract = {Observing an action performed by another individual activates, in the observer, similar circuits as those involved in the actual execution of that action. This activation is modulated by prior experience; indeed, sustained training in a particular motor domain leads to structural and functional changes in critical brain areas. Here, we capitalized on a novel graph-theory approach to electroencephalographic data (Fraiman et al., 2016) to test whether variability in functional brain networks implicated in Tango observation can discriminate between groups differing in their level of expertise. We found that experts and beginners significantly differed in the functional organization of task-relevant networks. Specifically, networks in expert Tango dancers exhibited less variability and a more robust functional architecture. Notably, these expertise-dependent effects were captured within networks derived from electrophysiological brain activity recorded in a very short time window (2 s). In brief, variability in the organization of task-related networks seems to be a highly sensitive indicator of long-lasting training effects. This finding opens new methodological and theoretical windows to explore the impact of domain-specific expertise on brain plasticity, while highlighting variability as a fruitful measure in neuroimaging research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Observing an action performed by another individual activates, in the observer, similar circuits as those involved in the actual execution of that action. This activation is modulated by prior experience; indeed, sustained training in a particular motor domain leads to structural and functional changes in critical brain areas. Here, we capitalized on a novel graph-theory approach to electroencephalographic data (Fraiman et al., 2016) to test whether variability in functional brain networks implicated in Tango observation can discriminate between groups differing in their level of expertise. We found that experts and beginners significantly differed in the functional organization of task-relevant networks. Specifically, networks in expert Tango dancers exhibited less variability and a more robust functional architecture. Notably, these expertise-dependent effects were captured within networks derived from electrophysiological brain activity recorded in a very short time window (2 s). In brief, variability in the organization of task-related networks seems to be a highly sensitive indicator of long-lasting training effects. This finding opens new methodological and theoretical windows to explore the impact of domain-specific expertise on brain plasticity, while highlighting variability as a fruitful measure in neuroimaging research.
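
The variability measure discussed above is defined over functional networks estimated from short (about 2 s) multichannel EEG epochs. As a rough, hypothetical illustration (not the graph-theory method of Fraiman et al., 2016), one can build a correlation network per trial and summarize edge-wise variability across trials:

```python
# Hedged sketch on simulated data: quantify across-trial variability of
# correlation-based functional networks built from short EEG epochs.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels, n_samples = 40, 16, 500     # e.g., 2 s epochs at 250 Hz

# Hypothetical epoched EEG: trials x channels x samples.
eeg = rng.normal(size=(n_trials, n_channels, n_samples))

# One functional network (channel-by-channel correlation matrix) per trial.
nets = np.array([np.corrcoef(trial) for trial in eeg])

# Edge-wise standard deviation across trials, averaged over the off-diagonal
# edges, as a single network-variability index for a group of participants.
iu = np.triu_indices(n_channels, k=1)
edge_sd = nets[:, iu[0], iu[1]].std(axis=0)
print(f"mean edge variability: {edge_sd.mean():.3f}")

# The abstract's claim is that an index of this kind is lower (more stable
# task-related networks) in expert dancers than in beginners during
# action observation.
```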

@article{Andersen2011,
title = {Limits of spatial attention in three-dimensional space and dual-task driving performance},
author = {George J Andersen and Rui Ni and Zheng Bian and Julie Kang},
doi = {10.1016/j.aap.2010.09.007},
year = {2011},
date = {2011-01-01},
journal = {Accident Analysis & Prevention},
volume = {43},
number = {1},
pages = {381--390},
publisher = {Elsevier Ltd},
abstract = {The present study examined the limits of spatial attention while performing two driving-relevant tasks that varied in depth. The first task was to maintain a fixed headway distance behind a lead vehicle that varied speed. The second task was to detect a light-change target in an array of lights located above the roadway. In Experiment 1 the light detection task required drivers to encode color and location. The results indicated that reaction time to detect a light-change target increased and accuracy decreased as a function of the horizontal location of the light-change target and as a function of the distance from the driver. In a second experiment the light change task was changed to a singleton search (detect the onset of a yellow light) and the workload of the car-following task was systematically varied. The results of Experiment 2 indicated that RT increased as a function of task workload, the 2D position of the light-change target and the distance of the light-change target. A multiple regression analysis indicated that the effect of distance on light detection performance was not due to changes in the projected size of the light target. In Experiment 3 we found that the distance effect in detecting a light change could not be explained by the location of eye fixations. The results demonstrate that when drivers attend to a roadway scene attention is limited in three-dimensional space. These results have important implications for developing tests for assessing crash risk among drivers as well as the design of in-vehicle technologies such as head-up displays.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The present study examined the limits of spatial attention while performing two driving-relevant tasks that varied in depth. The first task was to maintain a fixed headway distance behind a lead vehicle that varied speed. The second task was to detect a light-change target in an array of lights located above the roadway. In Experiment 1 the light detection task required drivers to encode color and location. The results indicated that reaction time to detect a light-change target increased and accuracy decreased as a function of the horizontal location of the light-change target and as a function of the distance from the driver. In a second experiment the light change task was changed to a singleton search (detect the onset of a yellow light) and the workload of the car-following task was systematically varied. The results of Experiment 2 indicated that RT increased as a function of task workload, the 2D position of the light-change target and the distance of the light-change target. A multiple regression analysis indicated that the effect of distance on light detection performance was not due to changes in the projected size of the light target. In Experiment 3 we found that the distance effect in detecting a light change could not be explained by the location of eye fixations. The results demonstrate that when drivers attend to a roadway scene attention is limited in three-dimensional space. These results have important implications for developing tests for assessing crash risk among drivers as well as the design of in-vehicle technologies such as head-up displays.

@article{Anderson2007,
title = {Involvement of prefrontal cortex in visual search},
author = {E J Anderson and S K Mannan and M Husain and G Rees and P Sumner and D J Mort and D McRobbie and C Kennard},
doi = {10.1007/s00221-007-0860-0},
year = {2007},
date = {2007-01-01},
journal = {Experimental Brain Research},
volume = {180},
number = {2},
pages = {289--302},
abstract = {Visual search for target items embedded within a set of distracting items has consistently been shown to engage regions of occipital and parietal cortex, but the contribution of different regions of prefrontal cortex remains unclear. Here, we used fMRI to compare brain activity in 12 healthy participants performing efficient and inefficient search tasks in which target discriminability and the number of distractor items were manipulated. Matched baseline conditions were incorporated to control for visual and motor components of the tasks, allowing cortical activity associated with each type of search to be isolated. Region of interest analysis was applied to critical regions of prefrontal cortex to determine whether their involvement was common to both efficient and inefficient search, or unique to inefficient search alone. We found regions of the inferior and middle frontal cortex were only active during inefficient search, whereas an area in the superior frontal cortex (in the region of FEF) was active for both efficient and inefficient search. Thus, regions of ventral as well as dorsal prefrontal cortex are recruited during inefficient search, and we propose that this activity is related to processes that guide, control and monitor the allocation of selective attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Visual search for target items embedded within a set of distracting items has consistently been shown to engage regions of occipital and parietal cortex, but the contribution of different regions of prefrontal cortex remains unclear. Here, we used fMRI to compare brain activity in 12 healthy participants performing efficient and inefficient search tasks in which target discriminability and the number of distractor items were manipulated. Matched baseline conditions were incorporated to control for visual and motor components of the tasks, allowing cortical activity associated with each type of search to be isolated. Region of interest analysis was applied to critical regions of prefrontal cortex to determine whether their involvement was common to both efficient and inefficient search, or unique to inefficient search alone. We found regions of the inferior and middle frontal cortex were only active during inefficient search, whereas an area in the superior frontal cortex (in the region of FEF) was active for both efficient and inefficient search. Thus, regions of ventral as well as dorsal prefrontal cortex are recruited during inefficient search, and we propose that this activity is related to processes that guide, control and monitor the allocation of selective attention.

Long-term familiarity facilitates recognition of visual stimuli. To better understand the neural basis for this effect, we measured the local field potential (LFP) and multiunit spiking activity (MUA) from the inferior temporal (IT) lobe of behaving monkeys in response to novel and familiar images. In general, familiar images evoked larger amplitude LFPs whereas MUA responses were greater for novel images. Familiarity effects were attenuated by image rotations in the picture plane of 45 degrees. Decreasing image contrast led to more pronounced decreases in LFP response magnitude for novel, compared with familiar images, and resulted in more selective MUA response profiles for familiar images. The shape of individual LFP traces could be used for stimulus classification, and classification performance was better for the familiar image category. Recording the visual and auditory evoked LFP at multiple depths showed significant alterations in LFP morphology with distance changes of 2 mm. In summary, IT cortex shows local processing differences for familiar and novel images at a time scale and in a manner consistent with the observed behavioral advantage for classifying familiar images and rapidly detecting novel stimuli.

@article{Anderson2008a,
title = {Effects of temporal context and temporal expectancy on neural activity in inferior temporal cortex},
author = {Britt Anderson and David L Sheinberg},
doi = {10.1016/j.neuropsychologia.2007.11.025},
year = {2008},
date = {2008-01-01},
journal = {Neuropsychologia},
volume = {46},
number = {4},
pages = {947--957},
abstract = {Timing is critical. The same event can mean different things at different times and some events are more likely to occur at one time than another. We used a cued visual classification task to evaluate how changes in temporal context affect neural responses in inferior temporal cortex, an extrastriate visual area known to be involved in object processing. On each trial a first image cued a temporal delay before a second target image appeared. The animal's task was to classify the second image by pressing one of two buttons previously associated with that target. All images were used as both cues and targets. Whether an image cued a delay time or signaled a button press depended entirely upon whether it was the first or second picture in a trial. This paradigm allowed us to compare inferior temporal cortex neural activity to the same image subdivided by temporal context and expectation. Neuronal spiking was more robust and visually evoked local field potentials (LFPs) larger for target presentations than for cue presentations. On invalidly cued trials, when targets appeared unexpectedly early, the magnitude of the evoked LFP was reduced and delayed and neuronal spiking was attenuated. Spike field coherence increased in the beta-gamma frequency range for expected targets. In conclusion, different neural responses in higher order ventral visual cortex may occur for the same visual image based on manipulations of temporal attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Timing is critical. The same event can mean different things at different times and some events are more likely to occur at one time than another. We used a cued visual classification task to evaluate how changes in temporal context affect neural responses in inferior temporal cortex, an extrastriate visual area known to be involved in object processing. On each trial a first image cued a temporal delay before a second target image appeared. The animal's task was to classify the second image by pressing one of two buttons previously associated with that target. All images were used as both cues and targets. Whether an image cued a delay time or signaled a button press depended entirely upon whether it was the first or second picture in a trial. This paradigm allowed us to compare inferior temporal cortex neural activity to the same image subdivided by temporal context and expectation. Neuronal spiking was more robust and visually evoked local field potentials (LFPs) larger for target presentations than for cue presentations. On invalidly cued trials, when targets appeared unexpectedly early, the magnitude of the evoked LFP was reduced and delayed and neuronal spiking was attenuated. Spike field coherence increased in the beta-gamma frequency range for expected targets. In conclusion, different neural responses in higher order ventral visual cortex may occur for the same visual image based on manipulations of temporal attention.

@article{Anderson2008b,
title = {A role for spatial and nonspatial working memory processes in visual search},
author = {Elaine J Anderson and Sabira K Mannan and Geraint Rees and Petroc Sumner and Christopher Kennard},
doi = {10.1027/1618-3169.55.5.301},
year = {2008},
date = {2008-01-01},
journal = {Experimental Psychology},
volume = {55},
number = {5},
pages = {301--312},
abstract = {Searching a cluttered visual scene for a specific item of interest can take several seconds to perform if the target item is difficult to discriminate from surrounding items. Whether working memory processes are utilized to guide the path of attentional selection during such searches remains under debate. Previous studies have found evidence to support a role for spatial working memory in inefficient search, but the role of nonspatial working memory remains unclear. Here, we directly compared the role of spatial and nonspatial working memory for both an efficient and inefficient search task. In Experiment 1, we used a dual-task paradigm to investigate the effect of performing visual search within the retention interval of a spatial working memory task. Importantly, by incorporating two working memory loads (low and high) we were able to make comparisons between dual-task conditions, rather than between dual-task and single-task conditions. This design allows any interference effects observed to be attributed to changes in memory load, rather than to nonspecific effects related to "dual-task" performance. We found that the efficiency of the inefficient search task declined as spatial memory load increased, but that the efficient search task remained efficient. These results suggest that spatial memory plays an important role in inefficient but not efficient search. In Experiment 2, participants performed the same visual search tasks within the retention interval of visually matched spatial and verbal working memory tasks. Critically, we found comparable dual-task interference between inefficient search and both the spatial and nonspatial working memory tasks, indicating that inefficient search recruits working memory processes common to both domains.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Searching a cluttered visual scene for a specific item of interest can take several seconds to perform if the target item is difficult to discriminate from surrounding items. Whether working memory processes are utilized to guide the path of attentional selection during such searches remains under debate. Previous studies have found evidence to support a role for spatial working memory in inefficient search, but the role of nonspatial working memory remains unclear. Here, we directly compared the role of spatial and nonspatial working memory for both an efficient and inefficient search task. In Experiment 1, we used a dual-task paradigm to investigate the effect of performing visual search within the retention interval of a spatial working memory task. Importantly, by incorporating two working memory loads (low and high) we were able to make comparisons between dual-task conditions, rather than between dual-task and single-task conditions. This design allows any interference effects observed to be attributed to changes in memory load, rather than to nonspecific effects related to "dual-task" performance. We found that the efficiency of the inefficient search task declined as spatial memory load increased, but that the efficient search task remained efficient. These results suggest that spatial memory plays an important role in inefficient but not efficient search. In Experiment 2, participants performed the same visual search tasks within the retention interval of visually matched spatial and verbal working memory tasks. Critically, we found comparable dual-task interference between inefficient search and both the spatial and nonspatial working memory tasks, indicating that inefficient search recruits working memory processes common to both domains.

@article{Anderson2010,
title = {Overlapping functional anatomy for working memory and visual search.},
author = {Elaine J Anderson and S K Mannan and G Rees and P Sumner and C Kennard},
doi = {10.1007/s00221-009-2000-5},
year = {2010},
date = {2010-01-01},
journal = {Experimental Brain Research},
volume = {200},
number = {1},
pages = {91--107},
abstract = {Recent behavioural findings using dual-task paradigms demonstrate the importance of both spatial and non-spatial working memory processes in inefficient visual search (Anderson et al. in Exp Psychol 55:301-312, 2008). Here, using functional magnetic resonance imaging (fMRI), we sought to determine whether brain areas recruited during visual search are also involved in working memory. Using visually matched spatial and non-spatial working memory tasks, we confirmed previous behavioural findings that show significant dual-task interference effects occur when inefficient visual search is performed concurrently with either working memory task. Furthermore, we find considerable overlap in the cortical network activated by inefficient search and both working memory tasks. Our findings suggest that the interference effects observed behaviourally may have arisen from competition for cortical processes subserved by these overlapping regions. Drawing on previous findings (Anderson et al. in Exp Brain Res 180:289-302, 2007), we propose that the most likely anatomical locus for these interference effects is the inferior and middle frontal cortex of the right hemisphere. These areas are associated with attentional selection from memory as well as manipulation of information in memory, and we propose that the visual search and working memory tasks used here compete for common processing resources underlying these mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Recent behavioural findings using dual-task paradigms demonstrate the importance of both spatial and non-spatial working memory processes in inefficient visual search (Anderson et al. in Exp Psychol 55:301-312, 2008). Here, using functional magnetic resonance imaging (fMRI), we sought to determine whether brain areas recruited during visual search are also involved in working memory. Using visually matched spatial and non-spatial working memory tasks, we confirmed previous behavioural findings that show significant dual-task interference effects occur when inefficient visual search is performed concurrently with either working memory task. Furthermore, we find considerable overlap in the cortical network activated by inefficient search and both working memory tasks. Our findings suggest that the interference effects observed behaviourally may have arisen from competition for cortical processes subserved by these overlapping regions. Drawing on previous findings (Anderson et al. in Exp Brain Res 180:289-302, 2007), we propose that the most likely anatomical locus for these interference effects is the inferior and middle frontal cortex of the right hemisphere. These areas are associated with attentional selection from memory as well as manipulation of information in memory, and we propose that the visual search and working memory tasks used here compete for common processing resources underlying these mechanisms.

@article{Anderson2011,
title = {Exploiting human sensitivity to gaze for tracking the eyes},
author = {Nicola C Anderson and Evan F Risko and Alan Kingstone},
doi = {10.3758/s13428-011-0078-8},
year = {2011},
date = {2011-01-01},
journal = {Behavior Research Methods},
volume = {43},
pages = {843--852},
abstract = {Given the prevalence, quality, and low cost of web cameras, along with the remarkable human sensitivity to gaze, we examined the accuracy of eye tracking using only a web camera. Participants were shown web camera recordings of a person's eyes moving 1°, 2°, or 3° of visual angle in one of eight radial directions (north, northeast, east, southeast, etc.), or no eye movement occurred at all. Observers judged whether an eye movement was made and, if so, its direction. Our findings demonstrate that for all saccades of any size or direction, observers can detect and discriminate eye movements significantly better than chance. Critically, the larger the saccade, the better the judgments, so that for eye movements of 3°, people can tell whether an eye movement occurred, and where it was going, at about 90% or better. This simple methodology of using a web camera and looking for eye movements offers researchers a simple, reliable, and cost-effective research tool that can be applied effectively both in studies where it is important that participants maintain central fixation (e.g., covert attention investigations) and in those where they are free or required to move their eyes (e.g., visual search).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Given the prevalence, quality, and low cost of web cameras, along with the remarkable human sensitivity to gaze, we examined the accuracy of eye tracking using only a web camera. Participants were shown web camera recordings of a person's eyes moving 1°, 2°, or 3° of visual angle in one of eight radial directions (north, northeast, east, southeast, etc.), or no eye movement occurred at all. Observers judged whether an eye movement was made and, if so, its direction. Our findings demonstrate that for all saccades of any size or direction, observers can detect and discriminate eye movements significantly better than chance. Critically, the larger the saccade, the better the judgments, so that for eye movements of 3°, people can tell whether an eye movement occurred, and where it was going, at about 90% or better. This simple methodology of using a web camera and looking for eye movements offers researchers a simple, reliable, and cost-effective research tool that can be applied effectively both in studies where it is important that participants maintain central fixation (e.g., covert attention investigations) and in those where they are free or required to move their eyes (e.g., visual search).

Covert shifts of attention precede and direct overt eye movements to stimuli that are task relevant or physically salient. A growing body of evidence suggests that the learned value of perceptual stimuli strongly influences their attentional priority. For example, previously rewarded but otherwise irrelevant and inconspicuous stimuli capture covert attention involuntarily. It is unknown, however, whether stimuli also draw eye movements involuntarily as a consequence of their reward history. Here, we show that previously rewarded but currently task-irrelevant stimuli capture both attention and the eyes. Value-driven oculomotor capture was observed during unconstrained viewing, when neither eye movements nor fixations were required, and was strongly related to individual differences in visual working memory capacity. The appearance of a reward-associated stimulus came to evoke pupil dilation over the course of training, which provides physiological evidence that the stimuli that elicit value-driven capture come to serve as reward-predictive cues. These findings reveal a close coupling of value-driven attentional capture and eye movements that has broad implications for theories of attention and reward learning.

@article{Anderson2013,
title = {Recurrence quantification analysis of eye movements},
author = {Nicola C Anderson and Walter F Bischof and Kaitlin E W Laidlaw and Evan F Risko and Alan Kingstone},
doi = {10.3758/s13428-012-0299-5},
year = {2013},
date = {2013-01-01},
journal = {Behavior Research Methods},
volume = {45},
pages = {842--856},
abstract = {Recurrence quantification analysis (RQA) has been successfully used for describing dynamic systems that are too complex to be characterized adequately by standard methods in time series analysis. More recently, RQA has been used for analyzing the coordination of gaze patterns between cooperating individuals. Here, we extend RQA to the characterization of fixation sequences, and we show that the global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation in this context. We applied RQA to the analysis of a study in which observers looked at different scenes under natural or gaze-contingent viewing conditions, and we found large differences in the RQA measures between the viewing conditions, indicating that RQA is a powerful new tool for the analysis of the temporal patterns of eye movement behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Recurrence quantification analysis (RQA) has been successfully used for describing dynamic systems that are too complex to be characterized adequately by standard methods in time series analysis. More recently, RQA has been used for analyzing the coordination of gaze patterns between cooperating individuals. Here, we extend RQA to the characterization of fixation sequences, and we show that the global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation in this context. We applied RQA to the analysis of a study in which observers looked at different scenes under natural or gaze-contingent viewing conditions, and we found large differences in the RQA measures between the viewing conditions, indicating that RQA is a powerful new tool for the analysis of the temporal patterns of eye movement behavior.
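Because the recurrence measures described above are computed directly from a fixation sequence, a short sketch may make the idea concrete. The following Python snippet is an assumed, minimal implementation (not the authors' code) of the basic recurrence rate: the percentage of fixation pairs that land within a chosen radius of one another. The radius and pixel coordinates are illustrative values.

# Minimal sketch (assumed implementation) of the recurrence-rate measure used in
# recurrence quantification analysis (RQA) of fixation sequences. Two fixations are
# treated as "recurrent" if they fall within a chosen radius of each other.
import numpy as np

def recurrence_rate(fixations, radius=64.0):
    """Percentage of fixation pairs that are recurrent (re-fixations of the same region)."""
    fix = np.asarray(fixations, dtype=float)        # shape (N, 2): x, y in pixels
    n = len(fix)
    if n < 2:
        return 0.0
    # Pairwise Euclidean distances between all fixations
    d = np.linalg.norm(fix[:, None, :] - fix[None, :, :], axis=-1)
    recurrent = d <= radius                          # boolean recurrence matrix
    r = np.triu(recurrent, k=1).sum()                # count pairs above the diagonal (i < j)
    return 100.0 * 2.0 * r / (n * (n - 1))

# Example: a scanpath that returns to its starting region (hypothetical coordinates)
print(recurrence_rate([(100, 100), (400, 300), (110, 95), (600, 500)]))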

@article{Anderson2015,
title = {Top-down expectancy versus bottom-up guidance in search for known color-form conjunctions},
author = {Giles M Anderson and Glyn W Humphreys},
doi = {10.3758/s13414-015-0960-z},
year = {2015},
date = {2015-11-01},
journal = {Attention, Perception, & Psychophysics},
volume = {77},
number = {8},
pages = {2622--2639},
publisher = {Springer US},
abstract = {We assessed the effects of pairing a target object with its familiar color on eye movements in visual search, under conditions where the familiar color could or could not be predicted. In Experiment 1 participants searched for a yellow- or purple-colored corn target amongst aubergine distractors, half of which were yellow and half purple. Search was more efficient when the color of the target was familiar and early eye movements more likely to be directed to targets carrying a familiar color than an unfamiliar color. Experiment 2 introduced cues which predicted the target color at 80% validity. Cue validity did not affect whether early fixations were to the target. Invalid cues, however, disrupted search efficiency for targets in an unfamiliar color whilst there was little cost to search efficiency for targets in their familiar color. These results generalized across items with different colors (Experiment 3). The data are consistent with early processes in selection being automatically modulated in a bottom-up manner to targets in their familiar color, even when expectancies are set for other colors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

We assessed the effects of pairing a target object with its familiar color on eye movements in visual search, under conditions where the familiar color could or could not be predicted. In Experiment 1 participants searched for a yellow- or purple-colored corn target amongst aubergine distractors, half of which were yellow and half purple. Search was more efficient when the color of the target was familiar and early eye movements more likely to be directed to targets carrying a familiar color than an unfamiliar color. Experiment 2 introduced cues which predicted the target color at 80% validity. Cue validity did not affect whether early fixations were to the target. Invalid cues, however, disrupted search efficiency for targets in an unfamiliar color whilst there was little cost to search efficiency for targets in their familiar color. These results generalized across items with different colors (Experiment 3). The data are consistent with early processes in selection being automatically modulated in a bottom-up manner to targets in their familiar color, even when expectancies are set for other colors.

@article{Anderson2015a,
title = {A comparison of scanpath comparison methods},
author = {Nicola C Anderson and Fraser Anderson and Alan Kingstone and Walter F Bischof},
doi = {10.3758/s13428-014-0550-3},
year = {2015},
date = {2015-01-01},
journal = {Behavior Research Methods},
volume = {47},
number = {4},
pages = {1377--1392},
abstract = {Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. In the present work, we present a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared their ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. In the present work, we present a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared their ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure.
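As a concrete illustration of one family of measures reviewed above, the sketch below (assumed code, not taken from the paper) computes the classic string-edit similarity between two scanpaths after each fixation has been recoded as a letter naming the region of interest it landed in; the region labels and strings are hypothetical.

# Minimal sketch (assumed implementation) of a string-edit scanpath comparison:
# Levenshtein distance between two region-coded fixation sequences, normalized to 0..1.
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(s1, s2):
    """1.0 means identical region sequences; 0.0 means maximally different."""
    longest = max(len(s1), len(s2)) or 1
    return 1.0 - levenshtein(s1, s2) / longest

# Example: two viewers scanning regions A-E in slightly different orders
print(scanpath_similarity("ABCDE", "ABDCE"))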

@article{Anderson2015b,
title = {It depends on when you look at it: Salience influences eye movements in natural scene viewing and search early in time},
author = {Nicola C Anderson and Eduard Ort and Wouter Kruijne and Martijn Meeter and Mieke Donk},
doi = {10.1167/15.5.9},
year = {2015},
date = {2015-01-01},
journal = {Journal of Vision},
volume = {15},
number = {5},
pages = {1--22},
abstract = {It is generally accepted that salience affects eye movements in simple artificially created search displays. However, no such consensus exists for eye movements in natural scenes, with several reports arguing that it is mostly high-level cognitive factors that control oculomotor behavior in natural scenes. Here, we manipulate the salience distribution across images by decreasing or increasing the contrast in a gradient across the image. We recorded eye movements in an encoding task (Experiment 1) and a visual search task (Experiment 2) and analyzed the relationship between the latency of fixations and subsequent saccade targeting throughout scene viewing. We find that short-latency first saccades are more likely to land on a region of the image with high salience than long-latency and subsequent saccades in both the encoding and visual search tasks. This implies that salience indeed influences oculomotor behavior in natural scenes, albeit on a different timescale than previously reported. We discuss our findings in relation to current theories of saccade control in natural scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

It is generally accepted that salience affects eye movements in simple artificially created search displays. However, no such consensus exists for eye movements in natural scenes, with several reports arguing that it is mostly high-level cognitive factors that control oculomotor behavior in natural scenes. Here, we manipulate the salience distribution across images by decreasing or increasing the contrast in a gradient across the image. We recorded eye movements in an encoding task (Experiment 1) and a visual search task (Experiment 2) and analyzed the relationship between the latency of fixations and subsequent saccade targeting throughout scene viewing. We find that short-latency first saccades are more likely to land on a region of the image with high salience than long-latency and subsequent saccades in both the encoding and visual search tasks. This implies that salience indeed influences oculomotor behavior in natural scenes, albeit on a different timescale than previously reported. We discuss our findings in relation to current theories of saccade control in natural scenes.
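The contrast-gradient manipulation described above can be illustrated with a short sketch. The Python code below is an assumed implementation, not the authors' stimulus-generation pipeline: it rescales local contrast around the mean luminance with a left-to-right linear ramp so that salience increases toward one side of a greyscale image; the ramp endpoints are illustrative parameters.

# Minimal sketch (assumed implementation) of a left-to-right contrast gradient
# applied to a greyscale image, modulating local contrast around the mean luminance.
import numpy as np

def apply_contrast_gradient(img, low=0.2, high=1.0):
    """img: 2D array of greyscale values in 0..1. Contrast ramps from `low` (left) to `high` (right)."""
    img = np.asarray(img, dtype=float)
    mean = img.mean()
    gradient = np.linspace(low, high, img.shape[1])[None, :]  # one contrast scale per column
    out = mean + (img - mean) * gradient
    return np.clip(out, 0.0, 1.0)

# Example on random "scene" data (stands in for a natural-scene image)
scene = np.random.rand(240, 320)
graded = apply_contrast_gradient(scene)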

@article{Anderson2016,
title = {The influence of a scene preview on eye movement behavior in natural scenes},
author = {Nicola C Anderson and Mieke Donk and Martijn Meeter},
doi = {10.3758/s13423-016-1035-4},
year = {2016},
date = {2016-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {23},
pages = {1794--1801},
publisher = {Springer},
abstract = {Rich contextual and semantic information can be extracted from only a brief presentation of a natural scene. This is presumed to be activated quickly enough to guide initial eye movements into a scene. However, early, short-latency eye movements in natural scenes have been shown to be dependent on the salience distribution across the image (Anderson, Ort, Kruijne, Meeter, & Donk, 2015). In the present work, we manipulated the salience distribution across a natural scene by changing the global contrast. We showed participants a brief real or nonsense preview of the scene and examined the time-course of eye movement guidance. A real preview decreased the latency and increased the amplitude of initial saccades into the image, suggesting that the preview allowed observers to obtain additional contextual information that would otherwise not be available. However, the preview did not completely override the initial tendency for short-latency saccades to be guided by the underlying salience distribution of the image. We discuss these findings in the context of oculomotor selection based on the integration of contextual information and low-level features in a natural scene.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Rich contextual and semantic information can be extracted from only a brief presentation of a natural scene. This is presumed to be activated quickly enough to guide initial eye movements into a scene. However, early, short-latency eye movements in natural scenes have been shown to be dependent on the salience distribution across the image (Anderson, Ort, Kruijne, Meeter, & Donk, 2015). In the present work, we manipulated the salience distribution across a natural scene by changing the global contrast. We showed participants a brief real or nonsense preview of the scene and examined the time-course of eye movement guidance. A real preview decreased the latency and increased the amplitude of initial saccades into the image, suggesting that the preview allowed observers to obtain additional contextual information that would otherwise not be available. However, the preview did not completely override the initial tendency for short-latency saccades to be guided by the underlying salience distribution of the image. We discuss these findings in the context of oculomotor selection based on the integration of contextual information and low-level features in a natural scene.

@article{Anderson2017,
title = {Visual population receptive fields in people with schizophrenia have reduced inhibitory surrounds},
author = {Elaine J Anderson and Marc S Tibber and Sam D Schwarzkopf and Sukhwinder S Shergill and Emilio Fernandez-Egea and Geraint Rees and Steven C Dakin},
doi = {10.1523/JNEUROSCI.3620-15.2016},
year = {2017},
date = {2017-01-01},
journal = {Journal of Neuroscience},
volume = {37},
number = {6},
pages = {1546--1556},
abstract = {People with schizophrenia (SZ) experience abnormal visual perception on a range of visual tasks, which have been linked to abnormal synaptic transmission and an imbalance between cortical excitation and inhibition. However, differences in the underlying architecture of visual cortex neurons, which might explain these visual anomalies, have yet to be reported in vivo. Here, we probed the neural basis of these deficits using fMRI and population receptive field (pRF) mapping to infer properties of visually responsive neurons in people with SZ. We employed a difference-of-Gaussian model to capture the center-surround configuration of the pRF, providing critical information about the spatial scale of the pRF's inhibitory surround. Our analysis reveals that SZ is associated with reduced pRF size in early retinotopic visual cortex, as well as a reduction in size and depth of the inhibitory surround in V1, V2, and V4. We consider how reduced inhibition might explain the diverse range of visual deficits reported in SZ.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

People with schizophrenia (SZ) experience abnormal visual perception on a range of visual tasks, which have been linked to abnormal synaptic transmission and an imbalance between cortical excitation and inhibition. However, differences in the underlying architecture of visual cortex neurons, which might explain these visual anomalies, have yet to be reported in vivo. Here, we probed the neural basis of these deficits using fMRI and population receptive field (pRF) mapping to infer properties of visually responsive neurons in people with SZ. We employed a difference-of-Gaussian model to capture the center-surround configuration of the pRF, providing critical information about the spatial scale of the pRF's inhibitory surround. Our analysis reveals that SZ is associated with reduced pRF size in early retinotopic visual cortex, as well as a reduction in size and depth of the inhibitory surround in V1, V2, and V4. We consider how reduced inhibition might explain the diverse range of visual deficits reported in SZ.
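To make the difference-of-Gaussians pRF model mentioned above concrete, the sketch below uses an assumed, simplified parameterization (not the authors' fitting procedure): a center-surround profile defined as an excitatory Gaussian minus a broader, weaker inhibitory Gaussian. The widths and surround amplitude are illustrative values.

# Minimal sketch (assumed parameterization) of a difference-of-Gaussians pRF profile:
# response as a function of distance from the pRF center, in degrees of visual angle.
import numpy as np

def dog_prf(distance, sigma_center=1.0, sigma_surround=3.0, surround_amp=0.4):
    """Excitatory center Gaussian minus a broader, weaker inhibitory surround Gaussian."""
    center = np.exp(-distance**2 / (2 * sigma_center**2))
    surround = surround_amp * np.exp(-distance**2 / (2 * sigma_surround**2))
    return center - surround

# Example: profile sampled from the pRF center out to 10 degrees eccentricity
r = np.linspace(0, 10, 101)
profile = dog_prf(r)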

@article{Anderson2017a,
title = {Salient object changes influence overt attentional prioritization and object-based targeting in natural scenes},
author = {Nicola C Anderson and Mieke Donk},
doi = {10.1371/journal.pone.0172132},
year = {2017},
date = {2017-01-01},
journal = {PLoS ONE},
volume = {12},
number = {2},
pages = {1--14},
abstract = {A change to an object in natural scenes attracts attention when it occurs during a fixation. However, when a change occurs during a saccade, and is masked by saccadic suppression, it typically does not capture the gaze in a bottom-up manner. In the present work, we investigated how the type and direction of salient changes to objects affect the prioritization and targeting of objects in natural scenes. We asked observers to look around a scene in preparation for a later memory test. After a period of time, an object in the scene was increased or decreased in salience either during a fixation (with a transient signal) or during a saccade (without transient signal), or it was not changed at all. Changes that were made during a fixation attracted the eyes both when the change involved an increase and a decrease in salience. However, changes that were made during a saccade only captured the eyes when the change was an increase in salience, relative to the baseline no-change condition. These results suggest that the prioritization of object changes can be influenced by the underlying salience of the changed object. In addition, object changes that occurred with a transient signal (which is itself a salient signal) resulted in more central object targeting. Taken together, our results suggest that salient signals in a natural scene are an important component in both object prioritization and targeting in natural scene viewing, insofar as they align with object locations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

A change to an object in natural scenes attracts attention when it occurs during a fixation. However, when a change occurs during a saccade, and is masked by saccadic suppression, it typically does not capture the gaze in a bottom-up manner. In the present work, we investigated how the type and direction of salient changes to objects affect the prioritization and targeting of objects in natural scenes. We asked observers to look around a scene in preparation for a later memory test. After a period of time, an object in the scene was increased or decreased in salience either during a fixation (with a transient signal) or during a saccade (without transient signal), or it was not changed at all. Changes that were made during a fixation attracted the eyes both when the change involved an increase and a decrease in salience. However, changes that were made during a saccade only captured the eyes when the change was an increase in salience, relative to the baseline no-change condition. These results suggest that the prioritization of object changes can be influenced by the underlying salience of the changed object. In addition, object changes that occurred with a transient signal (which is itself a salient signal) resulted in more central object targeting. Taken together, our results suggest that salient signals in a natural scene are an important component in both object prioritization and targeting in natural scene viewing, insofar as they align with object locations.

@article{Anderson2018a,
title = {On the representational nature of value-driven spatial attentional biases},
author = {Brian A Anderson and Haena Kim},
doi = {10.1152/jn.00489.2018},
year = {2018},
date = {2018-01-01},
journal = {Journal of Neurophysiology},
volume = {120},
number = {5},
pages = {2654--2658},
abstract = {Reward learning biases attention toward both reward-associated objects and reward-associated regions of space. The relationship between objects and space in the value-based control of attention, as well as the contextual specificity of space-reward pairings, remains unclear. In the present study, using a free-viewing task, we provide evidence of overt attentional biases toward previously rewarded regions of texture scenes that lack objects. When scrutinizing a texture scene, participants look more frequently toward, and spend a longer amount of time looking at, regions that they have repeatedly oriented to in the past as a result of performance feedback. These biases were scene specific, such that different spatial contexts produced different patterns of habitual spatial orienting. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Reward learning biases attention toward both reward-associated objects and reward-associated regions of space. The relationship between objects and space in the value-based control of attention, as well as the contextual specificity of space-reward pairings, remains unclear. In the present study, using a free-viewing task, we provide evidence of overt attentional biases toward previously rewarded regions of texture scenes that lack objects. When scrutinizing a texture scene, participants look more frequently toward, and spend a longer amount of time looking at, regions that they have repeatedly oriented to in the past as a result of performance feedback. These biases were scene specific, such that different spatial contexts produced different patterns of habitual spatial orienting. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner.

@article{Anderson2018b,
title = {Test–retest reliability of value-driven attentional capture},
author = {Brian A Anderson and Haena Kim},
doi = {10.3758/s13428-018-1079-7},
year = {2018},
date = {2018-01-01},
journal = {Behavior Research Methods},
pages = {1--7},
publisher = {Springer},
abstract = {Attention is biased toward learned predictors of reward. The degree to which attention is automatically drawn to arbitrary reward cues has been linked to a variety of psychopathologies, including drug dependence, HIV-risk behaviors, depressive symptoms, and attention deficit/hyperactivity disorder. In the context of addiction specifically, attentional biases toward drug cues have been related to drug craving and treatment outcomes. Given the potential role of value-based attention in psychopathology, the ability to quantify the magnitude of such bias before and after a treatment intervention in order to assess treatment-related changes in attention allocation would be desirable. However, the test–retest reliability of value-driven attentional capture by arbitrary reward cues has not been established. In the present study, we show that an oculomotor measure of value-driven attentional capture produces highly robust test–retest reliability for a behavioral assessment, whereas the response time (RT) measure more commonly used in the attentional bias literature does not. Our findings provide methodological support for the ability to obtain a reliable measure of susceptibility to value-driven attentional capture at multiple points in time, and they highlight a limitation of RT-based measures that should inform the use of attentional-bias tasks as an assessment tool.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Attention is biased toward learned predictors of reward. The degree to which attention is automatically drawn to arbitrary reward cues has been linked to a variety of psychopathologies, including drug dependence, HIV-risk behaviors, depressive symptoms, and attention deficit/hyperactivity disorder. In the context of addiction specifically, attentional biases toward drug cues have been related to drug craving and treatment outcomes. Given the potential role of value-based attention in psychopathology, the ability to quantify the magnitude of such bias before and after a treatment intervention in order to assess treatment-related changes in attention allocation would be desirable. However, the test–retest reliability of value-driven attentional capture by arbitrary reward cues has not been established. In the present study, we show that an oculomotor measure of value-driven attentional capture produces highly robust test–retest reliability for a behavioral assessment, whereas the response time (RT) measure more commonly used in the attentional bias literature does not. Our findings provide methodological support for the ability to obtain a reliable measure of susceptibility to value-driven attentional capture at multiple points in time, and they highlight a limitation of RT-based measures that should inform the use of attentional-bias tasks as an assessment tool.
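As a minimal illustration of the test-retest logic described above (an assumed analysis, not the authors' pipeline), the sketch below estimates reliability as the Pearson correlation between each participant's capture score in two sessions; the scores shown are hypothetical.

# Minimal sketch (assumed analysis) of a test-retest reliability estimate:
# Pearson correlation between per-participant scores from two testing sessions.
import numpy as np

def test_retest_reliability(session1, session2):
    """Pearson r between the same participants' scores measured at two time points."""
    s1 = np.asarray(session1, dtype=float)
    s2 = np.asarray(session2, dtype=float)
    return float(np.corrcoef(s1, s2)[0, 1])

# Example: hypothetical oculomotor capture scores (% of trials with the first saccade
# landing on the reward-associated distractor) for six participants tested twice
print(test_retest_reliability([22, 35, 18, 40, 27, 31], [25, 33, 20, 42, 24, 30]))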

@article{Anderson2018c,
title = {Mechanisms of value-learning in the guidance of spatial attention},
author = {Brian A Anderson and Haena Kim},
doi = {10.1016/j.cognition.2018.05.005},
year = {2018},
date = {2018-01-01},
journal = {Cognition},
volume = {178},
pages = {26--36},
publisher = {Elsevier},
abstract = {The role of associative reward learning in the guidance of feature-based attention is well established. The extent to which reward learning can modulate spatial attention has been much more controversial. At least one demonstration of a persistent spatial attention bias following space-based associative reward learning has been reported. At the same time, multiple other experiments have been published failing to demonstrate enduring attentional biases towards locations at which a target, if found, yields high reward. This is in spite of evidence that participants use reward structures to inform their decisions where to search, leading some to suggest that, unlike feature-based attention, spatial attention may be impervious to the influence of learning from reward structures. Here, we demonstrate a robust bias towards regions of a scene that participants were previously rewarded for selecting. This spatial bias relies on representations that are anchored to the configuration of objects within a scene. The observed bias appears to be driven specifically by reinforcement learning, and can be observed with equal strength following non-reward corrective feedback. The time course of the bias is consistent with a transient shift of attention, rather than a strategic search pattern, and is evident in eye movement patterns during free viewing. Taken together, our findings reconcile previously conflicting reports and offer an integrative account of how learning from feedback shapes the spatial attention system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The role of associative reward learning in the guidance of feature-based attention is well established. The extent to which reward learning can modulate spatial attention has been much more controversial. At least one demonstration of a persistent spatial attention bias following space-based associative reward learning has been reported. At the same time, multiple other experiments have been published failing to demonstrate enduring attentional biases towards locations at which a target, if found, yields high reward. This is in spite of evidence that participants use reward structures to inform their decisions where to search, leading some to suggest that, unlike feature-based attention, spatial attention may be impervious to the influence of learning from reward structures. Here, we demonstrate a robust bias towards regions of a scene that participants were previously rewarded for selecting. This spatial bias relies on representations that are anchored to the configuration of objects within a scene. The observed bias appears to be driven specifically by reinforcement learning, and can be observed with equal strength following non-reward corrective feedback. The time course of the bias is consistent with a transient shift of attention, rather than a strategic search pattern, and is evident in eye movement patterns during free viewing. Taken together, our findings reconcile previously conflicting reports and offer an integrative account of how learning from feedback shapes the spatial attention system.

@article{Andersson2011,
title = {I see what you're saying: The integration of complex speech and scenes during language comprehension},
author = {Richard Andersson and Fernanda Ferreira and John M Henderson},
doi = {10.1016/j.actpsy.2011.01.007},
year = {2011},
date = {2011-01-01},
journal = {Acta Psychologica},
volume = {137},
number = {2},
pages = {208--216},
publisher = {Elsevier B.V.},
abstract = {The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye-movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye-movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.

@article{Andoh2015,
title = {Asymmetric interhemispheric transfer in the auditory network: Evidence from TMS, resting-state fMRI, and diffusion imaging},
author = {Jamila Andoh and Reiko Matsushita and Robert J Zatorre},
doi = {10.1523/JNEUROSCI.2333-15.2015},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neuroscience},
volume = {35},
number = {43},
pages = {14602--14611},
abstract = {Hemispheric asymmetries in human auditory cortical function and structure are still highly debated. Brain stimulation approaches can complement correlational techniques by uncovering causal influences. Previous studies have shown asymmetrical effects of transcranial magnetic stimulation (TMS) on task performance, but it is unclear whether these effects are task-specific or reflect intrinsic network properties. To test how modulation of auditory cortex (AC) influences functional networks and whether this influence is asymmetrical, the present study measured resting-state fMRI connectivity networks in 17 healthy volunteers before and immediately after TMS (continuous theta burst stimulation) to the left or right AC, and the vertex as a control. We also examined the relationship between TMS-induced interhemispheric signal propagation and anatomical properties of callosal auditory fibers as measured with diffusion-weighted MRI. We found that TMS to the right AC, but not the left, resulted in widespread connectivity decreases in auditory- and motor-related networks in the resting state. Individual differences in the degree of change in functional connectivity between auditory cortices after TMS applied over the right AC were negatively related to the volume of callosal auditory fibers. The findings show that TMS-induced network modulation occurs, even in the absence of an explicit task, and that the magnitude of the effect differs across individuals as a function of callosal structure, supporting a role for the corpus callosum in mediating functional asymmetry. The findings support theoretical models emphasizing hemispheric differences in network organization and are of practical significance in showing that brain stimulation studies need to take network-level effects into account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hemispheric asymmetries in human auditory cortical function and structure are still highly debated. Brain stimulation approaches can complement correlational techniques by uncovering causal influences. Previous studies have shown asymmetrical effects of transcranial magnetic stimulation (TMS) on task performance, but it is unclear whether these effects are task-specific or reflect intrinsic network properties. To test how modulation of auditory cortex (AC) influences functional networks and whether this influence is asymmetrical, the present study measured resting-state fMRI connectivity networks in 17 healthy volunteers before and immediately after TMS (continuous theta burst stimulation) to the left or right AC, and the vertex as a control. We also examined the relationship between TMS-induced interhemispheric signal propagation and anatomical properties of callosal auditory fibers as measured with diffusion-weighted MRI. We found that TMS to the right AC, but not the left, resulted in widespread connectivity decreases in auditory- and motor-related networks in the resting state. Individual differences in the degree of change in functional connectivity between auditory cortices after TMS applied over the right AC were negatively related to the volume of callosal auditory fibers. The findings show that TMS-induced network modulation occurs, even in the absence of an explicit task, and that the magnitude of the effect differs across individuals as a function of callosal structure, supporting a role for the corpus callosum in mediating functional asymmetry. The findings support theoretical models emphasizing hemispheric differences in network organization and are of practical significance in showing that brain stimulation studies need to take network-level effects into account.

@article{Andoh2017,
title = {How restful is it with all that noise? Comparison of Interleaved silent steady state (ISSS) and conventional imaging in resting-state fMRI},
author = {J Andoh and M Ferreira and I R Leppert and R Matsushita and B Pike and R J Zatorre},
doi = {10.1016/j.neuroimage.2016.11.065},
year = {2017},
date = {2017-01-01},
journal = {NeuroImage},
volume = {147},
pages = {726--735},
publisher = {Elsevier},
abstract = {Resting-state fMRI studies have become very important in cognitive neuroscience because they are able to identify BOLD fluctuations in brain circuits involved in motor, cognitive, or perceptual processes without the use of an explicit task. Such approaches have been fruitful when applied to various disordered populations, or to children or the elderly. However, insufficient attention has been paid to the consequences of the loud acoustic scanner noise associated with conventional fMRI acquisition, which could be an important confounding factor affecting auditory and/or cognitive networks in resting-state fMRI. Several approaches have been developed to mitigate the effects of acoustic noise on fMRI signals, including sparse sampling protocols and interleaved silent steady state (ISSS) acquisition methods, the latter being used only for task-based fMRI. Here, we developed an ISSS protocol for resting-state fMRI (rs-ISSS) consisting of rapid acquisition of a set of echo planar imaging volumes following each silent period, during which the steady state longitudinal magnetization was maintained with a train of relatively silent slice-selective excitation pulses. We evaluated the test-retest reliability of intensity and spatial extent of connectivity networks of fMRI BOLD signal across three different days for rs-ISSS and compared it with a standard resting-state fMRI (rs-STD). We also compared the strength and distribution of connectivity networks between rs-ISSS and rs-STD. We found that both rs-ISSS and rs-STD showed high reproducibility of fMRI signal across days. In addition, rs-ISSS showed a more robust pattern of functional connectivity within the somatosensory and motor networks, as well as an auditory network compared with rs-STD. An increased connectivity between the default mode network and the language network and with the anterior cingulate cortex (ACC) network was also found for rs-ISSS compared with rs-STD. Finally, region of interest analysis showed higher interhemispheric connectivity in Heschl's gyri in rs-ISSS compared with rs-STD, with lower variability across days. The present findings suggest that rs-ISSS may be advantageous for detecting network connectivity in a less noisy environment, and that resting-state studies carried out with standard scanning protocols should consider the potential effects of loud noise on the measured networks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Resting-state fMRI studies have become very important in cognitive neuroscience because they are able to identify BOLD fluctuations in brain circuits involved in motor, cognitive, or perceptual processes without the use of an explicit task. Such approaches have been fruitful when applied to various disordered populations, or to children or the elderly. However, insufficient attention has been paid to the consequences of the loud acoustic scanner noise associated with conventional fMRI acquisition, which could be an important confounding factor affecting auditory and/or cognitive networks in resting-state fMRI. Several approaches have been developed to mitigate the effects of acoustic noise on fMRI signals, including sparse sampling protocols and interleaved silent steady state (ISSS) acquisition methods, the latter being used only for task-based fMRI. Here, we developed an ISSS protocol for resting-state fMRI (rs-ISSS) consisting of rapid acquisition of a set of echo planar imaging volumes following each silent period, during which the steady state longitudinal magnetization was maintained with a train of relatively silent slice-selective excitation pulses. We evaluated the test-retest reliability of intensity and spatial extent of connectivity networks of fMRI BOLD signal across three different days for rs-ISSS and compared it with a standard resting-state fMRI (rs-STD). We also compared the strength and distribution of connectivity networks between rs-ISSS and rs-STD. We found that both rs-ISSS and rs-STD showed high reproducibility of fMRI signal across days. In addition, rs-ISSS showed a more robust pattern of functional connectivity within the somatosensory and motor networks, as well as an auditory network compared with rs-STD. An increased connectivity between the default mode network and the language network and with the anterior cingulate cortex (ACC) network was also found for rs-ISSS compared with rs-STD. Finally, region of interest analysis showed higher interhemispheric connectivity in Heschl's gyri in rs-ISSS compared with rs-STD, with lower variability across days. The present findings suggest that rs-ISSS may be advantageous for detecting network connectivity in a less noisy environment, and that resting-state studies carried out with standard scanning protocols should consider the potential effects of loud noise on the measured networks.

@article{Andrews2004,
title = {Eye movements and morphological segmentation of compound words: There is a mouse in mousetrap},
author = {Sally Andrews and Brett Miller and Keith Rayner},
doi = {10.1080/09541440340000123},
year = {2004},
date = {2004-01-01},
journal = {European Journal of Cognitive Psychology},
volume = {16},
number = {1-2},
pages = {285--311},
abstract = {In two experiments, readers' eye movements were monitored as they read sentences containing compound words. In Experiment 1, the frequency of the first and second morpheme was manipulated in compound words of low whole word frequency. Experiment 2 compared pairs of low frequency compounds with high and low frequency first morphemes but identical second morphemes that were embedded in the same sentence frames. The results showed significant effects of the frequency of both morphemes on gaze duration and total fixation time on the compound words. Regression analyses revealed an influence of whole word frequency on the same measures. The results suggest that morphemic constituents of compound words are activated in the course of retrieving the representation of the whole compound word. The fact that the frequency effects were not confined to fixations on the morphemic constituents themselves implies that saccadic eye movements are implemented before morphemic retrieval has been completed. The results highlight the importance of developing more precise models of the perceptual processes underlying reading and how they interact with the processes involved in lexical retrieval and comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In two experiments, readers' eye movements were monitored as they read sentences containing compound words. In Experiment 1, the frequency of the first and second morpheme was manipulated in compound words of low whole word frequency. Experiment 2 compared pairs of low frequency compounds with high and low frequency first morphemes but identical second morphemes that were embedded in the same sentence frames. The results showed significant effects of the frequency of both morphemes on gaze duration and total fixation time on the compound words. Regression analyses revealed an influence of whole word frequency on the same measures. The results suggest that morphemic constituents of compound words are activated in the course of retrieving the representation of the whole compound word. The fact that the frequency effects were not confined to fixations on the morphemic constituents themselves implies that saccadic eye movements are implemented before morphemic retrieval has been completed. The results highlight the importance of developing more precise models of the perceptual processes underlying reading and how they interact with the processes involved in lexical retrieval and comprehension.

@article{Angele2008,
title = {Parafoveal processing in reading: Manipulating n+1 and n+2 previews simultaneously},
author = {Bernhard Angele and Timothy J Slattery and Jinmian Yang and Reinhold Kliegl and Keith Rayner},
doi = {10.1080/13506280802009704},
year = {2008},
date = {2008-01-01},
journal = {Visual Cognition},
volume = {16},
number = {6},
pages = {697--707},
abstract = {The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading.
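For readers unfamiliar with the boundary paradigm, the following self-contained Python sketch illustrates the general logic of a gaze-contingent display change: the preview version of the sentence is shown until the gaze crosses an invisible boundary, at which point the target version replaces it. The simulated gaze samples and the draw() routine are hypothetical stand-ins for an eye-tracker sample stream and display API, not code from the study.

    # Minimal, self-contained sketch of a gaze-contingent boundary display change
    # (the Rayner, 1975 boundary paradigm). Gaze samples and draw() are simulated
    # stand-ins for an eye tracker and display API, not code from the study.

    def draw(text):
        print("DISPLAY:", text)

    def run_boundary_trial(preview_version, target_version, boundary_x, gaze_samples):
        """Show the preview sentence until gaze crosses boundary_x, then swap in the target."""
        draw(preview_version)
        changed = False
        for x in gaze_samples:                  # one horizontal gaze position per sample
            if not changed and x > boundary_x:
                draw(target_version)            # swap in the target once the boundary is crossed
                changed = True
        return changed

    # Simulated trial: the preview "tda" for word n+1 is replaced by the target word
    # once the eye moves past pixel 300 (boundary placed at the end of word n).
    samples = [120, 150, 180, 240, 310, 330, 360]
    run_boundary_trial("He saw the tda on the table.",
                       "He saw the ace on the table.",
                       boundary_x=300,
                       gaze_samples=samples)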

@article{Angele2011,
title = {Parafoveal processing of word n + 2 during reading: Do the preceding words matter?},
author = {Bernhard Angele and Keith Rayner},
doi = {10.1037/a0023096},
year = {2011},
date = {2011-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {37},
number = {4},
pages = {1210--1220},
abstract = {We used the boundary paradigm (Rayner, 1975) to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n + 2 preprocessing effects. In Experiment 1, we tested whether parafoveal processing of the second word to the right of fixation (n + 2) takes place only when the preceding word (n + 1) is very short (Angele, Slattery, Yang, Kliegl, & Rayner, 2008); word n + 1 was always a three-letter word. Before crossing the boundary, preview for both words n + 1 and n + 2 was either incorrect or correct. In a third condition, only the preview for word n + 1 was incorrect. In Experiment 2, we tested whether word frequency of the preboundary word (n) had an influence on the presence of preview benefit and parafoveal-on-foveal effects. Additionally, Experiment 2 contained a condition in which only preview of n + 2 was incorrect. Our findings suggest that effects of parafoveal n + 2 preprocessing are not modulated by either n + 1 word length or n frequency. Furthermore, we did not observe any evidence of parafoveal lexical preprocessing of word n + 2 in either experiment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

We used the boundary paradigm (Rayner, 1975) to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n + 2 preprocessing effects. In Experiment 1, we tested whether parafoveal processing of the second word to the right of fixation (n + 2) takes place only when the preceding word (n + 1) is very short (Angele, Slattery, Yang, Kliegl, & Rayner, 2008); word n + 1 was always a three-letter word. Before crossing the boundary, preview for both words n + 1 and n + 2 was either incorrect or correct. In a third condition, only the preview for word n + 1 was incorrect. In Experiment 2, we tested whether word frequency of the preboundary word (n) had an influence on the presence of preview benefit and parafoveal-on-foveal effects. Additionally, Experiment 2 contained a condition in which only preview of n + 2 was incorrect. Our findings suggest that effects of parafoveal n + 2 preprocessing are not modulated by either n + 1 word length or n frequency. Furthermore, we did not observe any evidence of parafoveal lexical preprocessing of word n + 2 in either experiment.

@article{Angele2013,
title = {Processing the in the parafovea: Are articles skipped automatically?},
author = {Bernhard Angele and Keith Rayner},
doi = {10.1037/a0029294},
year = {2013},
date = {2013-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {39},
number = {2},
pages = {649--662},
abstract = {One of the words that readers of English skip most often is the definite article the. Most accounts of reading assume that in order for a reader to skip a word, it must have received some lexical processing. The definite article is skipped so regularly, however, that the oculomotor system might have learned to skip the letter string t-h-e automatically. We tested whether skipping of articles in English is sensitive to context information or whether it is truly automatic in the sense that any occurrence of the letter string the will trigger a skip. This was done using the gaze-contingent boundary paradigm (Rayner, 1975) to provide readers with false parafoveal previews of the article the. All experimental sentences contained a short target verb, the preview of which could be correct (i.e., identical to the actual subsequent word in the sentence; e.g., ace), a nonword (tda), or an infelicitous article preview (the). Our results indicated that readers tended to skip the infelicitous the previews frequently, suggesting that, in many cases, they seemed to be unable to detect the syntactic anomaly in the preview and based their skipping decision solely on the orthographic properties of the article. However, there was some evidence that readers sometimes detected the anomaly, as they also showed increased skipping of the pretarget word in the the preview condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

One of the words that readers of English skip most often is the definite article the. Most accounts of reading assume that in order for a reader to skip a word, it must have received some lexical processing. The definite article is skipped so regularly, however, that the oculomotor system might have learned to skip the letter string t-h-e automatically. We tested whether skipping of articles in English is sensitive to context information or whether it is truly automatic in the sense that any occurrence of the letter string the will trigger a skip. This was done using the gaze-contingent boundary paradigm (Rayner, 1975) to provide readers with false parafoveal previews of the article the. All experimental sentences contained a short target verb, the preview of which could be correct (i.e., identical to the actual subsequent word in the sentence; e.g., ace), a nonword (tda), or an infelicitous article preview (the). Our results indicated that readers tended to skip the infelicitous the previews frequently, suggesting that, in many cases, they seemed to be unable to detect the syntactic anomaly in the preview and based their skipping decision solely on the orthographic properties of the article. However, there was some evidence that readers sometimes detected the anomaly, as they also showed increased skipping of the pretarget word in the the preview condition.

@article{Angele2013a,
title = {Eye movements and parafoveal preview of compound words: Does morpheme order matter?},
author = {Bernhard Angele and Keith Rayner},
doi = {10.1080/17470218.2011.644572},
year = {2013},
date = {2013-01-01},
journal = {The Quarterly Journal of Experimental Psychology},
volume = {66},
number = {3},
pages = {505--526},
abstract = {Recently, there has been considerable debate about whether readers can identify multiple words in parallel or whether they are limited to a serial mode of word identification, processing one word at a time (see, e.g., Reichle, Liversedge, Pollatsek, & Rayner, 2009). Similar questions can be applied to bimorphemic compound words: Do readers identify all the constituents of a compound word in parallel, and does it matter which of the morphemes is identified first? We asked subjects to read compound words embedded in sentences while monitoring their eye movements. Using the boundary paradigm (Rayner, 1975), we manipulated the preview that subjects received of the compound word before they fixated it. In particular, the morpheme order of the preview was either normal (cowboy) or reversed (boycow). Additionally, we manipulated the preview availability for each of the morphemes separately. Preview was thus available for the first morpheme only (cowtxg), for the second morpheme only (enzboy), or for neither of the morphemes (enztxg). We report three major findings: First, there was an effect of morpheme order on gaze durations measured on the compound word, indicating that, as expected, readers obtained a greater preview benefit when the preview presented the morphemes in the correct order than when their order was reversed. Second, gaze durations on the compound word were influenced not only by preview availability for the first, but also by that for the second morpheme. Finally, and most importantly, the results show that readers are able to extract some morpheme information even from a reverse order preview. In summary, readers obtain preview benefit from both constituents of a short compound word, even when the preview does not reflect the correct morpheme order.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Recently, there has been considerable debate about whether readers can identify multiple words in parallel or whether they are limited to a serial mode of word identification, processing one word at a time (see, e.g., Reichle, Liversedge, Pollatsek, & Rayner, 2009). Similar questions can be applied to bimorphemic compound words: Do readers identify all the constituents of a compound word in parallel, and does it matter which of the morphemes is identified first? We asked subjects to read compound words embedded in sentences while monitoring their eye movements. Using the boundary paradigm (Rayner, 1975), we manipulated the preview that subjects received of the compound word before they fixated it. In particular, the morpheme order of the preview was either normal (cowboy) or reversed (boycow). Additionally, we manipulated the preview availability for each of the morphemes separately. Preview was thus available for the first morpheme only (cowtxg), for the second morpheme only (enzboy), or for neither of the morphemes (enztxg). We report three major findings: First, there was an effect of morpheme order on gaze durations measured on the compound word, indicating that, as expected, readers obtained a greater preview benefit when the preview presented the morphemes in the correct order than when their order was reversed. Second, gaze durations on the compound word were influenced not only by preview availability for the first, but also by that for the second morpheme. Finally, and most importantly, the results show that readers are able to extract some morpheme information even from a reverse order preview. In summary, readers obtain preview benefit from both constituents of a short compound word, even when the preview does not reflect the correct morpheme order.

@article{Angele2013b,
title = {Parafoveal-foveal overlap can facilitate ongoing word identification during reading: Evidence from eye movements},
author = {Bernhard Angele and Randy Tran and Keith Rayner},
doi = {10.1037/a0029492},
year = {2013},
date = {2013-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {39},
number = {2},
pages = {526--538},
abstract = {Readers continuously receive parafoveal information about the upcoming word in addition to the foveal information about the currently fixated word. Previous research (Inhoff, Radach, Starr, & Greenberg, 2000) showed that the presence of a parafoveal word that was similar to the foveal word facilitated processing of the foveal word. We used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the parafoveal information that subjects received before or while fixating a target word (e.g., news) within a sentence. Specifically, a reader's parafovea could contain a repetition of the target (news), a correct preview of the posttarget word (once), an unrelated word (warm), random letters (cxmr), a nonword neighbor of the target (niws), a semantically related word (tale), or a nonword neighbor of that word (tule). Target fixation times were significantly lower in the parafoveal repetition condition than in all other conditions, suggesting that foveal processing can be facilitated by parafoveal repetition. We present a simple model framework that can account for these effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Readers continuously receive parafoveal information about the upcoming word in addition to the foveal information about the currently fixated word. Previous research (Inhoff, Radach, Starr, & Greenberg, 2000) showed that the presence of a parafoveal word that was similar to the foveal word facilitated processing of the foveal word. We used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the parafoveal information that subjects received before or while fixating a target word (e.g., news) within a sentence. Specifically, a reader's parafovea could contain a repetition of the target (news), a correct preview of the posttarget word (once), an unrelated word (warm), random letters (cxmr), a nonword neighbor of the target (niws), a semantically related word (tale), or a nonword neighbor of that word (tule). Target fixation times were significantly lower in the parafoveal repetition condition than in all other conditions, suggesting that foveal processing can be facilitated by parafoveal repetition. We present a simple model framework that can account for these effects.

@article{Angele2014,
title = {The effect of high- and low-frequency previews and sentential fit on word skipping during reading},
author = {Bernhard Angele and Abby E Laishley and Keith Rayner and Simon P Liversedge},
doi = {10.1037/a0036396},
year = {2014},
date = {2014-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {40},
number = {4},
pages = {1181--1203},
abstract = {In a previous gaze-contingent boundary experiment, Angele and Rayner (2013) found that readers are likely to skip a word that appears to be the definite article the even when syntactic constraints do not allow for articles to occur in that position. In the present study, we investigated whether the word frequency of the preview of a 3-letter target word influences a reader's decision to fixate or skip that word. We found that the word frequency rather than the felicitousness (syntactic fit) of the preview affected how often the upcoming word was skipped. These results indicate that visual information about the upcoming word trumps information from the sentence context when it comes to making a skipping decision. Skipping parafoveal instances of the therefore may simply be an extreme case of skipping high-frequency words.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In a previous gaze-contingent boundary experiment, Angele and Rayner (2013) found that readers are likely to skip a word that appears to be the definite article the even when syntactic constraints do not allow for articles to occur in that position. In the present study, we investigated whether the word frequency of the preview of a 3-letter target word influences a reader's decision to fixate or skip that word. We found that the word frequency rather than the felicitousness (syntactic fit) of the preview affected how often the upcoming word was skipped. These results indicate that visual information about the upcoming word trumps information from the sentence context when it comes to making a skipping decision. Skipping parafoveal instances of the therefore may simply be an extreme case of skipping high-frequency words.

@article{Angele2015,
title = {Do successor effects in reading reflect lexical parafoveal processing? Evidence from corpus-based and experimental eye movement data},
author = {Bernhard Angele and Elizabeth R Schotter and Timothy J Slattery and Tara L Tenenbaum and Klinton Bicknell and Keith Rayner},
doi = {10.1016/j.jml.2014.11.003},
year = {2015},
date = {2015-01-01},
journal = {Journal of Memory and Language},
volume = {79-80},
pages = {76--96},
publisher = {Elsevier Inc.},
abstract = {In the past, most research on eye movements during reading involved a limited number of subjects reading sentences with specific experimental manipulations on target words. Such experiments usually only analyzed eye-movement measures on and around the target word. Recently, some researchers have started collecting larger data sets involving large and diverse groups of subjects reading large numbers of sentences, enabling them to consider a larger number of influences and study larger and more representative subject groups. In such corpus studies, most of the words in a sentence are analyzed. The complexity of the design of corpus studies and the many potentially uncontrolled influences in such studies pose new issues concerning the analysis methods and interpretability of the data. In particular, several corpus studies of reading have found an effect of successor word (n + 1) frequency on current word (n) fixation times, while studies employing experimental manipulations tend not to. The general interpretation of corpus studies suggests that readers obtain parafoveal lexical information from the upcoming word before they have finished identifying the current word, while the experimental manipulations shed doubt on this claim. In the present study, we combined a corpus analysis approach with an experimental manipulation (i.e., a parafoveal modification of the moving mask technique, Rayner & Bertera, 1979), so that either (a) word n + 1, (b) word n + 2, (c) both words, or (d) neither word was masked. We found that denying preview for either or both parafoveal words increased average fixation times. Furthermore, we found successor effects similar to those reported in the corpus studies. Importantly, these successor effects were found even when the parafoveal word was masked, suggesting that apparent successor frequency effects may be due to causes that are unrelated to lexical parafoveal preprocessing. We discuss the implications of this finding both for parallel and serial accounts of word identification and for the interpretability of large correlational studies of word identification in reading in general.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In the past, most research on eye movements during reading involved a limited number of subjects reading sentences with specific experimental manipulations on target words. Such experiments usually only analyzed eye-movement measures on and around the target word. Recently, some researchers have started collecting larger data sets involving large and diverse groups of subjects reading large numbers of sentences, enabling them to consider a larger number of influences and study larger and more representative subject groups. In such corpus studies, most of the words in a sentence are analyzed. The complexity of the design of corpus studies and the many potentially uncontrolled influences in such studies pose new issues concerning the analysis methods and interpretability of the data. In particular, several corpus studies of reading have found an effect of successor word (n + 1) frequency on current word (n) fixation times, while studies employing experimental manipulations tend not to. The general interpretation of corpus studies suggests that readers obtain parafoveal lexical information from the upcoming word before they have finished identifying the current word, while the experimental manipulations shed doubt on this claim. In the present study, we combined a corpus analysis approach with an experimental manipulation (i.e., a parafoveal modification of the moving mask technique, Rayner & Bertera, 1979), so that either (a) word n + 1, (b) word n + 2, (c) both words, or (d) neither word was masked. We found that denying preview for either or both parafoveal words increased average fixation times. Furthermore, we found successor effects similar to those reported in the corpus studies. Importantly, these successor effects were found even when the parafoveal word was masked, suggesting that apparent successor frequency effects may be due to causes that are unrelated to lexical parafoveal preprocessing. We discuss the implications of this finding both for parallel and serial accounts of word identification and for the interpretability of large correlational studies of word identification in reading in general.

We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Journal of Experimental Psychology: Human Perception and Performance, 37, 1924–1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage.
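Sensitivity to display changes in paradigms like this one is commonly summarized with d′ from signal detection theory. The sketch below shows that generic calculation using the standard-library normal inverse CDF; the log-linear correction and the example counts are illustrative assumptions rather than the authors' analysis.

    # Generic d-prime calculation for a change-detection task (illustrative, not the
    # authors' analysis). A log-linear correction avoids infinite z-scores when hit or
    # false-alarm rates are 0 or 1; the correction choice is an assumption.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Example: better detection of changes from nonwordlike than from wordlike previews.
    print(d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45))   # ~ 2.1
    print(d_prime(hits=25, misses=25, false_alarms=5, correct_rejections=45))   # ~ 1.2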

@article{Anible2015,
title = {Sensitivity to verb bias in American Sign Language-English bilinguals},
author = {Benjamin Anible and Paul Twitchell and Gabriel S Waters and Paola E Dussias and Pilar Piñar and Jill P Morford},
doi = {10.1093/deafed/env007},
year = {2015},
date = {2015-01-01},
journal = {Journal of Deaf Studies and Deaf Education},
volume = {20},
number = {3},
pages = {215--228},
abstract = {Native speakers of English are sensitive to the likelihood that a verb will appear in a specific subcategorization frame, known as verb bias. Readers rely on verb bias to help them resolve temporary ambiguity in sentence comprehension. We investigate whether deaf sign–print bilinguals who have acquired English syntactic knowledge primarily through print exposure show sensitivity to English verb biases in both production and comprehension. We first elicited sentence continuations for 100 English verbs as an offline production measure of sensitivity to verb bias. We then collected eye movement records to examine whether deaf bilinguals' online parsing decisions are influenced by English verb bias. The results indicate that exposure to a second language primarily via print is sufficient to influence use of implicit frequency-based characteristics of a language in production and also to inform parsing decisions in comprehension for some, but not all, verbs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Native speakers of English are sensitive to the likelihood that a verb will appear in a specific subcategorization frame, known as verb bias. Readers rely on verb bias to help them resolve temporary ambiguity in sentence comprehension. We investigate whether deaf sign–print bilinguals who have acquired English syntactic knowledge primarily through print exposure show sensitivity to English verb biases in both production and comprehension. We first elicited sentence continuations for 100 English verbs as an offline production measure of sensitivity to verb bias. We then collected eye movement records to examine whether deaf bilinguals' online parsing decisions are influenced by English verb bias. The results indicate that exposure to a second language primarily via print is sufficient to influence use of implicit frequency-based characteristics of a language in production and also to inform parsing decisions in comprehension for some, but not all, verbs.

@article{Ansorge2013,
title = {Effects of relevant and irrelevant color singletons on inhibition of return and attentional capture},
author = {Ulrich Ansorge and Heinz-Werner Priess and Dirk Kerzel},
doi = {10.3758/s13414-013-0521-2},
year = {2013},
date = {2013-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {75},
number = {8},
pages = {1687--1702},
abstract = {We tested whether color singletons lead to saccadic and manual inhibition of return (IOR; i.e., slower responses at cued locations) and whether IOR depended on the relevance of the color singletons. The target display was preceded by a nonpredictive cue display. In three experiments, half of the cues were response-relevant, because participants had to perform a discrimination task at the cued location. With the exception of Experiment 2, none of the cue colors matched the target color. We observed saccadic IOR after color singletons, which was greater for slow than for fast responses. Furthermore, when the relevant cue color matched the target color, we observed attentional capture (i.e., faster responses at cued locations) with rapid responses, but IOR with slower responses, which provides evidence for attentional deallocation. When the cue display was completely response-irrelevant in two additional experiments, we did not find evidence for IOR. Instead, we found attentional capture when the cue color matched the target color. Also, attentional capture was greater for rapid responses and with short cue-target intervals. Thus, IOR emerges when cues are relevant and do not match the target color, whereas attentional capture emerges with relevant and irrelevant cues that match the target color.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

We tested whether color singletons lead to saccadic and manual inhibition of return (IOR; i.e., slower responses at cued locations) and whether IOR depended on the relevance of the color singletons. The target display was preceded by a nonpredictive cue display. In three experiments, half of the cues were response-relevant, because participants had to perform a discrimination task at the cued location. With the exception of Experiment 2, none of the cue colors matched the target color. We observed saccadic IOR after color singletons, which was greater for slow than for fast responses. Furthermore, when the relevant cue color matched the target color, we observed attentional capture (i.e., faster responses at cued locations) with rapid responses, but IOR with slower responses, which provides evidence for attentional deallocation. When the cue display was completely response-irrelevant in two additional experiments, we did not find evidence for IOR. Instead, we found attentional capture when the cue color matched the target color. Also, attentional capture was greater for rapid responses and with short cue-target intervals. Thus, IOR emerges when cues are relevant and do not match the target color, whereas attentional capture emerges with relevant and irrelevant cues that match the target color.

@article{Anton-Erxleben2013,
title = {Independent Effects of Adaptation and Attention on Perceived Speed},
author = {Katharina Anton-Erxleben and Katrin Herrmann and Marisa Carrasco},
doi = {10.1177/0956797612449178},
year = {2013},
date = {2013-01-01},
journal = {Psychological Science},
volume = {24},
number = {2},
pages = {150--159},
abstract = {Adaptation and attention are two mechanisms by which sensory systems manage limited bioenergetic resources: Whereas adaptation decreases sensitivity to stimuli just encountered, attention increases sensitivity to behaviorally relevant stimuli. In the visual system, these changes in sensitivity are accompanied by a change in the appearance of different stimulus dimensions, such as speed. Adaptation causes an underestimation of speed, whereas attention leads to an overestimation of speed. In the two experiments reported here, we investigated whether the effects of these mechanisms interact and how they affect the appearance of stimulus features. We tested the effects of adaptation and the subsequent allocation of attention on perceived speed. A quickly moving adaptor decreased the perceived speed of subsequent stimuli, whereas a slow adaptor did not alter perceived speed. Attention increased perceived speed regardless of the adaptation effect, which indicates that adaptation and attention affect perceived speed independently. Moreover, the finding that attention can alter perceived speed after adaptation indicates that adaptation is not merely a by-product of neuronal fatigue.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Adaptation and attention are two mechanisms by which sensory systems manage limited bioenergetic resources: Whereas adaptation decreases sensitivity to stimuli just encountered, attention increases sensitivity to behaviorally relevant stimuli. In the visual system, these changes in sensitivity are accompanied by a change in the appearance of different stimulus dimensions, such as speed. Adaptation causes an underestimation of speed, whereas attention leads to an overestimation of speed. In the two experiments reported here, we investigated whether the effects of these mechanisms interact and how they affect the appearance of stimulus features. We tested the effects of adaptation and the subsequent allocation of attention on perceived speed. A quickly moving adaptor decreased the perceived speed of subsequent stimuli, whereas a slow adaptor did not alter perceived speed. Attention increased perceived speed regardless of the adaptation effect, which indicates that adaptation and attention affect perceived speed independently. Moreover, the finding that attention can alter perceived speed after adaptation indicates that adaptation is not merely a by-product of neuronal fatigue.

@article{Antzoulatos2016,
title = {Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations},
author = {Evan G Antzoulatos and Earl K Miller},
doi = {10.7554/eLife.17822},
year = {2016},
date = {2016-01-01},
journal = {eLife},
volume = {5},
number = {NOVEMBER2016},
pages = {1--22},
abstract = {Categorization has been associated with distributed networks of the primate brain, including the prefrontal (PFC) and posterior parietal cortices (PPC). Although category-selective spiking in PFC and PPC has been established, the frequency-dependent dynamic interactions of frontoparietal networks are largely unexplored. We trained monkeys to perform a delayed-match-to-spatial-category task while recording spikes and local field potentials from the PFC and PPC with multiple electrodes. We found category-selective beta- and delta-band synchrony between and within the areas. However, in addition to the categories, delta synchrony and spiking activity also reflected irrelevant stimulus dimensions. By contrast, beta synchrony only conveyed information about the task-relevant categories. Further, category-selective PFC neurons were synchronized with PPC beta oscillations, while neurons that carried irrelevant information were not. These results suggest that long-range beta-band synchrony could act as a filter that only supports neural representations of the variables relevant to the task at hand.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Categorization has been associated with distributed networks of the primate brain, including the prefrontal (PFC) and posterior parietal cortices (PPC). Although category-selective spiking in PFC and PPC has been established, the frequency-dependent dynamic interactions of frontoparietal networks are largely unexplored. We trained monkeys to perform a delayed-match-to-spatial-category task while recording spikes and local field potentials from the PFC and PPC with multiple electrodes. We found category-selective beta- and delta-band synchrony between and within the areas. However, in addition to the categories, delta synchrony and spiking activity also reflected irrelevant stimulus dimensions. By contrast, beta synchrony only conveyed information about the task-relevant categories. Further, category-selective PFC neurons were synchronized with PPC beta oscillations, while neurons that carried irrelevant information were not. These results suggest that long-range beta-band synchrony could act as a filter that only supports neural representations of the variables relevant to the task at hand.

Adaptation of the horizontal vestibulo-ocular reflex (HVOR) provides an experimental model for cerebellum-dependent motor learning. We developed an eye movement measuring system and a paradigm for inducing HVOR adaptation in the common marmoset. The HVOR gain in the dark, measured with 10° (peak-to-peak amplitude), 0.11–0.5 Hz turntable oscillation, was around unity. Gain-up and gain-down HVOR adaptation were induced by 1 h of sustained out-of-phase and in-phase 10°, 0.33 Hz combined turntable-screen oscillation in the light, respectively. To examine the role of long-term depression (LTD) of parallel fiber-Purkinje cell synapses, we intraperitoneally applied T-588 or nimesulide, which block the induction of LTD in in vitro or in vivo preparations, 1 h before the test of HVOR adaptation. T-588 (3 and 5 mg/kg body weight) did not affect nonadapted HVOR gains, and impaired both gain-up and gain-down HVOR adaptation. Nimesulide (3 and 6 mg/kg) did not affect nonadapted HVOR gains, and impaired gain-up HVOR adaptation dose-dependently; however, it had very little effect on gain-down HVOR adaptation. These findings are consistent with the results of our study of nimesulide on the adaptation of the horizontal optokinetic response in mice (Le et al., 2010), and support the view that LTD underlies HVOR adaptation.
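As background for the gain measure referred to above, VOR gain is conventionally the amplitude of the compensatory eye movement relative to the amplitude of the head or turntable oscillation at the stimulation frequency. The sketch below estimates that ratio from the Fourier component at the stimulation frequency on synthetic traces; the analysis details are illustrative assumptions, not the measuring system described in the abstract.

    # Illustrative VOR gain estimate: amplitude of the eye trace relative to the
    # turntable trace at the stimulation frequency, taken from the Fourier component
    # at that frequency. Synthetic data; the details are assumptions, not the paper's method.
    import numpy as np

    def amplitude_at(freq_hz, signal, sample_rate_hz):
        """Amplitude of the sinusoidal component of `signal` at freq_hz."""
        t = np.arange(len(signal)) / sample_rate_hz
        c = np.sum(signal * np.exp(-2j * np.pi * freq_hz * t))
        return 2 * np.abs(c) / len(signal)

    def vor_gain(eye_position, table_position, freq_hz, sample_rate_hz):
        return (amplitude_at(freq_hz, eye_position, sample_rate_hz) /
                amplitude_at(freq_hz, table_position, sample_rate_hz))

    # Synthetic 0.33 Hz oscillation: table amplitude 5 deg (10 deg peak-to-peak),
    # compensatory eye movement with gain ~1.0 plus measurement noise.
    fs, f = 500.0, 0.33
    t = np.arange(0, 30, 1 / fs)
    table = 5.0 * np.sin(2 * np.pi * f * t)
    eye = -0.98 * table + np.random.default_rng(1).normal(0, 0.2, t.size)
    print(round(vor_gain(eye, table, f, fs), 2))   # prints approximately 0.98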

@article{Aparicio2016,
title = {Neurophysiological organization of the middle face patch in macaque inferior temporal cortex},
author = {Paul L Aparicio and Elias B Issa and James J DiCarlo},
doi = {10.1523/JNEUROSCI.0237-16.2016},
year = {2016},
date = {2016-01-01},
journal = {Journal of Neuroscience},
volume = {36},
number = {50},
pages = {12729--12745},
abstract = {While early cortical visual areas contain fine scale spatial organization of neuronal properties such as orientation preference, the spatial organization of higher-level visual areas is less well understood. The fMRI demonstration of face preferring regions in human ventral cortex (FFA, OFA) and monkey inferior temporal cortex ("face patches") raises the question of how neural selectivity for faces is organized. Here, we targeted hundreds of spatially registered neural recordings to the largest fMRI-identified face selective region in monkeys, the middle face patch (MFP) and show that the MFP contains a graded enrichment of face preferring neurons. At its center, as much as 93% of the sites we sampled responded twice as strongly to faces than to non-face objects. We estimate the maximum neurophysiological size of the MFP to be ∼6 mm in diameter, consistent with its previously reported size under fMRI. Importantly, face selectivity in the MFP varied strongly even between neighboring sites. Additionally, extremely face selective sites were ∼50x more likely to be present inside the MFP than outside. These results provide the first direct quantification of the size and neural composition of the MFP by showing that the cortical tissue localized to the fMRI defined region consists of a very high fraction of face preferring sites near its center, and a monotonic decrease in that fraction along any radial spatial axis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

While early cortical visual areas contain fine scale spatial organization of neuronal properties such as orientation preference, the spatial organization of higher-level visual areas is less well understood. The fMRI demonstration of face preferring regions in human ventral cortex (FFA, OFA) and monkey inferior temporal cortex ("face patches") raises the question of how neural selectivity for faces is organized. Here, we targeted hundreds of spatially registered neural recordings to the largest fMRI-identified face selective region in monkeys, the middle face patch (MFP) and show that the MFP contains a graded enrichment of face preferring neurons. At its center, as much as 93% of the sites we sampled responded twice as strongly to faces than to non-face objects. We estimate the maximum neurophysiological size of the MFP to be ∼6 mm in diameter, consistent with its previously reported size under fMRI. Importantly, face selectivity in the MFP varied strongly even between neighboring sites. Additionally, extremely face selective sites were ∼50x more likely to be present inside the MFP than outside. These results provide the first direct quantification of the size and neural composition of the MFP by showing that the cortical tissue localized to the fMRI defined region consists of a very high fraction of face preferring sites near its center, and a monotonic decrease in that fraction along any radial spatial axis.

We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects while still listening to further instructions. This rehearsal behavior broke down after 4 instructions, coincident with participants' instruction span, as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution.

We measured memory span for assembly instructions involving objects with handles oriented to the left or right side. Right-handed participants remembered more instructions when objects' handles were spatially congruent with the hand used in forthcoming assembly actions. No such affordance-based memory benefit was found for left-handed participants. These results are discussed in terms of motor simulation as an embodied rehearsal mechanism.

@article{Apel2012a,
title = {Targeting regressions: Do readers pay attention to the left?},
author = {Jens K Apel and John M Henderson and Fernanda Ferreira},
doi = {10.3758/s13423-012-0291-1},
year = {2012},
date = {2012-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {19},
number = {6},
pages = {1108--1113},
abstract = {The perceptual span during normal reading extends approximately 14 to 15 characters to the right and three to four characters to the left of a current fixation. In the present study, we investigated whether the perceptual span extends farther than three to four characters to the left immediately before readers execute a regression. We used a display-change paradigm in which we masked words beyond the three-to-four-character range to the left of a fixation. We hypothesized that if reading behavior was affected by this manipulation before regressions but not before progressions, we would have evidence that the perceptual span extends farther left before leftward eye movements. We observed significantly shorter regressive saccades and longer fixation and gaze durations in the masked condition when a regression was executed. Forward saccades were entirely unaffected by the manipulations. We concluded that the perceptual span during reading changes, depending on the direction of a following saccade.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The perceptual span during normal reading extends approximately 14 to 15 characters to the right and three to four characters to the left of a current fixation. In the present study, we investigated whether the perceptual span extends farther than three to four characters to the left immediately before readers execute a regression. We used a display-change paradigm in which we masked words beyond the three-to-four-character range to the left of a fixation. We hypothesized that if reading behavior was affected by this manipulation before regressions but not before progressions, we would have evidence that the perceptual span extends farther left before leftward eye movements. We observed significantly shorter regressive saccades and longer fixation and gaze durations in the masked condition when a regression was executed. Forward saccades were entirely unaffected by the manipulations. We concluded that the perceptual span during reading changes, depending on the direction of a following saccade.

@article{Apel2013,
title = {Children develop initial orthographic knowledge during storybook reading},
author = {Kenn Apel and Danielle Brimo and Elizabeth B Wilson-Fowler and Christian Vorstius and Ralph Radach},
doi = {10.1080/10888438.2012.692742},
year = {2013},
date = {2013-01-01},
journal = {Scientific Studies of Reading},
volume = {17},
number = {4},
pages = {286--302},
abstract = {We examined whether young children acquire orthographic knowledge during structured adult-led storybook reading even though minimal viewing time is devoted to print. Sixty-two kindergarten children were read 12 storybook "chapters" while their eye movements were tracked. Results indicated that the children quickly acquired initial mental graphemic representations of target nonwords. This learning occurred even though they focused on the target nonwords approximately one fourth of the total time while viewing the pages. Their ability to acquire the initial orthographic representations of the target nonwords and their viewing time was affected by the linguistic statistical regularities of the words. The results provide evidence of orthographic learning during structured storybook reading and for the use of implicit linguistic statistical regularities for learning new orthographic word forms in the early stages of reading development.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

We examined whether young children acquire orthographic knowledge during structured adult-led storybook reading even though minimal viewing time is devoted to print. Sixty-two kindergarten children were read 12 storybook "chapters" while their eye movements were tracked. Results indicated that the children quickly acquired initial mental graphemic representations of target nonwords. This learning occurred even though they focused on the target nonwords approximately one fourth of the total time while viewing the pages. Their ability to acquire the initial orthographic representations of the target nonwords and their viewing time was affected by the linguistic statistical regularities of the words. The results provide evidence of orthographic learning during structured storybook reading and for the use of implicit linguistic statistical regularities for learning new orthographic word forms in the early stages of reading development.

@article{Apfelbaum2011,
title = {Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems},
author = {Keith S Apfelbaum and Sheila E Blumstein and Bob Mcmurray},
doi = {10.3758/s13423-010-0039-8},
year = {2011},
date = {2011-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {18},
number = {1},
pages = {141--149},
abstract = {Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high or low density auditory targets (e.g., horse) and a visual display containing a target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed greater magnitude of priming for semantically related objects of low than of high density words, and no differences for high and low density word targets in the time course of looks to the word semantically related to the target. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delays in the onset of semantic activation for phonologically competing words.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high or low density auditory targets (e.g., horse) and a visual display containing a target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed greater magnitude of priming for semantically related objects of low than of high density words, and no differences for high and low density word targets in the time course of looks to the word semantically related to the target. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delays in the onset of semantic activation for phonologically competing words.
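Visual world time-course results of this kind are typically plotted as the proportion of trials fixating a given object in successive time bins after target-word onset. The sketch below computes such a curve from toy data; the trial data format and bin settings are assumptions made for illustration, not the analysis used in the study.

    # Illustrative visual-world time course: proportion of trials fixating the
    # semantically related object in successive time bins after word onset.
    # The trial format (list of (time_ms, region) fixation samples) is an assumed
    # layout for illustration, not the format used in the study.
    def looks_over_time(trials, region="related", bin_ms=50, window_ms=1000):
        n_bins = window_ms // bin_ms
        proportions = []
        for b in range(n_bins):
            start, end = b * bin_ms, (b + 1) * bin_ms
            in_bin = [any(start <= t < end and r == region for t, r in trial)
                      for trial in trials]
            proportions.append(sum(in_bin) / len(trials))
        return proportions

    # Two toy trials: samples are (time in ms from target-word onset, fixated region).
    trials = [
        [(10, "target"), (260, "related"), (620, "target")],
        [(40, "unrelated"), (300, "related"), (700, "related")],
    ]
    print(looks_over_time(trials, bin_ms=250, window_ms=1000))   # [0.0, 1.0, 0.5, 0.0]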

@article{Apfelbaum2017,
title = {Learning during processing: Word learning doesn't wait for word recognition to finish},
author = {Keith S Apfelbaum and Bob McMurray},
doi = {10.1111/cogs.12401},
year = {2017},
date = {2017-01-01},
journal = {Cognitive Science},
volume = {41},
pages = {706--747},
abstract = {Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete.

@article{Aponte2017,
title = {The stochastic early reaction, inhibition, and late action (SERIA) model for antisaccades},
author = {Eduardo A Aponte and Dario Schöbi and Klaas E Stephan and Jakob Heinzle},
doi = {10.1371/journal.pcbi.1005692},
year = {2017},
date = {2017-01-01},
journal = {PLoS Computational Biology},
volume = {13},
number = {8},
pages = {1--36},
abstract = {The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades.
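To make the two-stage race idea concrete, the toy simulation below generates antisaccade-trial outcomes from an early race between a reactive (GO) and an inhibitory (NO-GO) unit followed, when inhibition wins, by a late race between voluntary anti- and prosaccade units. The gamma finishing-time distributions and all parameter values are illustrative assumptions; this is not the SERIA likelihood or its fitted parameters.

    # Toy simulation in the spirit of a two-stage race model for antisaccade trials:
    # an early reactive unit races an inhibitory unit; if inhibition wins, late voluntary
    # pro- and antisaccade units race. Distributions and parameters are illustrative
    # assumptions, not the SERIA model's fitted likelihood.
    import random

    def simulate_antisaccade_trial(rng):
        early_go = rng.gammavariate(2.0, 60.0) + 50      # reactive prosaccade unit (ms)
        inhibit = rng.gammavariate(2.0, 50.0) + 50       # early inhibitory (NO-GO) unit
        if early_go < inhibit:
            return "prosaccade error", early_go          # fast, reflexive error
        late_anti = rng.gammavariate(3.0, 70.0) + 100    # voluntary antisaccade unit
        late_pro = rng.gammavariate(3.0, 90.0) + 100     # late voluntary prosaccade unit
        if late_anti < late_pro:
            return "correct antisaccade", late_anti
        return "late prosaccade error", late_pro         # high-latency error

    rng = random.Random(7)
    trials = [simulate_antisaccade_trial(rng) for _ in range(10000)]
    for outcome in ("prosaccade error", "correct antisaccade", "late prosaccade error"):
        rts = [rt for o, rt in trials if o == outcome]
        print(f"{outcome}: {len(rts) / len(trials):.2%}, mean RT {sum(rts) / len(rts):.0f} ms")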

@article{Aponte2018,
title = {Inhibition failures and late errors in the antisaccade task: Influence of cue delay},
author = {Eduardo A Aponte and Dominic G Tschan and Klaas E Stephan and Jakob Heinzle},
doi = {10.1152/jn.00240.2018},
year = {2018},
date = {2018-01-01},
journal = {Journal of Neurophysiology},
volume = {120},
number = {6},
pages = {3001--3016},
abstract = {In the antisaccade task participants are required to saccade in the opposite direction of a peripheral visual cue (PVC). This paradigm is often used to investigate inhibition of reflexive responses as well as voluntary response generation. However, it is not clear to what extent different versions of this task probe the same underlying processes. Here, we explored with the Stochastic Early Reaction, Inhibition, and late Action (SERIA) model how the delay between task cue and PVC affects reaction time (RT) and error rate (ER) when pro- and antisaccade trials are randomly interleaved. Specifically, we contrasted a condition in which the task cue was presented before the PVC with a condition in which the PVC served also as task cue. Summary statistics indicate that ERs and RTs are reduced and contextual effects largely removed when the task is signaled before the PVC appears. The SERIA model accounts for RT and ER in both conditions and better so than other candidate models. Modeling demonstrates that voluntary pro- and antisaccades are frequent in both conditions. Moreover, early task cue presentation results in better control of reflexive saccades, leading to fewer fast antisaccade errors and more rapid correct prosaccades. Finally, high-latency errors are shown to be prevalent in both conditions. In summary, SERIA provides an explanation for the differences in the delayed and nondelayed antisaccade task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In the antisaccade task participants are required to saccade in the opposite direction of a peripheral visual cue (PVC). This paradigm is often used to investigate inhibition of reflexive responses as well as voluntary response generation. However, it is not clear to what extent different versions of this task probe the same underlying processes. Here, we explored with the Stochastic Early Reaction, Inhibition, and late Action (SERIA) model how the delay between task cue and PVC affects reaction time (RT) and error rate (ER) when pro- and antisaccade trials are randomly interleaved. Specifically, we contrasted a condition in which the task cue was presented before the PVC with a condition in which the PVC served also as task cue. Summary statistics indicate that ERs and RTs are reduced and contextual effects largely removed when the task is signaled before the PVC appears. The SERIA model accounts for RT and ER in both conditions and better so than other candidate models. Modeling demonstrates that voluntary pro- and antisaccades are frequent in both conditions. Moreover, early task cue presentation results in better control of reflexive saccades, leading to fewer fast antisaccade errors and more rapid correct prosaccades. Finally, high-latency errors are shown to be prevalent in both conditions. In summary, SERIA provides an explanation for the differences in the delayed and nondelayed antisaccade task.

@article{Arai2007,
title = {Priming ditransitive structures in comprehension},
author = {Manabu Arai and Roger P G van Gompel and Christoph Scheepers},
doi = {10.1016/j.cogpsych.2006.07.001},
year = {2007},
date = {2007-01-01},
journal = {Cognitive Psychology},
volume = {54},
number = {3},
pages = {218--250},
abstract = {Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent.

@article{Arai2013,
title = {The use of verb-specific information for prediction in sentence processing},
author = {Manabu Arai and Frank Keller},
doi = {10.1080/01690965.2012.658072},
year = {2013},
date = {2013-01-01},
journal = {Language and Cognitive Processes},
volume = {28},
number = {4},
pages = {525--560},
abstract = {Recent research has shown that language comprehenders make predictions about upcoming linguistic information. These studies demonstrate that the processor not only analyses the input that it received but also predicts upcoming unseen elements. Two visual world experiments were conducted to examine the type of syntactic information this prediction process has access to. Experiment 1 examined whether the verb's subcategorization information is used for predicting a direct object, by comparing transitive verbs (e.g., punish) to intransitive verbs (e.g., disagree). Experiment 2 examined whether verb frequency information is used for predicting a reduced relative clause by contrasting verbs that are infrequent in the past participle form (e.g., watch) with ones that are frequent in that form (e.g., record). Both experiments showed that comprehenders used lexically specific syntactic information to predict upcoming syntactic structure; this information can be used to avoid garden paths in certain cases, as Experiment 2 demonstrated.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Recent research has shown that language comprehenders make predictions about upcoming linguistic information. These studies demonstrate that the processor not only analyses the input that it received but also predicts upcoming unseen elements. Two visual world experiments were conducted to examine the type of syntactic information this prediction process has access to. Experiment 1 examined whether the verb's subcategorization information is used for predicting a direct object, by comparing transitive verbs (e.g., punish) to intransitive verbs (e.g., disagree). Experiment 2 examined whether verb frequency information is used for predicting a reduced relative clause by contrasting verbs that are infrequent in the past participle form (e.g., watch) with ones that are frequent in that form (e.g., record). Both experiments showed that comprehenders used lexically specific syntactic information to predict upcoming syntactic structure; this information can be used to avoid garden paths in certain cases, as Experiment 2 demonstrated.

@article{Arai2014,
title = {The development of Japanese passive syntax as indexed by structural priming in comprehension},
author = {Manabu Arai and Reiko Mazuka},
doi = {10.1080/17470218.2013.790454},
year = {2014},
date = {2014-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {67},
number = {1},
pages = {60--78},
abstract = {A number of previous studies reported a phenomenon of syntactic priming with young children as evidence for cognitive representations required for processing syntactic structures. However, it remains unclear how syntactic priming reflects children's grammatical competence. The current study investigated structural priming of the Japanese passive structure with 5- and 6-year-old children in a visual-world setting. Our results showed a priming effect as anticipatory eye movements to an upcoming referent in these children but the effect was significantly stronger in magnitude in 6-year-olds than in 5-year-olds. Consistently, the responses to comprehension questions revealed that 6-year-olds produced a greater number of correct answers and more answers using the passive structure than 5-year-olds. We also tested adult participants who showed even stronger priming than the children. The results together revealed that language users with the greater linguistic competence with the passives exhibited stronger priming, demonstrating a tight relationship between the effect of priming and the development of grammatical competence. Furthermore, we found that the magnitude of the priming effect decreased over time. We interpret these results in the light of an error-based learning account. Our results also provided evidence for prehead as well as head-independent priming.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

A number of previous studies reported a phenomenon of syntactic priming with young children as evidence for cognitive representations required for processing syntactic structures. However, it remains unclear how syntactic priming reflects children's grammatical competence. The current study investigated structural priming of the Japanese passive structure with 5- and 6-year-old children in a visual-world setting. Our results showed a priming effect as anticipatory eye movements to an upcoming referent in these children but the effect was significantly stronger in magnitude in 6-year-olds than in 5-year-olds. Consistently, the responses to comprehension questions revealed that 6-year-olds produced a greater number of correct answers and more answers using the passive structure than 5-year-olds. We also tested adult participants who showed even stronger priming than the children. The results together revealed that language users with the greater linguistic competence with the passives exhibited stronger priming, demonstrating a tight relationship between the effect of priming and the development of grammatical competence. Furthermore, we found that the magnitude of the priming effect decreased over time. We interpret these results in the light of an error-based learning account. Our results also provided evidence for prehead as well as head-independent priming.

@article{Arai2015,
title = {Predicting the unbeaten path through syntactic priming},
author = {Manabu Arai and Chie Nakamura and Reiko Mazuka},
doi = {10.1037/a0038389},
year = {2015},
date = {2015-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {41},
number = {2},
pages = {482--500},
abstract = {A number of previous studies showed that comprehenders make use of lexically based constraints such as subcategorization frequency in processing structurally ambiguous sentences. One piece of such evidence is lexically specific syntactic priming in comprehension; following the costly processing of a temporarily ambiguous sentence, comprehenders experience less processing difficulty with the same structure with the same verb in subsequent processing. In previous studies using a reading paradigm, however, the effect was observed at or following disambiguating information and it is not known whether a priming effect affects only the process of resolving structural ambiguity following disambiguating input or it also affects the process before ambiguity is resolved. Using a visual world paradigm, the current study addressed this issue with Japanese relative clause sentences. Our results demonstrated that after experiencing the relative clause structure, comprehenders were more likely to predict the usually dispreferred structure immediately upon hearing the same verb. No compatible effect, in contrast, was observed on hearing a different verb. Our results are consistent with the constraint-based lexicalist view, which assumes the parallel activation of possible structural analyses at the verb. Our study demonstrated that an experience of a dispreferred structure activates the structural information in a lexically specific manner, leading comprehenders to predict another instance of the same structure on encountering the same verb.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

A number of previous studies showed that comprehenders make use of lexically based constraints such as subcategorization frequency in processing structurally ambiguous sentences. One piece of such evidence is lexically specific syntactic priming in comprehension; following the costly processing of a temporarily ambiguous sentence, comprehenders experience less processing difficulty with the same structure with the same verb in subsequent processing. In previous studies using a reading paradigm, however, the effect was observed at or following disambiguating information and it is not known whether a priming effect affects only the process of resolving structural ambiguity following disambiguating input or it also affects the process before ambiguity is resolved. Using a visual world paradigm, the current study addressed this issue with Japanese relative clause sentences. Our results demonstrated that after experiencing the relative clause structure, comprehenders were more likely to predict the usually dispreferred structure immediately upon hearing the same verb. No compatible effect, in contrast, was observed on hearing a different verb. Our results are consistent with the constraint-based lexicalist view, which assumes the parallel activation of possible structural analyses at the verb. Our study demonstrated that an experience of a dispreferred structure activates the structural information in a lexically specific manner, leading comprehenders to predict another instance of the same structure on encountering the same verb.

@article{Arai2016,
title = {It's harder to break a relationship when you commit long},
author = {Manabu Arai and Chie Nakamura},
doi = {10.1371/journal.pone.0156482},
year = {2016},
date = {2016-01-01},
journal = {PLoS ONE},
volume = {11},
number = {6},
pages = {1--13},
abstract = {Past research has produced evidence that parsing commitments strengthen over the processing of additional linguistic elements that are consistent with the commitments and undoing strong commitments takes more time than undoing weak commitments. It remains unclear, however, whether this so-called digging-in effect is exclusively due to the length of an ambiguous region or at least partly to the extra cost of processing these additional phrases. The current study addressed this issue by testing Japanese relative clause structure, where lexical content and sentence meaning were controlled for. The results showed evidence for a digging-in effect reflecting the strengthened commitment to an incorrect analysis caused by the processing of additional adjuncts. Our study provides strong support for the dynamical, self-organizing models of sentence processing but poses a problem for other models including serial two-stage models as well as frequency-based probabilistic models such as the surprisal theory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Past research has produced evidence that parsing commitments strengthen over the processing of additional linguistic elements that are consistent with the commitments and undoing strong commitments takes more time than undoing weak commitments. It remains unclear, however, whether this so-called digging-in effect is exclusively due to the length of an ambiguous region or at least partly to the extra cost of processing these additional phrases. The current study addressed this issue by testing Japanese relative clause structure, where lexical content and sentence meaning were controlled for. The results showed evidence for a digging-in effect reflecting the strengthened commitment to an incorrect analysis caused by the processing of additional adjuncts. Our study provides strong support for the dynamical, self-organizing models of sentence processing but poses a problem for other models including serial two-stage models as well as frequency-based probabilistic models such as the surprisal theory.

@article{Arandia-Romero2016,
title = {Multiplicative and additive modulation of neuronal tuning with population activity affects encoded information},
author = {Iñigo Arandia-Romero and Seiji Tanabe and Jan Drugowitsch and Adam Kohn and Rubén Moreno-Bote},
doi = {10.1016/j.neuron.2016.01.044},
year = {2016},
date = {2016-01-01},
journal = {Neuron},
volume = {89},
number = {6},
pages = {1305--1316},
abstract = {Numerous studies have shown that neuronal responses are modulated by stimulus properties and also by the state of the local network. However, little is known about how activity fluctuations of neuronal populations modulate the sensory tuning of cells and affect their encoded information. We found that fluctuations in ongoing and stimulus-evoked population activity in primate visual cortex modulate the tuning of neurons in a multiplicative and additive manner. While distributed on a continuum, neurons with stronger multiplicative effects tended to have less additive modulation and vice versa. The information encoded by multiplicatively modulated neurons increased with greater population activity, while that of additively modulated neurons decreased. These effects offset each other so that population activity had little effect on total information. Our results thus suggest that intrinsic activity fluctuations may act as a "traffic light" that determines which subset of neurons is most informative.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Numerous studies have shown that neuronal responses are modulated by stimulus properties and also by the state of the local network. However, little is known about how activity fluctuations of neuronal populations modulate the sensory tuning of cells and affect their encoded information. We found that fluctuations in ongoing and stimulus-evoked population activity in primate visual cortex modulate the tuning of neurons in a multiplicative and additive manner. While distributed on a continuum, neurons with stronger multiplicative effects tended to have less additive modulation and vice versa. The information encoded by multiplicatively modulated neurons increased with greater population activity, while that of additively modulated neurons decreased. These effects offset each other so that population activity had little effect on total information. Our results thus suggest that intrinsic activity fluctuations may act as a "traffic light" that determines which subset of neurons is most informative.
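
To make the distinction in the abstract above concrete, here is a small, purely illustrative sketch (not the authors' analysis) contrasting multiplicative and additive modulation of a bell-shaped tuning curve. The tuning-curve shape, the gain factor, and the offset value are all assumptions: multiplicative modulation scales the stimulus-driven part of the response, whereas additive modulation shifts the whole curve without changing it.

import numpy as np

def tuning(theta, pref=0.0, amp=30.0, width=0.5, baseline=5.0):
    """Bell-shaped orientation tuning curve in spikes/s; all parameters assumed."""
    return baseline + amp * np.exp(-0.5 * ((theta - pref) / width) ** 2)

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
base = tuning(theta)

gain = 1.5      # multiplicative modulation by population state (assumed)
offset = 8.0    # additive modulation by population state (assumed)

multiplicative = gain * base    # scales stimulus-driven differences, so the tuning gets "steeper"
additive = base + offset        # shifts the whole curve, leaving the stimulus-driven part unchanged

# Peak-to-trough range as a crude proxy for how well nearby stimuli can be discriminated.
for name, curve in [("baseline", base), ("x1.5 gain", multiplicative), ("+8 offset", additive)]:
    print(f"{name:>10}: range = {curve.max() - curve.min():5.1f} spikes/s")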

Neural activity during repeated presentations of a sensory stimulus exhibits considerable trial-by-trial variability. Previous studies have reported that trial-by-trial neural variability is reduced (quenched) by the presentation of a stimulus. However, the functional significance and behavioral relevance of variability quenching and the potential physiological mechanisms that may drive it have been studied only rarely. Here, we recorded neural activity with EEG as subjects performed a two-interval forced-choice contrast discrimination task. Trial-by-trial neural variability was quenched by ~40% after the presentation of the stimulus relative to the variability apparent before stimulus presentation, yet there were large differences in the magnitude of variability quenching across subjects. Individual magnitudes of quenching predicted individual discrimination capabilities such that subjects who exhibited larger quenching had smaller contrast discrimination thresholds and steeper psychometric function slopes. Furthermore, the magnitude of variability quenching was strongly correlated with a reduction in broadband EEG power after stimulus presentation. Our results suggest that neural variability quenching is achieved by reducing the amplitude of broadband neural oscillations after sensory input, which yields relatively more reproducible cortical activity across trials and enables superior perceptual abilities in individuals who quench more.
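
As a rough sketch of the variability-quenching measure described in the abstract above (not the authors' analysis pipeline), the snippet below treats single-channel data as a trials × time matrix, takes the across-trial standard deviation at each time point, averages it within pre- and post-stimulus windows, and expresses quenching as the percent reduction. The window bounds, sampling rate, and the simulated data are illustrative assumptions.

import numpy as np

def variability_quenching(epochs, times, pre=(-0.2, 0.0), post=(0.15, 0.4)):
    """Percent reduction in across-trial variability after stimulus onset.

    epochs : array of shape (n_trials, n_timepoints), single-channel data
    times  : array of shape (n_timepoints,), seconds relative to stimulus onset
    pre, post : (start, end) analysis windows in seconds (assumed values)
    """
    trial_sd = epochs.std(axis=0)                                  # variability across trials at each time point
    pre_var = trial_sd[(times >= pre[0]) & (times < pre[1])].mean()
    post_var = trial_sd[(times >= post[0]) & (times < post[1])].mean()
    return 100.0 * (pre_var - post_var) / pre_var                  # positive values = quenching

# Simulated example: noise amplitude drops after stimulus onset at t = 0.
rng = np.random.default_rng(1)
times = np.arange(-0.3, 0.6, 0.002)                                # 500 Hz sampling (assumed)
noise_scale = np.where(times < 0, 1.0, 0.6)                        # post-stimulus variability reduced
epochs = rng.normal(0.0, 1.0, (200, times.size)) * noise_scale
print(f"Variability quenching: {variability_quenching(epochs, times):.1f}%")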

@article{Arazi2017b,
title = {The Magnitude of Trial-By-Trial Neural Variability Is Reproducible over Time and across Tasks in Humans},
author = {Ayelet Arazi and Gil Gonen-Yaacovi and Ilan Dinstein},
doi = {10.1523/ENEURO.0292-17.2017},
year = {2017},
date = {2017-01-01},
journal = {eNeuro},
volume = {4},
number = {6},
pages = {ENEURO.0292--17.2017},
abstract = {Numerous studies have shown that neural activity in sensory cortices is remarkably variable over time and across trials even when subjects are presented with an identical repeating stimulus or task. This trial-by-trial neural variability is relatively large in the prestimulus period and considerably smaller (quenched) following stimulus presentation. Previous studies have suggested that the magnitude of neural variability affects behavior such that perceptual performance is better on trials and in individuals where variability quenching is larger. To what degree are neural variability magnitudes of individual subjects flexible or static? Here, we used EEG recordings from adult humans to demonstrate that neural variability magnitudes in visual cortex are remarkably consistent across different tasks and recording sessions. While magnitudes of neural variability differed dramatically across individual subjects, they were surprisingly stable across four tasks with different stimuli, temporal structures, and attentional/cognitive demands as well as across experimental sessions separated by one year. These experiments reveal that, in adults, neural variability magnitudes are mostly solidified individual characteristics that change little with task or time, and are likely to predispose individual subjects to exhibit distinct behavioral capabilities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Numerous studies have shown that neural activity in sensory cortices is remarkably variable over time and across trials even when subjects are presented with an identical repeating stimulus or task. This trial-by-trial neural variability is relatively large in the prestimulus period and considerably smaller (quenched) following stimulus presentation. Previous studies have suggested that the magnitude of neural variability affects behavior such that perceptual performance is better on trials and in individuals where variability quenching is larger. To what degree are neural variability magnitudes of individual subjects flexible or static? Here, we used EEG recordings from adult humans to demonstrate that neural variability magnitudes in visual cortex are remarkably consistent across different tasks and recording sessions. While magnitudes of neural variability differed dramatically across individual subjects, they were surprisingly stable across four tasks with different stimuli, temporal structures, and attentional/cognitive demands as well as across experimental sessions separated by one year. These experiments reveal that, in adults, neural variability magnitudes are mostly solidified individual characteristics that change little with task or time, and are likely to predispose individual subjects to exhibit distinct behavioral capabilities.

This article presents research showing that second language (L2) learners do not have deficient representations and they are capable of acquiring structures that are absent from their first language (L1). The Redeployment Hypothesis—which claims that L2 phonologies include novel representations created via redeployment of L1 phonological components—is consistent with data from several domains, including acquisition of phonological features, syllable structure, moraic structure, and metrical structure. Moreover, it is shown that input prominence plays a role in L2 acquisition, and that language learners are sensitive to robust phonetic cues. Finally, studies done on interlingual homographs and homophones argue for non-selective access to the bilingual lexicon, suggesting that the language processing capacity is always engaged.

@article{Archibald2013,
title = {Visual exploration in Parkinson's disease and Parkinson's disease dementia},
author = {Neil K Archibald and Samuel B Hutton and Michael P Clarke and Urs P Mosimann and David J Burn},
doi = {10.1093/brain/awt005},
year = {2013},
date = {2013-01-01},
journal = {Brain},
volume = {136},
number = {3},
pages = {739--750},
abstract = {Parkinson's disease, typically thought of as a movement disorder, is increasingly recognized as causing cognitive impairment and dementia. Eye movement abnormalities are also described, including impairment of rapid eye movements (saccades) and the fixations interspersed between them. Such movements are under the influence of cortical and subcortical networks commonly targeted by the neurodegeneration seen in Parkinson's disease and, as such, may provide a marker for cognitive decline. This study examined the error rates and visual exploration strategies of subjects with Parkinson's disease, with and without cognitive impairment, whilst performing a battery of visuo-cognitive tasks. Error rates were significantly higher in those Parkinson's disease groups with either mild cognitive impairment (P = 0.001) or dementia (P < 0.001), than in cognitively normal subjects with Parkinson's disease. When compared with cognitively normal subjects with Parkinson's disease, exploration strategy, as measured by a number of eye tracking variables, was least efficient in the dementia group but was also affected in those subjects with Parkinson's disease with mild cognitive impairment. When compared with control subjects and cognitively normal subjects with Parkinson's disease, saccade amplitudes were significantly reduced in the groups with mild cognitive impairment or dementia. Fixation duration was longer in all Parkinson's disease groups compared with healthy control subjects but was longest for cognitively impaired Parkinson's disease groups. The strongest predictor of average fixation duration was disease severity. Analysing only data from the most complex task, with the highest error rates, both cognitive impairment and disease severity contributed to a predictive model for fixation duration F(2,76) = 12.52, P < 0.001, but medication dose did not (r = 0.18},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Parkinson's disease, typically thought of as a movement disorder, is increasingly recognized as causing cognitive impairment and dementia. Eye movement abnormalities are also described, including impairment of rapid eye movements (saccades) and the fixations interspersed between them. Such movements are under the influence of cortical and subcortical networks commonly targeted by the neurodegeneration seen in Parkinson's disease and, as such, may provide a marker for cognitive decline. This study examined the error rates and visual exploration strategies of subjects with Parkinson's disease, with and without cognitive impairment, whilst performing a battery of visuo-cognitive tasks. Error rates were significantly higher in those Parkinson's disease groups with either mild cognitive impairment (P = 0.001) or dementia (P < 0.001), than in cognitively normal subjects with Parkinson's disease. When compared with cognitively normal subjects with Parkinson's disease, exploration strategy, as measured by a number of eye tracking variables, was least efficient in the dementia group but was also affected in those subjects with Parkinson's disease with mild cognitive impairment. When compared with control subjects and cognitively normal subjects with Parkinson's disease, saccade amplitudes were significantly reduced in the groups with mild cognitive impairment or dementia. Fixation duration was longer in all Parkinson's disease groups compared with healthy control subjects but was longest for cognitively impaired Parkinson's disease groups. The strongest predictor of average fixation duration was disease severity. Analysing only data from the most complex task, with the highest error rates, both cognitive impairment and disease severity contributed to a predictive model for fixation duration F(2,76) = 12.52, P < 0.001, but medication dose did not (r = 0.18

@article{Arcizet2018,
title = {Covert spatial selection in primate basal ganglia},
author = {Fabrice Arcizet and Richard J Krauzlis},
doi = {10.1371/journal.pbio.2005930},
year = {2018},
date = {2018-01-01},
journal = {PLoS Biology},
volume = {16},
number = {10},
pages = {1--28},
abstract = {The basal ganglia are important for action selection. They are also implicated in perceptual and cognitive functions that seem far removed from motor control. Here, we tested whether the role of the basal ganglia in selection extends to nonmotor aspects of behavior by recording neuronal activity in the caudate nucleus while animals performed a covert spatial attention task. We found that caudate neurons strongly select the spatial location of the relevant stimulus throughout the task even in the absence of any overt action. This spatially selective activity was dependent on task and visual conditions and could be dissociated from goal-directed actions. Caudate activity was also sufficient to correctly identify every epoch in the covert attention task. These results provide a novel perspective on mechanisms of attention by demonstrating that the basal ganglia are involved in spatial selection and tracking of behavioral states even in the absence of overt orienting movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The basal ganglia are important for action selection. They are also implicated in perceptual and cognitive functions that seem far removed from motor control. Here, we tested whether the role of the basal ganglia in selection extends to nonmotor aspects of behavior by recording neuronal activity in the caudate nucleus while animals performed a covert spatial attention task. We found that caudate neurons strongly select the spatial location of the relevant stimulus throughout the task even in the absence of any overt action. This spatially selective activity was dependent on task and visual conditions and could be dissociated from goal-directed actions. Caudate activity was also sufficient to correctly identify every epoch in the covert attention task. These results provide a novel perspective on mechanisms of attention by demonstrating that the basal ganglia are involved in spatial selection and tracking of behavioral states even in the absence of overt orienting movements.

Repeated readings is a frequently studied and recommended intervention for improving reading fluency. Typically, researchers investigate generalization of repeated readings interventions by assessing students' accuracy and rate on researcher-developed high word overlap passages. Unfortunately, this methodology may mask intervention effects given that the dependent measure is reflective of time spent by students reading both practiced and unpracticed words. Eye-tracking procedures have the potential to overcome this limitation. The current study examined the eye movements of participants who were (a) not provided with any intervention (n = 28), (b) provided with repeated readings on a single passage containing a set of target words (n = 28), or (c) provided the opportunity to read four different passages each containing the same set of target words (n = 28). Students' reading of a novel passage containing the target words provides evidence to support recommendations that schools use repeated readings.

@article{Ardoin2016,
title = {Repeated versus wide reading: A randomized control design study examining the impact of fluency interventions on underlying reading behavior},
author = {Scott P Ardoin and Katherine S Binder and Tori E Foster and Andrea M Zawoyski},
doi = {10.1016/j.jsp.2016.09.002},
year = {2016},
date = {2016-01-01},
journal = {Journal of School Psychology},
volume = {59},
pages = {13--38},
publisher = {Society for the Study of School Psychology},
abstract = {Repeated readings (RR) has garnered much attention as an evidence-based intervention designed to improve all components of reading fluency (rate, accuracy, prosody, and comprehension). Despite this attention, there is not an abundance of research comparing its effectiveness to other potential interventions. The current study presents the findings from a randomized control trial study involving the assignment of 168 second grade students to a RR, wide reading (WR), or business as usual condition. Intervention students were provided with 9–10 weeks of intervention with sessions occurring four times per week. Pre- and post-testing were conducted using Woodcock-Johnson III reading achievement measures (Woodcock, McGrew, & Mather, 2001), curriculum-based measurement (CBM) probes, measures of prosody, and measures of students' eye movements when reading. Changes in fluency were also monitored using weekly CBM progress monitoring procedures. Data were collected on the amount of time students spent reading and the number of words read by students during each intervention session. Results indicate substantial gains made by students across conditions, with some measures indicating greater gains by students in the two intervention conditions. Analyses do not indicate that RR was superior to WR. In addition to expanding the RR literature, this study greatly expands research evaluating changes in reading behaviors that occur with improvements in reading fluency. Implications regarding whether schools should provide more opportunities to repeatedly practice the same text (i.e., RR) or practice a wide range of text (i.e., WR) are provided.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Repeated readings (RR) has garnered much attention as an evidence-based intervention designed to improve all components of reading fluency (rate, accuracy, prosody, and comprehension). Despite this attention, there is not an abundance of research comparing its effectiveness to other potential interventions. The current study presents the findings from a randomized control trial study involving the assignment of 168 second grade students to a RR, wide reading (WR), or business as usual condition. Intervention students were provided with 9–10 weeks of intervention with sessions occurring four times per week. Pre- and post-testing were conducted using Woodcock-Johnson III reading achievement measures (Woodcock, McGrew, & Mather, 2001), curriculum-based measurement (CBM) probes, measures of prosody, and measures of students' eye movements when reading. Changes in fluency were also monitored using weekly CBM progress monitoring procedures. Data were collected on the amount of time students spent reading and the number of words read by students during each intervention session. Results indicate substantial gains made by students across conditions, with some measures indicating greater gains by students in the two intervention conditions. Analyses do not indicate that RR was superior to WR. In addition to expanding the RR literature, this study greatly expands research evaluating changes in reading behaviors that occur with improvements in reading fluency. Implications regarding whether schools should provide more opportunities to repeatedly practice the same text (i.e., RR) or practice a wide range of text (i.e., WR) are provided.

@article{Ardoin2018,
title = {Examining the maintenance and generalization effects of repeated practice: A comparison of three interventions},
author = {Scott P Ardoin and Katherine S Binder and Andrea M Zawoyski and Tori E Foster},
doi = {10.1016/j.jsp.2017.12.002},
year = {2018},
date = {2018-01-01},
journal = {Journal of School Psychology},
volume = {68},
pages = {1--18},
abstract = {Repeated reading (RR) procedures are consistent with the procedures recommended by Haring and Eaton's (1978) Instructional Hierarchy (IH) for promoting students' fluent responding to newly learned stimuli. It is therefore not surprising that an extensive body of literature exists, which supports RR as an effective practice for promoting students' reading fluency of practiced passages. Less clear, however, is the extent to which RR helps students read the words practiced in an intervention passage when those same words are presented in a new passage. The current study employed randomized control design procedures to examine the maintenance and generalization effects of three interventions that were designed based upon Haring and Eaton's (1978) IH. Across four days, students either practiced reading (a) the same passage seven times (RR+RR), (b) one passage four times and three passages each once (RR+Guided Wide Reading [GWR]), or (c) seven passages each once (GWR+GWR). Students participated in the study across 2 weeks, with intervention being provided on a different passage set each week. All passages practiced within a week, regardless of condition, contained four target low frequency and four high frequency words. Across the 130 students for whom data were analyzed, results indicated that increased opportunities to practice words led to greater maintenance effects when passages were read seven days later but revealed minimal differences across conditions in students' reading of target words presented within a generalization passage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Repeated reading (RR) procedures are consistent with the procedures recommended by Haring and Eaton's (1978) Instructional Hierarchy (IH) for promoting students' fluent responding to newly learned stimuli. It is therefore not surprising that an extensive body of literature exists, which supports RR as an effective practice for promoting students' reading fluency of practiced passages. Less clear, however, is the extent to which RR helps students read the words practiced in an intervention passage when those same words are presented in a new passage. The current study employed randomized control design procedures to examine the maintenance and generalization effects of three interventions that were designed based upon Haring and Eaton's (1978) IH. Across four days, students either practiced reading (a) the same passage seven times (RR+RR), (b) one passage four times and three passages each once (RR+Guided Wide Reading [GWR]), or (c) seven passages each once (GWR+GWR). Students participated in the study across 2 weeks, with intervention being provided on a different passage set each week. All passages practiced within a week, regardless of condition, contained four target low frequency and four high frequency words. Across the 130 students for whom data were analyzed, results indicated that increased opportunities to practice words led to greater maintenance effects when passages were read seven days later but revealed minimal differences across conditions in students' reading of target words presented within a generalization passage.

@article{Arizpe2012,
title = {Start position strongly influences fixation patterns during face processing: Difficulties with eye movements as a measure of information use},
author = {Joseph Arizpe and Dwight J Kravitz and Galit Yovel and Chris I Baker},
doi = {10.1371/journal.pone.0031106},
year = {2012},
date = {2012-01-01},
journal = {PLoS ONE},
volume = {7},
number = {2},
pages = {1--17},
abstract = {Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale.

@article{Arizpe2015,
title = {Characteristic visuomotor influences on eye-movement patterns to faces and other high level stimuli},
author = {Joseph M Arizpe and Vincent Walsh and Chris I Baker},
doi = {10.3389/fpsyg.2015.01027},
year = {2015},
date = {2015-01-01},
journal = {Frontiers in Psychology},
volume = {6},
pages = {1--14},
abstract = {Eye-movement patterns are often utilized in studies of visual perception as indices of the specific information extracted to efficiently process a given stimulus during a given task. Our prior work, however, revealed that not only the stimulus and task influence eye-movements, but that visuomotor (start position) factors also robustly and characteristically influence eye-movement patterns to faces (Arizpe et al., 2012). Here we manipulated lateral starting side and distance from the midline of face and line-symmetrical control (butterfly) stimuli in order to further investigate the nature and generality of such visuomotor influences. First we found that increasing starting distance from midline (4°, 8°, 12°, and 16° visual angle) strongly and proportionately increased the distance of the first ordinal fixation from midline. We did not find influences of starting distance on subsequent fixations, however, suggesting that eye-movement plans are not strongly affected by starting distance following an initial orienting fixation. Further, we replicated our prior effect of starting side (left, right) to induce a spatially contralateral tendency of fixations after the first ordinal fixation. However, we also established that these visuomotor influences did not depend upon the predictability of the location of the upcoming stimulus, and were present not only for face stimuli but also for our control stimulus category (butterflies). We found a correspondence in overall left-lateralized fixation tendency between faces and butterflies. Finally, for faces, we found a relationship between left starting side (right sided fixation pattern tendency) and increased recognition performance, which likely reflects a cortical right hemisphere (left visual hemifield) advantage for face perception. These results further indicate the importance of considering and controlling for visuomotor influences in the design, analysis, and interpretation of eye-movement studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Eye-movement patterns are often utilized in studies of visual perception as indices of the specific information extracted to efficiently process a given stimulus during a given task. Our prior work, however, revealed that not only the stimulus and task influence eye-movements, but that visuomotor (start position) factors also robustly and characteristically influence eye-movement patterns to faces (Arizpe et al., 2012). Here we manipulated lateral starting side and distance from the midline of face and line-symmetrical control (butterfly) stimuli in order to further investigate the nature and generality of such visuomotor influences. First we found that increasing starting distance from midline (4°, 8°, 12°, and 16° visual angle) strongly and proportionately increased the distance of the first ordinal fixation from midline. We did not find influences of starting distance on subsequent fixations, however, suggesting that eye-movement plans are not strongly affected by starting distance following an initial orienting fixation. Further, we replicated our prior effect of starting side (left, right) to induce a spatially contralateral tendency of fixations after the first ordinal fixation. However, we also established that these visuomotor influences did not depend upon the predictability of the location of the upcoming stimulus, and were present not only for face stimuli but also for our control stimulus category (butterflies). We found a correspondence in overall left-lateralized fixation tendency between faces and butterflies. Finally, for faces, we found a relationship between left starting side (right sided fixation pattern tendency) and increased recognition performance, which likely reflects a cortical right hemisphere (left visual hemifield) advantage for face perception. These results further indicate the importance of considering and controlling for visuomotor influences in the design, analysis, and interpretation of eye-movement studies.

@article{Arizpe2016,
title = {Differences in looking at own-and other-race faces are subtle and analysis-dependent: An account of discrepant reports},
author = {Joseph Arizpe and Dwight J Kravitz and Vincent Walsh and Galit Yovel and Chris I Baker},
doi = {10.1371/journal.pone.0148253},
year = {2016},
date = {2016-01-01},
journal = {PLoS ONE},
volume = {11},
number = {2},
pages = {1--25},
abstract = {The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using

@article{Arizpe2017,
title = {Where you look matters for body perception: Preferred gaze location contributes to the body inversion effect},
author = {Joseph M Arizpe and Danielle L McKean and Jack W Tsao and Annie W -Y Chan},
doi = {10.1371/journal.pone.0169148},
year = {2017},
date = {2017-01-01},
journal = {PLoS ONE},
volume = {12},
number = {1},
pages = {1--24},
abstract = {The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) to the BIE appears greater than that of differential preferred gaze locations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) to the BIE appears greater than that of differential preferred gaze locations.

@article{Arizpe2017a,
title = {The categories, frequencies, and stability of idiosyncratic eye-movement patterns to faces},
author = {Joseph Arizpe and Vincent Walsh and Galit Yovel and Chris I Baker},
doi = {10.1016/j.visres.2016.10.013},
year = {2017},
date = {2017-01-01},
journal = {Vision Research},
volume = {141},
pages = {191--203},
publisher = {Elsevier Ltd},
abstract = {The spatial pattern of eye-movements to faces considered typical for neurologically healthy individuals is a roughly T-shaped distribution over the internal facial features with peak fixation density tending toward the left eye (observer's perspective). However, recent studies indicate that striking deviations from this classic pattern are common within the population and are highly stable over time. The classic pattern actually reflects the average of these various idiosyncratic eye-movement patterns across individuals. The natural categories and respective frequencies of different types of idiosyncratic eye-movement patterns have not been specifically investigated before, so here we analyzed the spatial patterns of eye-movements for 48 participants to estimate the frequency of different kinds of individual eye-movement patterns to faces in the normal healthy population. Four natural clusters were discovered such that approximately 25% of our participants' fixation density peaks clustered over the left eye region (observer's perspective), 23% over the right eye region, 31% over the nasion/bridge region of the nose, and 20% over the region spanning the nose, philtrum, and upper lips. We did not find any relationship between particular idiosyncratic eye-movement patterns and recognition performance. Individuals' eye-movement patterns early in a trial were more stereotyped than later ones and idiosyncratic fixation patterns evolved with time into a trial. Finally, while face inversion strongly modulated eye-movement patterns, individual patterns did not become less distinct for inverted compared to upright faces. Group-averaged fixation patterns do not represent individual patterns well, so exploration of such individual patterns is of value for future studies of visual cognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The spatial pattern of eye-movements to faces considered typical for neurologically healthy individuals is a roughly T-shaped distribution over the internal facial features with peak fixation density tending toward the left eye (observer's perspective). However, recent studies indicate that striking deviations from this classic pattern are common within the population and are highly stable over time. The classic pattern actually reflects the average of these various idiosyncratic eye-movement patterns across individuals. The natural categories and respective frequencies of different types of idiosyncratic eye-movement patterns have not been specifically investigated before, so here we analyzed the spatial patterns of eye-movements for 48 participants to estimate the frequency of different kinds of individual eye-movement patterns to faces in the normal healthy population. Four natural clusters were discovered such that approximately 25% of our participants' fixation density peaks clustered over the left eye region (observer's perspective), 23% over the right eye region, 31% over the nasion/bridge region of the nose, and 20% over the region spanning the nose, philtrum, and upper lips. We did not find any relationship between particular idiosyncratic eye-movement patterns and recognition performance. Individuals' eye-movement patterns early in a trial were more stereotyped than later ones and idiosyncratic fixation patterns evolved with time into a trial. Finally, while face inversion strongly modulated eye-movement patterns, individual patterns did not become less distinct for inverted compared to upright faces. Group-averaged fixation patterns do not represent individual patterns well, so exploration of such individual patterns is of value for future studies of visual cognition.

@article{Arkesteijn2018,
title = {Target-distractor competition cannot be resolved across a saccade},
author = {Kiki Arkesteijn and Jeroen B J Smeets and Mieke Donk and Artem V Belopolsky},
doi = {10.1038/s41598-018-34120-4},
year = {2018},
date = {2018-01-01},
journal = {Scientific Reports},
volume = {8},
number = {1},
pages = {1--10},
publisher = {Springer US},
abstract = {When a distractor is presented in close spatial proximity to a target, a saccade tends to land in between the two objects rather than on the target. This robust phenomenon (also referred to as the global effect) is thought to reflect unresolved competition between target and distractor. It is unclear whether this landing bias persists across saccades since a saccade displaces the retinotopic representations of target and distractor. In the present study participants made successive saccades towards two saccadic targets which were presented simultaneously with an irrelevant distractor in close proximity to the second saccade target. The second saccade was either visually-guided or memory-guided. For the memory-guided trials, the second saccade showed a landing bias towards the location of the distractor, despite the disappearance of the distractor after the first saccade. In contrast, for the visually-guided trials, the bias was corrected and the landing bias was eliminated, even for saccades with the shortest intersaccadic intervals. This suggests that the biased saccade plan was remapped across the first saccade. Therefore, we conclude that the target-distractor competition was not resolved across a saccade, but can be resolved based on visual information that is available after a saccade.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

When a distractor is presented in close spatial proximity to a target, a saccade tends to land in between the two objects rather than on the target. This robust phenomenon (also referred to as the global effect) is thought to reflect unresolved competition between target and distractor. It is unclear whether this landing bias persists across saccades since a saccade displaces the retinotopic representations of target and distractor. In the present study participants made successive saccades towards two saccadic targets which were presented simultaneously with an irrelevant distractor in close proximity to the second saccade target. The second saccade was either visually-guided or memory-guided. For the memory-guided trials, the second saccade showed a landing bias towards the location of the distractor, despite the disappearance of the distractor after the first saccade. In contrast, for the visually-guided trials, the bias was corrected and the landing bias was eliminated, even for saccades with the shortest intersaccadic intervals. This suggests that the biased saccade plan was remapped across the first saccade. Therefore, we conclude that the target-distractor competition was not resolved across a saccade, but can be resolved based on visual information that is available after a saccade.
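One common way to quantify this kind of landing bias is to project the saccade endpoint onto the axis connecting target and distractor; the sketch below illustrates that idea, but it is an assumption for exposition, not necessarily the measure used in the paper, and the coordinates are hypothetical.

```python
# Illustrative sketch only: quantify a landing bias toward a distractor by
# projecting the saccade endpoint onto the target->distractor axis.
# 0 = landing on the target, 1 = on the distractor; intermediate values
# (the "global effect") suggest unresolved competition between the two.
import numpy as np

def landing_bias(endpoint, target, distractor):
    """Normalised projection of the saccade endpoint onto the
    target->distractor axis (all arguments are (x, y) positions)."""
    endpoint, target, distractor = map(np.asarray, (endpoint, target, distractor))
    axis = distractor - target
    return float(np.dot(endpoint - target, axis) / np.dot(axis, axis))

# Hypothetical example: target at (8, 0) deg, distractor at (8, 3) deg,
# and a saccade landing roughly halfway between them.
print(landing_bias(endpoint=(7.9, 1.4), target=(8, 0), distractor=(8, 3)))  # ~0.47
```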

@article{Armstrong2003a,
title = {Inhibitory control of eye movements during oculomotor countermanding in adults with attention-deficit hyperactivity disorder},
author = {I T Armstrong and Douglas P Munoz},
doi = {10.1007/s00221-003-1569-3},
year = {2003},
date = {2003-01-01},
journal = {Experimental Brain Research},
volume = {152},
number = {4},
pages = {444--452},
abstract = {Children with attention-deficit hyperactivity disorder (ADHD) are impulsive, and that impulsiveness can be measured using a countermanding task. Although the overt behaviors of ADHD attenuate with age, it is not clear how well impulsiveness is controlled in adults with ADHD. We tested ADHD adults with an oculomotor countermanding task. The task included two conditions: on 75% of the trials, participants viewed a central fixation marker and then looked to an eccentric target that appeared simultaneously with the disappearance of the fixation marker; on 25% of the trials, a signal was presented at variable delays after target appearance. The signal instructed subjects to stop, or countermand, an eye movement to the target. A correct movement in this case would be to hold gaze at the central fixation location. We expected ADHD participants to be impulsive in their countermanding performance. Additionally, we expected that a visual stop signal at the central fixation location would assist oculomotor countermanding because the signal is presented in the "stop" location, at fixation. To test whether a central stop signal positively biased countermanding, we used three types of stop signal: a central visual marker, a peripheral visual signal, and a non-localized sound. All subjects performed best with the central visual stop signal. Subjects with ADHD were less able to countermand eye movements and were influenced more negatively by the non-central signals. Oculomotor countermanding may be useful for quantifying impulsive dysfunction in adults with ADHD, especially if a non-central stop signal is applied.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Children with attention-deficit hyperactivity disorder (ADHD) are impulsive, and that impulsiveness can be measured using a countermanding task. Although the overt behaviors of ADHD attenuate with age, it is not clear how well impulsiveness is controlled in adults with ADHD. We tested ADHD adults with an oculomotor countermanding task. The task included two conditions: on 75% of the trials, participants viewed a central fixation marker and then looked to an eccentric target that appeared simultaneously with the disappearance of the fixation marker; on 25% of the trials, a signal was presented at variable delays after target appearance. The signal instructed subjects to stop, or countermand, an eye movement to the target. A correct movement in this case would be to hold gaze at the central fixation location. We expected ADHD participants to be impulsive in their countermanding performance. Additionally, we expected that a visual stop signal at the central fixation location would assist oculomotor countermanding because the signal is presented in the "stop" location, at fixation. To test whether a central stop signal positively biased countermanding, we used three types of stop signal: a central visual marker, a peripheral visual signal, and a non-localized sound. All subjects performed best with the central visual stop signal. Subjects with ADHD were less able to countermand eye movements and were influenced more negatively by the non-central signals. Oculomotor countermanding may be useful for quantifying impulsive dysfunction in adults with ADHD, especially if a non-central stop signal is applied.
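The go/stop trial structure described in this abstract (75% go trials, 25% stop trials with variable stop-signal delays and one of three signal types) can be sketched as a simple trial schedule. The sketch below is a minimal illustration under assumed parameters; the delay values, trial count, and per-trial randomisation of signal type are placeholders, not the study's actual settings.

```python
# Illustrative sketch only: build a trial list with the 75% go / 25% stop
# structure described in the abstract. The stop-signal delays, trial count,
# and signal assignment are assumed example values, not the study's parameters.
import random

random.seed(1)

N_TRIALS = 200
STOP_SIGNAL_DELAYS_MS = [50, 100, 150, 200]            # assumed example delays
STOP_SIGNAL_TYPES = ["central_visual", "peripheral_visual", "auditory"]

trials = []
for _ in range(N_TRIALS):
    if random.random() < 0.25:                          # 25% stop trials
        trials.append({
            "kind": "stop",
            "delay_ms": random.choice(STOP_SIGNAL_DELAYS_MS),
            "signal": random.choice(STOP_SIGNAL_TYPES),
        })
    else:                                               # 75% go trials
        trials.append({"kind": "go"})

n_stop = sum(t["kind"] == "stop" for t in trials)
print(f"{n_stop} stop trials out of {N_TRIALS} ({n_stop / N_TRIALS:.0%})")
```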
