EyeLink Eye Tracking Publications Library

All 7000+ peer-reviewed EyeLink eye tracking publications (up to 2018) are listed below in alphabetical order by first author. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye tracking paper, please email us!

@article{Aagten-Murphy2017,
title = {Automatic and intentional influences on saccade landing},
author = {David Aagten-Murphy and Paul M Bays},
doi = {10.1152/jn.00141.2017},
year = {2017},
date = {2017-01-01},
journal = {Journal of Neurophysiology},
volume = {118},
pages = {1105--1122},
abstract = {Saccadic eye movements enable us to rapidly direct our high-resolution fovea onto relevant parts of the visual world. However, while we can intentionally select a location as a saccade target, the wider visual scene also influences our executed movements. In the presence of multiple objects, eye movements may be “captured” to the location of a distractor object, or be biased toward the intermediate position between objects (the “global effect”). Here we examined how the relative strengths of the global effect and visual object capture changed with saccade latency, the separation between visual items and stimulus contrast. Importantly, while many previous studies have omitted giving observers explicit instructions, we instructed participants to either saccade to a specified target object or to the midpoint between two stimuli. This allowed us to examine how their explicit movement goal influenced the likelihood that their saccades terminated at either the target, distractor, or intermediate locations. Using a probabilistic mixture model, we found evidence that both visual object capture and the global effect co-occurred at short latencies and declined as latency increased. As object separation increased, capture came to dominate the landing positions of fast saccades, with reduced global effect. Using the mixture model fits, we dissociated the proportion of unavoidably captured saccades to each location from those intentionally directed to the task goal. From this we could extract the time course of competition between automatic capture and intentional targeting. We show that task instructions substantially altered the distribution of saccade landing points, even at the shortest latencies. NEW & NOTEWORTHY When making an eye movement to a target location, the presence of a nearby distractor can cause the saccade to unintentionally terminate at the distractor itself or the average position in between stimuli. With probabilistic mixture models, we quantified how both unavoidable capture and goal-directed targeting were influenced by changing the task and the target-distractor separation. Using this novel technique, we could extract the time course over which automatic and intentional processes compete for control of saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aamand2013,
title = {A NO way to BOLD?: Dietary nitrate alters the hemodynamic response to visual stimulation},
author = {Rasmus Aamand and Thomas Dalsgaard and Yi-Ching Lynn Ho and Arne Møller and Andreas Roepstorff and Torben E Lund},
doi = {10.1016/j.neuroimage.2013.06.069},
year = {2013},
date = {2013-01-01},
journal = {NeuroImage},
volume = {83},
pages = {397--407},
publisher = {Elsevier Inc.},
abstract = {Neurovascular coupling links neuronal activity to vasodilation. Nitric oxide (NO) is a potent vasodilator, and in neurovascular coupling NO production from NO synthases plays an important role. However, another pathway for NO production also exists, namely the nitrate-nitrite-NO pathway. On this basis, we hypothesized that dietary nitrate (NO3-) could influence the brain's hemodynamic response to neuronal stimulation. In the present study, 20 healthy male participants were given either sodium nitrate (NaNO3) or sodium chloride (NaCl) (saline placebo) in a crossover study and were shown visual stimuli based on the retinotopic characteristics of the visual cortex. Our primary measure of the hemodynamic response was the blood oxygenation level dependent (BOLD) response measured with high-resolution functional magnetic resonance imaging (0.64×0.64×1.8 mm) in the visual cortex. From this response, we made a direct estimate of key parameters characterizing the shape of the BOLD response (i.e. lag and amplitude). During elevated nitrate intake, corresponding to the nitrate content of a large plate of salad, both the hemodynamic lag and the BOLD amplitude decreased significantly (7.0±2% and 7.9±4%, respectively), and the variation across activated voxels of both measures decreased (12.3±4% and 15.3±7%, respectively). The baseline cerebral blood flow was not affected by nitrate. Our experiments demonstrate, for the first time, that dietary nitrate may modulate the local cerebral hemodynamic response to stimuli. A faster and smaller BOLD response, with less variation across local cortex, is consistent with an enhanced hemodynamic coupling during elevated nitrate intake. These findings suggest that dietary patterns, via the nitrate-nitrite-NO pathway, may be a potential way to affect key properties of neurovascular coupling. This could have major clinical implications, which remain to be explored.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aamand2014,
title = {Dietary nitrate facilitates an acetazolamide-induced increase in cerebral blood flow during visual stimulation},
author = {Rasmus Aamand and Yi-Ching Lynn Ho and Thomas Dalsgaard and Andreas Roepstorff and Torben E Lund},
doi = {10.1152/japplphysiol.00797.2013},
year = {2014},
date = {2014-01-01},
journal = {Journal of Applied Physiology},
volume = {116},
number = {3},
pages = {267--273},
abstract = {The carbonic anhydrase (CA) inhibitor acetazolamide (AZ) is used routinely to estimate cerebrovascular reserve capacity in patients, as it reliably increases cerebral blood flow (CBF). However, the mechanism by which AZ accomplishes this CBF increase is not entirely understood. We recently discovered that CA can produce nitric oxide (NO) from nitrite, and that AZ enhances this NO production in vitro. In fact, this interaction between AZ and CA accounted for a large part of AZ's vasodilatory action, which fits well with the known vasodilatory potency of NO. The present study aimed to assess whether AZ acts similarly in vivo in the human cerebrovascular system. Hence, we increased or minimized the dietary intake of nitrate in 20 healthy male participants, showed them a full-field flickering dartboard, and measured their CBF response to this visual stimulus with arterial spin labeling. Doing so, we found a significant positive interaction between the dietary intake of nitrate and the CBF modulation afforded by AZ during visual stimulation. In addition, but contrary to studies conducted in elderly participants, we report no effect of nitrate intake on resting CBF in healthy human participants. The present study provides in vivo support for an enhancing effect of AZ on the NO production from nitrite catalyzed by CA in the cerebrovascular system. Furthermore, our results, in combination with the results of other groups, indicate that nitrate may have significant importance to vascular function when the cerebrovascular system is challenged by age or disease.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abbott2015,
title = {Skipping syntactically illegal the previews: The role of predictability},
author = {Matthew J Abbott and Bernhard Angele and Danbi Y Ahn and Keith Rayner},
doi = {10.1037/xlm0000142},
year = {2015},
date = {2015-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {41},
number = {6},
pages = {1703--1714},
abstract = {Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abbott2015a,
title = {The effect of plausibility on eye movements in reading: Testing E-Z Reader's null predictions},
author = {Matthew J Abbott and Adrian Staub},
doi = {10.1016/j.jml.2015.07.002},
year = {2015},
date = {2015-01-01},
journal = {Journal of Memory and Language},
volume = {85},
pages = {76--87},
publisher = {Elsevier Inc.},
abstract = {The E-Z Reader 10 model of eye movements in reading (Reichle, Warren, & McConnell, 2009) posits that the process of word identification strictly precedes the process of integration of a word into its syntactic and semantic context. The present study reports a single large-scale (N=112) eyetracking experiment in which the frequency and plausibility of a target word in each sentence were factorially manipulated. The results were consistent with E-Z Reader's central predictions: frequency but not plausibility influenced the probability that the word was skipped over by the eyes rather than directly fixated, and the two variables had additive, not interactive, effects on all reading time measures. Evidence in favor of null effects and null interactions was obtained by computing Bayes factors, using the default priors and sampling methods for ANOVA models implemented by Rouder, Morey, Speckman, and Province (2012). The results suggest that though a word's plausibility may have a measurable influence as early as the first fixation duration on the target word, in fact plausibility may be influencing only a post-lexical processing stage, rather than lexical identification itself.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{AbediKhoozani2018,
title = {Neck muscle spindle noise biases reaches in a multisensory integration task},
author = {Parisa {Abedi Khoozani} and Gunnar Blohm},
doi = {10.1152/jn.00643.2017},
year = {2018},
date = {2018-01-01},
journal = {Journal of Neurophysiology},
volume = {120},
number = {3},
pages = {893--909},
abstract = {Reference frame transformations (RFTs) are crucial components of sensorimotor transformations in the brain. Stochasticity in RFTs has been suggested to add noise to the transformed signal due to variability in transformation parameter estimates (e.g. angle) as well as the stochastic nature of computations in spiking networks of neurons. Here, we varied the RFT angle together with the associated variability and evaluated the behavioral impact in a reaching task that required variability-dependent visual-proprioceptive multisensory integration. Crucially, reaches were performed with the head either straight or rolled 30° to either shoulder, and we also applied neck loads of 0 or 1.8 kg (left or right) in a 3x3 design, resulting in different combinations of estimated head roll angle magnitude and variance required in RFTs. A novel 3D stochastic model of multisensory integration across reference frames was fitted to the data and captured our main behavioral findings: (1) neck load biased head angle estimation across all head roll orientations, resulting in systematic shifts in reach errors; (2) increased neck muscle tone led to increased reach variability, due to signal-dependent noise; (3) both head roll and neck load created larger angular errors in reaches to visual targets away from the body compared to reaches toward the body. These results show that noise in muscle spindles and stochasticity in general have a tangible effect on RFTs underlying reach planning. Since RFTs are omnipresent in the brain, our results could have implications for processes as diverse as motor control, decision making, posture/balance control, and perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abegg2010,
title = {Systematic diagonal and vertical errors in antisaccades and memory-guided saccades},
author = {Mathias Abegg and Hyung Lee and Jason J S Barton},
year = {2010},
date = {2010-01-01},
journal = {Journal of Eye Movement Research},
volume = {3},
number = {3},
pages = {1--10},
abstract = {Studies of memory-guided saccades in monkeys show an upward bias, while studies of antisaccades in humans show a diagonal effect, a deviation of endpoints toward the 45° diagonal. To determine if these two different spatial biases are specific to different types of saccades, we studied prosaccades, antisaccades and memory-guided saccades in humans. The diagonal effect occurred not with prosaccades but with antisaccades and memory-guided saccades with long intervals, consistent with hypotheses that it originates in computations of goal location under conditions of uncertainty. There was a small upward bias for memory-guided saccades but not prosaccades or antisaccades. Thus this bias is not a general effect of target uncertainty but a property specific to memory-guided saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abegg2010a,
title = {'Alternate-goal bias' in antisaccades and the influence of expectation},
author = {Mathias Abegg and Amadeo R Rodriguez and Hyung Lee and Jason J S Barton},
doi = {10.1007/s00221-010-2259-6},
year = {2010},
date = {2010-01-01},
journal = {Experimental Brain Research},
volume = {203},
number = {3},
pages = {553--562},
abstract = {Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abegg2011,
title = {Knowing the future: Partial foreknowledge effects on the programming of prosaccades and antisaccades},
author = {Mathias Abegg and Dara S Manoach and Jason J S Barton},
doi = {10.1016/j.visres.2010.11.006},
year = {2011},
date = {2011-01-01},
journal = {Vision Research},
volume = {51},
number = {1},
pages = {215--221},
abstract = {Foreknowledge about the demands of an upcoming trial may be exploited to optimize behavioural responses. In the current study we systematically investigated the benefits of partial foreknowledge - that is, when some but not all aspects of a future trial are known in advance. For this we used an ocular motor paradigm with horizontal prosaccades and antisaccades. Predictable sequences were used to create three partial foreknowledge conditions: one with foreknowledge about the stimulus location only, one with foreknowledge about the task set only, and one with foreknowledge about the direction of the required response only. These were contrasted with a condition of no-foreknowledge and a condition of complete foreknowledge about all three parameters. The results showed that the three types of foreknowledge affected saccadic efficiency differently. While foreknowledge about stimulus-location had no effect on efficiency, task foreknowledge had some effect and response-foreknowledge was as effective as complete foreknowledge. Foreknowledge effects on switch costs followed a similar pattern in general, but were not specific for switching of the trial attribute for which foreknowledge was available. We conclude that partial foreknowledge has a differential effect on efficiency, most consistent with preparatory activation of a motor schema in advance of the stimulus, with consequent benefits for both switched and repeated trials.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abegg2012,
title = {Antisaccades generate two types of saccadic inhibition},
author = {Mathias Abegg and Nishant Sharma and Jason J S Barton},
doi = {10.1016/j.biopsycho.2011.10.007},
year = {2012},
date = {2012-01-01},
journal = {Biological Psychology},
volume = {89},
number = {1},
pages = {191--194},
abstract = {To make an antisaccade away from a stimulus, one must also suppress the more reflexive prosaccade to the stimulus. Whether this inhibition is diffuse or specific for saccade direction is not known. We used a paradigm examining inter-trial carry-over effects. Twelve subjects performed sequences of four identical antisaccades followed by sequences of four prosaccades randomly directed at the location of the antisaccade stimulus, the location of the antisaccade goal, or neutral locations. We found two types of persistent antisaccade-related inhibition. First, prosaccades in any direction were delayed only in the first trial after the antisaccades. Second, prosaccades to the location of the antisaccade stimulus were delayed more than all other prosaccades, and this persisted from the first to the fourth subsequent trial. These findings are consistent with both a transient global inhibition and a more sustained focal inhibition specific for the location of the antisaccade stimulus.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Visual exploration of natural scenes imposes demands that differ between the upper and the lower visual hemifield. Yet little is known about how ocular motor performance is affected by the location of visual stimuli or the direction of a behavioural response. We compared saccadic latencies between upper and lower hemifield in a variety of conditions, including short-latency prosaccades, long-latency prosaccades, antisaccades, memory-guided saccades and saccades with increased attentional and selection demand. All saccade types, except memory-guided saccades, had shorter latencies when saccades were directed towards the upper field as compared to downward saccades (p < 0.05). This upper field reaction time advantage probably arises in ocular motor rather than visual processing. It may originate in structures involved in motor preparation rather than execution.

@article{Abekawa2010,
title = {Spatial coincidence of intentional actions modulates an implicit visuomotor control},
author = {Naotoshi Abekawa and Hiroaki Gomi},
doi = {10.1152/jn.91133.2008},
year = {2010},
date = {2010-01-01},
journal = {Journal of Neurophysiology},
volume = {103},
number = {5},
pages = {2717--2727},
abstract = {We investigated a visuomotor mechanism contributing to reach correction: the manual following response (MFR), which is a quick response to background visual motion that frequently occurs as a reafference when the body moves. Although several visual specificities of the MFR have been elucidated, the functional and computational mechanisms of its motor coordination remain unclear mainly because it involves complex relationships among gaze, reaching target, and visual stimuli. To directly explore how these factors interact in the MFR, we assessed the impact of spatial coincidences among gaze, arm reaching, and visual motion on the MFR. When gaze location was displaced from the reaching target with an identical visual motion kept on the retina, the amplitude of the MFR significantly decreased as displacement increased. A factorial manipulation of gaze, reaching-target, and visual motion locations showed that the response decrease is due to the spatial separation between gaze and reaching target but is not due to the spatial separation between visual motion and reaching target. Additionally, elimination of visual motion around the fovea attenuated the MFR. The effects of these spatial coincidences on the MFR are completely different from their effects on the perceptual mislocalization of targets caused by visual motion. Furthermore, we found clear differences between the modulation sensitivities of the MFR and the ocular following response to spatial mismatch between gaze and reaching locations. These results suggest that the MFR modulation observed in our experiment is not due to changes in visual interaction between target and visual motion or to modulation of motion sensitivity in early visual processing. Instead the motor command of the MFR appears to be modulated by the spatial relationship between gaze and reaching.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Abekawa2014,
title = {Eye-hand coordination in on-line visuomotor adjustments},
author = {Naotoshi Abekawa and Toshio Inui and Hiroaki Gomi},
doi = {10.1097/WNR.0000000000000111},
year = {2014},
date = {2014-01-01},
journal = {NeuroReport},
volume = {25},
number = {7},
pages = {441--445},
abstract = {When we perform a visually guided reaching action, the brain coordinates our hand and eye movements. Eye-hand coordination has been examined widely, but it remains unclear whether the hand and eye motor systems are coordinated during on-line visuomotor adjustments induced by a target jump during a reaching movement. As such quick motor responses are required when we interact with dynamic environments, eye and hand movements could be coordinated even during on-line motor control. Here, we examine the relationship between online hand adjustment and saccadic eye movement. In contrast to the well-known temporal order of eye and hand initiations where the hand follows the eyes, we found that on-line hand adjustment was initiated before the saccade onset. Despite this order reversal, a correlation between hand and saccade latencies was observed, suggesting that the hand motor system is not independent of eye control even when the hand response was induced before the saccade. Moreover, the latency of the hand adjustment with saccadic eye movement was significantly shorter than that with eye fixation. This hand latency modulation cannot be ascribed to any changes of visual or oculomotor reafferent information as the saccade was not yet initiated when the hand adjustment started. Taken together, the hand motor system would receive preparation signals rather than reafference signals of saccadic eye movements to provide quick manual adjustments of the goal-directed eye-hand movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Abekawa2015,
title = {Online gain update for manual following response accompanied by gaze shift during arm reaching},
author = {Naotoshi Abekawa and Hiroaki Gomi},
doi = {10.1152/jn.00281.2014},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neurophysiology},
volume = {113},
number = {4},
pages = {1206--1216},
abstract = {To capture objects by hand, online motor corrections are required to compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Abekawa2018,
title = {Disentangling the visual, motor and representational effects of vestibular input},
author = {Naotoshi Abekawa and Elisa R Ferre and Maria Gallagher and Hiroaki Gomi and Patrick Haggard},
doi = {10.1016/j.cortex.2018.04.003},
year = {2018},
date = {2018-01-01},
journal = {Cortex},
volume = {104},
pages = {46--57},
abstract = {The body midline provides a basic reference for egocentric representation of external space. Clinical observations have suggested that vestibular information underpins egocentric representations. Here we aimed to clarify whether and how vestibular inputs contribute to egocentric representation in healthy volunteers. In a psychophysical task, participants were asked to judge whether visual stimuli were located to the left or to the right of their body midline. Artificial vestibular stimulation was applied to stimulate the vestibular organs. We found that artificial stimulation of the vestibular system biased body midline perception. Importantly, no effect was found on motor effector selection. We also ruled out additional explanations based on allocentric visual representations and on potential indirect effects caused by vestibular-driven movements of the eyes, head and body. Taken together our data suggest that vestibular information contributes to computation of egocentric representations by affecting the internal representation of the body midline.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Abel2008,
title = {Wavelet analysis in infantile nystagmus syndrome: Limitations and abilities},
author = {Larry Allen Abel and Zhong I Wang and Louis F Dell'Osso},
doi = {10.1167/iovs.08-1710},
year = {2008},
date = {2008-01-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {49},
number = {8},
pages = {3413--3423},
abstract = {PURPOSE: To investigate the proper usage of wavelet analysis in infantile nystagmus syndrome (INS) and determine its limitations and abilities. METHODS: Data were analyzed from accurate eye-movement recordings of INS patients. Wavelet analysis was performed to examine the foveation characteristics, morphologic characteristics and time variation in different INS waveforms. Also compared were the wavelet analysis and the expanded nystagmus acuity function (NAFX) analysis on sections of pre- and post-tenotomy data. RESULTS: Wavelet spectra showed some sensitivity to different features of INS waveforms and reflected their variations across time. However, wavelet analysis was not effective in detecting foveation periods, especially in a complicated INS waveform. NAFX, on the other hand, was a much more direct way of evaluating waveform changes after nystagmus treatments. CONCLUSIONS: Wavelet analysis is a tool that performs, with difficulty, some things that can be done faster and better by directly operating on the nystagmus waveform itself. It appears, however, to be insensitive to the subtle but visually important improvements brought about by INS therapies. Wavelet analysis may have a role in developing automated waveform classifiers where its time-dependent characterization of the waveform can be used. The limitations of wavelet analysis outweighed its abilities in INS waveform-characteristic examination.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Abeles2017,
title = {Just look away: Gaze aversions as an overt attentional disengagement mechanism},
author = {Dekel Abeles and Shlomit Yuval-Greenberg},
doi = {10.1016/j.cognition.2017.06.021},
year = {2017},
date = {2017-01-01},
journal = {Cognition},
volume = {168},
pages = {99--109},
publisher = {Elsevier B.V.},
abstract = {During visual exploration of a scene, the eye-gaze tends to be directed toward more salient image-locations, containing more information. However, while performing non-visual tasks, such information-seeking behavior could be detrimental to performance, as the perception of irrelevant but salient visual input may unnecessarily increase the cognitive-load. It would be therefore beneficial if during non-visual tasks, eye-gaze would be governed by a drive to reduce saliency rather than maximize it. The current study examined the phenomenon of gaze-aversion during non-visual tasks, which is hypothesized to act as an active avoidance mechanism. In two experiments, gaze-position was monitored by an eye-tracker while participants performed an auditory mental arithmetic task, and in a third experiment they performed an undemanding naming task. Task-irrelevant simple motion stimuli (drifting grating and random dot kinematogram) were centrally presented, moving at varying speeds. Participants averted their gaze away from the moving stimuli more frequently and for longer proportions of the time when the motion was faster than when it was slower. Additionally, a positive correlation was found between the task's difficulty and this aversion behavior. When the task was highly undemanding, no gaze aversion behavior was observed. We conclude that gaze aversion is an active avoidance strategy, sensitive to both the physical features of the visual distractions and the cognitive load imposed by the non-visual task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Abeles2018,
title = {Oculomotor behavior during non-visual tasks: The role of visual saliency},
author = {Dekel Abeles and Roy Amit and Shlomit Yuval-Greenberg},
doi = {10.1371/journal.pone.0198242},
year = {2018},
date = {2018-01-01},
journal = {PLoS ONE},
volume = {13},
number = {6},
pages = {1--21},
abstract = {During visual exploration or free-view, gaze positioning is largely determined by the tendency to maximize visual saliency: more salient locations are more likely to be fixated. However, when visual input is completely irrelevant for performance, such as with non-visual tasks, this saliency maximization strategy may be less advantageous and potentially even disruptive for task-performance. Here, we examined whether visual saliency remains a strong driving force in determining gaze positions even in non-visual tasks. We tested three alternative hypotheses: a) That saliency is disadvantageous for non-visual tasks and therefore gaze would tend to shift away from it and towards non-salient locations; b) That saliency is irrelevant during non-visual tasks and therefore gaze would not be directed towards it but also not away-from it; c) That saliency maximization is a strong behavioral drive that would prevail even during non-visual tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Ablinger2013,
title = {Recovery in a letter-by-letter reader: More efficiency at the expense of normal reading strategy},
author = {Irene Ablinger and Walter Huber and Kerstin I Schattka and Ralph Radach},
doi = {10.1080/13554794.2012.667119},
year = {2013},
date = {2013-01-01},
journal = {Neurocase},
volume = {19},
number = {3},
pages = {236--255},
abstract = {Although changes in reading performance of recovering letter-by-letter readers have been described in some detail, no prior research has provided an in-depth analysis of the underlying adaptive word processing strategies. Our work examined the reading performance of a letter-by-letter reader, FH, over a period of 15 months, using eye movement methodology to delineate the recovery process at two different time points (T1, T2). A central question is whether recovery is characterized either by moving back towards normal word processing or by refinement and possibly automatization of an existing pathological strategy that was developed in response to the impairment. More specifically, we hypothesized that letter-by-letter reading may be executed with at least four different strategies and our work sought to distinguish between these alternatives. During recovery significant improvements in reading performance were achieved. A shift of fixation positions from the far left to the extreme right of target words was combined with many small and very few longer regressive saccades. Apparently, ‘letter-by-letter reading’ took the form of local clustering, most likely corresponding to the formation of sublexical units of analysis. This pattern was more pronounced at T2, suggesting that improvements in reading efficiency may come at the expense of making it harder to eventually return to normal reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Ablinger2014,
title = {Eye movement analyses indicate the underlying reading strategy in the recovery of lexical readers},
author = {Irene Ablinger and Walter Huber and Ralph Radach},
doi = {10.1080/02687038.2014.894960},
year = {2014},
date = {2014-01-01},
journal = {Aphasiology},
volume = {28},
number = {6},
pages = {640--657},
abstract = {Background: Psycholinguistic error analysis of dyslexic responses in various reading tasks provides the primary basis for clinically discriminating subtypes of pathological reading. Within this framework, phonology-related errors are indicative of a sequential word processing strategy, whereas lexical and semantic errors are associated with a lexical reading strategy. Despite the large number of published intervention studies, relatively little is known about changes in error distributions during recovery in dyslexic patients. Aims: The main purpose of the present work was to extend the scope of research on the time course of recovery in readers with acquired dyslexia, using eye tracking methodology to examine word processing in real time. The guiding hypothesis was that in lexical readers a reduction of lexical errors and an emerging predominant production of phonological errors should be associated with a change to a more segmental moment-to-moment reading behaviour. Methods & Procedures: Five patients participated in an eye movement supported reading intervention, where both lexical and segmental reading was facilitated. Reading performance was assessed before (T1) and after (T2) therapy intervention via recording of eye movements. Analyses included a novel way to examine the spatiotemporal dynamics of processing using distributions of fixation positions at different time intervals. These subdistributions reveal the gradual shifting of fixation positions during word processing, providing an adequate metric for objective classification of online reading strategies. Outcome & Results: Therapy intervention led to improved reading accuracy in all subjects. In three of five participants, analyses revealed a restructuring in the underlying reading mechanisms from predominantly lexical to more segmental word processing. In contrast, two subjects maintained their lexical reading procedures. Importantly, the fundamental assumption that a high number of phonologically based reading errors must be associated with segmental word processing routines, while the production of lexical errors is indicative of a holistic reading strategy, could not be verified. Conclusions: Our results indicate that despite general improvements in reading performance, only some patients reorganised their word identification process. Contradictory data raise doubts on the validity of psycholinguistic error analysis as an exclusive indicator of changes in reading strategy. We suggest combining this traditional approach with innovative eye tracking methodology in the interest of more comprehensive diagnostic strategies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


@article{Ablinger2014a,
title = {An eye movement based reading intervention in lexical and segmental readers with acquired dyslexia},
author = {Irene Ablinger and Kerstin {Von Heyden} and Christian Vorstius and Katja Halm and Walter Huber and Ralph Radach},
doi = {10.1080/09602011.2014.913530},
year = {2014},
date = {2014-01-01},
journal = {Neuropsychological Rehabilitation},
volume = {24},
number = {6},
pages = {833--867},
abstract = {Due to their brain damage, aphasic patients with acquired dyslexia often rely to a greater extent on lexical or segmental reading procedures. Thus, therapy intervention is mostly targeted on the more impaired reading strategy. In the present work we introduce a novel therapy approach based on real-time measurement of patients' eye movements as they attempt to read words. More specifically, an eye movement contingent technique of stepwise letter de-masking was used to support sequential reading, whereas fixation-dependent initial masking of non-central letters stimulated a lexical (parallel) reading strategy. Four lexical and four segmental readers with acquired central dyslexia received our intensive reading intervention. All participants showed remarkable improvements as evident in reduced total reading time, a reduced number of fixations per word and improved reading accuracy. Both types of intervention led to item-specific training effects in all subjects. A generalisation to untrained items was only found in segmental readers after the lexical training. Eye movement analyses were also used to compare word processing before and after therapy, indicating that all patients, with one exclusion, maintained their preferred reading strategy. However, in several cases the balance between sequential and lexical processing became less extreme, indicating a more effective individual interplay of both word processing routes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ablinger2016,
title = {Diverging receptive and expressive word processing mechanisms in a deep dyslexic reader},
author = {Irene Ablinger and Ralph Radach},
doi = {10.1016/j.neuropsychologia.2015.11.023},
year = {2016},
date = {2016-01-01},
journal = {Neuropsychologia},
volume = {81},
pages = {12--21},
publisher = {Elsevier},
abstract = {We report on KJ, a patient with acquired dyslexia due to cerebral artery infarction. He represents an unusually clear case of an "output" deep dyslexic reader, with a distinct pattern of pure semantic reading. According to current neuropsychological models of reading, the severity of this condition is directly related to the degree of impairment in semantic and phonological representations and the resulting imbalance in the interaction between the two word processing pathways. The present work sought to examine whether an innovative eye movement supported intervention combining lexical and segmental therapy would strengthen phonological processing and lead to an attenuation of the extreme semantic over-involvement in KJ's word identification process. Reading performance was assessed before (T1), between (T2), and after (T3) therapy using both analyses of linguistic errors and word viewing patterns. Therapy resulted in improved reading aloud accuracy along with a change in error distribution that suggested a return to more sequential reading. Interestingly, this was in contrast to the dynamics of moment-to-moment word processing, as eye movement analyses still suggested a predominantly holistic strategy, even at T3. So, in addition to documenting the success of the therapeutic intervention, our results call for a theoretically important conclusion: Real-time letter and word recognition routines should be considered separately from properties of the verbal output. Combining both perspectives may provide a promising strategy for future assessment and therapy evaluation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abrahamse2013,
title = {Attention modulation by proportion congruency: The asymmetrical list shifting effect},
author = {Elger L Abrahamse and Wout Duthoo and Wim Notebaert and Evan F Risko},
year = {2013},
date = {2013-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {39},
number = {5},
pages = {1552--1562},
abstract = {Proportion congruency effects represent hallmark phenomena in current theorizing about cognitive control. This is based on the notion that proportion congruency determines the relative levels of attention to relevant and irrelevant information in conflict tasks. However, little empirical evidence exists that uniquely supports such an attention modulation account; moreover, a rivaling account was recently proposed that attributes the effect of proportion congruency to mere contingency learning. In the present study, the influences of shifts in list-wide (Experiment 1) or item-specific (Experiment 2) proportion congruency were investigated. As predicted by attention modulation but not by contingency learning, strong asymmetries were observed in such shifting: An increase in the proportion of congruent trials had only limited impact on the size of the congruency effect when participants were initially trained with a mostly incongruent list, but the impact was substantial for an equivalent increase of incongruent trials when participants were initially trained with a mostly congruent list. This asymmetrical list shifting effect directly supports attention modulation by proportion congruency manipulations and as such provides a novel tool for exploring cognitive control. Implications of our findings for existing theories of cognitive control are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Abrahamyan2016,
title = {Adaptable history biases in human perceptual decisions},
author = {Arman Abrahamyan and Laura Luz Silva and Steven C Dakin and Matteo Carandini and Justin L Gardner},
doi = {10.1073/pnas.1518786113},
year = {2016},
date = {2016-01-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {113},
number = {25},
pages = {E3548--E3557},
abstract = {When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject's default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Acha2008,
title = {The effect of neighborhood frequency in reading: Evidence with transposed-letter neighbors},
author = {Joana Acha and Manuel Perea},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {108},
pages = {290--300},
abstract = {Transposed-letter effects (e.g., jugde activates judge) pose serious problems for models of visual-word recognition that use position-specific coding schemes. However, even though the evidence of transposed-letter effects with nonword stimuli is strong, the evidence for word stimuli is scarce and inconclusive. The present experiment examined the effect of neighborhood frequency during normal silent reading using transposed-letter neighbors (e.g., silver, sliver). Two sets of low-frequency words were created (equated in the number of substitution neighbors, word frequency, and number of letters), which were embedded in sentences. In one set, the target word had a higher frequency transposed-letter neighbor, and in the other set, the target word had no transposed-letter neighbors. An inhibitory effect of neighborhood frequency was observed in measures that reflect late processing in words (number of regressions back to the target word, and total time). We examine the implications of these findings for models of visual-word recognition and reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Acik2010,
title = {Developmental changes in natural viewing behavior: Bottom-up and top-down differences between children, young adults and older adults},
author = {Alper A{ç}ik and Adjmal Sarwary and Rafael Schultze-Kraft and Selim Onat and Peter König},
doi = {10.3389/fpsyg.2010.00207},
year = {2010},
date = {2010-01-01},
journal = {Frontiers in Psychology},
volume = {1},
pages = {1--14},
abstract = {Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as local image feature - color, luminance contrast etc. - guided viewing, might be prominent but later overshadowed by more top-down processing. Moreover, with decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. Fixation discrimination performance of local feature values dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adult viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and elderly regarding the effects of active viewing on feature-related viewing: Explorativeness correlated with feature-related viewing negatively in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Acik2014,
title = {Real and implied motion at the center of gaze},
author = {Alper Acik and Andreas Bartel and Peter Konig},
year = {2014},
date = {2014-01-01},
journal = {Journal of Vision},
volume = {14},
number = {1},
pages = {1--19},
abstract = {Even though the dynamicity of our environment is a given, much of what we know on fixation selection comes from studies of static scene viewing. We performed a direct comparison of fixation selection on static and dynamic visual stimuli and investigated how far identical mechanisms drive these. We recorded eye movements while participants viewed movie clips of natural scenery and static frames taken from the same movies. Both were presented in the same high spatial resolution (1080 × 1920 pixels). The static condition allowed us to check whether local movement features computed from movies are salient even when presented as single frames. We observed that during the first second of viewing, movement and static features are equally salient in both conditions. Furthermore, predictability of fixations based on movement features decreased faster when viewing static frames as compared with viewing movie clips. Yet even during the later portion of static-frame viewing, the predictive value of movement features was still high above chance. Moreover, we demonstrated that, whereas the sets of movement and static features were statistically dependent within these sets, respectively, no dependence was observed between the two sets. Based on these results, we argue that implied motion is predictive of fixation similarly to real movement and that the onset of motion in natural stimuli is more salient than ongoing movement is. The present results allow us to address to what extent and when static image viewing is similar to the perception of a dynamic environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Acker2016,
title = {FEF inactivation with improved optogenetic methods},
author = {Leah Acker and Erica N Pino and Edward S Boyden and Robert Desimone},
doi = {10.1073/pnas.1610784113},
year = {2016},
date = {2016-01-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {113},
number = {46},
pages = {E7297--E7306},
abstract = {Optogenetic methods have been highly effective for suppressing neural activity and modulating behavior in rodents, but effects have been much smaller in primates, which have much larger brains. Here, we present a suite of technologies to use optogenetics effectively in primates and apply these tools to a classic question in oculomotor control. First, we measured light absorption and heat propagation in vivo, optimized the conditions for using the red-light-shifted halorhodopsin Jaws in primates, and developed a large-volume illuminator to maximize light delivery with minimal heating and tissue displacement. Together, these advances allowed for nearly universal neuronal inactivation across more than 10 mm(3) of the cortex. Using these tools, we demonstrated large behavioral changes (i.e., up to several fold increases in error rate) with relatively low light power densities (≤100 mW/mm(2)) in the frontal eye field (FEF). Pharmacological inactivation studies have shown that the FEF is critical for executing saccades to remembered locations. FEF neurons increase their firing rate during the three epochs of the memory-guided saccade task: visual stimulus presentation, the delay interval, and motor preparation. It is unclear from earlier work, however, whether FEF activity during each epoch is necessary for memory-guided saccade execution. By harnessing the temporal specificity of optogenetics, we found that FEF contributes to memory-guided eye movements during every epoch of the memory-guided saccade task (the visual, delay, and motor periods).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ackermann2013,
title = {Choice of saccade endpoint under risk},
author = {John F Ackermann and Michael S Landy},
doi = {10.1167/13.3.27},
year = {2013},
date = {2013-01-01},
journal = {Journal of Vision},
volume = {13},
number = {3},
pages = {1--20},
abstract = {Eye movements function to bring detailed information onto the high-resolution region of the retina. Previous research has shown that human observers select fixation points that maximize information acquisition and minimize target location uncertainty. In this study, we ask whether human observers choose the saccade endpoint that maximizes gain when there are explicit rewards associated with correctly detecting the target. Observers performed an 8-alternative forced-choice detection task for a contrast-defined target in noise. After a single saccade, observers indicated the target location. Each potential target location had an associated reward that was known to the observer. In some conditions, the reward at one location was higher than at the other locations. We compared human saccade endpoints to those of an ideal observer that maximizes expected gain given the respective human observer's visibility map, i.e., d' for target detection as a function of retinal location. Varying the location of the highest reward had a significant effect on human observers' distribution of saccade endpoints. Both human and ideal observers show a high density of saccades made toward the highest rewarded and actual target locations. But humans' overall spatial distributions of saccade endpoints differed significantly from the ideal observer as they made a greater number of saccades to locations far from the highest rewarded and actual target locations. Suboptimal choice of saccade endpoint, possibly in combination with suboptimal integration of information across saccades, had a significant effect on human observers' ability to correctly detect the target and maximize gain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ackermann2014,
title = {Statistical templates for visual search},
author = {John F Ackermann and Michael S Landy},
doi = {10.1167/14.3.18},
year = {2014},
date = {2014-01-01},
journal = {Journal of Vision},
volume = {14},
number = {3},
pages = {1--17},
abstract = {How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a "template," i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ackermann2015,
title = {Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards},
author = {John F Ackermann and Michael S Landy},
doi = {10.3758/s13414-014-0779-z},
year = {2015},
date = {2015-01-01},
journal = {Attention, Perception, \& Psychophysics},
volume = {77},
number = {2},
pages = {638--658},
abstract = {Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria ($wc_{opt}$). Subjects' criteria were not close to optimal relative to $wc_{opt}$. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to $\beta$ = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Acunzo2011,
title = {No emotional "Pop-out" effect in natural scene viewing},
author = {David J Acunzo and John M Henderson},
doi = {10.1037/a0022586},
year = {2011},
date = {2011-01-01},
journal = {Emotion},
volume = {11},
number = {5},
pages = {1134--1143},
abstract = {It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adab2014,
title = {Perceptual learning of simple stimuli modifies stimulus representations in posterior inferior temporal cortex},
author = {Hamed Zivari Adab and Ivo D Popivanov and Wim Vanduffel and Rufin Vogels},
doi = {10.1162/jocn},
year = {2014},
date = {2014-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {26},
number = {10},
pages = {2187--2200},
abstract = {Practicing simple visual detection and discrimination tasks improves performance, a signature of adult brain plasticity. The neural mechanisms that underlie these changes in performance are still unclear. Previously, we reported that practice in discriminating the orientation of noisy gratings (coarse orientation discrimination) increased the ability of single neurons in the early visual area V4 to discriminate the trained stimuli. Here, we ask whether practice in this task also changes the stimulus tuning properties of later visual cortical areas, despite the use of simple grating stimuli. To identify candidate areas, we used fMRI to map activations to noisy gratings in trained rhesus monkeys, revealing a region in the posterior inferior temporal (PIT) cortex. Subsequent single unit recordings in PIT showed that the degree of orientation selectivity was similar to that of area V4 and that the PIT neurons discriminated the trained orientations better than the untrained orientations. Unlike in previous single unit studies of perceptual learning in early visual cortex, more PIT neurons preferred trained compared with untrained orientations. The effects of training on the responses to the grating stimuli were also present when the animals were performing a difficult orthogonal task in which the grating stimuli were task-irrelevant, suggesting that the training effect does not need attention to be expressed. The PIT neurons could support orientation discrimination at low signal-to-noise levels. These findings suggest that extensive practice in discriminating simple grating stimuli not only affects early visual cortex but also changes the stimulus tuning of a late visual cortical area.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adab2016,
title = {Perturbation of posterior inferior temporal cortical activity impairs coarse orientation discrimination},
author = {Hamed Zivari Adab and Rufin Vogels},
doi = {10.1093/cercor/bhv178},
year = {2016},
date = {2016-01-01},
journal = {Cerebral Cortex},
volume = {26},
number = {9},
pages = {3814--3827},
abstract = {It is reasonable to assume that the discrimination of simple visual stimuli depends on the activity of early visual cortical neurons, because simple visual features are supposedly coded in these areas whereas more complex features are coded in late visual areas. Recently, we showed that training monkeys in a coarse orientation discrimination task modified the response properties of single neurons in the posterior inferior temporal (PIT) cortex, a late visual area. Here, we examined the contribution of PIT to coarse orientation discrimination using causal perturbation methods. Electrical stimulation (ES) of PIT with currents of at least 100 µA impaired coarse orientation discrimination in monkeys. The performance deterioration did not exclusively reflect a general impairment to perform a difficult perceptual task. However, high-current (650 µA) but not low-current (100 µA) ES also impaired fine color discrimination. ES of temporal regions dorsal or anterior to PIT produced less impairment of coarse orientation discrimination than ES of PIT. Injections of the GABA agonist muscimol into PIT also impaired performance. These data suggest that the late cortical area PIT is part of the network that supports coarse orientation discrimination of a simple grating stimulus, at least after extensive training in this task at threshold.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adam2012,
title = {Coordinated flexibility: How initial gaze position modulates eye-hand coordination and reaching},
author = {Jos J Adam and Simona Buetti and Dirk Kerzel},
doi = {10.1037/a0027592},
year = {2012},
date = {2012-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {38},
number = {4},
pages = {891--901},
abstract = {Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags between eye and hand onsets were small and initiation times were substantially correlated, suggesting simultaneous control and tight eye-hand coupling. With eyes and hand departing from different start locations (gaze aligned with the center of the range of possible target positions), time lags between eye and hand onsets were large and initiation times were largely uncorrelated, suggesting independent control and decoupling of eye and hand movements. Furthermore, initial gaze position strongly mediated manual reaching performance indexed by increments in movement time as a function of target distance. Experiment 2 confirmed the impact of target foveation in modulating the effect of target distance on movement time. Our findings reveal the operation of an overarching, flexible neural control system that tunes the operation and cooperation of saccadic and manual control systems depending on where the eyes look at target onset.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adam2012a,
title = {Rapid decision-making under risk},
author = {Robert Adam and Paul M Bays and Masud Husain},
year = {2012},
date = {2012-01-01},
journal = {Cognitive Neuroscience},
volume = {3},
number = {1},
pages = {52--61},
abstract = {Impulsivity is often characterized by rapid decisions under risk, but most current tests of decision-making do not impose time pressures on participants' choices. Here we introduce a new Traffic Lights test which requires people to choose whether to programme a risky, early eye movement before a traffic light turns green (earning them high rewards or a penalty) or wait for the green light before responding to obtain a small reward instead. Young participants demonstrated bimodal responses: an early, high-risk and a later, low-risk set of choices. By contrast, elderly people invariably waited for the green light and showed little risk-taking. Performance could be modelled as a race between two rise-to-threshold decision processes, one triggered by the green light and the other initiated before it. The test provides a useful measure of rapid decision-making under risk, with the potential to reveal how this process alters with aging or in patient groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adam2013,
title = {Dopamine reverses reward insensitivity in apathy following globus pallidus lesions},
author = {Robert Adam and Alexander Leff and Nihal Sinha and Christopher Turner and Paul Bays and Bogdan Draganski and Masud Husain},
doi = {10.1016/j.cortex.2012.04.013},
year = {2013},
date = {2013-01-01},
journal = {Cortex},
volume = {49},
number = {5},
pages = {1292--1303},
publisher = {Elsevier Ltd},
abstract = {Apathy is a complex, behavioural disorder associated with reduced spontaneous initiation of actions. Although present in mild forms in some healthy people, it is a pathological state in conditions such as Alzheimer's and Parkinson's disease where it can have profoundly devastating effects. Understanding the mechanisms underlying apathy is therefore of urgent concern but this has proven difficult because widespread brain changes in neurodegenerative diseases make interpretation difficult and there is no good animal model. Here we present a very rare case with profound apathy following bilateral, focal lesions of the basal ganglia, with globus pallidus regions that connect with orbitofrontal (OFC) and ventromedial prefrontal cortex (VMPFC) particularly affected. Using two measures of oculomotor decision-making we show that apathy in this individual was associated with reward insensitivity. However, reward sensitivity could be established partially with levodopa and more effectively with a dopamine receptor agonist. Concomitantly, there was an improvement in the patient's clinical state, with reduced apathy, greater motivation and increased social interactions. These findings provide a model system to study a key neuropsychiatric disorder. They demonstrate that reward insensitivity associated with basal ganglia dysfunction might be an important component of apathy that can be reversed by dopaminergic modulation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

This study examined the ability of participants to strategically adapt their level of response preparation to the predictive value of preparatory cues. Participants performed the finger-precuing task under three levels of cue validity: 100, 75 and 50% valid. Response preparation was indexed by means of reaction time (RT) and pupil dilation, the latter providing a psychophysiological index of invested effort. Results showed a systematic increase in RT benefits (generated by valid cues) and RT costs (generated by invalid cues) with increments in the predictive value of cues. Converging with these behavioural effects, pupil dilation also increased systematically with greater cue validity during the cue-stimulus interval, suggesting more effortful response preparation with increases in cue validity. Together, these findings confirm the hypothesis that response preparation is flexible and that it can be strategically allocated in proportion to the relative frequency of valid/invalid preparatory cues.

@article{Adams2015,
title = {Active inference and oculomotor pursuit: The dynamic causal modelling of eye movements},
author = {Rick A Adams and Eduardo Aponte and Louise Marshall and Karl J Friston},
doi = {10.1016/j.jneumeth.2015.01.003},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neuroscience Methods},
volume = {242},
pages = {1--14},
publisher = {Elsevier B.V.},
abstract = {Background: This paper introduces a new paradigm that allows one to quantify the Bayesian beliefs evidenced by subjects during oculomotor pursuit. Subjects' eye tracking responses to a partially occluded sinusoidal target were recorded non-invasively and averaged. These response averages were then analysed using dynamic causal modelling (DCM). In DCM, observed responses are modelled using biologically plausible generative or forward models - usually biophysical models of neuronal activity. New method: Our key innovation is to use a generative model based on a normative (Bayes-optimal) model of active inference to model oculomotor pursuit in terms of subjects' beliefs about how visual targets move and how their oculomotor system responds. Our aim here is to establish the face validity of the approach, by manipulating the content and precision of sensory information - and examining the ensuing changes in the subjects' implicit beliefs. These beliefs are inferred from their eye movements using the normative model. Results: We show that on average, subjects respond to an increase in the 'noise' of target motion by increasing sensory precision in their models of the target trajectory. In other words, they attend more to the sensory attributes of a noisier stimulus. Conversely, subjects only change kinetic parameters in their model but not precision, in response to increased target speed. Conclusions: Using this technique one can estimate the precisions of subjects' hierarchical Bayesian beliefs about target motion. We hope to apply this paradigm to subjects with schizophrenia, whose pursuit abnormalities may result from the abnormal encoding of precision.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adams2016,
title = {Dynamic causal modelling of eye movements during pursuit: Confirming precision-encoding in V1 using MEG},
author = {Rick A Adams and Markus Bauer and Dimitris Pinotsis and Karl J Friston},
doi = {10.1016/j.neuroimage.2016.02.055},
year = {2016},
date = {2016-01-01},
journal = {Neuroimage},
volume = {132},
pages = {175--189},
publisher = {The Authors},
abstract = {This paper shows that it is possible to estimate the subjective precision (inverse variance) of Bayesian beliefs during oculomotor pursuit. Subjects viewed a sinusoidal target, with or without random fluctuations in its motion. Eye trajectories and magnetoencephalographic (MEG) data were recorded concurrently. The target was periodically occluded, such that its reappearance caused a visual evoked response field (ERF). Dynamic causal modelling (DCM) was used to fit models of eye trajectories and the ERFs. The DCM for pursuit was based on predictive coding and active inference, and predicts subjects' eye movements based on their (subjective) Bayesian beliefs about target (and eye) motion. The precisions of these hierarchical beliefs can be inferred from behavioural (pursuit) data. The DCM for MEG data used an established biophysical model of neuronal activity that includes parameters for the gain of superficial pyramidal cells, which is thought to encode precision at the neuronal level. Previous studies (using DCM of pursuit data) suggest that noisy target motion increases subjective precision at the sensory level: i.e., subjects attend more to the target's sensory attributes. We compared (noisy motion-induced) changes in the synaptic gain based on the modelling of MEG data to changes in subjective precision estimated using the pursuit data. We demonstrate that imprecise target motion increases the gain of superficial pyramidal cells in V1 (across subjects). Furthermore, increases in sensory precision – inferred by our behavioural DCM – correlate with the increase in gain in V1, across subjects. This is a step towards a fully integrated model of brain computations, cortical responses and behaviour that may provide a useful clinical tool in conditions like schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Adeli2017,
title = {A model of the superior colliculus predicts fixation locations during scene viewing and visual search},
author = {Hossein Adeli and Françoise Vitu and Gregory J Zelinsky},
doi = {10.1523/JNEUROSCI.0825-16.2016},
year = {2017},
date = {2017-01-01},
journal = {Journal of Neuroscience},
volume = {37},
number = {6},
pages = {1453--1467},
abstract = {Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Afacan-Seref2018,
title = {Dynamic interplay of value and sensory information in high-speed decision making},
author = {Kivilcim Afacan-Seref and Natalie A Steinemann and Annabelle Blangero and Simon P Kelly},
doi = {10.1016/j.cub.2018.01.071},
year = {2018},
date = {2018-03-01},
journal = {Current Biology},
volume = {28},
number = {5},
pages = {795--802},
abstract = {In dynamic environments, split-second sensorimotor decisions must be prioritized according to potential payoffs to maximize overall rewards. The impact of relative value on deliberative perceptual judgments has been examined extensively [1–6], but relatively little is known about value-biasing mechanisms in the common situation where physical evidence is strong but the time to act is severely limited. In prominent decision models, a noisy but statistically stationary representation of sensory evidence is integrated over time to an action-triggering bound, and value-biases are affected by starting the integrator closer to the more valuable bound. Here, we show significant departures from this account for humans making rapid sensory-instructed action choices. Behavior was best explained by a simple model in which the evidence representation—and hence, rate of accumulation—is itself biased by value and is non-stationary, increasing over the short decision time frame. Because the value bias initially dominates, the model uniquely predicts a dynamic ‘‘turn-around'' effect on low-value cues, where the accumulator first launches toward the incorrect action but is then re-routed to the correct one. This was clearly exhibited in electrophysiological signals reflecting motor preparation and evidence accumulation. Finally, we construct an extended model that implements this dynamic effect through plausible sensory neural response modulations and demonstrate the correspondence between decision signal dynamics simulated from a behavioral fit of that model and the empirical decision signals. Our findings suggest that value and sensory information can exert simultaneous and dynamically countervailing influences on the trajectory of the accumulation-to-bound process, driving rapid, sensory-guided actions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Afraz2009,
title = {The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations},
author = {Arash Afraz and Patrick Cavanagh},
doi = {10.1167/9.10.10},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {10},
pages = {1--17},
abstract = {In four experiments, we measured the gender-specific face-aftereffect following subject's eye movement, head rotation, or head movement toward the display and following movement of the adapting stimulus itself to a new test location. In all experiments, the face aftereffect was strongest at the retinal position, orientation, and size of the adaptor. There was no advantage for the spatiotopic location in any experiment nor was there an advantage for the location newly occupied by the adapting face after it moved in the final experiment. Nevertheless, the aftereffect showed a broad gradient of transfer across location, orientation and size that, although centered on the retinotopic values of the adapting stimulus, covered ranges far exceeding the tuning bandwidths of neurons in early visual cortices. These results are consistent with a high-level site of adaptation (e.g. FFA) where units of face analysis have modest coverage of visual field, centered in retinotopic coordinates, but relatively broad tolerance for variations in size and orientation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Exploration of images after stimulus onset is initially biased to the left. Here, we studied the causes of such an asymmetry and investigated effects of reading habits, text primes, and priming by systematically biased eye movements on this spatial bias in visual exploration. Bilinguals first read text primes with right-to-left (RTL) or left-to-right (LTR) reading directions and subsequently explored natural images. In Experiment 1, native RTL speakers showed a leftward free-viewing shift after reading LTR primes but a weaker rightward bias after reading RTL primes. This demonstrates that reading direction dynamically influences the spatial bias. However, native LTR speakers who learned an RTL language late in life showed a leftward bias after reading either LTR or RTL primes, which suggests the role of habit formation in the production of the spatial bias. In Experiment 2, LTR bilinguals showed a slightly enhanced leftward bias after reading LTR text primes in their second language. This might contribute to the differences of native RTL and LTR speakers observed in Experiment 1. In Experiment 3, LTR bilinguals read normal (LTR, habitual reading) and mirrored left-to-right (mLTR, nonhabitual reading) texts. We observed a strong leftward bias in both cases, indicating that the bias direction is influenced by habitual reading direction and is not secondary to the actual reading direction. This is confirmed in Experiment 4, in which LTR participants were asked to follow RTL and LTR moving dots prior to image presentation and showed no change in the normal spatial bias. In conclusion, the horizontal bias is a dynamic property and is modulated by habitual reading direction.

@article{Afsari2018,
title = {Interindividual differences among native right-to-left readers and native left-to-right readers during free viewing task},
author = {Zaeinab Afsari and Ashima Keshava and José P Ossandón and Peter König},
doi = {10.1080/13506285.2018.1473542},
year = {2018},
date = {2018-01-01},
journal = {Visual Cognition},
volume = {26},
number = {6},
pages = {430--441},
abstract = {Human visual exploration is not homogeneous but displays spatial biases. Specifically, early after the onset of a visual stimulus, the majority of eye movements target the left visual space. This horizontal asymmetry of image exploration is rather robust with respect to multiple image manipulations, yet can be dynamically modulated by preceding text primes. This characteristic points to an involvement of reading habits in the deployment of visual attention. Here, we report data of native right-to-left (RTL) readers with a larger variation and stronger modulation of horizontal spatial bias in comparison to native left-to-right (LTR) readers after preceding text primes. To investigate the influences of biological and cultural factors, we measure the correlation of the modulation of the horizontal spatial bias for native RTL readers and native LTR readers with multiple factors: age, gender, second language proficiency, and age at which the second language was acquired. The results demonstrate only weak or no correlations between the magnitude of the horizontal bias and the previously mentioned factors. We conclude that the spatial bias of viewing behaviour for native RTL readers is more variable than for native LTR readers, and this variance could not be demonstrated to be associated with interindividual differences. We speculate the role of strength of habit and/or the interindividual differences in the structural and functional brain regions as a cause of the RTL spatial bias among RTL native readers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agaoglu2015,
title = {Field-like interactions between motion-based reference frames},
author = {Mehmet N Ağaoğlu and Michael H Herzog and Haluk Öğmen},
doi = {10.3758/s13414-015-0890-9},
year = {2015},
date = {2015-01-01},
journal = {Attention, Perception, & Psychophysics},
volume = {77},
number = {6},
pages = {2082--2097},
abstract = {A reference frame is required to specify how motion is perceived. For example, the motion of part of an object is usually perceived relative to the motion of the object itself. Johansson (Psychological Research, 38, 379–393, 1976) proposed that the perceptual system carries out a vector decomposition, which results in common and relative motion percepts. Because vector decomposition is an ill-posed problem, several studies have introduced constraints by means of which the number of solutions can be substantially reduced. Here, we have adopted an alternative approach and studied how, rather than why, a subset of solutions is selected by the visual system. We propose that each retinotopic motion vector creates a reference-frame field in the retinotopic space, and that the fields created by different motion vectors interact in order to determine a motion vector that will serve as the reference frame at a given point and time in space. To test this theory, we performed a set of psychophysical experiments. The field-like influence of motion-based reference frames was manifested by increased nonspatiotopic percepts of the backward motion of a target square with decreasing distance from a drifting grating. We then sought to determine whether these field-like effects of motion-based reference frames can also be extended to stationary landmarks. The results suggest that reference-field interactions occur only between motion-generated fields. Finally, we investigated whether and how different reference fields interact with each other, and found that different reference-field interactions are nonlinear and depend on how the motion vectors are grouped. These findings are discussed from the perspective of the reference-frame metric field (RFMF) theory, according to which perceptual grouping operations play a central and essential role in determining the prevailing reference frames.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agaoglu2015a,
title = {The effective reference frame in perceptual judgments of motion direction},
author = {Mehmet N Ağaoğlu and Michael H Herzog and Haluk Öğmen},
doi = {10.1016/j.visres.2014.12.009},
year = {2015},
date = {2015-01-01},
journal = {Vision Research},
volume = {107},
pages = {101--112},
abstract = {The retinotopic projection of stimulus motion depends both on the motion of the stimulus and the movements of the observer. In this study, we aimed to quantify the contributions of endogenous (retinotopic) and exogenous (spatiotopic and motion-based) reference frames on judgments of motion direction. We used a variant of the induced motion paradigm and we created different experimental conditions in which the predictions of each reference frame were different. Finally, assuming additive contributions from different reference frames, we used a linear model to account for the data. Our results suggest that the effective reference frame for motion perception emerges from an amalgamation of motion-based, retinotopic and spatiotopic reference frames. In determining the percept, the influence of relative motion, defined by a motion-based reference frame, dominates those of retinotopic and spatiotopic motions within a finite region. We interpret these findings within the context of the Reference Frame Metric Field (RFMF) theory, which states that local motion vectors might have perceptual reference-frame fields associated with them, and interactions between these fields determine the selection of the effective reference frame.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agaoglu2016,
title = {Can (should) theories of crowding be unified?},
author = {Mehmet N Ağaoğlu and Susana T L Chung},
doi = {10.1167/16.15.10},
year = {2016},
date = {2016-01-01},
journal = {Journal of Vision},
volume = {16},
number = {15},
pages = {1--22},
abstract = {Objects in clutter are difficult to recognize, a phenomenon known as crowding. There is little consensus on the underlying mechanisms of crowding, and a large number of models have been proposed. There have also been attempts at unifying the explanations of crowding under a single model, such as the weighted feature model of Harrison and Bex (2015) and the texture synthesis model of Rosenholtz and colleagues (Balas, Nakano, & Rosenholtz, 2009; Keshvari & Rosenholtz, 2016). The goal of this work was to test various models of crowding and to assess whether a unifying account can be developed. Adopting Harrison and Bex's (2015) experimental paradigm, we asked observers to report the orientation of two concentric C-stimuli. Contrary to the predictions of their model, observers' recognition accuracy was worse for the inner C-stimulus. In addition, we demonstrated that the stimulus paradigm used by Harrison and Bex has a crucial confounding factor, eccentricity, which limits its usage to a very narrow range of stimulus parameters. Nevertheless, reporting the orientations of both C-stimuli in this paradigm proved very useful in pitting different crowding models against each other. Specifically, we tested deterministic and probabilistic versions of averaging, substitution, and attentional resolution models as well as the texture synthesis model. None of the models alone was able to explain the entire set of data. Based on these findings, we discuss whether the explanations of crowding can (should) be unified.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agaoglu2016a,
title = {Motion-based nearest vector metric for reference frame selection in the perception of motion},
author = {Mehmet N Ağaoğlu and Aaron M Clarke and Michael H Herzog and Haluk Öğmen},
doi = {10.1167/16.7.14},
year = {2016},
date = {2016-01-01},
journal = {Journal of Vision},
volume = {16},
number = {7},
pages = {1--16},
abstract = {We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agaoglu2016b,
title = {Unmasking saccadic uncrowding},
author = {Mehmet N Ağaoğlu and Haluk Öğmen and Susana T L Chung},
doi = {10.1016/j.visres.2016.08.003},
year = {2016},
date = {2016-01-01},
journal = {Vision Research},
volume = {127},
pages = {152--164},
abstract = {Stimuli that are briefly presented around the time of saccades are often perceived with spatiotemporal distortions. These distortions do not always have deleterious effects on the visibility and identification of a stimulus. Recent studies reported that when a stimulus is the target of an intended saccade, it is released from both masking and crowding. Here, we investigated pre-saccadic changes in single and crowded letter recognition performance in the absence (Experiment 1) and the presence (Experiment 2) of backward masks to determine the extent to which saccadic “uncrowding” and “unmasking” mechanisms are similar. Our results show that pre-saccadic improvements in letter recognition performance are mostly due to the presence of masks and/or stimulus transients which occur after the target is presented. More importantly, we did not find any decrease in crowding strength before impending saccades. A simplified version of a dual-channel neural model, originally proposed to explain masking phenomena, with several saccadic add-on mechanisms, could account for our results in Experiment 1. However, this model falls short in explaining how saccades drastically reduced the effect of backward masking (Experiment 2). The addition of a remapping mechanism that alters the relative spatial positions of stimuli was needed to fully account for the improvements observed when backward masks followed the letter stimuli. Taken together, our results (i) are inconsistent with saccadic uncrowding, (ii) strongly support saccadic unmasking, and (iii) suggest that pre-saccadic letter recognition is modulated by multiple perisaccadic mechanisms with different time courses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agaoglu2017,
title = {Interaction between stimulus contrast and pre-saccadic crowding},
author = {Mehmet N Ağaoğlu and Susana T L Chung},
doi = {10.1098/rsos.160559},
year = {2017},
date = {2017-01-01},
journal = {Royal Society Open Science},
volume = {4},
number = {2},
pages = {1--17},
abstract = {Objects that are briefly flashed around the time of saccades are mislocalized. Previously, robust interactions between saccadic perceptual distortions and stimulus contrast have been reported. It is also known that crowding depends on the contrast of the target and flankers. Here, we investigated how stimulus contrast and crowding interact with pre-saccadic perception. We asked observers to report the orientation of a tilted Gabor presented in the periphery, with or without four flanking vertically oriented Gabors. Observers performed the task either following a saccade or while maintaining fixation. Contrasts of the target and flankers were independently set to either high or low, with equal probability. In both the fixation and saccade conditions, the flanked conditions resulted in worse discrimination performance—the crowding effect. In the unflanked saccade trials, performance significantly decreased with target-to-saccade onset for low-contrast targets but not for high-contrast targets. In the presence of flankers, impending saccades reduced performance only for low-contrast, but not for high-contrast flankers. Interestingly, average performance in the fixation and saccade conditions was mostly similar in all contrast conditions. Moreover, the magnitude of crowding was influenced by saccades only when the target had high contrast and the flankers had low contrasts. Overall, our results are consistent with modulation of perisaccadic spatial localization by contrast and saccadic suppression, but at odds with a recent report of pre-saccadic release of crowding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Agrafiotis2006,
title = {A perceptually optimised video coding system for sign language communication at low bit rates},
author = {Dimitris Agrafiotis and Nishan Canagarajah and David R Bull and Jim Kyle and Helen Seers and Matthew Dye},
doi = {10.1016/j.image.2006.02.003},
year = {2006},
date = {2006-01-01},
journal = {Signal Processing: Image Communication},
volume = {21},
number = {7},
pages = {531--549},
abstract = {The ability to communicate remotely through the use of video as promised by wireless networks and already practised over fixed networks, is for deaf people as important as voice telephony is for hearing people. Sign languages are visual-spatial languages and as such demand good image quality for interaction and understanding. In this paper, we first analyse the sign language viewer's eye-gaze, based on the results of an eye-tracking study that we conducted, as well as the video content involved in sign language person-to-person communication. Based on this analysis we propose a sign language video coding system using foveated processing, which can lead to bit rate savings without compromising the comprehension of the coded sequence or equivalently produce a coded sequence with higher comprehension value at the same bit rate. We support this claim with the results of an initial comprehension assessment trial of such coded sequences by deaf users. The proposed system constitutes a new paradigm for coding sign language image sequences at limited bit rates. © 2006 Elsevier B.V. All rights reserved.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
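The foveated processing described in this entry can be pictured with a small sketch (our illustration, not the authors' codec): quantisation is made coarser with distance from a face-centred region of interest, so that bits are concentrated where sign-language viewers fixate. Function names, units, and the QP constants below are hypothetical.

```python
# Hypothetical sketch of foveated quantisation for sign-language video:
# macroblocks further from the region of interest (the signer's face)
# receive a higher quantisation parameter (QP), i.e. coarser coding.

def block_qp(block_xy, roi_xy, base_qp=24, max_qp=40, radius=4.0):
    """Pick a per-macroblock QP that grows with distance from the
    region of interest (distances measured in macroblock units)."""
    dx = block_xy[0] - roi_xy[0]
    dy = block_xy[1] - roi_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    # Linear ramp from base_qp at the ROI centre to max_qp at `radius`
    # and beyond; real systems would use a perceptually derived falloff.
    ramp = min(dist / radius, 1.0)
    return round(base_qp + ramp * (max_qp - base_qp))
```

Spending the saved bits on the face region is what allows the reported >30% rate reductions without hurting comprehension.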

@article{Aguila2016,
title = {Effects of static magnetic fields on the visual cortex: Reversible visual deficits and reduction of neuronal activity},
author = {Jordi Aguila and Javier Cudeiro and Casto Rivadulla},
doi = {10.1093/cercor/bhu228},
year = {2016},
date = {2016-01-01},
journal = {Cerebral Cortex},
volume = {26},
pages = {628--638},
abstract = {Noninvasive brain stimulation techniques have been successfully used to modulate brain activity, have become a highly useful tool in basic and clinical research and, recently, have attracted increased attention due to their putative use as a method for neuro-enhancement. In this scenario, transcranial static magnetic stimulation (SMS) of moderate strength might represent an affordable, simple, and complementary method to other procedures, such as Transcranial Magnetic Stimulation or direct current stimulation, but its mechanisms and effects are not thoroughly understood. In this study, we show that static magnetic fields applied to visual cortex of awake primates cause reversible deficits in a visual detection task. Complementary experiments in anesthetized cats show that the visual deficits are a consequence of a strong reduction in neural activity. These results demonstrate that SMS is able to effectively modulate neuronal activity and could be considered to be a tool to be used for different purposes ranging from experimental studies to clinical applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aguila2017,
title = {Suppression of V1 feedback produces a shift in the topographic representation of receptive fields of LGN cells by unmasking latent retinal drives},
author = {Jordi Aguila and Javier F Cudeiro and Casto Rivadulla},
doi = {10.1093/cercor/bhx071},
year = {2017},
date = {2017-01-01},
journal = {Cerebral Cortex},
volume = {27},
number = {6},
pages = {3331--3345},
abstract = {In awake monkeys, we used repetitive transcranial magnetic stimulation (rTMS) to focally inactivate visual cortex while measuring the responsiveness of parvocellular lateral geniculate nucleus (LGN) neurons. Effects were noted in 64/75 neurons, and could be divided into 2 main groups: (1) for 39 neurons, visual responsiveness decreased and visual latency increased without apparent shift in receptive field (RF) position and (2) a second group (n = 25, 33% of the recorded cells) whose excitability was not compromised, but whose RF position shifted an average of 4.5°. This change is related to the retinotopic correspondence observed between the recorded thalamic area and the affected cortical zone. The effect of inactivation for this group of neurons was compatible with silencing the original retinal drive and unmasking a second latent retinal drive onto the studied neuron. These results indicate novel and remarkable dynamics in thalamocortical circuitry that force us to reassess constraints on retinogeniculate transmission.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aguilar2011,
title = {Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies},
author = {Carlos Aguilar and Eric Castet},
doi = {10.1016/j.visres.2011.02.010},
year = {2011},
date = {2011-01-01},
journal = {Vision Research},
volume = {51},
number = {9},
pages = {997--1012},
publisher = {Elsevier Ltd},
abstract = {Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age-related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculo-motor patterns, we show that such minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
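The two problems this entry identifies (blink-induced scotoma motion and post-saccadic slow eye movements) suggest a simple per-frame update rule. The following is a hypothetical sketch of such gaze-contingent logic, not the authors' implementation; the function name and the velocity threshold are assumptions.

```python
# Illustrative sketch (not the authors' code) of two gaze-contingent
# remedies: freeze the simulated scotoma during blinks, and do not let
# it chase slow post-saccadic eye movements.

def update_scotoma(scotoma_xy, gaze_xy, blink, velocity, drift_threshold=3.0):
    """Return the scotoma position for the next display frame.

    scotoma_xy      -- current scotoma centre (deg)
    gaze_xy         -- latest gaze sample, or None when tracking is lost
    blink           -- True while the eye is closed
    velocity        -- instantaneous gaze speed (deg/s)
    drift_threshold -- hypothetical cut-off separating slow drift from
                       genuine gaze shifts
    """
    if blink or gaze_xy is None:
        # Problem (a): ignore spurious samples recorded around a blink,
        # leaving the mask where it was instead of letting it jump.
        return scotoma_xy
    if velocity < drift_threshold:
        # Problem (b): treat slow post-saccadic movements as drift and
        # keep the mask in place rather than tracking them.
        return scotoma_xy
    return gaze_xy
```

A real implementation would add the paper's further checks for gaze/scotoma mismatch, but the structure (update only on trusted, fast-moving samples) is the same.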

@article{Aguilar2017,
title = {Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people},
author = {Carlos Aguilar and Eric Castet},
doi = {10.1371/journal.pone.0174910},
year = {2017},
date = {2017-01-01},
journal = {PLoS ONE},
volume = {12},
number = {4},
pages = {1--24},
abstract = {People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification conflicts with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems): conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof of concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ahken2012,
title = {Eye movement patterns during the processing of musical and linguistic syntactic incongruities.},
author = {Stephanie Ahken and Gilles Comeau and Sylvie Hébert and Ramesh Balasubramaniam},
doi = {10.1037/a0026751},
year = {2012},
date = {2012-01-01},
journal = {Psychomusicology: Music, Mind, and Brain},
volume = {22},
number = {1},
pages = {18--25},
abstract = {It has been suggested that music and language share syntax-supporting brain mechanisms. Consequently, violations of syntax in either domain may have similar effects. The present study examined the effects of syntactic incongruities on eye movements and reading time in both music and language domains. In the music notation condition, the syntactic incongruities violated the prevailing musical tonality (i.e., the last bar of the incongruent sequence was a nontonic chord or nontonic note in the given key). In the linguistic condition, syntactic incongruities violated the expected grammatical structure (i.e., sentences with anomalies carrying the progressive –ing affix or the past tense inflection). Eighteen pianists were asked to sight-read and play musical phrases (music condition) and read sentences aloud (linguistic condition). Syntactic incongruities in both domains were associated with an increase in the mean proportion and duration of fixations in the target region of interest, as well as longer reading duration. The results are consistent with the growing evidence of a shared network of neural structures for syntactic processing, while not ruling out the possibility of independent networks for each domain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ahlen2014,
title = {Learning to read upside-down: A study of perceptual expertise and its acquisition},
author = {Elsa Ahlén and Charlotte S Hills and Hashim M Hanif and Cristina Rubino and Jason J S Barton},
doi = {10.1007/s00221-013-3813-9},
year = {2014},
date = {2014-01-01},
journal = {Experimental Brain Research},
volume = {232},
number = {3},
pages = {1025--1036},
abstract = {Reading is an expert visual and ocular motor function, learned mainly in a single orientation. Characterizing the features of this expertise can be accomplished by contrasts between reading of normal and inverted text, in which perceptual but not linguistic factors are altered. Our goal was to examine this inversion effect in healthy subjects reading text, to derive behavioral and ocular motor markers of perceptual expertise in reading, and to study these parameters before and after training with inverted reading. Seven subjects engaged in a 10-week program of 30 half-hour sessions of reading inverted text. Before and after training, we assessed reading of upright and inverted single words for response time and word-length effects, as well as reading of paragraphs for time required, accuracy, and ocular motor parameters. Before training, inverted reading was characterized by long reading times and large word-length effects, with eye movements showing more and longer fixations, more and smaller forward saccades, and more regressive saccades. Training partially reversed many of these effects in single word and text reading, with the best gains occurring in reading aloud time and proportion of regressive saccades and the least change in forward saccade amplitude. We conclude that reading speed and ocular motor parameters can serve as markers of perceptual expertise during reading and that training with inverted text over 10 weeks results in significant gains of reading expertise in this unfamiliar orientation. This approach may be useful in the rehabilitation of patients with hemianopic dyslexia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ahmad2014,
title = {Cost-sensitive Bayesian control policy in human active sensing},
author = {Sheeraz Ahmad and He Huang and Angela J Yu},
doi = {10.3389/fnhum.2014.00955},
year = {2014},
date = {2014-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {8},
number = {December},
pages = {1--12},
abstract = {An important but poorly understood aspect of sensory processing is the role of active sensing, the use of self-motion such as eye or head movements to focus sensing resources on the most rewarding or informative aspects of the sensory environment. Here, we present behavioral data from a visual search experiment, as well as a Bayesian model of within-trial dynamics of sensory processing and eye movements. Within this Bayes-optimal inference and control framework, which we call C-DAC (Context-Dependent Active Controller), various types of behavioral costs, such as temporal delay, response error, and sensor repositioning cost, are explicitly minimized. This contrasts with previously proposed algorithms that optimize abstract statistical objectives such as anticipated information gain (Infomax) (Butko and Movellan, 2010) and expected posterior maximum (greedy MAP) (Najemnik and Geisler, 2005). We find that C-DAC captures human visual search dynamics better than previous models, in particular a certain form of "confirmation bias" apparent in the way human subjects utilize prior knowledge about the spatial distribution of the search target to improve search speed and accuracy. We also examine several computationally efficient approximations to C-DAC that may present biologically more plausible accounts of the neural computations underlying active sensing, as well as practical tools for solving active sensing problems in engineering applications. To summarize, this paper makes the following key contributions: human visual search behavioral data, a context-sensitive Bayesian active sensing model, a comparative study between different models of human active sensing, and a family of efficient approximations to the optimal model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
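The C-DAC framework described in this entry combines a Bayesian belief update with explicit behavioral costs. As a toy illustration (our sketch, not the authors' model), the observer updates a posterior over candidate target locations, then stops, moves the sensor, or samples again depending on which option has the lower expected cost. The cost constants and function names are hypothetical.

```python
# Toy sketch of a C-DAC-style policy: Bayesian belief tracking plus an
# action rule that trades off error cost, time cost, and the cost of
# repositioning the sensor. Constants below are illustrative only.

import numpy as np

def posterior_update(belief, likelihoods):
    """One step of Bayes' rule over candidate target locations."""
    post = belief * likelihoods
    return post / post.sum()

def choose_action(belief, fixated, c_time=0.01, c_switch=0.05, c_error=1.0):
    """Stop and report the MAP location when the expected error cost
    drops below the cost of another observation; otherwise keep looking,
    refixating only when the gain justifies the switching cost."""
    k = int(np.argmax(belief))
    expected_error_cost = c_error * (1.0 - belief[k])
    if expected_error_cost <= c_time:
        return ("stop", k)
    if k != fixated and belief[k] - belief[fixated] > c_switch:
        return ("move", k)          # worth paying c_switch to refixate
    return ("continue", fixated)    # sample again at the current spot
```

This contrasts with Infomax or greedy-MAP policies, which would pick the next fixation purely from information gain or posterior mass, without the time and repositioning terms.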

@article{Ahmadi2011,
title = {Initial orientation of attention towards emotional faces in children with attention deficit hyperactivity disorder},
author = {Mehrnoosh Ahmadi and Mitra Judi and Anahita Khorrami and Javad Mahmoudi-Gharaei and Mehdi Tehrani-Doost},
year = {2011},
date = {2011-01-01},
journal = {Iranian Journal of Psychiatry},
volume = {6},
number = {3},
pages = {87--91},
abstract = {OBJECTIVE: Early recognition of negative emotions is considered to be of vital importance. It seems that children with attention deficit hyperactivity disorder have some difficulties recognizing facial emotional expressions, especially negative ones. This study investigated the preference of children with attention deficit hyperactivity disorder for negative (angry, sad) facial expressions compared to normal children. METHOD: Participants were 35 drug naive boys with ADHD, aged between 6-11 years, and 31 matched healthy children. Visual orientation data were recorded while participants viewed face pairs (negative-neutral pairs) shown for 3000 ms. The number of first fixations made to each expression was considered as an index of initial orientation. RESULTS: Group comparisons revealed no difference between attention deficit hyperactivity disorder group and their matched healthy counterparts in initial orientation of attention. A tendency towards negative emotions was found within the normal group, while no difference was observed between initial allocation of attention toward negative and neutral expressions in children with ADHD. CONCLUSION: Children with attention deficit hyperactivity disorder do not have significant preference for negative facial expressions. In contrast, normal children have a significant preference for negative facial emotions rather than neutral faces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aichert2012,
title = {Associations between trait impulsivity and prepotent response inhibition},
author = {Désirée S Aichert and Nicola M Wöstmann and Anna Costa and Christine Macare and Johanna R Wenig and Hans-Jürgen Möller and Katya Rubia and Ulrich Ettinger},
doi = {10.1080/13803395.2012.706261},
year = {2012},
date = {2012-01-01},
journal = {Journal of Clinical and Experimental Neuropsychology},
volume = {34},
number = {10},
pages = {37--41},
abstract = {This study addresses the relationship between trait impulsivity and inhibitory control, two features known to be impaired in a number of psychiatric conditions. While impulsivity is often measured using psychometric self-report questionnaires, the inhibition of inappropriate, impulsive motor responses is typically measured using experimental laboratory tasks. It remains unclear, however, whether psychometrically assessed impulsivity and experimentally operationalized inhibitory performance are related to each other. Therefore, we investigated the relationship between these two traits in a large sample using correlative and latent variable analysis. A total of 504 healthy individuals completed the Barratt Impulsiveness Scale (BIS-11) and a battery of four prepotent response inhibition paradigms: the antisaccade, Stroop, stop-signal, and go/no-go tasks. We found significant associations of BIS impulsivity with commission errors on the go/no-go task and directional errors on the antisaccade task, over and above effects of age, gender, and intelligence. Latent variable analysis (a) supported the idea that all four inhibitory measures load on the same underlying construct termed “prepotent response inhibition” and (b) revealed that 12% of variance of the prepotent response inhibition construct could be explained by BIS impulsivity. Overall, the magnitude of associations observed was small, indicating that while a portion of variance in prepotent response inhibition can be explained by psychometric trait impulsivity, the majority of variance remains unexplained. Thus, these findings suggest that prepotent response inhibition paradigms can account for psychometric trait impulsivity only to a limited extent. Implications for studies of patient populations with symptoms of impulsivity are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aichert2013,
title = {Intact emotion-cognition interaction in schizophrenia patients and first-degree relatives: Evidence from an emotional antisaccade task},
author = {Désirée S Aichert and Birgit Derntl and Nicola M Wöstmann and Julia K Groß and Sandra Dehning and Anja Cerovecki and Hans-Jürgen Möller and Ute Habel and Michael Riedel and Ulrich Ettinger},
doi = {10.1016/j.bandc.2013.05.007},
year = {2013},
date = {2013-01-01},
journal = {Brain and Cognition},
volume = {82},
number = {3},
pages = {329--336},
publisher = {Elsevier Inc.},
abstract = {Schizophrenia patients have deficits in cognitive control as well as in a number of emotional domains. The antisaccade task is a measure of cognitive control that requires the inhibition of a reflex-like eye movement to a peripheral stimulus. Antisaccade performance has been shown to be modulated by the emotional content of the peripheral stimuli, with emotional stimuli leading to higher error rates than neutral stimuli, reflecting an implicit emotion processing effect. The aim of the present study was to investigate the impact on antisaccade performance of threat-related emotional facial stimuli in schizophrenia patients, first-degree relatives of schizophrenia patients and healthy controls. Fifteen patients, 22 relatives and 26 controls, matched for gender, age and verbal intelligence, carried out an antisaccade task with pictures of faces displaying disgusted, fearful and neutral expressions as peripheral stimuli. We observed higher antisaccade error rates in schizophrenia patients compared to first-degree relatives and controls. Relatives and controls did not differ significantly from each other. Antisaccade error rate was influenced by the emotional nature of the stimuli: participants had higher antisaccade error rates in response to fearful faces compared to neutral and disgusted faces. As this emotional influence on cognitive control did not differ between groups we conclude that implicit processing of emotional faces is intact in patients with schizophrenia and those at risk for the illness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aine2017,
title = {Multimodal neuroimaging in schizophrenia: Description and dissemination},
author = {C J Aine and H J Bockholt and J R Bustillo and J M Ca{ñ}ive and A Caprihan and C Gasparovic and F M Hanlon and J M Houck and R E Jung and J Lauriello and J Liu and A R Mayer and N I Perrone-Bizzozero and S Posse and J M Stephen and J A Turner and V P Clark and Vince D Calhoun},
doi = {10.1007/s12021-017-9338-9},
year = {2017},
date = {2017-01-01},
journal = {Neuroinformatics},
volume = {15},
number = {4},
pages = {343--364},
publisher = {Neuroinformatics},
abstract = {In this paper we describe an open-access collection of multimodal neuroimaging data in schizophrenia for release to the community. Data were acquired from approximately 100 patients with schizophrenia and 100 age-matched controls during rest as well as several task activation paradigms targeting a hierarchy of cognitive constructs. Neuroimaging data include structural MRI, functional MRI, diffusion MRI, MR spectroscopic imaging, and magnetoencephalography. For three of the hypothesis-driven projects, task activation paradigms were acquired on subsets of ~200 volunteers which examined a range of sensory and cognitive processes (e.g., auditory sensory gating, auditory/visual multisensory integration, visual transverse patterning). Neuropsychological data were also acquired and genetic material via saliva samples were collected from most of the participants and have been typed for both genome-wide polymorphism data as well as genome-wide methylation data. Some results are also presented from the individual studies as well as from our data-driven multimodal analyses (e.g., multimodal examinations of network structure and network dynamics and multitask fMRI data analysis across projects). All data will be released through the Mind Research Network's collaborative informatics and neuroimaging suite (COINS).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Aitkin2013,
title = {Anticipatory smooth eye movements in autism spectrum disorder},
author = {Cordelia D Aitkin and Elio M Santos and Eileen Kowler},
doi = {10.1371/journal.pone.0083230},
year = {2013},
date = {2013-01-01},
journal = {PLoS ONE},
volume = {8},
number = {12},
pages = {1--11},
abstract = {Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

As a promising imaging modality, digital breast tomosynthesis (DBT) leads to better diagnostic performance than traditional full-field digital mammograms (FFDM) alone. DBT allows different planes of the breast to be visualized, reducing occlusion from overlapping tissue. Although DBT is gaining popularity, best practices for search strategies in this medium are unclear. Eye tracking allowed us to describe search patterns adopted by radiologists searching DBT and FFDM images. Eleven radiologists examined eight DBT and FFDM cases. Observers marked suspicious masses with mouse clicks. Eye position was recorded at 1000 Hz and was coregistered with slice/depth plane as the radiologist scrolled through the DBT images, allowing a 3-D representation of eye position. Hit rate for masses was higher for tomography cases than 2-D cases and DBT led to lower false positive rates. However, search duration was much longer for DBT cases than FFDM. DBT was associated with longer fixations but similar saccadic amplitude compared with FFDM. When comparing radiologists' eye movements to a previous study, which tracked eye movements as radiologists read chest CT, we found DBT viewers did not align with previously identified “driller” or “scanner” strategies, although their search strategy most closely aligns with a type of vigorous drilling strategy.

@article{Ajasse2018,
title = {Effects of pupillary responses to luminance and attention on visual spatial discrimination},
author = {Suzon Ajasse and Ryad B Benosman and Jean Lorenceau},
doi = {10.1167/18.11.6},
year = {2018},
date = {2018-01-01},
journal = {Journal of Vision},
volume = {18},
number = {11},
pages = {1--14},
abstract = {The optic quality of the eyes is, at least in part, determined by pupil size. Large pupils let more light enter the eyes, but degrade the point spread function, and thus the spatial resolution that can be achieved (Campbell & Gregory, 1960). In natural conditions, the pupil is mainly driven by the luminance (and possibly the color and contrast) at the gazed location, but is also modulated by attention and cognitive factors. Whether changes in eyes' optics related to pupil size modulation by luminance and attention impacts visual processing was assessed in two experiments. In Experiment 1, we measured pupil size using a constantly visible display made of four disks with different luminance levels, with no other task than fixating the disks in succession. The results confirmed that pupil size depends on the luminance of the gazed stimulus. Experiment 2, using similar settings as Experiment 1, used a two-interval forced-choice design to test whether discriminating high spatial frequencies that requires covert attention to parafoveal stimuli is better during the fixation of bright disks that entails a small pupil size, and hence better eyes' optics, as compared to fixating dark disks that entails a large pupil size, and hence poorer eyes' optics. As in Experiment 1, we observed large modulations of pupil size depending on the luminance of the gazed stimulus, but pupil dynamics was more variable, with marked pupil dilation during stimulus encoding, presumably because the demanding spatial frequency discrimination task engaged attention. However, discrimination performance and mean pupil size were not correlated. Despite this lack of correlation, the slopes of pupil dilation during stimulus encoding were correlated to performance, while the slopes of pupil dilation during decision-making were not. We discuss these results regarding the possible functional roles of pupil size modulations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ajina2015,
title = {Motion area V5/MT+ response to global motion in the absence of V1 resembles early visual cortex},
author = {Sara Ajina and Christopher Kennard and Geraint Rees and Holly Bridge},
doi = {10.1093/brain/awu328},
year = {2015},
date = {2015-01-01},
journal = {Brain},
volume = {138},
number = {1},
pages = {164--178},
abstract = {Motion area V5/MT+ shows a variety of characteristic visual responses, often linked to perception, which are heavily influenced by its rich connectivity with the primary visual cortex (V1). This human motion area also receives a number of inputs from other visual regions, including direct subcortical connections and callosal connections with the contralateral hemisphere. Little is currently known about such alternative inputs to V5/MT+ and how they may drive and influence its activity. Using functional magnetic resonance imaging, the response of human V5/MT+ to increasing the proportion of coherent motion was measured in seven patients with unilateral V1 damage acquired during adulthood, and a group of healthy age-matched controls. When V1 was damaged, the typical V5/MT+ response to increasing coherence was lost. Rather, V5/MT+ in patients showed a negative trend with coherence that was similar to coherence-related activity in V1 of healthy control subjects. This shift to a response-pattern more typical of early visual cortex suggests that in the absence of V1, V5/MT+ activity may be shaped by similar direct subcortical input. This is likely to reflect intact residual pathways rather than a change in connectivity, and has important implications for blindsight function. It also confirms predictions that V1 is critically involved in normal V5/MT+ global motion processing, consistent with a convergent model of V1 input to V5/MT+. Historically, most attempts to model cortical visual responses do not consider the contribution of direct subcortical inputs that may bypass striate cortex, such as input to V5/MT+. We have shown that the signal change driven by these non-striate pathways can be measured, and suggest that models of the intact visual system may benefit from considering their contribution.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Although damage to the primary visual cortex (V1) causes hemianopia, many patients retain some residual vision, known as blindsight. We show that blindsight may be facilitated by an intact white-matter pathway between the lateral geniculate nucleus and motion area hMT+. Visual psychophysics, diffusion-weighted magnetic resonance imaging and fibre tractography were applied in 17 patients with V1 damage acquired during adulthood and 9 age-matched controls. Individuals with V1 damage were subdivided into blindsight positive (preserved residual vision) and negative (no residual vision) according to psychophysical performance. All blindsight positive individuals showed intact geniculo-hMT+ pathways, while this pathway was significantly impaired or not measurable in blindsight negative individuals. Two white matter pathways previously implicated in blindsight: (i) superior colliculus to hMT+ and (ii) between hMT+ in each hemisphere were not consistently present in blindsight positive cases. Understanding the visual pathways crucial for residual vision may direct future rehabilitation strategies for hemianopia patients.

@article{Ajina2015b,
title = {Abnormal contrast responses in the extrastriate cortex of blindsight patients},
author = {Sara Ajina and Geraint Rees and Christopher Kennard and Holly Bridge},
doi = {10.1523/JNEUROSCI.3075-14.2015},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neuroscience},
volume = {35},
number = {21},
pages = {8201--8213},
abstract = {When the human primary visual cortex (V1) is damaged, the dominant geniculo-striate pathway can no longer convey visual information to the occipital cortex. However, many patients with such damage retain some residual visual function that must rely on an alternative pathway directly to extrastriate occipital regions. This residual vision is most robust for moving stimuli, suggesting a role for motion area hMT+. However, residual vision also requires high-contrast stimuli, which is inconsistent with hMT+ sensitivity to contrast in which even low-contrast levels elicit near-maximal neural activation. We sought to investigate this discrepancy by measuring behavioral and neural responses to increasing contrast in patients with V1 damage. Eight patients underwent behavioral testing and functional magnetic resonance imaging to record contrast sensitivity in hMT+ of their damaged hemisphere, using Gabor stimuli with a spatial frequency of 1 cycle/degrees. The responses from hMT+ of the blind hemisphere were compared with hMT+ and V1 responses in the sighted hemisphere of patients and a group of age-matched controls. Unlike hMT+, neural responses in V1 tend to increase linearly with increasing contrast, likely reflecting a dominant parvocellular channel input. Across all patients, the responses in hMT+ of the blind hemisphere no longer showed early saturation but increased linearly with contrast. Given the spatiotemporal parameters used in this study and the known direct subcortical projections from the koniocellular layers of the lateral geniculate nucleus to hMT+, we propose that this altered contrast sensitivity in hMT+ could be consistent with input from the koniocellular pathway.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Ajina2018,
title = {Blindsight relies on a functional connection between hMT+ and the lateral geniculate nucleus, not the pulvinar},
author = {Sara Ajina and Holly Bridge},
doi = {10.1371/journal.pbio.2005769},
year = {2018},
date = {2018-01-01},
journal = {PLoS Biology},
volume = {16},
number = {7},
pages = {1--25},
abstract = {When the primary visual cortex (V1) is damaged, the principal visual pathway is lost, causing a loss of vision in the opposite visual field. While conscious vision is impaired, patients can still respond to certain images; this is known as 'blindsight'. Recently, a direct anatomical connection between the lateral geniculate nucleus (LGN) and human motion area hMT+ has been implicated in blindsight. However, a functional connection between these structures has not been demonstrated. We quantified functional MRI responses to motion in 14 patients with unilateral V1 damage (with and without blindsight). Patients with blindsight showed significant activity and a preserved sensitivity to speed in motion area hMT+, which was absent in patients without blindsight. We then compared functional connectivity between motion area hMT+ and a number of structures implicated in blindsight, including the ventral pulvinar. Only patients with blindsight showed an intact functional connection with the LGN but not the other structures, supporting a specific functional role for the LGN in blindsight.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Akdogan2016,
title = {Temporal expectation indexed by pupillary response},
author = {Başak Akdoğan and Fuat Balcı and Hedderik van Rijn},
doi = {10.1163/22134468-00002075},
year = {2016},
date = {2016-01-01},
journal = {Timing & Time Perception},
volume = {4},
pages = {354--370},
abstract = {Forming temporal expectations plays an instrumental role for the optimization of behavior and allocation of attentional resources. Although the effects of temporal expectations on visual attention are well-established, the question of whether temporal predictions modulate the behavioral outputs of the autonomic nervous system such as the pupillary response remains unanswered. Therefore, this study aimed to obtain an online measure of pupil size while human participants were asked to differentiate between visual targets presented after varying time intervals since trial onset. Specifically, we manipulated temporal predictability in the presentation of target stimuli consisting of letters which appeared after either a short or long delay duration (1.5 vs. 3 s) in the majority of trials (75%) within different test blocks. In the remaining trials (25%), no target stimulus was present to investigate the trajectory of preparatory pupillary response under a low level of temporal uncertainty. The results revealed that the rate of preparatory pupillary response was contingent upon the time of target appearance such that pupils dilated at a higher rate when the targets were expected to appear after a shorter as compared to a longer delay period irrespective of target presence. The finding that pupil size can track temporal regularities and exhibit differential preparatory response between different delay conditions points to the existence of a distributed neural network subserving temporal information processing which is crucial for cognitive functioning and goal-directed behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Akerfelt2006,
title = {Visual-tactile saccadic inhibition},
author = {Annika Åkerfelt and Hans Colonius and Adele Diederich},
doi = {10.1007/s00221-005-0168-x},
year = {2006},
date = {2006-01-01},
journal = {Experimental Brain Research},
volume = {169},
number = {4},
pages = {554--563},
abstract = {In an eye movement countermanding paradigm it is demonstrated for the first time that a tactile stimulus can be an effective stop signal when human participants are to inhibit saccades to a visual target. Estimated stop signal processing times were 90-140 ms, comparable to results with auditory stop signals, but shorter than those commonly found for manual responses. Two of the three participants significantly slowed their reactions in expectation of the stop signal as revealed by a control experiment without stop signals. All participants produced slower responses in the shortest stop signal delay condition than predicted by the race model (Logan and Cowan 1984) along with hypometric saccades on stop failure trials, suggesting that the race model may need to be elaborated to include some component of interaction of stop and go signal processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Akman2009,
title = {Probing bottom-up processing with multistable images},
author = {Ozgur E Akman and Richard A Clement and David S Broomhead and Sabira Mannan and Ian Moorhead and Hugh R Wilson},
year = {2009},
date = {2009-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {3},
pages = {1--7},
abstract = {The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of 8 subjects were recorded during free viewing of the Marroquin pattern in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Akram2017,
title = {Preferential attention towards the eye-region amongst individuals with insomnia},
author = {Umair Akram and Jason G Ellis and Andriy Myachykov and Nicola L Barclay},
doi = {10.1111/jsr.12456},
year = {2017},
date = {2017-01-01},
journal = {Journal of Sleep Research},
volume = {26},
number = {1},
pages = {84--91},
abstract = {People with insomnia often perceive their own facial appearance as more tired compared with the appearance of others. Evidence also highlights the eye-region in projecting tiredness cues to perceivers, and tiredness judgements often rely on preferential attention towards this region. Using a novel eye-tracking paradigm, this study examined: (i) whether individuals with insomnia display preferential attention towards the eye-region, relative to nose and mouth regions, whilst observing faces compared with normal-sleepers; and (ii) whether an attentional bias towards the eye-region amongst individuals with insomnia is self-specific or general in nature. Twenty individuals with DSM-5 Insomnia Disorder and 20 normal-sleepers viewed 48 neutral facial photographs (24 of themselves, 24 of other people) for periods of 4000 ms. Eye movements were recorded using eye-tracking, and first fixation onset, first fixation duration and total gaze duration were examined for three interest-regions (eyes, nose, mouth). Significant group × interest-region interactions indicated that, regardless of the face presented, participants with insomnia were quicker to attend to, and spent more time observing, the eye-region relative to the nose and mouth regions compared with normal-sleepers. However, no group × face × interest-region interactions were established. Thus, whilst individuals with insomnia displayed preferential attention towards the eye-region in general, this effect was not accentuated during self-perception. Insomnia appears to be characterized by a general, rather than self-specific, attentional bias towards the eye-region. These findings contribute to our understanding of face perception in insomnia, and provide tentative support for cognitive models of insomnia demonstrating that individuals with insomnia monitor faces in general, with a specific focus around the eye-region, for cues associated with tiredness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Al-Aidroos2010,
title = {Top-down control in time and space: Evidence from saccadic latencies and trajectories},
author = {Naseem Al-Aidroos and Jay Pratt},
doi = {10.1080/13506280802456939},
year = {2010},
date = {2010-01-01},
journal = {Visual Cognition},
volume = {18},
number = {1},
pages = {26--49},
abstract = {Visual distractors disrupt the production of saccadic eye movements temporally, by increasing saccade latency, and spatially, by biasing the trajectory of the movement. The present research investigated the extent to which top-down control can be exerted over these two forms of oculomotor capture. In two experiments, people were instructed to make target directed saccades in the presence of distractors, and temporal and spatial capture were assessed simultaneously by measuring saccade latency and saccade trajectory curvature, respectively. In Experiment 1, an attentional control set manipulation was employed, resulting in the elimination of temporal capture, but only an attenuation of spatial capture. In Experiment 2, foreknowledge of the target location caused an attenuation of temporal capture but an enhancement of spatial capture. These results suggest that, whereas temporal capture is contingent on top-down control, the spatial component of capture is stimulus-driven.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Al-Samarraie2016,
title = {Predicting user preferences of environment design: A perceptual mechanism of user interface customisation},
author = {Hosam Al-Samarraie and Samer Muthana Sarsam and Hans Guesgen},
doi = {10.1080/0144929X.2016.1186735},
year = {2016},
date = {2016-01-01},
journal = {Behaviour & Information Technology},
volume = {35},
number = {8},
pages = {644--653},
abstract = {It is a well-known fact that users vary in their preferences and needs. Therefore, it is very crucial to provide the customisation or personalisation for users in certain usage conditions that are more associated with their preferences. With the current limitation in adopting perceptual processing into user interface personalisation, we introduced the possibility of inferring interface design preferences from the user's eye-movement behaviour. We firstly captured the user's preferences of graphic design elements using an eye-tracker. Then we diagnosed these preferences towards the region of interests to build a prediction model for interface customisation. The prediction models from eye-movement behaviour showed a high potential for predicting users' preferences of interface design based on the paralleled relation between their fixation and saccadic movement. This mechanism provides a novel way of user interface design customisation and opens the door for new research in the areas of human-computer interaction and decision-making.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Al-Zanoon2017,
title = {Evidence for a global oculomotor program in reading},
author = {Noor Al-Zanoon and Michael Dambacher and Victor Kuperman},
doi = {10.1007/s00426-016-0786-x},
year = {2017},
date = {2017-01-01},
journal = {Psychological Research},
volume = {81},
number = {4},
pages = {863--877},
publisher = {Springer Berlin Heidelberg},
abstract = {Recent corpus studies of eye-movements in reading revealed a substantial increase in saccade amplitudes and fixation durations as the eyes move over the first words of a sentence. This start-up effect suggests a global oculomotor program, which operates on the level of an entire line, in addition to the well-established local programs operating within the visual span. The present study investigates the nature of this global program experimentally and examines whether the start-up effect is predicated on generic visual or specific linguistic characteristics and whether it is mainly reflected in saccade amplitudes, fixation durations or both measures. Eye movements were recorded while 38 participants read (a) normal sentences, (b) sequences of randomly shuffled words and (c) sequences of z-strings. The stimuli were, therefore, similar in their visual features, but varied in the amount of syntactic and lexical information. Further, the stimuli were composed of words or strings that either varied naturally in length (Nonequal condition) or were all restricted to a specific length within a sentence (Equal). The latter condition constrained the variability of saccades and served to dissociate effects of word position in line on saccade amplitudes and fixation durations. A robust start-up effect emerged in saccade amplitudes in all Nonequal stimuli, and, in an attenuated form, in Equal sentences. A start-up effect in single fixation durations was observed in Nonequal and Equal normal sentences, but not in z-strings. These findings support the notion of a global oculomotor program in reading particularly for the spatial characteristics of motor planning, which rely on visual rather than linguistic information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2003,
title = {Adaptation of saccadic eye movements: Transfer and specificity},
author = {Nadia Alahyane and Denis Pélisson},
doi = {10.1196/annals.1303.008},
year = {2003},
date = {2003-01-01},
journal = {Annals of the New York Academy of Sciences},
volume = {1004},
number = {1},
pages = {69--77},
abstract = {The present study was designed to test whether the adaptation of saccadic eye movements depends only on the eye displacement vector of the trained saccade or also on eye position information. Using the double-step target paradigm in eight human subjects, we first induced in a single session two "opposite directions adaptations" (ODA) of horizontal saccades of the same vector. Each ODA (backward or forward) was linked to one vertical eye position (12.5 degrees up or 25 degrees down) and alternated from trial to trial. The results showed that opposite changes of saccade amplitude can develop simultaneously, indicating that saccadic adaptation depends on orbital eye position. This finding has important functional implications because in everyday life our eyes saccade from constantly changing orbital positions. A comparison of these data to two control conditions in which training trials of a single type (backward or forward) were presented at both 12.5 degrees and -25 degrees eye elevations further indicated that eye position specificity is complete for backward, but not for forward, adaptation. Finally, the control conditions also indicated that the adaptation of a single saccade fully transferred to untrained saccades of the same vector, but initiated from different vertical eye positions. In conclusion, our study indicates that saccadic adaptation mechanisms use vectorial eye displacement signals, but can also take eye position signals into account as a contextual cue when the training involves conflicting saccade amplitude changes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2004,
title = {Transfer of adaptation from visually guided saccades to averaging saccades elicited by double visual targets},
author = {Nadia Alahyane and Ansgar Koene and Denis Pélisson},
doi = {10.1111/j.1460-9568.2004.03536.x},
year = {2004},
date = {2004-01-01},
journal = {European Journal of Neuroscience},
volume = {20},
number = {3},
pages = {827--836},
abstract = {The adaptive mechanisms that control the amplitude of visually guided saccades (VGS) are only partially elucidated. In this study, we investigated, in six human subjects, the transfer of VGS adaptation to averaging saccades elicited by the simultaneous presentation of two visual targets. The generation of averaging saccades requires the transformation of two representations encoding the desired eye displacement toward each of the two targets into a single representation encoding the averaging saccade (averaging programming site). We aimed to evaluate whether VGS adaptation acts upstream (hypothesis 1) or at/below (hypothesis 2) the level of averaging saccades programming. Using the double-step target paradigm, we simultaneously induced a backward adaptation of 17.5 degrees horizontal VGS and a forward adaptation of 17.5 degrees oblique VGS performed along the +/- 40 degrees directions relative to the azimuth. We measured the effects of this dual adaptation protocol on averaging saccades triggered by two simultaneous targets located at 17.5 degrees along the +/- 40 degrees directions. To increase the yield of averaging saccades, we instructed the subjects to move their eyes as fast as possible to an intermediate position between the two targets. We found that the amplitude of averaging saccades was smaller after VGS adaptation than before and differed significantly from that predicted by hypothesis 1, but not by hypothesis 2, with an adaptation transfer of 50%. These findings indicate that VGS adaptation largely occurs at/below the averaging saccade programming site. Based on current knowledge of the neural substrate of averaging saccades, we suggest that VGS adaptation mainly acts at the level of the superior colliculus or downstream.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2004a,
title = {Eye position specificity of saccadic adaptation},
author = {Nadia Alahyane and Denis Pélisson},
doi = {10.1167/iovs.03-0570},
year = {2004},
date = {2004-01-01},
journal = {Investigative Ophthalmology \& Visual Science},
volume = {45},
number = {1},
pages = {123--130},
abstract = {PURPOSE: The accuracy of saccadic eye movements is maintained throughout life by adaptive mechanisms. With the double-step target paradigm, eight human subjects were investigated to determine whether saccadic adaptation depends only on the eye-displacement vector, or also on eye position as a context cue when two saccades of identical vector are adapted simultaneously. METHODS: First, bidirectional adaptations (BDAs) of horizontal saccades of the same vector were induced in a single training phase. Each direction of adaptation in BDAs (backward and forward) was linked to one vertical eye position (e.g., forward adaptation performed with the eyes directed 12.5 degrees upward and backward adaptation with the eyes 25 degrees downward) and alternated from trial to trial. Second, unidirectional adaptations (UDAs) were tested in two control conditions in which training trials of a single direction (backward or forward) were presented at both 12.5 degrees and -25 degrees eye elevations. RESULTS: Opposite changes in saccade amplitude could develop simultaneously in BDA, indicating that saccadic adaptation depends on orbital eye position. Comparing these data with the control conditions further indicated that eye position specificity was complete for backward, but not for forward, adaptation. CONCLUSIONS: The results indicate that saccadic adaptation mechanisms use vectorial eye displacement signals, but can also take eye position signals into account as a contextual cue when the training involves conflicting saccade amplitude changes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2005,
title = {Retention of saccadic adaptation in humans},
author = {Nadia Alahyane and Denis Pélisson},
doi = {10.1196/annals.1325.067},
year = {2005},
date = {2005-01-01},
journal = {Annals of the New York Academy of Sciences},
volume = {1039},
pages = {558--562},
abstract = {In the present study, we tested in human subjects the persistence of the oculomotor changes resulting from saccadic adaptation up to 19 days after exposure to the double step target protocol. The main results indicate that the reduction of saccade gain related to the adaptation session (mean gain change of 5 subjects = 22 +/- 4.7%) was partially but significantly retained after 1 day and 5 days (mean amount of retention = 36 +/- 17% and 19.7 +/- 13.3%, respectively) but was no longer significant at day 11 and 19. Unexpectedly, gain changes were larger for leftward than for rightward saccades. No change in saccade dynamics was observed. These data suggest that in humans, adaptive mechanisms induce long lasting changes in visually-guided saccade amplitude, probably reflecting plastic changes in the brain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2005a,
title = {Long-lasting modifications of saccadic eye movements following adaptation induced in the double-step target paradigm},
author = {Nadia Alahyane and Denis Pélisson},
doi = {10.1101/lm.96405},
year = {2005},
date = {2005-01-01},
journal = {Learning and Memory},
volume = {12},
number = {4},
pages = {433--443},
abstract = {The adaptation of saccadic eye movements to environmental changes occurring throughout life is a good model of motor learning and motor memory. Numerous studies have analyzed the behavioral properties and neural substrate of oculomotor learning in short-term saccadic adaptation protocols, but to our knowledge, none have tested the persistence of the oculomotor memory. In the present study, the double-step target protocol was used in five human subjects to adaptively decrease the amplitude of reactive saccades triggered by a horizontally-stepping visual target. We tested the amplitude of visually guided saccades just before and at different times (up to 19 days) after the adaptation session. The results revealed that immediately after the adaptation session, saccade amplitude was significantly reduced by 22% on average. Although progressively recovering over days, this change in saccade gain was still statistically significant on days 1 and 5, with an average retention rate of 36% and 19%, respectively. On day 11, saccade amplitude no longer differed from the pre-adaptation value. Adaptation was more effective and more resistant to recovery for leftward saccades than for rightward ones. Lastly, modifications of saccade gain related to adaptation were accompanied by a decrease of both saccade duration and peak velocity. A control experiment indicated that all these findings were specifically related to the adaptation protocol, and further revealed that no change in the main sequence relationships could be specifically related to adaptation. We conclude that in humans, the modifications of saccade amplitude that quickly develop during a double-step target adaptation protocol can remain in memory for a much longer period of time, reflecting enduring plastic changes in the brain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2007,
title = {Oculomotor plasticity: Are mechanisms of adaptation for reactive and voluntary saccades separate?},
author = {Nadia Alahyane and Roméo Salemme and Christian Urquizar and Julien Cotti and Alain Guillaume and Jean-Louis Vercher and Denis Pélisson},
doi = {10.1016/j.brainres.2006.11.077},
year = {2007},
date = {2007-01-01},
journal = {Brain Research},
volume = {1135},
number = {1},
pages = {107--121},
abstract = {Saccadic eye movements are permanently controlled and their accuracy maintained by adaptive mechanisms that compensate for physiological or pathological perturbations. In contrast to the adaptation of reactive saccades (RS) which are automatically triggered by the sudden appearance of a single target, little is known about the adaptation of voluntary saccades which allow us to intentionally scan our environment in nearly all our daily activities. In this study, we addressed this issue in human subjects by determining the properties of adaptation of scanning voluntary saccades (SVS) and comparing these features to those of RS. We also tested the reciprocal transfers of adaptation between the two saccade types. Our results revealed that SVS and RS adaptations disclosed similar adaptation fields, time course and recovery levels, with only a slightly lower after-effect for SVS. Moreover, RS and SVS main sequences both remained unaffected after adaptation. Finally and quite unexpectedly, the pattern of adaptation transfers was asymmetrical, with a much stronger transfer from SVS to RS (79%) than in the reverse direction (22%). These data demonstrate that adaptations of RS and SVS share several behavioural properties but at the same time rely on partially distinct processes. Based on these findings, it is proposed that adaptations of RS and SVS may involve a neural network including both a common site and two separate sites specifically recruited for each saccade type.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2008a,
title = {Separate neural substrates in the human cerebellum for sensory-motor adaptation of reactive and of scanning voluntary saccades},
author = {N Alahyane and V Fonteille and C Urquizar and R Salemme and N Nighoghossian and D Pelisson and C Tilikete},
doi = {10.1007/s12311-008-0065-5},
year = {2008},
date = {2008-01-01},
journal = {Cerebellum},
volume = {7},
number = {4},
pages = {595--601},
abstract = {Sensory-motor adaptation processes are critically involved in maintaining accurate motor behavior throughout life. Yet their underlying neural substrates and task-dependency bases are still poorly understood. We address these issues here by studying adaptation of saccadic eye movements, a well-established model of sensory-motor plasticity. The cerebellum plays a major role in saccadic adaptation but it has not yet been investigated whether this role can account for the known specificity of adaptation to the saccade type (e.g., reactive versus voluntary). Two patients with focal lesions in different parts of the cerebellum were tested using the double-step target paradigm. Each patient was submitted to two separate sessions: one for reactive saccades (RS) triggered by the sudden appearance of a visual target and the second for scanning voluntary saccades (SVS) performed when exploring a more complex scene. We found that a medial cerebellar lesion impaired adaptation of reactive-but not of voluntary-saccades, whereas a lateral lesion affected adaptation of scanning voluntary saccades, but not of reactive saccades. These findings provide the first evidence of an involvement of the lateral cerebellum in saccadic adaptation, and extend the demonstrated role of the cerebellum in RS adaptation to adaptation of SVS. The double dissociation of adaptive abilities is also consistent with our previous hypothesis of the involvement in saccadic adaptation of partially separated cerebellar areas specific to the reactive or voluntary task (Alahyane et al. Brain Res 1135:107-121 (2007)).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2008b,
title = {Spatial transfer of adaptation of scanning voluntary saccades in humans},
author = {Nadia Alahyane and Anne-Dominique Devauchelle and Roméo Salemme and Denis Pélisson},
doi = {10.1097/WNR.0b013e3282f2a5f2},
year = {2008},
date = {2008-01-01},
journal = {Neuroreport},
volume = {19},
number = {1},
pages = {37--41},
abstract = {The properties and neural substrates of the adaptive mechanisms that maintain over time the accuracy of voluntary, internally triggered saccades are still poorly understood. Here, we used transfer tests to evaluate the spatial properties of adaptation of scanning voluntary saccades. We found that an adaptive reduction of the size of a horizontal rightward 7 degrees saccade transferred to other saccades of a wide range of amplitudes and directions. This transfer decreased as tested saccades increasingly differed in amplitude or direction from the trained saccade, being null for vertical and leftward saccades. Voluntary saccade adaptation thus presents bounded, but large adaptation fields, suggesting that at least part of the underlying neural substrate encodes saccades as vectors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alahyane2016,
title = {Development and learning of saccadic eye movements in 7- to 42-month-old children},
author = {Nadia Alahyane and Christelle Lemoine-Lardennois and Coline Tailhefer and Thér{è}se Collins and Jacqueline Fagard and Karine Doré-Mazars},
doi = {10.1167/16.1.6},
year = {2016},
date = {2016-01-01},
journal = {Journal of Vision},
volume = {16},
number = {1},
pages = {1--12},
abstract = {From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 7- to 42-month-old children while they oriented toward visual animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: Saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Although saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alamargot2006,
title = {Eye and Pen: A new device for studying reading during writing},
author = {Denis Alamargot},
year = {2006},
date = {2006-01-01},
journal = {Behavior Research Methods},
volume = {38},
number = {2},
pages = {287--299},
abstract = {We present a new method for studying reading during writing and the relationships between these two activities. The Eye and Pen device makes a synchronous recording of handwriting and eye movements during written composition. It complements existing online methods by providing a fine-grained description of the visual information fixated during pauses as well as during the actual writing act. This device can contribute to the exploration of several research issues, since it can be used to investigate the role of the text produced so far and the documentary sources displayed in the task environment. The study of the engagement of reading during writing should provide important information about the dynamics of writing processes based on visual information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alamargot2010,
title = {Using eye and pen movements to trace the development of writing expertise: Case studies of a 7th, 9th and 12th grader, graduate student, and professional writer},
author = {Denis Alamargot and Sylvie Plane and Eric Lambert and David Chesnet},
doi = {10.1007/s11145-009-9191-9},
year = {2010},
date = {2010-01-01},
journal = {Reading and Writing},
volume = {23},
number = {7},
pages = {853--888},
abstract = {This study was designed to enhance our understanding of the changing relationship between low- and high-level writing processes in the course of development. A dual description of writing processes was undertaken, based on (a) the respective time courses of these processes, as assessed by an analysis of eye and pen movements, and (b) the semantic characteristics of the writers' scripts. To conduct a more fine-grained description of processing strategies, a "case study" approach was adopted, whereby a comprehensive range of measures was used to assess processes within five writers with different levels of expertise. The task was to continue writing a story based on an excerpt from a source document (incipit). The main results showed two developmental patterns linked to expertise: (a) a gradual acceleration in low- and high-level processing (pauses, flow), associated with (b) changes in the way the previous text was (re)read.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alamargot2015,
title = {Successful written subject–verb agreement: An online analysis of the procedure used by students in Grades 3, 5 and 12},
author = {Denis Alamargot and Lisa Flouret and Denis Larocque and Gilles Caporossi and Virginie Pontart and Carmen Paduraru and Pauline Morisset and Michel Fayol},
doi = {10.1007/s11145-014-9525-0},
year = {2015},
date = {2015-01-01},
journal = {Reading and Writing},
volume = {28},
number = {3},
pages = {291--312},
abstract = {This study was designed to (1) investigate the procedure responsible for successful written subject–verb agreement, and (2) describe how it develops across grades. Students in Grades 3, 5 and 12 were asked to read noun–noun–verb sentences aloud (e.g., Le chien des voisins mange [The dog of the neighbors eats]) and write out the verb inflections. Some of the nouns differed in number, thus inducing attraction errors. Results showed that third graders were successful because they implemented a declarative procedure requiring regressive fixations on the subject noun while writing out the inflection. A dual-step procedure (Hupet, Schelstraete, Demaeght, & Fayol, 1996) emerged in Grade 5, and was fully efficient by Grade 12. This procedure, which couples an automatized agreement rule with a monitoring process operated within working memory (without the need for regressive fixations), was found to trigger a mismatch asymmetry (singular–plural > plural–singular) in Grade 5. The time course of written subject–verb agreement, the origin of agreement errors and differences between the spoken and written modalities are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alamia2016,
title = {Statistical regularities attract attention when task-relevant},
author = {Andrea Alamia and Alexandre Zénon},
doi = {10.3389/fnhum.2016.00042},
year = {2016},
date = {2016-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {10},
pages = {1--10},
abstract = {Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e. the effect of color predictability on reaction times (RT), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the 2 colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alamia2018,
title = {Strong conscious cues suppress preferential gaze allocation to unconscious cues},
author = {Andrea Alamia and Oleg Solopchuk and Alexandre Zénon},
doi = {10.3389/fnhum.2018.00427},
year = {2018},
date = {2018-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {12},
pages = {1--9},
abstract = {Visual attention allows relevant information to be selected for further processing. Both conscious and unconscious visual stimuli can bias attentional allocation, but how these two types of visual information interact to guide attention remains unclear. In this study, we explored attentional allocation during a motion discrimination task with varied motion strength and unconscious associations between stimuli and cues. Participants were instructed to report the motion direction of two colored patches of dots. Unbeknown to participants, dot colors were sometimes informative of the correct response. We found that subjects learnt the associations between colors and motion direction but failed to report this association using the questionnaire filled at the end of the experiment, confirming that learning remained unconscious. The eye movement analyses revealed that allocation of attention to unconscious sources of information occurred mostly when motion coherence was low, indicating that unconscious cues influence attentional allocation only in the absence of strong conscious cues. All in all, our results reveal that conscious and unconscious sources of information interact with each other to influence attentional allocation and suggest a selection process that weights cues in proportion to their reliability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Albonico2016,
title = {Temporal dissociation between the focal and orientation components of spatial attention in central and peripheral vision},
author = {Andrea Albonico and Manuela Malaspina and Emanuela Bricolo and Marialuisa Martelli and Roberta Daini},
doi = {10.1016/j.actpsy.2016.10.003},
year = {2016},
date = {2016-01-01},
journal = {Acta Psychologica},
volume = {171},
pages = {85--92},
publisher = {Elsevier B.V.},
abstract = {Selective attention, i.e. the ability to concentrate one's limited processing resources on one aspect of the environment, is a multifaceted concept that includes different processes like spatial attention and its subcomponents of orienting and focusing. Several studies, indeed, have shown that visual tasks performance is positively influenced not only by attracting attention to the target location (orientation component), but also by the adjustment of the size of the attentional window according to task demands (focal component). Nevertheless, the relative weight of the two components in central and peripheral vision has never been studied. We conducted two experiments to explore whether different components of spatial attention have different effects in central and peripheral vision. In order to do so, participants underwent either a detection (Experiment 1) or a discrimination (Experiment 2) task where different types of cues elicited different components of spatial attention: a red dot, a small square and a big square (an optimal stimulus for the orientation component, an optimal and a sub-optimal stimulus for the focal component respectively). Response times and cue-size effects indicated a stronger effect of the small square or of the dot in different conditions, suggesting the existence of a dissociation in terms of mechanisms between the focal and the orientation components of spatial attention. Specifically, we found that the orientation component was stronger in periphery, while the focal component was noticeable only in central vision and characterized by an exogenous nature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{AlDahhan2014,
title = {Eye movements of university students with and without reading difficulties during naming speed tasks},
author = {Noor Z {Al Dahhan} and George K Georgiou and Rickie Hung and Douglas Munoz and Rauno Parrila and John R Kirby},
doi = {10.1007/s11881-013-0090-z},
year = {2014},
date = {2014-01-01},
journal = {Annals of Dyslexia},
volume = {64},
number = {2},
pages = {137--150},
abstract = {Although naming speed (NS) has been shown to predict reading into adulthood and differentiate between adult dyslexics and controls, the question remains why NS is related to reading. To address this question, eye movement methodology was combined with three letter NS tasks (the original letter NS task by Denckla & Rudel, Cortex 10:186-202, 1974, and two more developed by Compton, The Journal of Special Education 37:81-94, 2003, with increased phonological or visual similarity of the letters). Twenty undergraduate students with reading difficulties (RD) and 27 without (NRD) were tested on letter NS tasks (eye movements were recorded during the NS tasks), phonological processing, and reading fluency. The results indicated first that the RD group was slower than the NRD group on all NS tasks with no differences between the NS tasks. In addition, the NRD group had shorter fixation durations, longer saccades, and fewer saccades and fixations than the RD group. Fixation duration and fixation count were significant predictors of reading fluency even after controlling for phonological processing measures. Taken together, these findings suggest that the NS-reading relationship is due to two factors: less able readers require more time to acquire stimulus information during fixation and they make more saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{AlDahhan2017,
title = {Eye movements and articulations during a letter naming speed task: Children with and without Dyslexia},
author = {Noor Z {Al Dahhan} and John R Kirby and Donald C Brien and Douglas P Munoz},
doi = {10.1177/0022219415618502},
year = {2017},
date = {2017-01-01},
journal = {Journal of Learning Disabilities},
volume = {50},
number = {3},
pages = {275--285},
abstract = {Naming speed (NS) refers to how quickly and accurately participants name a set of familiar stimuli (e.g., letters). NS is an established predictor of reading ability, but controversy remains over why it is related to reading. We used three techniques (stimulus manipulations to emphasize phonological and/or visual aspects, decomposition of NS times into pause and articulation components, and analysis of eye movements during task performance) with three groups of participants (children with dyslexia, ages 9–10; chronological-age [CA] controls, ages 9–10; reading-level [RL] controls, ages 6–7) to examine NS and the NS–reading relationship. Results indicated (a) for all groups, increasing visual similarity of the letters decreased letter naming efficiency and increased naming errors, saccades, regressions (rapid eye movements back to letters already fixated), pause times, and fixation durations; (b) children with dyslexia performed like RL controls and were less efficient, had longer articulation times, pause times, fixation durations, and made more errors and regressions than CA controls; and (c) pause time and fixation duration were the most powerful predictors of reading. We conclude that NS is related to reading via fixation durations and pause times: Longer fixation durations and pause times reflect the greater amount of time needed to acquire visual/orthographic information from stimuli and prepare the correct response.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

@article{Alers2015,
title = {Quantifying the importance of preserving video quality in visually important regions at the expense of background content},
author = {Hani Alers and Judith A Redi and Ingrid Heynderickx},
doi = {10.1016/j.image.2015.01.006},
year = {2015},
date = {2015-01-01},
journal = {Signal Processing: Image Communication},
volume = {32},
pages = {69--80},
publisher = {Elsevier},
abstract = {Advances in digital technology have allowed us to embed significant processing power in everyday video consumption devices. At the same time, we have placed high demands on the video content itself by continuing to increase spatial resolution while trying to limit the allocated file size and bandwidth as much as possible. The result is typically a trade-off between perceptual quality and fulfillment of technological limitations. To bring this trade-off to its optimum, it is necessary to understand better how people perceive video quality. In this work, we particularly focus on understanding how the spatial location of compression artifacts impacts visual quality perception, and specifically in relation with visual attention. In particular we investigate how changing the quality of the region of interest of a video affects its overall perceived quality, and we quantify the importance of the visual quality of the region of interest to the overall quality judgment. A three stage experiment was conducted where viewers were shown videos with different quality levels in different parts of the scene. By asking them to score the overall quality we found that the quality of the region of interest has 10 times more impact than the quality of the rest of the scene. These results are in line with similar effects observed in still images, yet in videos the relevance of the visual quality of the region of interest is twice as high than in images. The latter finding is directly relevant for the design of more accurate objective quality metrics for videos, that are based on the estimation of local distortion visibility.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
