Animals, including humans, are endowed with a remarkable capacity to rapidly estimate the number of items in a scene. Some have questioned whether this ability reflects a genuine sense of number, or whether numerosity is derived indirectly from other covarying attributes, such as density and area. In previous work we demonstrated that adult observers are more sensitive to changes in numerosity than to changes in area or density, particularly changes that leave numerosity constant, pointing to a spontaneous sensitivity to numerosity not attributable to area and density. Here we extend this line of research with a novel technique in which participants reproduce the size and density of a dot array. They were given no explicit instructions about what to match, but could freely adjust all combinations of area and density with a trackpad. If the task were mediated by matching area and texture density separately, errors in the two attributes should be independent. Contrary to this prediction, we found that errors in area and density were negatively correlated, suggesting that subjects matched numerosity rather than area and density. We employed this technique to investigate processing of number in adolescents with typical and low math abilities (dyscalculia). Interestingly, we found that dyscalculics also reproduced numerosity rather than area or density. However, compared with typicals, dyscalculics had longer reaction times, a tendency to rely also on area, and their performance did not improve over sessions. Taken together, the data demonstrate that numerosity emerges as the most spontaneous and sensitive dimension, supporting the existence of a dedicated number sense and confirming numerosity atypicalities in dyscalculia.

Body size is a salient marker of physical health, with extremes implicated in various mental and physical health issues. It is therefore important to understand the mechanisms of perception of body size of self and others. We report a novel technique we term the bodyline, based on the numberline technique in numerosity studies. One hundred and three young women judged the size of sequentially presented female body images by positioning a marker on a line, delineated with images of extreme sizes. Participants performed this task easily and well, with average standard deviations less than 6% of the total scale. Critically, judgments of size were biased towards the previously viewed body, demonstrating that serial dependencies occur in the judgment of body size. The magnitude of serial dependence was well predicted by a simple Kalman-filter ideal-observer model, suggesting that serial dependence occurs in an optimal, adaptive way to improve performance in size judgments.

How the visual system achieves perceptual stability across saccadic eye movements is a long-standing question in neuroscience. It has been proposed that an efference copy informs vision about upcoming saccades, and this might lead to shifting spatial coordinates and suppressing image motion. Here we ask whether these two aspects of visual stability are interdependent or may be dissociated under special conditions. We study a memory-guided double-step saccade task, where two saccades are executed in quick succession. Previous studies have led to the hypothesis that in this paradigm the two saccades are planned in parallel, with a single efference copy signal generated at the start of the double-step sequence, i.e. before the first saccade. In line with this hypothesis, we find that visual stability is impaired during the second saccade, which is consistent with (accurate) efference copy information being unavailable during the second saccade. However, we find that saccadic suppression is normal during the second saccade. Thus, the second saccade of a double-step sequence instantiates a dissociation between visual stability and saccadic suppression: stability is impaired even though suppression is strong.

The pupil is primarily regulated by prevailing light levels but is also modulated by perceptual and attentional factors. We measured pupil size in typical adult humans viewing a bistable rotating cylinder, constructed so that the luminance of the front surface changes with the perceived direction of rotation. In some participants, pupil diameter oscillated in phase with the ambiguous perception, more dilated when the black surface was in front. Importantly, the magnitude of oscillation predicts the autistic traits of participants, assessed by the Autism-Spectrum Quotient (AQ). Further experiments suggest that these results are driven by differences in perceptual styles: high-AQ participants focus on the front surface of the rotating cylinder, while those with low AQ distribute attention to both surfaces in a more global, holistic style. This is the first evidence that pupillometry reliably tracks inter-individual differences in perceptual styles; it does so quickly and objectively, without interfering with spontaneous perceptual strategies.

Periventricular leukomalacia (PVL) is characterized by focal necrosis at the level of the periventricular white matter, often observed in preterm infants. PVL is frequently associated with motor impairment and with visual deficits affecting primary stages of visual processing as well as higher visual cognitive abilities. Here we describe six PVL subjects, with normal verbal IQ, showing orientation-perception deficits in both the haptic and visual domains. Subjects were asked to compare the orientation of two stimuli presented simultaneously or sequentially, using both a two-alternative forced-choice (2AFC) orientation discrimination and a matching procedure. Visual stimuli were oriented gratings, bars, or collinear short lines embedded within a random pattern. Haptic stimuli comprised two rotatable wooden sticks. PVL patients performed at chance in discriminating oblique orientations, for both visual and haptic stimuli. Moreover, when asked to reproduce an oblique orientation, they often oriented the stimulus along the mirror-symmetric orientation. The deficit generalized to stimuli varying in many low-level features, was invariant for spatiotopic object orientation, and also occurred for sequential presentations. The deficit was specific to oblique orientations and did not occur for horizontal or vertical stimuli. These findings show that PVL can affect a specific network involved in the supramodal perception of mirror-symmetric orientation.

Does visual processing start anew after each eye movement, or is information integrated across saccades? Here we test a strong prediction of the integration hypothesis: that information acquired after a saccade interferes with the perception of images acquired before the saccade. We investigate perception of a basic visual feature, grating orientation, and we take advantage of a delayed interference phenomenon: in human participants, the reported orientation of a target grating, briefly presented at an eccentric location, is strongly biased toward the orientation of flanker gratings that are flashed shortly after the target. Crucially, we find that the effect is the same whether or not a saccade is made during the delay interval, even though the eye movement produces a large retinotopic separation between target and flankers. However, the trans-saccadic effect nearly vanishes when flankers are displaced to a different screen location, even when this location matches the retinotopic coordinates of the target. We conclude that information about grating orientation is integrated across saccades within a spatial region that is defined in external coordinates and is thereby stable in spite of the movement of the eyes.

We investigated the BOLD response of visual cortical and subcortical regions to fast drifting motion presented over wide fields, including the far periphery. Stimuli were sinusoidal gratings of 50% contrast moving at moderate and very high speeds (38 and 570 °/s), projected over a large field of view (~60°). Both stimuli generated strong and balanced responses in the lateral geniculate nucleus and the superior colliculus. In visual cortical areas, responses were evaluated at three eccentricities: central (0-15°), peripheral (20-30°), and extreme peripheral (30-60°). "Ventral stream" areas (V2, V3, V4) preferred moderate speeds in the central visual field, while motion area MT+ responded equally well to both speeds at all eccentricities. In all other areas and eccentricities, BOLD responses were significant and equally strong for both types of moving stimuli. Support vector machine classification showed that the direction of the fast-speed motion could be successfully decoded from the BOLD response in all visual areas, suggesting that the responses are mediated by motion mechanisms rather than reflecting a nonspecific preference for fast rates of flicker. The results show that the visual cortex responds to very fast motion, at speeds generated when we move our eyes rapidly, or when moving objects pass by closely.
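The direction-decoding analysis can be illustrated with a toy simulation. The sketch below is not the authors' pipeline: all trial counts, voxel counts, and noise levels are invented, and a leave-one-out nearest-class-mean read-out stands in for the support vector machine. It shows the core logic of multivariate decoding, in which direction information distributed weakly across many voxels supports above-chance classification even when no single voxel is informative:

```python
# Illustrative sketch (hypothetical parameters): decoding motion direction
# from simulated multi-voxel "BOLD" patterns with a simple linear read-out.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50                 # invented counts
pattern = rng.normal(0, 1, n_voxels)         # direction-specific voxel pattern
labels = np.repeat([0, 1], n_trials // 2)    # 0 = one direction, 1 = the other
X = rng.normal(0, 3, (n_trials, n_voxels))   # trial-by-voxel noise
X[labels == 1] += pattern                    # embed the pattern in one class

# Leave-one-out cross-validated decoding: classify each held-out trial
# by its distance to the two class means estimated from the other trials
correct = 0
for i in range(n_trials):
    train = np.ones(n_trials, dtype=bool)
    train[i] = False
    m0 = X[train & (labels == 0)].mean(axis=0)
    m1 = X[train & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
    correct += pred == labels[i]
mean_accuracy = correct / n_trials           # well above the 0.5 chance level
```

Above-chance `mean_accuracy` here reflects only the embedded pattern; in the study, the analogous result argues that the BOLD response carries genuine direction information rather than unselective flicker responses.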

Action and perception are tightly coupled systems requiring coordination and synchronization over time. How the brain achieves synchronization is still a matter of debate, but recent experiments suggest that brain oscillations may play an important role in this process. Brain oscillations have been also proposed to be fundamental in determining time perception. Here, we had subjects perform an audiovisual temporal order judgment task to investigate the fine dynamics of temporal bias and sensitivity before and after the execution of voluntary hand movement (button-press). The reported order of the audiovisual sequence was rhythmically biased as a function of delay from hand action execution. Importantly, we found that it oscillated at a theta range frequency, starting approximately 500 ms before and persisting approximately 250 ms after the button-press, with consistent phase-locking across participants. Our results show that the perception of cross-sensory simultaneity oscillates rhythmically in synchrony with the programming phase of a voluntary action, demonstrating a link between action preparation and bias in temporal perceptual judgments.

How numerical quantity is processed is a central issue for cognition. On the one hand, the "number sense theory" claims that numerosity is perceived directly, and may represent an early precursor for the acquisition of mathematical skills. On the other, the "theory of magnitude" notes that numerosity correlates with many continuous properties, such as size and density, and may therefore not exist as an independent feature but be part of a more general system of magnitude. In this study we examined interactions in sensitivity between numerosity and size perception. In a group of children, we measured psychophysically two sensory parameters: perceptual adaptation and discrimination thresholds, for both size and numerosity. Neither discrimination thresholds nor adaptation strength for numerosity and size correlated across participants. This clear lack of correlation (confirmed by Bayesian analyses) suggests that numerosity and size interference effects are unlikely to reflect a shared sensory representation. We suggest that these small interference effects may instead result from top-down phenomena occurring at late decisional levels, rather than from a primary "sense of magnitude".

The perceptual consequences of eye movements are manifold: each large saccade is accompanied by a drop in sensitivity to low-spatial-frequency luminance-contrast stimuli, impacting both conscious vision and involuntary responses, including pupillary constrictions. Saccades also produce transient distortions of space, time, and number, which cannot be attributed to the mere motion of the image on the retinae. All these are signs that the visual system engages active processes to predict and counteract the consequences of saccades. We propose that a key mechanism is the reorganization of spatiotemporal visual fields, which transiently increases the temporal and spatial uncertainty of visual representations just before and during saccades. On the one hand, this accounts for the spatiotemporal distortions of visual perception; on the other, it implements a mechanism for fusing pre- and postsaccadic stimuli. This, together with the active suppression of motion signals, ensures the stability and continuity of our visual experience.

Cicchini, G. M., Mikellidou, K. & Burr, D. (2018). The functional role of serial dependence. Proceedings of the Royal Society of London B.

The world tends to be stable from moment to moment, leading to strong serial correlations in natural scenes. As similar stimuli usually require similar behavioral responses, it is highly likely that the brain has developed strategies to leverage these regularities. A good deal of recent psychophysical evidence shows that the brain is sensitive to serial correlations, causing strong drifts in observer responses towards previously seen stimuli. However, it is still not clear whether this tendency leads to a functional advantage. Here we test a formal model of optimal serial dependence and show that, as predicted, serial dependence in an orientation reproduction task depends on current stimulus reliability, with less precise stimuli, such as low-spatial-frequency oblique Gabors, exhibiting the strongest effects. We also show that serial dependence depends on the similarity between two successive stimuli, again consistent with the behavior of an ideal observer aiming to minimize reproduction errors. Lastly, we show that serial dependence leads to faster response times, indicating that the benefits of serial integration go beyond reducing reproduction error. Overall, our data show that serial dependence plays a beneficial role at various levels of perception, consistent with the idea that the brain exploits the temporal redundancy of the visual scene as an optimization strategy.
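The ideal-observer logic behind reliability-dependent serial dependence can be sketched in a few lines. This is an illustrative reconstruction, not the paper's fitted model: the noise values and the drift term below are invented. Each percept is a reliability-weighted average of the current measurement and the previous percept, with the prior variance inflated by the expected change in the scene, so the bias toward the preceding stimulus grows as the current stimulus becomes less reliable:

```python
def serial_estimate(prev, now, sigma_prev, sigma_now, sigma_walk=5.0):
    """Kalman-style one-step update (illustrative parameters).

    The previous percept acts as a prior whose variance is inflated by
    sigma_walk, the expected trial-to-trial change in the world; the
    current measurement is weighted by its relative reliability."""
    var_prior = sigma_prev**2 + sigma_walk**2
    w_now = var_prior / (var_prior + sigma_now**2)
    return w_now * now + (1 - w_now) * prev

# Bias toward the previous stimulus (at 0 deg) grows as the current
# stimulus (at 20 deg) becomes noisier, e.g. a low-frequency oblique Gabor
bias_reliable = 20 - serial_estimate(0, 20, sigma_prev=4, sigma_now=4)
bias_noisy = 20 - serial_estimate(0, 20, sigma_prev=4, sigma_now=12)
```

With these assumed numbers the bias roughly triples when the current measurement noise rises from 4 to 12 units, capturing the qualitative prediction tested in the paper: less precise stimuli show the strongest serial dependence.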

It has been suggested that a core deficit of the “number sense” may underlie dyscalculia. We test this idea by measuring perceptual adaptation and discrimination thresholds for numerosity and object size in a group of dyscalculic and typical preadolescents (N=71, mean age 12). We confirmed that numerosity discrimination thresholds are higher in developmental dyscalculia, while size thresholds are not affected. However, dyscalculics adapted to numerosity in a similar way to typicals. This suggests that although numerosity thresholds are selectively higher in dyscalculia, the mechanisms for perceiving numerosity are otherwise similar: dyscalculics may have a similar, but perhaps noisier, number sense.

Small quantities of visual objects can be rapidly estimated without error, a phenomenon known as subitizing. Larger quantities can also be rapidly estimated, but with error, and the error rate predicts math abilities. This study addressed two issues: (a) whether subitizing generalizes over modalities and stimulus formats and (b) whether subitizing correlates with math abilities. We measured subitizing limits in primary school children and adults for visual and auditory stimuli presented either sequentially (sequences of flashes or sounds) or simultaneously (visual presentations, dot arrays). The results show that (a) subitizing limits for adults were one item larger than those for primary school children across all conditions; (b) subitizing for simultaneous visual stimuli (dots) was better than that for sequential stimuli; (c) subitizing limits for dots do not correlate with subitizing limits for either flashes or sounds; (d) subitizing of sequences of flashes and subitizing of sequences of sounds are strongly correlated with each other in children; and (e) regardless of stimuli sensory modality and format, subitizing limits do not correlate with mental calculation or digit magnitude knowledge proficiency. These results suggest that although children can subitize sequential numerosity, simultaneous and temporal subitizing may be subserved by separate systems. Furthermore, subitizing does not seem to be related to numerical abilities.

Continuous psychophysics is a newly developed technique that allows rapid estimation of visual thresholds by asking subjects to track a moving object, then deriving the integration window underlying tracking behavior (Bonnen, Burge, Yates, Pillow, & Cormack, 2015). Leveraging the continuous flow of stimuli and responses, continuous psychophysics allows estimation of psychophysical thresholds in as little as 1 min. To date this technique has been applied only to tracking visual objects, where it has been used to measure localization thresholds. Here we adapt the technique to visual motion discrimination, by displaying a drifting grating that changes direction on a binary random walk and asking participants to continuously report drift direction by alternate key presses. This technique replicates and confirms well-known findings on the motion-perception system. It also proves particularly valuable in demonstrating induced motion, reinforcing evidence for the existence of antagonistic surround fields. At low contrasts, the surround summates with the center rather than opposing it, again consistent with existing evidence from classical techniques. The user-friendliness and efficiency of the method may lend it to clinical and developmental work.
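The core analysis, recovering the integration window by cross-correlating the random-walk stimulus with the continuous response, can be sketched with synthetic data. The tracker model and all parameters below are hypothetical, not those of the study: the stimulus direction flips on a binary random walk, the simulated observer integrates it over a short exponential window plus response noise, and the stimulus-response cross-correlogram recovers that window:

```python
# Illustrative sketch of the continuous-psychophysics analysis
# (invented observer model and parameters)
import numpy as np

rng = np.random.default_rng(1)
n = 2000
direction = rng.choice([-1.0, 1.0], n)       # binary random walk of drift direction

# Hypothetical observer: exponentially weighted integration of recent
# stimulus direction, plus independent response noise
kernel = np.exp(-np.arange(12) / 4.0)        # assumed integration window
response = np.convolve(direction, kernel)[:n] + rng.normal(0, 1, n)

# Stimulus-response cross-correlogram: because the random-walk steps are
# uncorrelated over time, the correlogram directly recovers the window
lags = np.arange(20)
ccg = np.array([np.mean(direction[: n - l] * response[l:]) for l in lags])
```

The width of the recovered correlogram plays the role of the threshold-related quantity in the real method: noisier (lower-contrast) stimuli produce broader, shallower windows.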

It has been recently proposed that space, time, and number might share a common representation in the brain. Evidence supporting this idea comes from adaptation studies demonstrating that prolonged exposure to a given stimulus feature distorts the perception of different characteristics. For example, visual motion adaptation affects both perceived position and duration of subsequent stimuli presented in the adapted location. Here, we tested whether motion adaptation also affects perceived numerosity, by testing the effect of adaptation to translating or rotating stimuli moving either at high (20 Hz) or low (5 Hz) speed. Adaptation to fast translational motion yielded a robust reduction in the apparent numerosity of the adapted stimulus (~25%) while adaptation to slow translational or circular motion (either 20 Hz or 5 Hz) yielded a weaker but still significant compression. Control experiments suggested that none of these results could be accounted for in terms of stimulus masking. Taken together, our results are consistent with the extant literature supporting the idea of a generalized magnitude system underlying the representation of numerosity, space and time via common metrics. However, as changes in perceived numerosity co-varied with both adapting motion profile and speed, our evidence also suggests complex and asymmetric interactions between different magnitude representations.

Sensory deprivation during the post-natal 'critical period' leads to structural reorganization of the developing visual cortex. In adulthood, the visual cortex retains some flexibility and adapts to sensory deprivation. Here we show that short-term (2 h) monocular deprivation in adult humans boosts the BOLD response to the deprived eye, changing ocular dominance of V1 vertices, consistent with homeostatic plasticity. The boost is strongest in V1, present in V2, V3 and V4, but absent in V3a and hMT+. Assessment of spatial frequency tuning in V1 with a population receptive-field technique shows that deprivation primarily boosts high spatial frequencies, consistent with a primary involvement of the parvocellular pathway. Crucially, the V1 deprivation effect correlates across participants with the perceptual increase in deprived-eye dominance assessed with binocular rivalry, suggesting a common origin. Our results demonstrate that the visual cortex, particularly the ventral pathway, retains a high potential for homeostatic plasticity in the human adult.

Objectives: The aim of this study was to investigate whether short-term inverse occlusion, combined with moderate physical exercise, could promote the recovery of visual acuity and stereopsis in a group of adult anisometropic amblyopes.

Methods: Ten adult anisometropic patients underwent six brief (2 h) training sessions over a period of 4 weeks. Each training session consisted of the occlusion of the amblyopic eye combined with physical exercise (intermittent cycling on a stationary bike). Visual acuity (measured with ETDRS charts), stereoacuity (measured with the TNO test), and sensory eye dominance (measured with binocular rivalry) were tested before and after each training session, as well as in follow-up visits performed 1 month, 3 months, and 1 year after the end of the training.

Results: After six brief (2 h) training sessions, visual acuity improved in all 10 patients (0.15 ± 0.02 LogMar), and six of them also recovered stereopsis. The improvement was preserved for up to 1 year after training. A pilot experiment suggested that physical activity might play an important role for the recovery of visual acuity and stereopsis.

Temporal processing is fundamental for accurate synchronization between motor behaviour and sensory processing. Here, we investigate how motor timing during rhythmic tapping influences the perception of visual time. Participants listened to a sequence of four auditory tones played at 1 Hz and continued the sequence (without auditory stimulation) by tapping four times with their finger. During finger tapping, they were presented with an empty visual interval and were asked to judge its length relative to a previously internalized interval of 150 ms. The visual temporal estimates showed non-monotonic changes locked to the finger tapping: perceived time was maximally expanded midway between two consecutive finger taps, and maximally compressed near tap onsets. Importantly, the temporal dynamics of the perceptual time distortion scaled linearly with the timing of the motor tapping, with maximal expansion always anchored to the centre of the inter-tap interval. These results reveal an intrinsic coupling between distortion of perceptual time and production of self-timed motor rhythms, suggesting the existence of a timing mechanism that keeps perception and action accurately synchronized.

2017

It is now clear that most animals, including humans, possess an ability to rapidly estimate number. Some have questioned whether this ability arises from dedicated numerosity mechanisms, or is derived indirectly from judgements of density or other attributes. We describe a series of psychophysical experiments, largely using adaptation techniques, which clearly demonstrate the existence of a number sense in humans. The number sense is truly general, extending over space, time and sensory modality, and is closely linked with action. We further show that when multiple cues are present, numerosity emerges as the natural dimension for discrimination. However, when element density increases past a certain level, the elements become too crowded to parse, and the scene is perceived as a texture rather than an array of elements. The two regimes are psychophysically discriminable in that they follow distinct psychophysical laws, and show different dependencies on eccentricity, luminance levels and effects of perceptual grouping. The distinction is important, as the ability to discriminate numerosity, but not texture, correlates with formal maths skills. This article is part of the discussion meeting issue 'The origins of numerical abilities'.

How is numerosity encoded by the visual system: directly, or derived indirectly from texture density? We recently suggested that the numerosity of sparse patterns is encoded directly by dedicated mechanisms (which have been described as the “Approximate Number System”, ANS). However, at high dot densities, where items become “crowded” and difficult to segregate, “texture-density” mechanisms come into play. Here we tested the importance of item segmentation for numerosity and density perception at various stimulus densities, by measuring the effect of connecting visual objects with thin lines. The results confirmed many previous studies showing that connecting items robustly reduces the apparent numerosity of patterns of moderate density. We further showed that the apparent density of moderate-density patterns is also reduced by connecting the dots. Crucially, we found that both these effects are strongly reduced at higher numerosities. Indeed, for density judgments the effect reverses, so connecting dots in dense patterns increases the apparent density (as expected from the physical characteristics). The results provide clear support for the three-regime framework of number perception, and suggest that for moderately sparse stimuli, numerosity, but not texture density, is perceived directly.

In adults, partial damage to V1 or the optic radiations abolishes perception in the corresponding part of the visual field, causing a scotoma. However, it is widely accepted that the developing cortex has a superior capacity to reorganize following an early lesion, supporting adaptive plasticity. Here we report a single patient case (G.S.) with near-normal central-field vision despite a massive unilateral lesion to the optic radiations acquired early in life. The patient underwent surgical removal of an atypical choroid plexus papilloma of the right lateral ventricle (parieto-temporal-occipital) at four months of age, which presumably altered the visual pathways during in utero development. Both the tumor and the surgery severely compromised the optic radiations. Residual vision of G.S. was tested psychophysically when the patient was 7 years old. We found close-to-normal visual acuity and contrast sensitivity within the central 25 degrees, and great impairment of form and contrast vision in the far periphery (40-50 degrees) of the left visual hemifield. BOLD responses to full-field luminance flicker were recorded from the primary visual cortex (V1) and from a region in the residual temporal-occipital cortex, presumably corresponding to the middle temporal complex (MT+), of the lesioned (right) hemisphere. A population receptive field analysis of the BOLD responses to contrast-modulated stimuli revealed a retinotopic organization for the MT+ region but not for the calcarine regions. Interestingly, consistent islands of ipsilateral activity were found in MT+ and in the parieto-occipital sulcus (POS) of the intact hemisphere. Probabilistic tractography revealed that the optic radiations between LGN and V1 were very sparse in the lesioned hemisphere, consistent with the post-surgery cerebral resection, while normal in the intact hemisphere.
On the other hand, strong structural connections between MT+ and LGN were found in the lesioned hemisphere, while the equivalent tract in the spared hemisphere showed minimal structural connectivity. These results suggest that during development of the pathological brain, abnormal thalamic projections can lead to functional cortical changes, which may mediate functional recovery of vision.

Many behavioral measures of visual perception fluctuate continually in a rhythmic manner, reflecting the influence of endogenous brain oscillations, particularly theta (approximately 4-7 Hz) and alpha (approximately 8-12 Hz) rhythms [1-3]. However, it is unclear whether these oscillations are unique to vision or whether auditory performance also oscillates [4, 5]. Several studies report no oscillatory modulation in audition [6, 7], while those with positive findings suffer from confounds relating to neural entrainment [8-10]. Here, we used a bilateral pitch-identification task to investigate rhythmic fluctuations in auditory performance separately for the two ears and applied signal detection theory (SDT) to test for oscillations of both sensitivity and criterion (changes in decision boundary) [11, 12]. Using uncorrelated dichotic white noise to induce a phase reset of oscillations, we demonstrate that, as with vision, both auditory sensitivity and criterion showed strong oscillations over time, at different frequencies: approximately 6 Hz (theta range) for sensitivity and approximately 8 Hz (low alpha range) for criterion, implying distinct underlying sampling mechanisms [13]. The modulation in sensitivity in the left and right ears was in antiphase, suggestive of attention-like mechanisms sampling alternately from the two ears.
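The two signal-detection quantities tracked over time here, sensitivity (d′) and criterion (c), are computed from hit and false-alarm rates via inverse-normal transforms. A minimal stdlib sketch follows; the trial counts are invented, and the add-0.5 log-linear correction is one common convention for avoiding infinite z-scores, not necessarily the one used in the study:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from trial counts.

    A log-linear correction (0.5 added to each cell) keeps the rates
    strictly between 0 and 1, so the z-transform stays finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf               # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)     # distance between distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # decision-boundary shift
    return d_prime, criterion

# Invented counts for one post-reset time bin: ~80% hits, ~20% false alarms
d, c = sdt_measures(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

Computing d′ and c in short time bins locked to the noise-induced phase reset, then fitting sinusoids to each series, is the sense in which sensitivity and criterion can be shown to oscillate at different frequencies.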

Sensory information is inherently ambiguous. The brain disambiguates this information by anticipating or predicting the sensory environment based on prior knowledge. Pellicano and Burr (2012) proposed that this process may be atypical in autism and that internal assumptions, or "priors," may be underweighted or less used than in typical individuals. A robust internal assumption used by adults is the "light-from-above" prior, a bias to interpret ambiguous shading patterns as if formed by a light source located above (and slightly to the left) of the scene. We investigated whether autistic children (n=18) use this prior to the same degree as typical children of similar age and intellectual ability (n=18). Children were asked to judge the shape (concave or convex) of a shaded hexagon stimulus presented in 24 rotations. We estimated the relation between the proportion of convex judgments and stimulus orientation for each child and calculated the light source location most consistent with those judgments. Children behaved similarly to adults in this task, preferring to assume that the light source was from above left, when other interpretations were compatible with the shading evidence. Autistic and typical children used prior assumptions to the same extent to make sense of shading patterns. Future research should examine whether this prior is as adaptable (i.e., modifiable with training) in autistic children as it is in typical adults.

We investigated the ability of children with ASD to discriminate a small cylinder from a large cube by observing a point-light movie of an actor grasping the object, either from an allocentric or egocentric viewpoint (observing action of others or self). Compared with typically developing controls, high functioning autistic children showed a strong selective impairment in this task, but only with the allocentric viewpoint, where thresholds were twice as high: egocentric thresholds were similar to age- and ability-matched controls. The magnitude of the impairment correlated strongly with the degree of symptomology (R2 = 0.5). The results suggest that children with ASD might be impaired in their ability to predict and infer the consequences of others' movements, which could be related to the social-communicative deficits often reported in autism.

Faivre, N., Arzi, A., Lunghi, C. & Salomon, R. (2017). Consciousness is more than meets the eye: a call for a multisensory study of subjective experience. Neuroscience of Consciousness, 1-8.

Over the last 30 years, our understanding of the neurocognitive bases of consciousness has improved, mostly through studies employing vision. While studying consciousness in the visual modality presents clear advantages, we believe that a comprehensive scientific account of subjective experience must not neglect other exteroceptive and interoceptive signals, nor the role of multisensory interactions for perceptual and self-consciousness. Here, we briefly review four distinct lines of work which converge in documenting how multisensory signals are processed across several levels and contents of consciousness: how multisensory interactions occur when consciousness is prevented by perceptual manipulations (i.e. subliminal stimuli) or by low-vigilance states (i.e. sleep, anesthesia), how interactions between exteroceptive and interoceptive signals give rise to bodily self-consciousness, and how multisensory signals are combined to form metacognitive judgments. By describing the interactions between multisensory signals at the perceptual, cognitive, and metacognitive levels, we illustrate how stepping out of the visual comfort zone may help in deriving refined accounts of consciousness, and may allow the idiosyncrasies of each sense to be cancelled out, delineating the supramodal mechanisms involved in consciousness.

Ensemble perception, the ability to automatically assess summary statistics of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an ‘ensemble’ emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

The pulvinar is the largest of the thalamic nuclei in the primates, including humans. In the primates, two of the three major subdivisions, the lateral and inferior pulvinar, are heavily interconnected with a significant proportion of the visual association cortex. However, while we now have a better understanding of the bidirectional connectivity of these pulvinar subdivisions, its functions remain somewhat of an enigma. Over the past few years, researchers have started to tackle this problem by addressing it from the angle of development and visual cortical lesions. In this review, we will draw together literature from the realms of studies in nonhuman primates and humans that have informed much of the current understanding. This literature has been responsible for changing many long-held opinions on the development of the visual cortex and how the pulvinar interacts dynamically with cortices during early life to ensure rapid development and functional capacity. Furthermore, there is evidence to suggest involvement of the pulvinar following lesions of the primary visual cortex (V1) and geniculostriate pathway sustained in early life, which have far better functional outcomes than identical lesions sustained in adulthood. Shedding new light on the pulvinar and its role following lesions of the visual brain has implications for our understanding of visual brain disorders and the potential for recovery.

Action and perception are intimately coupled systems; one clear case is saccadic suppression, the reduced visibility around the time of saccades, important in mediating visual stability; another is the oscillatory modulation of visibility synchronized with hand action. To suppress effectively the spurious retinal motion generated by the eye movements, it is crucial that saccadic suppression and saccadic onset be temporally synchronous. However, the mechanisms that determine this temporal synchrony are unknown. We investigated the effect of saccades on contrast discrimination sensitivity over a long period stretching over more than 1 second before and after saccade execution. Human subjects made horizontal saccades at will to two stationary saccadic targets separated by 20 degrees. At a random interval, a brief Gabor patch was displayed between the two fixations in either the upper or lower visual field, and the subject had to detect its location. Strong saccadic suppression was measured between -50 and 50 ms from saccadic onset. However, the suppression was systematically embedded in a trough of oscillations of contrast sensitivity that fluctuated rhythmically in the delta range (at about 3 Hz), commencing about one second before saccade execution and lasting for up to one second after the saccade. The results show that saccadic preparation and visual sensitivity oscillations are coupled, and the coupling might be instrumental in temporally aligning the initiation of the saccade with the visual suppression.

Significance Statement: Saccades are known to produce a suppression of contrast sensitivity at saccadic onset and an enhancement after saccadic offset. Here we show that these dynamics are systematically embedded in visual oscillations of contrast sensitivity that fluctuate rhythmically in the delta range (at about 3 Hz), commencing about one second before saccade execution and lasting for up to one second after the saccade.
The results show that saccadic preparation and visual sensitivity oscillations are coupled, and that this coupling might be instrumental in temporally aligning the initiation of the saccade with the visual suppression.

Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of approximately 42 degrees between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.

To efficiently interact with the external environment, our nervous system combines information arising from different sensory modalities. Recent evidence suggests that cross-modal interactions can be automatic and even unconscious, reflecting the ecological relevance of cross-modal processing. Here, we use continuous flash suppression (CFS) to directly investigate whether haptic signals can interact with visual signals outside of visual awareness. We measured suppression durations of visual gratings rendered invisible by CFS either during visual stimulation alone or during visuo-haptic stimulation. We found that active exploration of a haptic grating congruent in orientation with the suppressed visual grating reduced suppression durations both compared with visual-only stimulation and to incongruent visuo-haptic stimulation. We also found that the facilitatory effect of touch on visual suppression disappeared when the visual and haptic gratings were mismatched in either spatial frequency or orientation. Together, these results demonstrate that congruent touch can accelerate the rise to consciousness of a suppressed visual stimulus and that this unconscious cross-modal interaction depends on visuo-haptic congruency. Furthermore, since CFS suppression is thought to occur early in visual cortical processing, our data reinforce the evidence suggesting that visuo-haptic interactions can occur at the earliest stages of cortical processing.

Development of the motor system lags behind that of the visual system and might delay some visual properties more closely linked to action. We measured the developmental trajectory of the discrimination of object size from observation of the biological motion of a grasping action in egocentric and allocentric viewpoints (observing action of others or self), in children and adolescents from 5 to 18 years of age. Children of 5-7 years of age performed the task at chance, indicating a delayed ability to understand the goal of the action. We found a progressive improvement in the ability of discrimination from 9 to 18 years, which parallels the development of fine motor control. Only after 9 years of age did we observe an advantage for the egocentric view, as previously reported for adults. Given that visual and haptic sensitivity of size discrimination, as well as biological motion, are mature in early adolescence, we interpret our results as reflecting immaturity of the influence of the motor system on visual perception.

When different images are presented to the eyes, the brain is faced with ambiguity, causing perceptual bistability: visual perception continuously alternates between the monocular images, a phenomenon called binocular rivalry. Many models of rivalry suggest that its temporal dynamics depend on mutual inhibition among neurons representing competing images. These models predict that rivalry should be different in autism, which has been proposed to present an atypical ratio of excitation and inhibition [the E/I imbalance hypothesis; Rubenstein & Merzenich, 2003]. In line with this prediction, some recent studies have provided evidence for atypical binocular rivalry dynamics in autistic adults. In this study, we examined if these findings generalize to autistic children. We developed a child-friendly binocular rivalry paradigm, which included two types of stimuli, low- and high-complexity, and compared rivalry dynamics in groups of autistic and age- and intellectual ability-matched typical children. Unexpectedly, the two groups of children presented the same number of perceptual transitions and the same mean phase durations (times perceiving one of the two stimuli). Yet autistic children reported mixed percepts for a shorter proportion of time (a difference which was in the opposite direction to previous adult studies), while elevated autistic symptomatology was associated with shorter mixed perception periods. Rivalry in the two groups was affected similarly by stimulus type, and consistent with previous findings. Our results suggest that rivalry dynamics are differentially affected in adults and developing autistic children and could be accounted for by hierarchical models of binocular rivalry, including both inhibition and top-down influences.

This study investigated whether functional transcranial Doppler ultrasound (fTCD) is a suitable tool for studying hemispheric lateralization of language in patients with pre-perinatal left hemisphere (LH) lesions and right hemiparesis. Eighteen left-hemisphere-damaged children and young adults and 18 healthy controls were assessed by fTCD and fMRI to evaluate hemispheric activation during two language tasks: a fTCD animation description task and a fMRI covert rhyme generation task. Lateralization indices (LIs), measured by the two methods, differed significantly between the two groups, indicating clear LH dominance in healthy participants and prevalent activation of the right hemisphere in more than 80% of brain-damaged patients. Distribution of participants in terms of left, right, and bilateral lateralization was highly concordant between fTCD and fMRI values. Moreover, right hemisphere language dominance in patients with left hemispheric lesions was significantly associated with severity of cortical and subcortical damage in LH. This study suggests that fTCD is an easily applicable tool that might be a valid alternative to fMRI for large-scale studies of patients with congenital brain lesions.

Cognitive attention and perceptual saliency jointly govern our interaction with the environment. Yet, we still lack a universally accepted account of the interplay between attention and luminance contrast - a fundamental dimension of saliency. We measured the attentional modulation of V4 neurons' Contrast Response Functions (CRFs) in awake, behaving macaque monkeys and applied a new approach which emphasizes the temporal dynamics of cell responses. We found that attention modulates CRFs via different gain mechanisms during subsequent epochs of visually driven activity: an early contrast-gain - strongly dependent on pre-stimulus activity changes (baseline shift), a time-limited stimulus-dependent multiplicative modulation, reaching its maximal expression around 150 ms after stimulus onset, and a late resurgence of contrast-gain modulation. Attention produced comparable time-dependent attentional gain changes on cells heterogeneously coding contrast, supporting the notion that the same circuits mediate attention mechanisms in V4 regardless of the form of contrast selectivity expressed by the given neuron. Surprisingly, attention was also sometimes capable of inducing radical transformations in the shape of CRFs. These findings offer important insights into the mechanisms that underlie contrast coding and attention in primate visual cortex and a new perspective on their interplay, one in which time becomes a fundamental factor.

In a recent study, Ebitz and Moore described how subthreshold electrical microstimulation of the macaque frontal eye fields (FEF) modulates the pupillary light reflex. This elegant study suggests that the influence of the FEF and prefrontal cortex on attentional modulation of cortical visual processing extends to the subcortical circuit that mediates a very basic reflex, the pupillary light reflex.

Much evidence points to an interaction between vision and audition at early cortical sites. However, the functional role of these interactions is not yet understood. Here we show an early response of the occipital cortex to sound that is strongly linked to the spatial localization task performed by the observer. The early occipital response to a sound, usually absent, increased by more than 10-fold when presented during a space localization task, but not during a time localization task. The response amplification was not only specific to the task, but surprisingly also to the position of the stimulus in the two hemifields. We suggest that early occipital processing of sound is linked to the construction of an audio spatial map that may utilize the visual map of the occipital cortex.

Covertly shifting attention to a brighter or darker image (without moving one's eyes) is sufficient to evoke pupillary constriction or dilation, respectively. One possibility is that this attentional modulation involves the pupillary light response pathway, which pivots around the olivary pretectal nucleus. We investigate this possibility by studying patients with Parinaud's syndrome, where the normal pupillary light response is strongly impaired due to lesions in the pretectal area. Four patients and nine control participants covertly attended (while maintaining fixation at the center of a monitor screen) to one of two disks located in the left and right periphery: one brighter, the other darker than the background. Patients and control subjects behaved alike, showing smaller pupils when attending to the brighter stimulus (despite no eye movements); consistent results were obtained with a dynamic version of the stimulus. We interpret this as proof of principle that attention to bright or dark stimuli can dynamically modulate pupil size in patients with Parinaud's syndrome, suggesting that attention acts independently of the pretectal circuit for the pupillary light response and indicating that several components of the pupillary response can be isolated - including one related to the focus of covert attention.

Area prostriata is a cortical area at the fundus of the calcarine sulcus, described anatomically in humans [1–5] and other primates [6–9]. It is lightly myelinated and lacks the clearly defined six-layer structure evident throughout the cerebral cortex, with a thinner layer 4 and thicker layer 2 [10], characteristic of limbic cortex [11]. In the marmoset and rhesus monkey, area prostriata has cortical connections with MT+ [12], the cingulate motor cortex [8], the auditory cortex [13], the orbitofrontal cortex, and the frontal polar cortices [14]. Here we use functional magnetic resonance together with a wide-field projection system to study its functional properties in humans. With population receptive field mapping [15], we show that area prostriata has a complete representation of the visual field, clearly distinct from the adjacent area V1. As in the marmoset, the caudal-dorsal border of human prostriata—abutting V1—represents the far peripheral visual field, with eccentricities decreasing toward its rostral boundary. Area prostriata responds strongly to very fast motion, greater than 500°/s. The functional properties of area prostriata suggest that it may serve to alert the brain quickly to fast visual events, particularly in the peripheral visual field.

Alpha oscillations are particularly important in determining our percepts and have been implicated in fundamental brain functions. Oscillatory activity can be spontaneous or stimulus-related. Furthermore, stimulus-related responses can be phase- or non-phase-locked to the stimulus. Non-phase-locked (induced) activity can be identified as the average amplitude changes in response to a stimulation, while phase-locked activity can be measured via reverse correlation techniques (echo function). However, the mechanisms and the functional roles of these oscillations are far from clear. Here, we investigated the effect of ambient luminance changes, known to dramatically modulate neural oscillations, on spontaneous and stimulus-related alpha. We investigated the effect of ambient luminance on EEG alpha during spontaneous human brain activity at rest (experiment 1) and during visual stimulation (experiment 2). Results show that spontaneous alpha amplitude increased by decreasing ambient-luminance, while alpha frequency remained unaffected. In the second experiment, we found that under low-luminance viewing the stimulus-related alpha amplitude was lower, and its frequency was slightly faster. These effects were evident in the phase-locked part of the alpha response (echo function), but weaker or absent in the induced (non-phase-locked) alpha responses. Finally, we explored the possible behavioral correlates of these modulations in a monocular critical flicker frequency task (experiment 3), finding that dark adaptation in the left eye decreased the temporal threshold of the right eye. Overall, we found that ambient luminance changes impact differently on spontaneous and stimulus-related alpha expression. We suggest that stimulus-related alpha activity is crucial in determining human temporal segmentation abilities.

There is good evidence that biological perceptual systems exploit the temporal continuity in the world: When asked to reproduce or rate sequentially presented stimuli (varying in almost any dimension), subjects typically err toward the previous stimulus, exhibiting so-called "serial dependence." At this stage it is unclear whether the serial dependence results from averaging within the perceptual system, or at later stages. Here we demonstrate that strong serial dependencies occur within both perceptual and decision processes, with very little contribution from the response. Using a technique to isolate pure perceptual effects (Fritsche, Mostert, & de Lange, 2017), we show strong serial dependence in orientation judgements, over the range of orientations where theoretical considerations predict the effects to be maximal. In a second experiment we dissociate responses from stimuli to show that serial dependence occurs only between stimuli, not responses. The results show that serial dependence is important for perception, exploiting temporal redundancies to enhance perceptual efficiency.

Humans and other animals are able to make rough estimations of quantities using what has been termed the approximate number system (ANS). Much evidence suggests that sensitivity to numerosity correlates with symbolic math capacity, leading to the suggestion that the ANS may serve as a start-up tool to develop symbolic math. Many experiments have demonstrated that numerosity perception transcends the sensory modality of stimuli and their presentation format (sequential or simultaneous), but it remains an open question whether the relationship between numerosity and math generalizes over stimulus format and modality. Here we measured precision for estimating the numerosity of clouds of dots and sequences of flashes or clicks, as well as for paired comparisons of the numerosity of clouds of dots. Our results show that in children, formal math abilities correlate positively with sensitivity for estimation and paired-comparisons of the numerosity of visual arrays of dots. However, precision of numerosity estimation for sequences of flashes or sounds did not correlate with math, although sensitivities in all estimations tasks (for sequential or simultaneous stimuli) were strongly correlated with each other. In adults, we found no significant correlations between math scores and sensitivity to any of the psychophysical tasks. Taken together these results support the existence of a generalized number sense, and go on to demonstrate an intrinsic link between mathematics and perception of spatial, but not temporal numerosity.

Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependencies, leading to positive sequential effects; and adaptation or habituation, which leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate: while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing use of past information, either by integration or differentiation, depending on the permanence of that attribute.

Continuous flash suppression (CFS) is a psychophysical technique where a rapidly changing Mondrian pattern viewed by one eye suppresses the target in the other eye for several seconds. Despite the widespread use of CFS to study unconscious visual processes, the temporal tuning of CFS suppression is currently unknown. In the present study we used spatiotemporally filtered dynamic noise as masking stimuli to probe the temporal characteristics of CFS. Surprisingly, we find that suppression in CFS peaks very prominently at approximately 1 Hz, well below the rates typically used in CFS studies (10 Hz or more). As well as a strong bias to low temporal frequencies, CFS suppression is greater for high spatial frequencies and increases with increasing masker contrast, indicating involvement of parvocellular/ventral mechanisms in the suppression process. These results are reminiscent of binocular rivalry, and unify two phenomena previously thought to require different explanations.

It is known that, after a prolonged period of visual deprivation, the adult visual cortex can be recruited for nonvisual processing, reflecting cross-modal plasticity. Here, we investigated whether cross-modal plasticity can occur at short timescales in the typical adult brain by comparing the interaction between vision and touch during binocular rivalry before and after a brief period of monocular deprivation, which strongly alters ocular balance favoring the deprived eye. While viewing dichoptically two gratings of orthogonal orientation, participants were asked to actively explore a haptic grating congruent in orientation to one of the two rivalrous stimuli. We repeated this procedure before and after 150 min of monocular deprivation. We first confirmed that haptic stimulation interacted with vision during rivalry promoting dominance of the congruent visuo-haptic stimulus and that monocular deprivation increased the deprived eye and decreased the nondeprived eye dominance. Interestingly, after deprivation, we found that the effect of touch did not change for the nondeprived eye, whereas it disappeared for the deprived eye, which was potentiated after deprivation. The absence of visuo-haptic interaction for the deprived eye lasted for over 1 hr and was not attributable to a masking induced by the stronger response of the deprived eye as confirmed by a control experiment. Taken together, our results demonstrate that the adult human visual cortex retains a high degree of cross-modal plasticity, which can occur even at very short timescales.

Retinal prosthesis technologies require that the visual system downstream of the retinal circuitry be capable of transmitting and elaborating visual signals. We studied the capability of plastic remodeling in late blind subjects implanted with the Argus II Retinal Prosthesis with psychophysics and functional MRI (fMRI). After surgery, six out of seven retinitis pigmentosa (RP) blind subjects were able to detect high-contrast stimuli using the prosthetic implant. However, direction discrimination to contrast modulated stimuli remained at chance level in all of them. No subject showed any improvement of contrast sensitivity in either eye when not using the Argus II. Before the implant, the Blood Oxygenation Level Dependent (BOLD) activity in V1 and the lateral geniculate nucleus (LGN) was very weak or absent. Surprisingly, after prolonged use of Argus II, BOLD responses to visual input were enhanced. This is, to our knowledge, the first study tracking the neural changes of visual areas in patients after retinal implant, revealing a capacity to respond to restored visual input even after years of deprivation.

We use the simple but prominent Helmholtz squares illusion, in which a vertically striped square appears wider than a horizontally striped square of identical physical dimensions, to determine whether functional magnetic resonance imaging (fMRI) BOLD responses in V1 underpin illusions of size. We report that these simple stimuli, which differ in only one parameter, orientation, to which V1 neurons are highly selective, elicited activity in V1 that followed their physical, not perceived, size. To further probe the role of V1 in the illusion and investigate plausible extrastriate visual areas responsible for eliciting the Helmholtz squares illusion, we performed a follow-up transcranial magnetic stimulation (TMS) experiment in which we compared perceptual judgments about the aspect ratio of perceptually identical Helmholtz squares when no TMS was applied against selective stimulation of V1, LO1, or LO2. In agreement with the fMRI results, we report that TMS of area V1 does not compromise the strength of the illusion. Only stimulation of area LO1, and not LO2, significantly compromised the strength of the illusion, consistent with previous research that LO1 plays a role in the processing of orientation information. These results demonstrate the involvement of a specific extrastriate area in an illusory percept of size.

Brain connectivity is associated with behavioral states (e.g. wake, sleep) and modified by physical activity, although, to date, it is not clear which exercise-related components (e.g. hypothalamus-pituitary-adrenal axis hormones, cytokines) are involved. In this pilot study, we used extreme exercise (UltraTriathlon) as a model to investigate physical-activity-related changes of brain connectivity. We studied post-race brain synchronization during wakefulness and sleep as well as possible correlations between exercise-related cytokines/hormones and synchronization features. For wakefulness, global synchronization was evaluated by estimating from fMRI data (12 athletes) the brain global connectivity (GC). GC increased in several brain regions, mainly related to sensory-motor activity, emotional modulation and response to stress, which may foster rapid exchange of information across regions, and reflect post-race internally-focused mental activity or disengagement from previous motor programs. No significant correlations between cytokines/hormones and GC were found. For sleep (8 athletes), synchronization was evaluated by estimating the local (cortical) and global (thalamocortical) EEG features associated with the phenomenon of Sleep Slow Oscillations (SSO) of NREM sleep. Results showed that: power of fast rhythms in the baseline preceding the SSO increased in midline and parietal regions; amplitude and duration of SSOs increased, mainly in posterior areas; sigma modulation in the SSO up state decreased. In the post race, IL-10 positively correlated with fast rhythms baseline, SSO rate and positive slope; IL-1ra and cortisol inversely correlated with SSO duration; TNF-alpha and C-reactive protein positively correlated with fast rhythm modulation in the SSO up state.
Sleep results suggest that: arousal during sleep, estimated by baseline fast rhythms, is increased; SSOs may be sustained by cortical excitability, linked to anti-inflammatory markers (IL-10); and thalamo-cortical entrainment (sigma modulation) is impaired in athletes with higher inflammatory markers.

Psychophysical studies have shown that numerosity is a sensory attribute susceptible to adaptation. Neuroimaging studies have reported that, at least for relatively low numbers, numerosity can be accurately discriminated in the intra-parietal sulcus. Here we developed a novel rapid adaptation paradigm in which adapting and test stimuli are separated by pauses sufficient to dissociate their BOLD activity. We used multivariate pattern recognition to classify brain activity evoked by non-symbolic numbers over a wide range (20-80), both before and after psychophysical adaptation to the highest numerosity. Adaptation caused underestimation of all lower numerosities, and slightly decreased the average BOLD responses in V1 and IPS. Using a support vector machine, we showed that the BOLD response of IPS, but not of V1, classified numerosity well, both when tested before and after adaptation. However, there was no transfer from training on pre-adaptation responses to testing on post-adaptation responses, and vice versa, indicating that adaptation changes the neuronal representation of numerosity. Interestingly, decoding was more accurate after adaptation, and the amount of improvement correlated with the amount of perceptual underestimation of numerosity across subjects. These results suggest that numerosity adaptation acts directly on IPS, rather than indirectly via analysis of other low-level stimulus parameters, and that adaptation improves the capacity to discriminate numerosity.
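The cross-condition decoding logic described above can be sketched in miniature. The following toy simulation is not the authors' analysis: a simple nearest-centroid classifier stands in for the SVM, voxel patterns are simulated rather than measured, and the "adaptation" is modeled as an arbitrary shift of the mean patterns. It only illustrates how within-condition decoding can succeed while cross-condition transfer fails when the underlying representation changes.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patterns(roll, n_trials=50, n_voxels=30, noise=1.0):
    # Simulated voxel patterns for two numerosity conditions; `roll`
    # shifts the mean pattern, mimicking an adaptation-induced change
    # in the neural representation (hypothetical toy model).
    X, y = [], []
    for label in (0, 1):
        mean = np.zeros(n_voxels)
        mean[label::2] = 1.0                 # condition-specific pattern
        mean = np.roll(mean, roll)           # representational shift
        X.append(mean + noise * rng.standard_normal((n_trials, n_voxels)))
        y.append(np.full(n_trials, label))
    return np.vstack(X), np.concatenate(y)

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.asarray(classes)[d.argmin(axis=0)]

# Train on pre-adaptation patterns; test within-condition and on
# "post-adaptation" patterns whose representation has shifted.
X_pre, y_pre = make_patterns(roll=0)
X_pre2, y_pre2 = make_patterns(roll=0)       # fresh pre-adaptation draw
X_post, y_post = make_patterns(roll=1)       # shifted representation

model = fit_centroids(X_pre, y_pre)
within = (predict(model, X_pre2) == y_pre2).mean()
transfer = (predict(model, X_post) == y_post).mean()
print(within, transfer)   # high within-condition accuracy, no transfer
```

In this caricature, decoding generalizes across fresh draws of the same representation but collapses when the mean patterns move, which is the signature the abstract describes for pre- versus post-adaptation IPS responses.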

The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. Here we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified within predictive coding, a framework rooted in Helmholtz's ‘perception as inference’.

Some people who are blind due to damage to their primary visual cortex, V1, can discriminate stimuli presented within their blind visual field. This residual function has recently been linked to a pathway that bypasses V1, connecting the thalamic lateral geniculate nucleus directly with the extrastriate cortical area MT.

The proposal that the processing of visual time might rely on a network of distributed mechanisms that are vision-specific and timescale-specific stands in contrast to the classical view of time perception as the product of a single supramodal clock. Evidence showing that some of these mechanisms have a sensory component that can be locally adapted is at odds with another traditional assumption, namely that time is completely divorced from space. Recent evidence suggests that multiple timing mechanisms exist across and within sensory modalities and that they operate in various neural regions. The current review summarizes this evidence and frames it into the broader scope of models for time perception in the visual domain.

Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication.

Humans share with many animals a number sense, the ability to estimate rapidly the approximate number of items in a scene. Recent work has shown that, like many other perceptual attributes, numerosity is susceptible to adaptation. It is not clear, however, whether adaptation works directly on mechanisms selective to numerosity, or via related mechanisms, such as those tuned to texture density. To disentangle this issue we measured adaptation of numerosity of 10 pairs of connected dots, as connecting dots makes them appear to be less numerous than unconnected dots. Adaptation to a 20-dot pattern (same number of dots as the test) caused robust reduction in apparent numerosity of the connected-dot pattern, but not of the unconnected dot-pattern. This suggests that adaptation to numerosity, at least for relatively sparse dot-patterns, occurs at neural levels encoding perceived numerosity, rather than at lower levels responding to the number of elements in the scene.

Adaptation to fast motion reduces the perceived duration of stimuli displayed at the same location as the adapting stimuli. Here we show that the adaptation-induced compression of time is specific for translational motion. Adaptation to complex motion, either circular or radial, did not affect perceived duration of subsequently viewed stimuli. Adaptation with multiple patches of translating motion caused compression of duration only when the motion of all patches was in the same direction. These results show that adaptation-induced compression of event-time occurs only for uni-directional translational motion, ruling out the possibility that the neural mechanisms of the adaptation occur at early levels of visual processing.

We measured pupillary constrictions in response to full-screen flashes of variable luminance, occurring either at the onset of a saccadic eye movement or well before/after it. A large fraction of perisaccadic flashes were undetectable to the subjects, consistent with saccadic suppression of visual sensitivity. Likewise, pupillary responses to perisaccadic flashes were strongly suppressed. However, the two phenomena appear to be dissociable. Across subjects and luminance levels of the flash stimulus, there were cases in which conscious perception of the flash was completely depleted yet the pupillary response was clearly present, as well as cases in which the opposite occurred. On one hand, the fact that pupillary light responses are subject to saccadic suppression reinforces evidence that this is not a simple reflex but depends on the integration of retinal illumination with complex "extraretinal" cues. On the other hand, the relative independence of pupillary and perceptual responses suggests that suppression acts separately on these systems, consistent with the idea of multiple visual pathways that are differentially affected by saccades.

The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.

PURPOSE. Recent studies on adults have shown that short-term monocular deprivation boosts the deprived-eye signal in binocular rivalry, reflecting homeostatic plasticity. Here we investigate whether homeostatic plasticity is also present during occlusion therapy for moderate amblyopia. METHODS. Binocular rivalry and visual acuity (using Snellen charts for children) were measured in 10 children (mean age 6.2 ± 1 years) with moderate anisometropic amblyopia before the beginning of treatment and at four intervals during occlusion therapy (2 hours, 1, 2, and 5 months). Visual stimuli were orthogonal gratings presented dichoptically through ferromagnetic goggles, and children verbally reported their rivalrous perception. Bangerter filters were applied on the spectacle lens over the best eye for occlusion therapy. RESULTS. Two hours of occlusion therapy increased the nonamblyopic eye predominance over the amblyopic eye compared with pretreatment measurements, consistent with the results in adults. The boost of the nonamblyopic eye was still present after 1 month of treatment, steadily decreasing afterward to reach pretreatment levels after 2 months of continuous occlusion. Across subjects, the increase in nonamblyopic eye predominance observed after 2 hours of occlusion correlated (rho = -0.65, P = 0.04) with the visual acuity improvement of the amblyopic eye measured after 2 months of treatment. CONCLUSIONS. Homeostatic plasticity operates during occlusion therapy for moderate amblyopia, and the increase in nonamblyopic eye dominance observed at the beginning of treatment correlates with the amblyopic eye's recovery rate. These results suggest that binocular rivalry might be used to monitor visual cortical plasticity during occlusion therapy, although further investigations on larger clinical populations are needed to validate the predictive power of the technique.

Recent evidence suggests that ongoing brain oscillations may be instrumental in binding and integrating multisensory signals. In this experiment, we investigated the temporal dynamics of visual–motor integration processes. We show that action modulates sensitivity to visual contrast discrimination in a rhythmic fashion at frequencies of about 5 Hz (in the theta range), for up to 1 s after execution of action. To understand the origin of the oscillations, we measured oscillations in contrast sensitivity at different levels of luminance, which is known to affect the endogenous brain rhythms, boosting the power of alpha-frequencies. We found that the frequency of oscillation in sensitivity increased at low luminance, probably reflecting the shift in mean endogenous brain rhythm towards higher frequencies. Importantly, both at high and at low luminance, contrast discrimination showed a rhythmic motor-induced suppression effect, with the suppression occurring earlier at low luminance. We suggest that oscillations play a key role in sensory–motor integration, and that the motor-induced suppression may reflect the first manifestation of a rhythmic oscillation.

Maintained exposure to a specific stimulus property, such as size, color, or motion, induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location.

Central tendency, the tendency of judgements of quantities (lengths, durations etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge representations of a mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise, requiring more integration to reduce noise. In this study we measure central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge representations than typical perception. If true, autistic children should show less central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children in two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency decreased with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much less in autistic children than predicted by theoretical modelling, given their poor temporal resolution.
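The Bayesian account above can be made concrete with a minimal sketch. This is an illustration, not the simulation used in the study: the durations, noise levels and prior are invented. A noisy sensory estimate is fused with a prior centred on the mean stimulus, weighted by reliability, so that noisier sensory estimates produce stronger regression to the mean, i.e. a shallower response slope and more central tendency.

```python
def bayes_estimate(sensory, prior_mean, sigma_s, sigma_p):
    """Reliability-weighted fusion of a noisy sensory estimate with a
    prior centred on the mean stimulus (standard Bayesian cue model)."""
    w_prior = sigma_s**2 / (sigma_s**2 + sigma_p**2)
    return w_prior * prior_mean + (1 - w_prior) * sensory

durations = [0.5, 1.0, 1.5]                 # seconds, hypothetical stimuli
prior_mean = sum(durations) / len(durations)

precise = [bayes_estimate(d, prior_mean, 0.1, 0.3) for d in durations]
noisy = [bayes_estimate(d, prior_mean, 0.4, 0.3) for d in durations]

# Central tendency index: slope of responses vs stimuli (1 = veridical,
# 0 = complete regression to the mean). Noisier senses -> shallower slope.
slope = lambda est: (est[-1] - est[0]) / (durations[-1] - durations[0])
print(slope(precise), slope(noisy))
```

The prediction tested in the abstract follows directly: given a child's measured temporal resolution (sigma_s), the model fixes how much central tendency an optimal observer should show; the autistic children showed less than this prediction.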

A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time.

Perceived time undergoes distortions when we prepare and perform movements, showing compression and/or expansion for visual, tactile and auditory stimuli. However, the actual motor system contribution to these time distortions is far from clear. In this study we investigated visual time perception during preparation of isometric contractions and real movements of the hand in two different directions (right/left). Comparable modulations of visual event-timing are found in the isometric and in the movement condition, excluding explanations based on movement-induced sensory masking or attenuation. Most importantly, and surprisingly, visual time depends on the movement direction, being expanded for hand movements pointing away from the body and compressed in the other direction. Furthermore, the effect of movement direction is not constant, but rather undergoes non-monotonic modulations in the brief moments preceding movement initiation. Our findings indicate that time distortions are strongly linked to the motor system, and they may be unavoidable consequences of the mechanisms subserving sensory-motor integration.

Humans and other species have perceptual mechanisms dedicated to estimating approximate quantity: a sense of number. Here we show a clear interaction between self-produced actions and the perceived numerosity of subsequent visual stimuli. A short period of rapid finger-tapping (without sensory feedback) caused subjects to underestimate the number of visual stimuli presented near the tapping region; and a period of slow tapping caused overestimation. The distortions occurred both for stimuli presented sequentially (series of flashes) and simultaneously (clouds of dots); both for magnitude estimation and forced-choice comparison. The adaptation was spatially selective, primarily in external, real-world coordinates. Our results sit well with studies reporting links between perception and action, showing that vision and action share mechanisms that encode numbers: a generalized number sense, which estimates the number of self-generated as well as external events.

Exposure to a patch of dots produces a repulsive shift in the perceived numerosity of subsequently viewed dot patches. Although this is a remarkably strong effect, with perceived numerosity shifted by up to 50% of the actual numerosity, very little is known about its temporal dynamics. Here we demonstrate a novel adaptation paradigm that allows numerosity adaptation to be rapidly induced at several distinct locations simultaneously. We show that not only is this adaptation to numerosity spatially specific, with different locations of the visual field able to be adapted to high, low, or neutral stimuli, but it can occur with only very brief periods of adaptation. Further investigation revealed that the adaptation effect was primarily driven by the number of unique adapting events that had occurred and not by either the duration of each event or the total duration of exposure to adapting stimuli. This event-based numerosity adaptation appears to fit well with statistical models of adaptation in which the dynamic adjustment of perceptual experiences, based on both the previous experience of the stimuli and the current percept, acts to optimize the limited working range of perception. These results implicate a highly plastic mechanism for numerosity perception, which is dependent on the number of discrete adaptation events, and also demonstrate a quick and efficient paradigm suitable for examining the temporal properties of adaptation.
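The event-based account can be illustrated with a toy update rule (my sketch, not the authors' model): the adaptation state is nudged toward the adaptor once per discrete adapting event, so the accumulated effect depends on how many events occurred, not on how long each event lasted. The gain value is arbitrary.

```python
def adapt(events, gain=0.2):
    """Event-driven adaptation: one discrete update per adapting event,
    regardless of the event's duration (hypothetical toy model)."""
    state = 0.0
    for adaptor in events:
        state += gain * (adaptor - state)   # one update per event
    return state

# After n identical events the state is a*(1 - (1-gain)**n): it grows
# with the number of events and saturates toward the adaptor value.
print(adapt([1.0] * 5), adapt([1.0] * 10))
```

Under this rule, five brief events and five long ones produce identical states, while doubling the number of events deepens adaptation, matching the dependence on unique adapting events reported above.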

Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-year-old) children in 3 tasks: numerosity of patterns of relatively sparse, segregatable items (24 dots); numerosity of very dense textured patterns (250 dots); and discrimination of direction of motion. Thresholds in all tasks improved with age, but at different rates, implying the action of different mechanisms: in particular, in young children, thresholds were lower for sparse than for textured patterns (the opposite of adults), suggesting earlier maturation of numerosity mechanisms. Importantly, numerosity thresholds for sparse stimuli correlated strongly with math skills, even after controlling for the influence of age, gender and nonverbal IQ. However, neither motion-direction discrimination nor numerosity discrimination of texture patterns showed a significant correlation with math abilities. These results provide further evidence that numerosity and texture-density are perceived by independent neural mechanisms, which develop at different rates; and importantly, only numerosity mechanisms are related to math. As developmental dyscalculia is characterized by a profound deficit in discriminating numerosity, it is fundamental to understand the mechanism behind the discrimination.

Humans, including infants, and many other species have a capacity for rapid, nonverbal estimation of numerosity. However, the mechanisms for number perception are still not clear; some maintain that the system calculates numerosity via density estimates, similar to those involved in texture perception, while others maintain that more direct, dedicated mechanisms are involved. Here we show that, provided items are not packed too densely, human subjects are far more sensitive to numerosity than to either density or area. In a two-dimensional space spanning density, area and numerosity, subjects spontaneously react with far greater sensitivity to changes in numerosity than to changes in either area or density. Even in tasks where they were explicitly instructed to make density or area judgments, they responded spontaneously to number. We conclude that humans extract number information, directly and spontaneously, via dedicated mechanisms.

Criticality reportedly describes brain dynamics. The main critical feature is the presence of scale-free neural avalanches, whose self-organization is determined by a critical branching ratio of neural-excitation spreading. Other features, directly associated with second-order phase transitions, are: (i) scale-free network topology of functional connectivity, stemming from suprathreshold pairwise correlations, superimposable, in waking brain activity, on that of ferromagnets at the Curie temperature; (ii) temporal long-range memory associated with renewal intermittency driven by abrupt fluctuations in the order parameters, detectable in the human brain via spatially distributed phase or amplitude changes in EEG activity. Here we study intermittent events extracted from 29 night EEG recordings, including presleep wakefulness and all phases of sleep, where different levels of mentation and consciousness are present. We show that while critical avalanching is unchanged, at least qualitatively, intermittency and functional connectivity, present during conscious phases (wakefulness and REM sleep), break down during both shallow and deep non-REM sleep. We provide a theory for fragmentation-induced intermittency breakdown and suggest that the main difference between conscious and unconscious states resides in backwards causation, namely in the constraints that emerging properties at large scale impose on the lower scales. In particular, while in conscious states this backwards causation induces a critical slowing down, preserving spatiotemporal correlations, in dreamless sleep we see a self-organized maintenance of modules working in parallel. Critical avalanches are still present, and establish transient self-organization, whose enhanced fluctuations are able to trigger sleep-protecting mechanisms that reinstate parallel activity. The plausible role of critical avalanches in dreamless sleep is to provide a rapid recovery of consciousness if stimuli are highly arousing.
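As a concrete illustration of the critical-branching idea (a naive sketch with made-up numbers, not the estimators used in the study): the branching ratio can be approximated as the average number of active units in one time bin per active unit in the previous bin, and values near 1 mark the critical regime where avalanches are scale-free.

```python
def branching_ratio(activity):
    """Naive branching-ratio estimate: mean of n(t+1)/n(t) over time
    bins with n(t) > 0 (toy estimator, illustrative only)."""
    ratios = [b / a for a, b in zip(activity, activity[1:]) if a > 0]
    return sum(ratios) / len(ratios)

# Toy avalanches (counts of active units per time bin):
print(branching_ratio([2, 4, 2, 1, 0]))   # < 1: activity dies out
print(branching_ratio([1, 1, 1, 1, 1]))   # = 1: marginal (critical) spread
```

In this framing, subcritical spreading (ratio below 1) extinguishes activity, supercritical spreading (above 1) saturates it, and only the critical value sustains the scale-free avalanches referred to above.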

Rapid eye movements (REMs) are a peculiar and intriguing aspect of REM sleep, even if their physiological function remains unclear. In this work, a new automatic tool was developed, aimed at a complete description of REM activity during the night, both in terms of timing of occurrence and in terms of directional properties. A stage classifying each individual movement detected during the night according to its main direction was added to our procedure for REM detection and ocular-artifact removal. A supervised classifier was constructed, using EOG data recorded during voluntary saccades of five healthy volunteers as training and validation sets. Different classification methods were tested and compared. The additional information about REM directional characteristics provided by the procedure represents a valuable tool for deeper investigation into the physiological origin and functional meaning of REMs.

Brain plasticity, defined as the capability of cerebral neurons to change in response to experience, is fundamental for behavioral adaptability, learning, memory, functional development, and neural repair. The visual cortex is a widely used model for studying neuroplasticity and the underlying mechanisms. Plasticity is maximal in early development, within the so-called critical period, while its levels abruptly decline in adulthood [1]. Recent studies, however, have revealed a significant residual plastic potential of the adult visual cortex by showing that, in adult humans, short-term monocular deprivation alters ocular dominance by homeostatically boosting responses to the deprived eye [2-4]. In animal models, a reopening of critical-period plasticity in the adult primary visual cortex has been obtained by a variety of environmental manipulations, such as dark exposure or environmental enrichment, together with its critical component of enhanced physical exercise [5-8]. Among these non-invasive procedures, physical exercise emerges as particularly interesting for its potential clinical application, though experimental evidence that physical exercise actually promotes visual plasticity in humans has been lacking. Here we report that short-term homeostatic plasticity of the adult human visual cortex induced by transient monocular deprivation is potently boosted by moderate levels of voluntary physical activity. These findings could help orient future research on the application of physical activity in clinical settings.

Visual objects presented briefly at the time of saccade onset appear compressed toward the saccade target. Compression strength depends on the presentation of a visual saccade target signal and is strongly reduced during the second saccade of a double-step saccade sequence (Zimmermann et al., 2014b). Here, I tested whether perisaccadic compression is linked to saccade planning by contrasting two double-step paradigms. In the same-direction double-step paradigm, subjects were required to perform two rightward 10 degrees saccades successively. At various times around execution of the saccade sequence a probe dot was briefly flashed. Subjects had to localize the position of the probe dot after they had completed both saccades. I found compression of visual space only at the time of the first but not at the time of the second saccade. In the reverse-direction paradigm, subjects performed first a rightward 10 degrees saccade followed by a leftward 10 degrees saccade back to initial fixation. In this paradigm compression was found in similar magnitude during both saccades. Analysis of the saccade parameters did not reveal indications of saccade sequence preplanning in this paradigm. I therefore conclude that saccade planning, rather than saccade execution factors, is involved in perisaccadic compression.

Sleep spindles are electroencephalographic oscillations peculiar to non-REM sleep, related to neuronal mechanisms underlying sleep restoration and learning consolidation. Thanks to their very distinctive morphology, sleep spindles can be visually recognized and detected, even though this approach can lead to significant mis-detections. For this reason, much effort has been put into developing a reliable algorithm for automatic spindle detection, and a number of methods, based on different techniques, have been tested via visual validation. This work aims at improving current pattern-recognition procedures for sleep spindle detection by taking into account their physiological sources of variability. We provide a method, synthesizing the current state of the art, that improves dynamic threshold adaptation and is thus able to follow modifications of spindle characteristics as a function of sleep depth and inter-subject variability. The algorithm has been applied to physiological data recorded by high-density EEG in order to perform a validation based on visual inspection and on evaluation of expected results from normal night sleep in healthy subjects.
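A minimal sketch of the dynamic-threshold idea (all signal parameters invented; real pipelines first band-pass filter the raw EEG into the sigma band, which is omitted here): the envelope of a sigma-band trace is compared with a threshold built from robust statistics over a sliding context window, so the detection criterion tracks slow changes in background amplitude across sleep depth and across subjects.

```python
import numpy as np

fs = 100                                   # Hz, toy sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic sigma-band trace: background noise plus one 13 Hz "spindle"
# burst between 10 s and 11 s (hypothetical test signal).
x = 0.2 * rng.standard_normal(t.size)
burst = (t >= 10) & (t < 11)
x[burst] += np.sin(2 * np.pi * 13 * t[burst])

def moving_rms(sig, win):
    # RMS envelope over a sliding window of `win` samples.
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(sig**2, kernel, mode="same"))

env = moving_rms(x, int(0.25 * fs))        # 250 ms envelope

def adaptive_threshold(env, win, k=4.0):
    # Dynamic threshold: median + k*MAD over a sliding context window,
    # so the criterion follows slow drifts in background amplitude.
    thr = np.empty_like(env)
    half = win // 2
    for i in range(env.size):
        ctx = env[max(0, i - half): i + half]
        med = np.median(ctx)
        mad = np.median(np.abs(ctx - med))
        thr[i] = med + k * mad
    return thr

thr = adaptive_threshold(env, 10 * fs)     # 10 s context window
detected = env > thr
print(detected[burst].mean(), detected[~burst].mean())
```

Because the threshold is recomputed locally from robust statistics, a burst that would be missed under a single fixed global threshold (for example, during deep sleep when background amplitude changes) still stands out against its own context.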

Sleep Slow Oscillations (SSOs), paradigmatic EEG markers of cortical bistability (alternation between cellular downstates and upstates), and sleep spindles, paradigmatic EEG markers of thalamic rhythm, are two hallmarks of the sleeping brain. Selective thalamic lesions are reportedly associated with reductions of spindle activity and its spectral power at ~14 Hz (sigma), and with alterations of SSO features. This parallel behavior suggests that thalamo-cortical entrainment favors cortical bistability. Here we investigate temporally causal associations between thalamic sigma activity and the shape, topology, and dynamics of SSOs. We recorded sleep EEG and studied whether the spatio-temporal variability of SSO amplitude, negative slope (synchronization in downstate falling) and detection rate is driven by cortical sigma-activity expression (12-18 Hz) in the 3 consecutive 1-s EEG epochs preceding each SSO event (baselines). We analyzed: (i) spatial variability, comparing maps of baseline sigma power and of SSO features, averaged over the first sleep cycle; (ii) event-by-event shape variability, computing for each electrode correlations between baseline sigma power and the amplitude/slope of related SSOs; (iii) event-by-event spreading variability, comparing baseline sigma power in electrodes showing an SSO event with the homologous ones spared by the event. The scalp distribution of baseline sigma power mirrored those of SSO amplitude and slope; event-by-event variability in baseline sigma power was associated with that in SSO amplitude in fronto-central areas; within each SSO event, electrodes involved in cortical bistability presented higher baseline sigma activity than those free of SSOs. In conclusion, the spatio-temporal variability of thalamocortical entrainment, measured by background sigma activity, is a reliable estimate of cortical proneness to bistability.

Priming is an implicit memory effect in which previous exposure to one stimulus influences the response to another stimulus. The main characteristic of priming is that it occurs without awareness. Priming also takes place when the physical attributes of previously studied and test stimuli do not match; indeed, it largely reflects a general stimulus representation activated at encoding, independently of the sensory modality engaged. Our aim was to evaluate whether, in a cross-modal word-stem completion task, negative priming scores could depend on inefficient word processing at study, and therefore on an altered stimulus representation. Words were presented in the auditory modality, and word-stems to be completed in the visual modality. At study, we recorded auditory ERPs, and compared the P300 (attention/memory) and N400 (meaning processing) of individuals with positive and negative priming. Besides classical averaging-based ERP analysis, we used an ICA-based method (ErpICASSO) to separate the potentials related to different processes contributing to ERPs. Classical analysis yielded a significant difference between the two waves across the whole scalp. ErpICASSO allowed separating the novelty-related P3a and the top-down control-related P3b sub-components of the P300. Specifically, in component C3, the positive deflection identifiable as P3b was significantly greater in the positive than in the negative priming group, while the late negative deflection corresponding to the parietal N400 was reduced in the positive priming group. In conclusion, inadequacy of specific processes at encoding, such as attention and/or meaning retrieval, could generate weak semantic representations, making words less accessible in subsequent implicit retrieval.

Very little is known about plasticity in the adult visual cortex. In recent years psychophysical studies have shown that short-term monocular deprivation alters visual perception in adult humans. Specifically, after 150 min of monocular deprivation the deprived eye strongly dominates the dynamics of binocular rivalry, reflecting homeostatic plasticity. Here we investigate the neural mechanisms underlying this form of short-term visual cortical plasticity by measuring visual evoked potentials (VEPs) on the scalp of adult humans during monocular stimulation before and after 150 min of monocular deprivation. We found that monocular deprivation had opposite effects on the amplitude of the earliest component of the VEP (C1) for the deprived and non-deprived eye stimulation. C1 amplitude increased (+66%) for the deprived eye, while it decreased (-29%) for the non-deprived eye. Source localization analysis confirmed that the C1 originates in the primary visual cortex. We further report that following monocular deprivation, the amplitude of the peak of the evoked alpha spectrum increased on average by 23% for the deprived eye and decreased on average by 10% for the non-deprived eye, indicating a change in cortical excitability. These results indicate that a brief period of monocular deprivation alters interocular balance in the primary visual cortex of adult humans by both boosting the activity of the deprived eye and reducing the activity of the non-deprived eye. This indicates a high level of residual homeostatic plasticity in the adult human primary visual cortex, probably mediated by a change in cortical excitability.

Although humans are the only species to possess language-driven abstract mathematical capacities, we share with many other animals a nonverbal capacity for estimating quantities or numerosity. For some time, researchers have clearly differentiated between small numbers of items—less than about four—referred to as the subitizing range, and larger numbers, where counting or estimation is required. In this review, we examine more recent evidence suggesting a further division, between sets of items greater than the subitizing range, but sparse enough to be individuated as single items; and densely packed stimuli, where they crowd each other into what is better considered as a texture. These two different regimes are psychophysically discriminable in that they follow distinct psychophysical laws and show different dependencies on eccentricity and on luminance levels. But provided the elements are not too crowded (less than about two items per square degree in central vision, less in the periphery), there is little evidence that estimation of numerosity depends on mechanisms responsive to texture. The distinction is important, as the ability to discriminate numerosity, but not texture, correlates with formal maths skills.

Briefly presented stimuli occurring just before or during a saccadic eye movement are mislocalized, leading to a compression of visual space toward the target of the saccade. In most cases this has been measured in subjects over-trained to perform a stereotyped and unnatural task, where saccades are repeatedly driven to the same location, marked by a highly salient abrupt onset. Here, we asked to what extent the pattern of perisaccadic mislocalization depends on this specific context. We addressed this question by studying perisaccadic localization in a set of participants with no prior experience in eye-movement research, measuring localization performance as they practiced the saccade task. Localization was only marginally affected by practice over the course of the experiment, and was indistinguishable from the performance of expert observers. The mislocalization also remained similar when the expert observers were tested in a condition leading to less stereotypical saccadic behavior, with no abrupt onset marking the saccade target location. These results indicate that perisaccadic compression is a robust behavior, insensitive to the specific paradigm used to drive saccades and to the level of practice with the saccade task.

Biagi, L., Crespi, S. A., Tosetti, M. & Morrone, M. C. (2015). BOLD Response Selective to Flow-Motion in Very Young Infants, PLoS Biol, 13 (9), e1002260.

In adults, motion perception is mediated by an extensive network of occipital, parietal, temporal, and insular cortical areas. Little is known about the neural substrate of visual motion in infants, although behavioural studies suggest that motion perception is rudimentary at birth and matures steadily over the first few years. Here, by measuring Blood Oxygenation Level Dependent (BOLD) responses to flow versus random-motion stimuli, we demonstrate that the major cortical areas serving motion processing in adults are operative by 7 wk of age. Resting-state correlations demonstrate adult-like functional connectivity between the motion-selective associative areas, but not between primary cortex and temporo-occipital and posterior-insular cortices. Taken together, the results suggest that the development of motion perception may be limited by slow maturation of the subcortical input and of the cortico-cortical connections. In addition, they support the existence of independent input to primary (V1) and temporo-occipital (V5/MT+) cortices very early in life.

We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80 degrees). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields.

As living organisms, we have the capability to explore our environments through different senses, each making use of specialized organs and returning unique information. This is relayed to a set of cortical areas, each of which appears to be specialized for processing information from a single sense — hence the definition of ‘unisensory’ areas. Many models assume that primary unisensory cortices passively reproduce information from each sensory organ; these then project to associative areas, which actively combine multisensory signals with each other and with cognitive stances. By the same token, the textbook view holds that sensory cortices undergo plastic changes only within a limited ‘critical period’; their function and architecture should remain stable and unchangeable thereafter. This model has led to many fundamental discoveries on the architecture of the sensory systems (e.g., oriented receptive fields, binocularity, topographic maps, to name just the best known). However, a growing body of evidence calls for a review of this conceptual scheme. Based on single-cell recordings from non-human primates, fMRI in humans, psychophysics, and sensory deprivation studies, early sensory areas are losing their status as fixed readouts of receptor activity; they are turning into functional nodes in a network of brain areas that flexibly adapts to the statistics of the input and the behavioral goals. This special issue of Multisensory Research aims to cover three such lines of evidence, suggesting that (1) flexibility of spatial representations, (2) adult plasticity, and (3) multimodality are not properties of associative areas alone, but may depend on the primary visual cortex V1.

A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight into how head tilt affects visual perception, we investigated whether a well-known orientation-dependent visual phenomenon, the oblique effect—superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)—is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions, with the body either upright or supine. We report that, in the body-upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple “compulsory fusion” model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination.
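The compulsory-fusion rule described above amounts to a fixed weighted average of the two orientation signals. The sketch below is purely illustrative: only the 30%/70% weights come from the abstract, while the function name and example angles are ours.

```python
# Sketch of the "compulsory fusion" model: the perceived upright reference
# is a weighted average of the head-based (egocentric) and gravity-based
# (allocentric) orientation signals.
# Only the 30%/70% weights come from the text; everything else is illustrative.

W_HEAD = 0.3      # weight given to the head-based signal
W_GRAVITY = 0.7   # weight given to the gravity-based signal

def fused_reference(head_deg: float, gravity_deg: float) -> float:
    """Orientation (in degrees) of the fused 'upright' reference."""
    return W_HEAD * head_deg + W_GRAVITY * gravity_deg

# With the head tilted 90 deg while gravity still signals 0 deg,
# the fused reference is pulled most of the way toward gravity:
print(fused_reference(90.0, 0.0))  # 27.0
```

Because the two signals are always combined, such a model predicts reduced sensitivity whenever they disagree, consistent with the supine results reported above.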

Dyslexia is a specific impairment in reading that affects 1 in 10 people. Previous studies have failed to isolate a single cause of the disorder, but several candidate genes have been reported. We measured motion perception in two groups of dyslexics, with and without a deletion within the DCDC2 gene, a risk gene for dyslexia. We found impaired motion perception, particularly strong at high spatial frequencies, in the population carrying the deletion. The data suggest that deficits in motion processing occur in a specific genotype, rather than in the entire dyslexia population, contributing to the large variability in impairment of motion thresholds in dyslexia reported in the literature.

Neuroplasticity is a fundamental property of the nervous system that is maximal early in life, within the critical period [1-3]. Resting GABAergic inhibition is necessary to trigger ocular dominance plasticity and to modulate the onset and offset of the critical period [4, 5]. GABAergic inhibition also plays a crucial role in neuroplasticity of adult animals: the balance between excitation and inhibition in the primary visual cortex (V1), measured at rest, modulates the susceptibility of ocular dominance to deprivation [6-10]. In adult humans, short-term monocular deprivation strongly modifies ocular balance, unexpectedly boosting the deprived eye, reflecting homeostatic plasticity [11, 12]. There is no direct evidence, however, for a role of resting GABAergic inhibition in the homeostatic plasticity induced by visual deprivation. Here, we tested the hypothesis that GABAergic inhibition, measured at rest, is reduced by deprivation, as demonstrated by animal studies. GABA concentration in V1 of adult humans was measured using ultra-high-field 7T magnetic resonance spectroscopy before and after short-term monocular deprivation. After monocular deprivation, resting GABA concentration decreased in V1 but was unaltered in a control parietal area. Importantly, across participants, the decrease in GABA strongly correlated with the deprived eye perceptual boost measured by binocular rivalry. Furthermore, after deprivation, GABA concentration measured during monocular stimulation correlated with the deprived eye dominance. We suggest that reduction in resting GABAergic inhibition triggers homeostatic plasticity in adult human V1 after a brief period of abnormal visual experience. These results are potentially useful for developing new therapeutic strategies that could exploit the intrinsic residual plasticity of the adult human visual cortex.

Autism is known to be associated with major perceptual atypicalities. We have recently proposed a general model to account for these atypicalities in Bayesian terms, suggesting that autistic individuals underuse predictive information or priors. We tested this idea by measuring adaptation to numerosity stimuli in children diagnosed with autism spectrum disorder (ASD). After exposure to large numbers of items, stimuli with fewer items appear to be less numerous (and vice versa). We found that children with ASD adapted much less to numerosity than typically developing children, although their precision for numerosity discrimination was similar to that of the typical group. This result reinforces recent findings showing reduced adaptation to facial identity in ASD and goes on to show that reduced adaptation is not unique to faces (social stimuli with special significance in autism), but occurs more generally, for both parietal and temporal functions, probably reflecting inefficiencies in the adaptive interpretation of sensory signals. These results provide strong support for the Bayesian theories of autism.

It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing time locked with the execution of voluntary action, and, crucially, even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, approximately 500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing up the visual-motor control loop.

Premature birth has been associated with damage in many regions of the cerebral cortex, although there is a particularly strong susceptibility for damage within the parieto-occipital lobes (Volpe, 2009). As these areas have been shown to be critical for both visual attention and magnitude perception (time, space, and number), it is important to investigate the impact of prematurity on both the magnitude and attentional systems, particularly for children without overt white matter injuries, where the lack of obvious injury may cause their difficulties to remain unnoticed. In this study, we investigated the ability to judge time intervals (visual, audio and audio-visual temporal bisection), discriminate between numerical quantities (numerosity comparison), map numbers onto space (numberline task) and maintain visuo-spatial attention (multiple-object tracking) in school-age preterm children (N = 29). The results show that various parietal functions may be more or less robust to prematurity-related difficulties, with strong impairments found on the time estimation and attentional tasks, while performance on the numerical discrimination and mapping tasks remained relatively unimpaired. Thus, while our study generally supports the hypothesis of a dorsal stream vulnerability in children born preterm relative to other cortical locations, it further suggests that particular cognitive processes, as highlighted by performance on different tasks, are far more susceptible than others.

We have recently provided evidence that the perception of number and texture density is mediated by two independent mechanisms: numerosity mechanisms at relatively low numbers, obeying Weber’s law, and texture-density mechanisms at higher numerosities, following a square root law. In this study we investigated whether the switch between the two mechanisms depends on the capacity to segregate individual dots, and therefore follows similar laws to those governing visual crowding. We measured numerosity discrimination for a wide range of numerosities at three eccentricities. We found that the point where the numerosity regime (Weber’s law) gave way to the density regime (square root law) depended on eccentricity. In central vision, the regime changed at 2.3 dots/deg², while at 15° eccentricity, it changed at 0.5 dots/deg², three times less dense. As a consequence, thresholds for low numerosities increased with eccentricity, while at higher numerosities thresholds remained constant. We further showed that, like crowding, the regime change was independent of dot size, depending on distance between dot centers, not distance between dot edges or ink coverage. Performance was not affected by stimulus contrast or blur, indicating that the transition does not depend on low-level stimulus properties. Our results reinforce the notion that numerosity and texture are mediated by two distinct processes, depending on whether the individual elements are perceptually segregable. Which mechanism is engaged follows the laws that determine crowding.
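The two threshold regimes described above can be summarized with a toy function. This is a sketch, not the paper's model: the Weber fraction is a placeholder value, and only the 2.3 dots/deg² central-vision transition point is taken from the text.

```python
import math

# Toy sketch of the two psychophysical regimes described above.
# Below the transition density, thresholds follow Weber's law (a constant
# Weber fraction, so dN is proportional to N); above it, they follow a
# square-root law (dN proportional to sqrt(N)), scaled so the two regimes
# join continuously at the transition.
# Only the 2.3 dots/deg^2 transition density comes from the text;
# the Weber fraction is a placeholder.

WEBER_FRACTION = 0.15        # illustrative constant Weber fraction
TRANSITION_DENSITY = 2.3     # dots/deg^2, central-vision value from the text

def discrimination_threshold(n: float, area_deg2: float) -> float:
    """Predicted just-noticeable difference in dot number."""
    density = n / area_deg2
    if density < TRANSITION_DENSITY:          # numerosity regime: Weber's law
        return WEBER_FRACTION * n
    # texture-density regime: square-root law, joined at the transition point
    n_transition = TRANSITION_DENSITY * area_deg2
    return WEBER_FRACTION * math.sqrt(n_transition * n)
```

In this sketch, quadrupling the numerosity of a dense pattern only doubles the threshold, whereas in the sparse regime thresholds scale linearly with numerosity, which is the behavioral signature separating the two mechanisms.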

Presenting different images to each eye triggers ‘binocular rivalry’ in which one image is visible and the other suppressed, with the visible image alternating every second or so. We previously showed that binocular rivalry between cross-oriented gratings is altered when the fingertip explores a grooved stimulus aligned with one of the rivaling gratings: the matching visual grating's dominance duration was lengthened and its suppression duration shortened. In a more robust test, we here measure visual contrast sensitivity during rivalry dominance and suppression, with and without exploration of the grooved surface, to determine if rivalry suppression strength is modulated by touch. We find that a visual grating undergoes 45% less suppression when observers touch an aligned grating, compared to a cross-oriented one. Touching an aligned grating also improved visual detection thresholds for the ‘invisible’ suppressed grating by 2.4 dB, relative to a vision-only condition. These results show that congruent haptic stimulation prevents a visual stimulus from becoming deeply suppressed in binocular rivalry. Moreover, because congruent touch acted on the phenomenally invisible grating, this visuo-haptic interaction must precede awareness and likely occurs early in visual processing.

Visual objects briefly presented around the time of saccadic eye movements are perceived compressed towards the saccade target. Here, we investigated perisaccadic mislocalization with a double-step saccade paradigm, measuring localization of small probe dots briefly flashed at various times around the sequence of the two saccades. At onset of the first saccade, probe dots were mislocalized towards the first and, to a lesser extent, also towards the second saccade target. However, there was very little mislocalization at the onset of the second saccade. When we increased the presentation duration of the saccade targets prior to onset of the saccade sequence, perisaccadic mislocalization did occur at the onset of the second saccade.

2014

Recent studies show that perception is driven not only by the stimuli currently impinging on our senses, but also by the immediate past history. The influence of recent perceptual history on the present reflects the action of efficient mechanisms that exploit temporal redundancies in natural scenes.

Prolonged adaptation to delayed sensory feedback to a simple motor act (such as pressing a key) causes recalibration of sensory-motor synchronization, so instantaneous feedback appears to precede the motor act that caused it (Stetson, Cui, Montague & Eagleman, 2006). We investigated whether similar recalibration occurs in school-age children. Although plasticity may be expected to be even greater in children than in adults, we found no evidence of recalibration in children aged 8-11 years. Subjects adapted to delayed feedback for 100 trials, intermittently pressing a key that caused a tone to sound after a 200 ms delay. During the test phase, subjects responded to a visual cue by pressing a key, which triggered a tone to be played at variable intervals before or after the keypress. Subjects judged whether the tone preceded or followed the keypress, yielding psychometric functions estimating the delay when they perceived the tone to be synchronous with the action. The psychometric functions also gave an estimate of the precision of the temporal order judgment. In agreement with previous studies, adaptation caused a shift in perceived synchrony in adults, so the keypress appeared to trail behind the auditory feedback, implying sensory-motor recalibration. However, school children of 8 to 11 years showed no measurable adaptation of perceived simultaneity, even after adaptation with 500 ms lags. Importantly, precision in the simultaneity task also improved with age, and this developmental trend correlated strongly with the magnitude of recalibration. This suggests that the lack of recalibration of sensory-motor simultaneity after adaptation in school-age children is related to their poor precision in temporal order judgments. To test this idea, we measured recalibration in adult subjects with auditory noise added to the stimuli (which hampered temporal precision).
Under these conditions, recalibration was greatly reduced, with the magnitude of recalibration strongly correlating with temporal precision.
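The psychometric procedure described above, estimating the point of subjective simultaneity (PSS) and the precision of temporal-order judgments, can be sketched as a cumulative-Gaussian fit: the proportion of "tone after keypress" responses is modelled as a function of the tone-keypress lag, with the fitted mean giving the PSS and the fitted sigma the precision. The data values and the brute-force grid-search fit below are illustrative, not the study's analysis.

```python
import math

# Cumulative Gaussian psychometric model: probability of reporting
# "tone came after the keypress" as a function of lag (ms).
def cum_gauss(lag: float, mu: float, sigma: float) -> float:
    return 0.5 * (1.0 + math.erf((lag - mu) / (sigma * math.sqrt(2.0))))

def fit_pss(lags, p_after):
    """Grid-search least-squares fit; returns (mu, sigma) in ms.

    mu is the point of subjective simultaneity; sigma indexes precision.
    """
    best = (None, None, float("inf"))
    for mu in range(-200, 201, 5):
        for sigma in range(20, 401, 5):
            err = sum((cum_gauss(l, mu, sigma) - p) ** 2
                      for l, p in zip(lags, p_after))
            if err < best[2]:
                best = (mu, sigma, err)
    return best[0], best[1]

# Hypothetical post-adaptation data: perceived synchrony shifted to a
# positive lag, i.e. a tone must trail the keypress to appear simultaneous.
lags = [-200, -100, -50, 0, 50, 100, 200]
p_after = [0.02, 0.10, 0.25, 0.40, 0.55, 0.80, 0.98]
mu, sigma = fit_pss(lags, p_after)
```

A recalibration effect would show up as a shift in the fitted mu between pre- and post-adaptation sessions, while sigma tracks the temporal precision that, in the study above, predicted the size of that shift.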

Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.

Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after.

To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and also attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations.

Saccades cause compression of visual space around the saccadic target, and also a compression of time, both phenomena thought to be related to the problem of maintaining saccadic stability (Morrone et al., 2005; Burr and Morrone, 2011). Interestingly, similar phenomena occur at the time of hand movements, when tactile stimuli are systematically mislocalized in the direction of the movement (Dassonville, 1995; Watanabe et al., 2009). In this study, we measured whether hand movements also cause an alteration of the perceived timing of tactile signals. Human participants compared the temporal separation between two pairs of tactile taps while moving their right hand in response to an auditory cue. The first pair of tactile taps was presented at variable times with respect to movement with a fixed onset asynchrony of 150 ms. Two seconds after test presentation, when the hand was stationary, the second pair of taps was delivered with a variable temporal separation. Tactile stimuli could be delivered to either the right moving or left stationary hand. When the tactile stimuli were presented to the motor effector just before and during movement, their perceived temporal separation was reduced. The time compression was effector-specific, as perceived time was veridical for the left stationary hand. The results indicate that time intervals are compressed around the time of hand movements. As for vision, the mislocalizations of time and space for touch stimuli may be consequences of a mechanism attempting to achieve perceptual stability during tactile exploration of objects, suggesting common strategies within different sensorimotor systems.

The mapping of number onto space is fundamental to measurement and mathematics. However, the mapping of young children, unschooled adults, and adults under attentional load shows strong compressive nonlinearities, thought to reflect intrinsic logarithmic encoding mechanisms, which are later "linearized" by education. Here we advance and test an alternative explanation: that the nonlinearity results from adaptive mechanisms incorporating the statistics of recent stimuli. This theory predicts that the response to the current trial should depend on the magnitude of the previous trial, whereas a static logarithmic nonlinearity predicts trialwise independence. We found a strong and highly significant relationship between numberline mapping of the current trial and the magnitude of the previous trial, in both adults and school children, with the current response influenced by up to 15% of the previous trial value. The dependency is sufficient to account for the shape of the numberline, without requiring logarithmic transform. We show that this dynamic strategy results in a reduction of reproduction error, and hence improvement in accuracy.
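The trial-history account above amounts to a weighted average of the current and previous magnitudes. A minimal sketch, assuming a single fixed weight (the text reports influences of up to 15% of the previous trial value; the stimulus values are illustrative):

```python
# Sketch of the serial-dependence account of numberline mapping described
# above: the reproduced position is pulled toward the previous trial's
# magnitude by a fraction w. Only the ~15% upper bound on w comes from the
# text; the example magnitudes are illustrative.

W_PREV = 0.15  # fractional influence of the previous trial's magnitude

def numberline_response(current: float, previous: float,
                        w: float = W_PREV) -> float:
    """Response to the current magnitude, biased toward the previous one."""
    return (1.0 - w) * current + w * previous

# The same current stimulus (20) is reproduced differently depending on
# whether the preceding trial was small (5) or large (80):
low_context = numberline_response(20, 5)    # pulled downward
high_context = numberline_response(20, 80)  # pulled upward
```

Applied trial by trial, this regression toward recent stimuli produces a compressive mapping over a session without any static logarithmic transform, which is the core of the alternative explanation tested above.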

In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene.

Faivre, N., Arzi, A., Lunghi, C. & Salomon, R. (2017). Consciousness is more than meets the eye: a call for a multisensory study of subjective experience, Neuroscience of Consciousness, 1-8.

Over the last 30 years, our understanding of the neurocognitive bases of consciousness has improved, mostly through studies employing vision. While studying consciousness in the visual modality presents clear advantages, we believe that a comprehensive scientific account of subjective experience must not neglect other exteroceptive and interoceptive signals, as well as the role of multisensory interactions for perceptual and self-consciousness. Here, we briefly review four distinct lines of work which converge in documenting how multisensory signals are processed across several levels and contents of consciousness. Namely, how multisensory interactions occur when consciousness is prevented because of perceptual manipulations (i.e. subliminal stimuli) or because of low vigilance states (i.e. sleep, anesthesia), how interactions between exteroceptive and interoceptive signals give rise to bodily self-consciousness, and how multisensory signals are combined to form metacognitive judgments. By describing the interactions between multisensory signals at the perceptual, cognitive, and metacognitive levels, we illustrate how stepping out of the visual comfort zone may help in deriving refined accounts of consciousness, and may allow cancelling out idiosyncrasies of each sense to delineate supramodal mechanisms involved in consciousness.

Fornaciai, M. & Park, J. (2017). Distinct Neural Signatures for Very Small and Very Large Numerosities, Frontiers in Human Neuroscience, 11.

Behavioral studies of numerical cognition have shown that the perceptual threshold for numerosity discrimination depends on the range of numerical values to be estimated. Discrimination threshold is constant when comparing very small numerosities, via the mechanism called subitizing, while it increases as a function of numerosity for numbers beyond the subitizing range. However, when numerosity gets so large that the individual elements start to form a cluttered ensemble, discrimination threshold increases as a function of the square root of numerosity. These behavioral patterns suggest that our sense of number is not based on a unitary mechanism, but rather on multiple numerosity-processing mechanisms depending on the absolute numerosity to be estimated. In this study, we demonstrate neurophysiological evidence for such multiple mechanisms. Participants' electroencephalogram (EEG) was recorded while they viewed arrays containing either very small (1–4) or very large (100–400) numbers of dots with systematic variations in non-numerical cues. A linear model that tested the effects of numerical and non-numerical cues on the visual evoked potentials (VEPs) revealed strong neural sensitivity to numerosity around 160–180 ms over right occipito-parietal sites, irrespective of the numerical range presented. In contrast, earlier neural responses (~100 ms) showed markedly distinct patterns across the different numerical ranges tested. These results indicate that differences in behavioral response patterns in numerosity estimation across various numerical ranges may arise from differences in the first stages of visual analysis. Collectively, the findings provide a firmer ground for the idea that there exists a brain system specifically dedicated to numerosity processing, yet they also suggest that multiple early visual cortical mechanisms converge on that numerosity processing stage later in the visual stream.

Ensemble perception, the ability to automatically extract summary statistics from large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to have difficulty maintaining summary-statistics representations of the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants completed three tasks: a) an ‘ensemble’ emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; however, ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data likewise showed no group differences in how children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

The pulvinar is the largest of the thalamic nuclei in primates, including humans. Two of its three major subdivisions, the lateral and inferior pulvinar, are heavily interconnected with a significant proportion of the visual association cortex. However, while we now have a better understanding of the bidirectional connectivity of these pulvinar subdivisions, their functions remain somewhat of an enigma. Over the past few years, researchers have started to tackle this problem from the angle of development and of visual cortical lesions. In this review, we draw together literature from studies in nonhuman primates and humans that has informed much of the current understanding. This literature has been responsible for changing many long-held opinions on the development of the visual cortex and on how the pulvinar interacts dynamically with cortices during early life to ensure rapid development and functional capacity. Furthermore, there is evidence suggesting involvement of the pulvinar following lesions of the primary visual cortex (V1) and geniculostriate pathway in early life, which have far better functional outcomes than identical lesions sustained in adulthood. Shedding new light on the pulvinar and its role following lesions of the visual brain has implications for our understanding of visual brain disorders and the potential for recovery.

Action and perception are intimately coupled systems; one clear case is saccadic suppression, the reduced visibility around the time of saccades, important in mediating visual stability; another is the oscillatory modulation of visibility synchronized with hand action. To effectively suppress the spurious retinal motion generated by eye movements, it is crucial that saccadic suppression and saccadic onset be temporally synchronous. However, the mechanisms that determine this temporal synchrony are unknown. We investigated the effect of saccades on contrast discrimination sensitivity over a long period stretching over more than 1 second before and after saccade execution. Human subjects made horizontal saccades at will between two stationary saccadic targets separated by 20 degrees. At a random interval, a brief Gabor patch was displayed between the two fixation points in either the upper or lower visual field, and the subject had to detect its location. Strong saccadic suppression was measured between -50 and 50 ms from saccadic onset. However, the suppression was systematically embedded in a trough of oscillations of contrast sensitivity that fluctuated rhythmically in the delta range (at about 3 Hz), commencing about one second before saccade execution and lasting for up to one second after the saccade. The results show that saccadic preparation and visual sensitivity oscillations are coupled, and the coupling might be instrumental in temporally aligning the initiation of the saccade with the visual suppression.

Significance Statement: Saccades are known to produce a suppression of contrast sensitivity at saccadic onset and an enhancement after saccadic offset. Here we show that these dynamics are systematically embedded in visual oscillations of contrast sensitivity that fluctuate rhythmically in the delta range (at about 3 Hz), commencing about one second before saccade execution and lasting for up to one second after the saccade. The results show that saccadic preparation and visual sensitivity oscillations are coupled, and the coupling might be instrumental in temporally aligning the initiation of the saccade with the visual suppression.
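The reported dynamics, a rhythmic delta-range modulation with a suppression trough locked to saccade onset, can be captured qualitatively by a simple descriptive model. The parameter values below (oscillation amplitude, suppression depth and width) are illustrative assumptions, not fitted estimates from the study:

```python
import numpy as np

def sensitivity(t, baseline=1.0, osc_amp=0.15, freq=3.0,
                supp_depth=0.6, supp_width=0.03):
    """Contrast sensitivity around a saccade (t = 0 at saccade onset):
    a baseline, modulated by a ~3 Hz delta oscillation, with a transient
    Gaussian suppression trough centered on saccade onset."""
    oscillation = osc_amp * np.cos(2 * np.pi * freq * t)
    suppression = supp_depth * np.exp(-t**2 / (2 * supp_width**2))
    return baseline + oscillation - suppression

t = np.linspace(-1.0, 1.0, 2001)  # seconds relative to saccade onset
s = sensitivity(t)
print("minimum sensitivity at t =", round(float(t[np.argmin(s)]), 3), "s")
```

With these assumed parameters the deepest suppression falls exactly at saccade onset, illustrating the temporal alignment of the suppression trough with the underlying oscillation that the abstract describes.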

Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.
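One simple way to quantify a slow pupil oscillation like the hippus is as the low-frequency amplitude of the pupil trace. The sketch below is our assumption of such an analysis, not the paper's actual pipeline; the sampling rate, frequency band, and synthetic trace are all illustrative:

```python
import numpy as np

fs = 50.0                          # assumed sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)      # two minutes of recording in the dark

# Synthetic pupil trace: baseline diameter + a 0.2 Hz hippus + measurement noise
rng = np.random.default_rng(1)
pupil = (4.0 + 0.3 * np.sin(2 * np.pi * 0.2 * t)
         + 0.05 * rng.normal(size=t.size))

# Amplitude spectrum of the demeaned trace
spectrum = np.abs(np.fft.rfft(pupil - pupil.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Hippus amplitude: peak amplitude in an assumed low-frequency band
band = (freqs > 0.05) & (freqs < 0.5)
hippus_amp = 2 * spectrum[band].max()
print(f"hippus amplitude: {hippus_amp:.2f} mm")
```

The factor of 2 converts the one-sided FFT amplitude back to the peak amplitude of the oscillation, so the estimate recovers the 0.3 mm modulation built into the synthetic trace.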

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of approximately 42 degrees between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

New & Noteworthy: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
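The geometric dissociation exploited here can be made concrete with a small sketch. The torsional compensation gain below is an assumed fraction chosen for illustration (the study does not report this exact value); the point is that partial counter-roll leaves a large residual rotation of the retinal image, so a screen-fixed (spatiotopic) location and a retina-fixed (retinotopic) location come apart after a head tilt:

```python
import numpy as np

def retinal_rotation(head_tilt_deg, torsional_gain=0.15):
    """Residual rotation of the retinal image after a frontal-plane head tilt.
    torsional_gain is an assumed compensation fraction, not a measured value."""
    return head_tilt_deg * (1.0 - torsional_gain)

def to_retinal(point_allocentric, head_tilt_deg, torsional_gain=0.15):
    """Rotate an allocentric (screen) point into retinal coordinates."""
    theta = np.deg2rad(retinal_rotation(head_tilt_deg, torsional_gain))
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    return rot @ np.asarray(point_allocentric, dtype=float)

# A stimulus fixed on the screen lands at a rotated retinal location after
# a 42-degree head tilt, dissociating the two reference frames.
print(to_retinal([1.0, 0.0], head_tilt_deg=42.0))
```

Testing adaptation at the same screen location versus the same retinal location after such a tilt is what allows the retinotopic and spatiotopic components of the aftereffect to be separated.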

To efficiently interact with the external environment, our nervous system combines information arising from different sensory modalities. Recent evidence suggests that cross-modal interactions can be automatic and even unconscious, reflecting the ecological relevance of cross-modal processing. Here, we use continuous flash suppression (CFS) to directly investigate whether haptic signals can interact with visual signals outside of visual awareness. We measured suppression durations of visual gratings rendered invisible by CFS either during visual stimulation alone or during visuo-haptic stimulation. We found that active exploration of a haptic grating congruent in orientation with the suppressed visual grating reduced suppression durations compared with both visual-only stimulation and incongruent visuo-haptic stimulation. We also found that the facilitatory effect of touch on visual suppression disappeared when the visual and haptic gratings were mismatched in either spatial frequency or orientation. Together, these results demonstrate that congruent touch can accelerate the rise to consciousness of a suppressed visual stimulus and that this unconscious cross-modal interaction depends on visuo-haptic congruency. Furthermore, since CFS suppression is thought to occur early in visual cortical processing, our data reinforce the evidence suggesting that visuo-haptic interactions can occur at the earliest stages of cortical processing.

When different images are presented to the eyes, the brain is faced with ambiguity, causing perceptual bistability: visual perception continuously alternates between the monocular images, a phenomenon called binocular rivalry. Many models of rivalry suggest that its temporal dynamics depend on mutual inhibition among neurons representing competing images. These models predict that rivalry should be different in autism, which has been proposed to present an atypical ratio of excitation and inhibition [the E/I imbalance hypothesis; Rubenstein & Merzenich, 2003]. In line with this prediction, some recent studies have provided evidence for atypical binocular rivalry dynamics in autistic adults. In this study, we examined whether these findings generalize to autistic children. We developed a child-friendly binocular rivalry paradigm, which included two types of stimuli, low- and high-complexity, and compared rivalry dynamics in groups of autistic and age- and intellectual ability-matched typical children. Unexpectedly, the two groups of children presented the same number of perceptual transitions and the same mean phase durations (times perceiving one of the two stimuli). Yet autistic children reported mixed percepts for a shorter proportion of time (a difference in the opposite direction to previous adult studies), and elevated autistic symptomatology was associated with shorter mixed-perception periods. Rivalry in the two groups was affected similarly by stimulus type, consistent with previous findings. Our results suggest that rivalry dynamics are differentially affected in autistic adults and in autistic children, and could be accounted for by hierarchical models of binocular rivalry, including both inhibition and top-down influences.
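The mutual-inhibition class of models referred to above can be sketched in a few lines. This is a Laing & Chow-style toy model with our own parameter choices, not the hierarchical model the authors favour: two populations inhibit each other while slow adaptation erodes the dominant one, producing spontaneous perceptual alternations.

```python
import numpy as np

def simulate_rivalry(T=20.0, dt=0.001, inhibition=2.0, adapt_strength=3.0,
                     tau=0.02, tau_a=1.0, drive=1.0):
    """Two-population mutual-inhibition model with slow adaptation.
    Returns the number of perceptual switches over T seconds.
    All parameter values are illustrative assumptions."""
    steps = int(T / dt)
    r = np.array([0.6, 0.4])          # firing rates, slightly asymmetric start
    a = np.zeros(2)                   # slow adaptation variables
    dominant = np.zeros(steps, dtype=int)
    for i in range(steps):
        # each population is driven by the input, inhibited by its rival,
        # and weakened by its own accumulated adaptation
        inp = drive - inhibition * r[::-1] - adapt_strength * a
        f = 1.0 / (1.0 + np.exp(-10.0 * (inp - 0.2)))   # sigmoid nonlinearity
        r = r + dt / tau * (-r + f)                     # fast rate dynamics
        a = a + dt / tau_a * (-a + r)                   # slow adaptation
        dominant[i] = int(r[1] > r[0])
    return int(np.sum(np.abs(np.diff(dominant))))       # count switches

print("perceptual switches:", simulate_rivalry())
```

With these parameters the dominant population's adaptation eventually lets the suppressed rival escape, so dominance alternates every fraction of a second; changing the inhibition strength is the knob through which E/I-imbalance accounts predict altered rivalry dynamics.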