The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible: adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with previous explanations based on changes in perceptual latency. Instead, our results are well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from neural processes analogous to those underlying well-known perceptual after-effects.
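As a rough illustration of the model's logic (not the fitted model from the study), the sketch below implements a small bank of delay-tuned channels whose gain is reduced by adaptation; the channel count, tuning bandwidth, and size of the gain change are arbitrary assumptions. Reading out the adapted population with an activity-weighted centroid decoder shifts the physical delay that is decoded as simultaneous, mimicking the shift in perceived simultaneity described above.

```python
import numpy as np

# Toy population code for audio-visual delay, with adaptation as a gain change.
# Channel count, tuning bandwidth and the size of the gain reduction are
# illustrative assumptions, not parameters from the study.

preferred = np.array([-400.0, -200.0, 0.0, 200.0, 400.0])  # channel preferred delays (ms)
sigma = 180.0                                              # tuning bandwidth (ms), assumed

def responses(delay_ms, gains):
    """Gaussian tuning curves scaled by per-channel gain."""
    return gains * np.exp(-0.5 * ((delay_ms - preferred) / sigma) ** 2)

def decode(delay_ms, gains):
    """Perceived delay as the activity-weighted centroid of channel preferences."""
    r = responses(delay_ms, gains)
    return np.sum(r * preferred) / np.sum(r)

baseline = np.ones_like(preferred)

# Adapting to a +200 ms audio-visual lag reduces each channel's gain in
# proportion to how strongly the adapter drives it.
adapted = baseline * (1.0 - 0.4 * responses(200.0, baseline))

# Find the physical delay that is decoded as zero before and after adaptation:
# the point of subjective simultaneity shifts towards the adapted lag.
probe = np.linspace(-300, 300, 601)
pss_pre = probe[np.argmin([abs(decode(d, baseline)) for d in probe])]
pss_post = probe[np.argmin([abs(decode(d, adapted)) for d in probe])]
print(f"decoded as simultaneous: pre-adaptation {pss_pre:.0f} ms, post-adaptation {pss_post:.0f} ms")
```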
Strabismus has a negative impact on patients’ lives regardless of their age. Factors such as self-esteem, relationships with others, education and the ability to find employment may all be negatively affected by strabismus. It is possible to correct strabismus in adulthood successfully; the chances of achieving good ocular alignment are high and the risks of intractable diplopia low. Successful surgery to realign the visual axes can improve visual function, and offer psychosocial benefits that ultimately improve quality of life. The potential benefits of strabismus surgery should be discussed with patients, regardless of their age or the age of onset of strabismus. This article reviews the impact of strabismus, focusing on the psychosocial consequences of the condition, of which many optometrists may be less aware.
The ability to identify a target is reduced by the presence of nearby objects, a phenomenon known as visual crowding. The extent to which crowding impairs our perception is generally governed by the degree of similarity between a target stimulus and its surrounding flankers. Here we investigated the influence of disparity differences between target and flankers on crowding. Orientation discrimination thresholds for a parafoveal target were first measured when the target and flankers were presented at the same depth to establish a flanker separation that induced a significant elevation in threshold for each individual. Flankers were subsequently fixed at this spatial separation while the disparity of the flankers relative to the target was altered. For all participants, thresholds showed a systematic decrease as flanker-target disparity increased. The resulting tuning function was asymmetric: Crowding was lower when the target was perceived to be in front of the flankers rather than behind. A series of control experiments confirmed that these effects were driven by disparity, as opposed to other factors such as flanker-target separation in three-dimensional (3-D) space or monocular positional offsets used to create disparity. When flankers were distributed over a range of crossed and uncrossed disparities, such that the mean was in the plane of the target, there was an equivalent or greater release of crowding compared to when all flankers were presented at the maximum disparity of that range. Overall, our results suggest that depth cues can reduce the effects of visual crowding, and that this reduction is unlikely to be caused by grouping of flankers or positional shifts in the monocular image.
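For readers unfamiliar with how such disparities are produced, the following sketch shows the standard geometry by which a nominal flanker-target disparity is converted into equal and opposite monocular positional offsets on the display; the viewing distance, pixel pitch, and example disparity are assumed values, not those used in the experiment.

```python
import numpy as np

# Illustrative geometry only: a desired disparity is realised as
# equal-and-opposite horizontal shifts of the left- and right-eye images.
# Viewing distance, pixel pitch and the example disparity are assumed values.

viewing_distance_mm = 570.0
pixel_pitch_mm = 0.27            # physical width of one display pixel

def disparity_to_monocular_offsets(disparity_arcmin):
    """Convert a disparity (arcmin) into left/right-eye pixel shifts (half each)."""
    disparity_rad = np.deg2rad(disparity_arcmin / 60.0)
    total_shift_mm = viewing_distance_mm * np.tan(disparity_rad)
    total_shift_px = total_shift_mm / pixel_pitch_mm
    return +total_shift_px / 2.0, -total_shift_px / 2.0   # left eye, right eye

left_px, right_px = disparity_to_monocular_offsets(10.0)  # e.g. 10 arcmin of disparity
print(f"left-eye shift {left_px:+.2f} px, right-eye shift {right_px:+.2f} px")
```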
The aim of this study was to define the nature of functional visual loss in amblyopia and to identify those subjects whose amblyopia is chiefly due to one or more of the following deficits: abnormal contour interaction, abnormal eye movements, abnormal contrast perception, or positional uncertainty. Fifty amblyopic children with a mean age of 5.6 ± 1.3 years were referred from diverse sources. In addition to routine orthoptic and optometric evaluation, the principal visual deficits in the amblyopic eye of each subject were identified using the following measures of visual acuity: high contrast linear, single optotype, repeat letter and low contrast linear, plus Vernier and displacement thresholds. These measures were repeated, after parental consent, as the children underwent a prescribed occlusion therapy regime. All amblyopic subjects demonstrated a functional loss on each of the tests used, and occlusion therapy appeared to improve all aspects of the amblyopia. High contrast visual acuity was not always the primary deficit in visual function, and when amblyopic subjects were divided according to their primary visual loss, this visual function was found to show the greatest improvement with treatment. These results suggest that, to successfully identify the primary visual deficit and to monitor the success of occlusion therapy, it is necessary to assess other aspects of visual function in amblyopia.
The task of deciding how long sensory events seem to last is one that the human nervous system appears to perform rapidly and, for sub-second intervals, seemingly without conscious effort. That these estimates can be made within and between multiple sensory and motor domains suggests that time perception is one of the core, fundamental processes of our perception of the world around us. Given this significance, the current paucity in our understanding of how this process operates is surprising. One candidate mechanism for duration perception posits that duration may be mediated by a system of duration-selective ‘channels’, which are differentially activated depending on the match between afferent duration information and the channels' ‘preferred’ duration. However, this model awaits experimental validation. In the current study, we use the technique of sensory adaptation and present data that are well described by banks of duration channels that are limited in bandwidth, sensory-specific, and appear to operate at a relatively early stage of visual and auditory sensory processing. Our results suggest that many of the computational principles the nervous system applies to coding visual spatial and auditory spectral information are common to its processing of temporal extent.
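The sketch below illustrates the duration-channel idea in toy form: a handful of channels tuned to log-spaced preferred durations, with adaptation modelled as a gain reduction proportional to each channel's response to the adapter. The channel spacing, bandwidth, and gain change are assumptions for illustration, not fitted values; a centroid readout then shows the repulsive, duration-tuned after-effect that such a model predicts.

```python
import numpy as np

# Toy bank of duration-selective channels tuned in log duration, with adaptation
# modelled as a response-gain reduction. Channel spacing, bandwidth and the size
# of the gain change are assumptions for illustration only.

preferred_ms = np.array([80.0, 160.0, 320.0, 640.0, 1280.0])  # channel preferred durations
bandwidth_octaves = 1.0                                       # tuning bandwidth, assumed

def channel_responses(duration_ms, gains):
    d = np.log2(duration_ms / preferred_ms)
    return gains * np.exp(-0.5 * (d / bandwidth_octaves) ** 2)

def perceived_duration(duration_ms, gains):
    """Centroid readout in log-duration space."""
    r = channel_responses(duration_ms, gains)
    return 2.0 ** (np.sum(r * np.log2(preferred_ms)) / np.sum(r))

unadapted = np.ones_like(preferred_ms)
adapted = unadapted * (1.0 - 0.3 * channel_responses(320.0, unadapted))  # adapt to 320 ms

# Test durations flanking the adapter are repelled away from it; the adapted
# duration itself is largely unchanged.
for test_ms in (160.0, 320.0, 640.0):
    print(test_ms, round(perceived_duration(test_ms, unadapted)),
          round(perceived_duration(test_ms, adapted)))
```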
Most studies of the early stages of visual analysis (V1-V3) have focused on the properties of neurons that support processing of elemental features of a visual stimulus or scene, such as local contrast, orientation, or direction of motion. Recent evidence from electrophysiology and neuroimaging studies, however, suggests that early visual cortex may also play a role in retaining stimulus representations in memory for short periods. For example, fMRI responses obtained during the delay period between two presentations of an oriented visual stimulus can be used to decode the remembered stimulus orientation with multivariate pattern analysis. Here, we investigated whether orientation is a special case or whether this phenomenon generalizes to working memory traces of other visual features. We found that multivariate classification of fMRI signals from human visual cortex could be used to decode the contrast of a perceived stimulus even when mean response changes were accounted for, suggesting a consistent spatial signal for contrast in these areas. Strikingly, we found that fMRI responses also supported decoding of contrast when the stimulus had to be remembered. Furthermore, classification generalized from perceived to remembered stimuli and vice versa, implying that the corresponding patterns of response in early visual cortex were highly consistent. In additional analyses, we show that stimulus decoding here is driven by biases that depend on stimulus eccentricity. This places important constraints on the interpretation of decoding results for stimulus properties for which cortical processing is known to vary with eccentricity, such as contrast, color, spatial frequency, and temporal frequency.
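The cross-generalisation logic is summarised in the toy analysis below, which uses synthetic "voxel" patterns (with a built-in, contrast-dependent spatial bias) rather than real fMRI data; the classifier choice and data dimensions are illustrative assumptions. A classifier trained on "perceived" patterns is tested on "remembered" patterns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Schematic of a cross-generalisation analysis on synthetic voxel patterns.
# The shared, eccentricity-like spatial bias that links perception and memory
# is built in by construction here; real data would need to demonstrate it.

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 60
spatial_bias = rng.normal(size=n_voxels)            # fixed contrast-dependent bias map

def make_patterns(labels, signal, noise=1.0):
    """One pattern per trial: label-scaled copy of the bias map plus noise."""
    return np.array([signal * lab * spatial_bias + noise * rng.normal(size=n_voxels)
                     for lab in labels])

labels = np.repeat([-1, 1], n_trials // 2)           # low- vs high-contrast trials
perceived = make_patterns(labels, signal=0.5)        # stimulus physically present
remembered = make_patterns(labels, signal=0.3)       # weaker signal during the delay

clf = LogisticRegression(max_iter=1000).fit(perceived, labels)          # train on perception...
print("cross-generalisation accuracy:", clf.score(remembered, labels))  # ...test on memory
```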
Vision loss is a common, devastating complication of cerebral strokes. In some cases the complete contra-lesional visual field is affected, leading to problems with routine tasks and, notably, the ability to read. Although the visual information crucial for reading is imaged on the foveal region, readers often extract useful parafoveal information from the next word or two in the text. In hemianopic field loss, parafoveal processing is compromised, shrinking the visual span and resulting in slower reading speeds. Recent approaches to rehabilitation using perceptual training have demonstrated some recovery of useful visual capacity. As gains in visual sensitivity were most pronounced at the border of the scotoma, it may be possible to use training to restore some of the lost visual span for reading. As restitutive approaches often involve prolonged training sessions, it would be beneficial to know how much recovery is required to restore reading ability. To address this issue, we employed a gaze-contingent paradigm using a low-pass filter to blur one side of the text, functionally simulating a visual field defect. The degree of blurring acts as a proxy for the visual function recovery that could arise from restitutive strategies, and allows us to evaluate and quantify the degree of visual recovery required to support normal reading fluency in patients. Because reading ability changes with age, we recruited a group of younger participants and another of older participants, closer in age to the groups at risk of ischaemic stroke. Our results show that the changes in eye movement patterns observed in hemianopic loss can be captured using this simulated reading environment. This opens up the possibility of using participants with normal visual function to help identify the most promising strategies for ameliorating hemianopic loss, before translation to patient groups.
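A minimal sketch of the gaze-contingent manipulation is given below, assuming a grayscale image of the text page and a single horizontal gaze coordinate: pixels on one side of gaze are low-pass filtered with a Gaussian, with the filter's sigma standing in for the degree of simulated recovery. In the actual paradigm the gaze position would be updated on every frame from an eye tracker.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simplified gaze-contingent blur: everything to the right of the current gaze
# position is low-pass filtered, simulating a right-sided field defect. The
# blur level (sigma) stands in for the degree of simulated visual recovery.

def simulate_hemianopic_blur(frame, gaze_x, sigma):
    """frame: 2-D grayscale image of the text page; gaze_x: horizontal gaze position (px)."""
    blurred = gaussian_filter(frame.astype(float), sigma=sigma)
    out = frame.astype(float).copy()
    out[:, gaze_x:] = blurred[:, gaze_x:]        # degrade the 'blind' side only
    return out

# Example with a random stand-in for a rendered page of text
page = np.random.default_rng(1).random((600, 800))
degraded = simulate_hemianopic_blur(page, gaze_x=400, sigma=4.0)
```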
Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, the effective contrast of the luminance cue, or chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries while chromatic information may be used to determine surface properties.
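The kind of manipulation described above can be sketched as follows, using a simple luminance/chromatic split (Rec. 709 luminance weights and a per-channel residual as the "chromatic" component); the study's actual colour-space decomposition may differ, so the code is illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Blur either the luminance or the chromatic component of an RGB image and
# recombine. The Rec. 709 luminance weights and the per-channel residual used
# as the 'chromatic' component are assumptions for illustration.

LUM_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def blur_component(rgb, sigma, which="chromatic"):
    luminance = rgb @ LUM_WEIGHTS                   # H x W luminance image
    chromatic = rgb - luminance[..., None]          # per-channel chromatic residual
    if which == "luminance":
        luminance = gaussian_filter(luminance, sigma)
    else:
        chromatic = np.stack([gaussian_filter(chromatic[..., c], sigma)
                              for c in range(3)], axis=-1)
    return np.clip(luminance[..., None] + chromatic, 0.0, 1.0)

img = np.random.default_rng(2).random((256, 256, 3))   # stand-in for a natural scene
chroma_blurred = blur_component(img, sigma=6.0, which="chromatic")
lum_blurred = blur_component(img, sigma=6.0, which="luminance")
```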