Abstract The visual field region where a stimulus evokes a neural response is called the receptive field (RF). Analytical tools combined with functional MRI can estimate the receptive field of the population of neurons within a voxel. Circular population RF (pRF) methods accurately specify the central position of the pRF and provide some information about the spatial extent (diameter) of the receptive field. A number of investigators have developed methods to further estimate the shape of the pRF, for example whether the shape is more circular or elliptical. One report claims that many pRFs in early visual cortex (V1-V3) are highly elliptical (Silson et al., 2018). Large aspect ratios (>2) are difficult to reconcile with the spatial scale of orientation columns or visual field map properties in early visual cortex. We set out to replicate the experiments and found that the software used in the publication does not accurately estimate RF shape: it produces elliptical fits to circular ground-truth data. We then analyzed an independent data set with a different software package, validated over a specific range of measurement conditions, and found that in early visual cortex the aspect ratios are less than 2. Furthermore, current empirical and theoretical methods do not have enough precision to discriminate ellipses with aspect ratios of 1.5 from circles. Through simulation we identify methods for improving sensitivity that may permit estimation of ellipses with smaller aspect ratios. The results we present are quantitatively consistent with prior assessments using other methodologies. Significance Statement We evaluated whether the shapes of many population receptive fields in early visual cortex are elliptical and differ substantially from circular. We evaluated two tools for estimating elliptical pRF models; only one was valid over the measured range of conditions.
Using the validated tool, we found no evidence that confidently rejects circular fits to the pRF in visual field maps V1, V2 and V3. The new measurements and analyses are consistent with prior theoretical and experimental assessments in the literature.
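The model at issue can be made concrete. Below is a minimal sketch of an oriented 2-D Gaussian pRF and the aspect-ratio statistic used to classify fits as circular or elliptical. This is illustrative Python, not the software packages evaluated in the abstract, and the parameterization is one standard convention among several:

```python
import numpy as np

def elliptical_prf(x, y, x0, y0, sigma_major, sigma_minor, theta):
    """Oriented 2-D Gaussian pRF evaluated at visual-field coordinates (deg).

    sigma_major, sigma_minor: standard deviations along the ellipse axes (deg);
    theta: orientation of the major axis (radians). A circular pRF is the
    special case sigma_major == sigma_minor, where theta has no effect.
    """
    # Rotate coordinates into the ellipse's own frame
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return np.exp(-0.5 * ((xr / sigma_major) ** 2 + (yr / sigma_minor) ** 2))

def aspect_ratio(sigma_a, sigma_b):
    """Aspect ratio >= 1; values > 2 are the 'highly elliptical' regime."""
    return max(sigma_a, sigma_b) / min(sigma_a, sigma_b)

print(aspect_ratio(1.0, 1.0))  # 1.0 (circular ground truth)
print(aspect_ratio(2.0, 1.0))  # 2.0
```

A validation of fitting software amounts to generating responses from the circular special case and checking that the recovered aspect ratio stays near 1.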
The discriminability of motion direction is asymmetric: some motion directions are better discriminated than others. For example, discrimination of directions near the cardinal axes (upward/downward/leftward/rightward) tends to be better than discrimination of oblique directions. Here, we tested discriminability for multiple motion directions at multiple polar angle locations. We found three systematic asymmetries. First, we found a large cardinal advantage in a Cartesian reference frame – better discriminability for motion near cardinal reference directions than oblique directions. Second, we found a moderate cardinal advantage in a polar reference frame – better discriminability for motion near radial (inward/outward) and tangential (clockwise/counterclockwise) reference directions than other directions. Third, we found a small advantage for discriminating motion near radial compared to tangential reference directions. The three advantages combine in an approximately linear manner, and together predict variation in motion discrimination as a function of both motion direction and location around the visual field. For example, performance is best for radial motion on the horizontal and vertical meridians, where all three advantages coincide, and poorest for oblique motion at those same locations, where all three disadvantages coincide. Our results constrain models of motion perception and suggest that reference frames at multiple stages of the visual processing hierarchy limit performance.
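The approximately linear combination of the three advantages can be sketched as a toy additive model. The weights and the cosine-shaped tuning terms below are hypothetical illustrations chosen for simplicity, not fitted values from the study:

```python
import numpy as np

# Hypothetical relative effect sizes (arbitrary units), ordered as in the text:
W_CARTESIAN = 1.0   # large advantage near cardinal directions
W_POLAR     = 0.5   # moderate advantage near radial/tangential directions
W_RADIAL    = 0.25  # small extra advantage for radial over tangential

def predicted_advantage(motion_dir_deg, polar_angle_deg):
    """Additive model of discriminability for a motion direction at a given
    polar angle location (both in deg). Returns a unitless advantage score."""
    # Cartesian frame: peaks when motion is at 0/90/180/270 deg
    cart = np.cos(np.deg2rad(4 * motion_dir_deg))
    # Polar frame: motion direction relative to the local radial axis
    rel = motion_dir_deg - polar_angle_deg
    polar = np.cos(np.deg2rad(4 * rel))   # peaks for radial AND tangential
    radial = np.cos(np.deg2rad(2 * rel))  # peaks for radial only
    return W_CARTESIAN * cart + W_POLAR * polar + W_RADIAL * radial

# Radial motion on the horizontal meridian enjoys all three advantages;
# oblique motion at the same location suffers all three disadvantages.
best = predicted_advantage(0.0, 0.0)    # 1.75
worst = predicted_advantage(45.0, 0.0)  # -1.5
```

The key structural point the sketch captures is that the Cartesian term depends only on motion direction, while the polar and radial terms depend on direction relative to location, so the three effects align or conflict depending on where the stimulus sits.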
Background and Goal. Covert spatial attention modulates behavioral and neural sensitivity. fMRI studies have reported that endogenous (voluntary) attention increases BOLD amplitude, shifts population receptive fields (pRFs), and alters pRF sizes. Here, we used a combined fMRI/psychophysics experiment to investigate these effects concurrently, at polar angle locations that typically show discriminability differences. Methods. On each trial, a precue directed participants either to attend to one of four isoeccentric (6°) locations on the cardinal meridians or to distribute attention across all four locations. 300 ms after the precue, a pRF-mapping stimulus (a contrast pattern masked by a bar aperture) was presented for 1 or 2 s. Shortly after, four small, low-contrast Gabor patches appeared and participants discriminated the orientation of the target Gabor indicated by a response cue. pRF models were solved for voxels in V1-hV4 and V3A/B. Results. Focal attention improved behavioral performance at the cued location and decreased performance at the uncued locations, to the same extent across the four locations. In all visual field maps, BOLD amplitude increased for voxels with pRF centers near the attended location and decreased at unattended locations. The amplitude changes were independent of mapping stimulus location, reflecting a baseline shift rather than a multiplicative gain. The magnitude and spatial spread of amplitude changes were similar across locations and maps. pRF centers shifted slightly towards the cued location, and there was a trend towards smaller peripheral pRF sizes in the focal than in the distributed attention condition. These two effects increased across the visual hierarchy. Conclusions. We observed a pronounced attention-related baseline shift in the BOLD response, accompanied by small but detectable changes in the properties of visual field maps.
Our results suggest that endogenous spatial attention, deployed prior to target appearance, affects visual cortex primarily through a retinotopically specific change in mean neural activity.
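The diagnostic distinguishing a baseline shift from a multiplicative gain can be illustrated with a toy voxel model. The numbers are made up for illustration; the point is the signature each account predicts as a function of stimulus drive:

```python
import numpy as np

def bold_response(stimulus_drive, attended, gain=1.0, baseline=0.0):
    """Toy model of a voxel's BOLD amplitude under attention.

    A multiplicative-gain account scales the stimulus-driven response;
    a baseline-shift account adds a constant regardless of the stimulus.
    An amplitude change that is independent of mapping-stimulus location
    is the signature of the additive term.
    """
    g = gain if attended else 1.0
    b = baseline if attended else 0.0
    return g * stimulus_drive + b

drive = np.array([0.0, 0.5, 1.0])  # mapping stimulus far from / near / on the pRF
pure_gain  = bold_response(drive, True, gain=1.5)      # change grows with drive
pure_shift = bold_response(drive, True, baseline=0.3)  # change is constant
```

Under pure gain, the attentional modulation (`pure_gain - drive`) scales with stimulus drive; under a pure baseline shift (`pure_shift - drive`) it is flat, which is the pattern the abstract reports.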
Does mental imagery of motion recruit populations of direction-selective neurons that also respond to perceptual motion? We show first that imagining a moving pattern while fixating a stationary target yielded a motion aftereffect (MAE), as measured by the response to directionally ambiguous perceptual test stimuli (dynamic dot displays). In a second experiment we replicated the effect and also observed the MAE when subjects' eyes were closed during imagery. In a further set of experiments, we asked whether photographs of objects frozen in motion (animals, people, and vehicles) could also lead to motion adaptation. When a series of unrelated photographs was viewed, all with implied motion in the same direction, an MAE in the opposite direction was induced, again measured with dynamic dot test stimuli. The MAE was found both for right/left implied motion and for in/out implied motion, the latter created by using mirror-reversed pairs of identical implied motion images facing either towards or away from each other. Like the perceptual MAE, the MAE to implied motion declined significantly when a delay (3 s) was introduced between adaptation and test. The MAEs to imagined and implied motion ranged from 20–35% of the size of the MAE from perceived motion. The transfer of adaptation from imagined and implied motion to perception of real motion demonstrates that at least some of the same direction-selective neurons are involved in imagination and actual perception.
The visual neurosciences have made enormous progress in recent decades, in part because of the ability to drive visual areas by their sensory inputs, allowing researchers to define visual areas reliably across individuals and across species. Similar strategies for parcellating higher-order cortex have proven elusive. Here, using a novel experimental task and nonlinear population receptive field modeling, we map and characterize the topographic organization of several regions in human frontoparietal cortex. We discover representations of both polar angle and eccentricity that are organized into clusters, similar to visual cortex, where multiple gradients of polar angle of the contralateral visual field share a confluent fovea. This is striking because neural activity in frontoparietal cortex is believed to reflect higher-order cognitive functions rather than external sensory processing. Perhaps the spatial topography in frontoparietal cortex parallels the retinotopic organization of sensory cortex to enable an efficient interface between perception and higher-order cognitive processes. Critically, these visual maps constitute well-defined anatomical units that future studies of frontoparietal cortex can reliably target.
Abstract Population receptive field (pRF) models fit to fMRI data are used to non-invasively measure retinotopic maps in human visual cortex, and these maps are a fundamental component of visual neuroscience experiments. We examined the reproducibility of retinotopic maps across two datasets: a newly acquired retinotopy dataset from New York University (NYU) (n=44) and a public dataset from the Human Connectome Project (HCP) (n=181). Our goal was to assess the degree to which pRF properties are similar across datasets, despite substantial differences in their experimental protocols. The two datasets differ in stimulus design, participant pool, fMRI protocol, MRI field strength, and preprocessing pipelines. We assessed cross-dataset reproducibility in terms of the similarity of vertex-wise pRF estimates and in terms of large-scale cortical magnification properties. Within V1, V2, V3, and hV4, the group-median NYU and HCP vertex-wise polar angle estimates were nearly identical. Both eccentricity and pRF size estimates were also strongly correlated between the two datasets, but with a slope different from 1; the eccentricity and pRF size estimates were systematically greater in the NYU data. Next, to compare large-scale map properties, we quantified two polar angle asymmetries in V1 cortical magnification previously identified in the HCP data. The prior work reported more cortical surface area representing the horizontal than the vertical visual field meridian, and more representing the lower than the upper vertical meridian. We confirm both of these results in the NYU dataset. Together, our findings show that the retinotopic properties of V1-hV4 can be reliably measured between two datasets, despite numerous differences in their experimental design.
fMRI-derived retinotopic maps are reproducible because they rely on an explicit computational model that is grounded in physiological evidence of how visual receptive fields are organized, allowing one to quantitatively characterize the BOLD signal in terms of stimulus properties (i.e., location and size). The new NYU Retinotopy Dataset will serve as a useful benchmark for testing hypotheses about the organization of visual areas and for comparison to the HCP Retinotopy Dataset.
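The vertex-wise comparison described above, strong correlation with a slope different from 1, can be sketched on synthetic data. The 1.2 scaling factor and noise level below are arbitrary stand-ins, not the measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic vertex-wise eccentricity estimates (deg). The NYU values are
# assumed here to be a constant factor larger than the HCP values, as a
# stand-in for the systematic difference the abstract reports.
hcp = rng.uniform(0.5, 8.0, size=500)
nyu = 1.2 * hcp + rng.normal(0.0, 0.2, size=500)

slope, intercept = np.polyfit(hcp, nyu, 1)
r = np.corrcoef(hcp, nyu)[0, 1]
# r near 1 with slope > 1: the two datasets agree in rank order while one
# yields systematically greater estimates.
```

Separating the correlation (agreement up to scaling) from the regression slope (the scaling itself) is what lets the comparison distinguish reproducible map structure from systematic protocol differences.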
Abstract Visual performance varies around the visual field. It is best near the fovea compared to the periphery, and at iso-eccentric locations it is best on the horizontal meridian, intermediate on the lower, and poorest on the upper meridian. The fovea-to-periphery performance decline is linked to decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) with increasing eccentricity. The origins of polar angle asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can account for only a small fraction of the behavioral differences. Here, we investigate how visual processing beyond the cone photon absorptions contributes to polar angle asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC (mRGC) density, and V1 CMF. We find that both polar angle asymmetries and eccentricity gradients increase from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of phototransduction by the cones and spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effects of human optics, cone isomerizations, phototransduction, and mRGC spatial filtering. The model performs a forced-choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows that asymmetries in a decision-maker’s performance across polar angle are greater when assessing the photocurrents than when assessing isomerizations, and greater still when assessing mRGC signals. Nonetheless, the polar angle asymmetries of the mRGC outputs are still considerably smaller than those observed in human performance.
We conclude that cone isomerizations, phototransduction and the spatial filtering properties of mRGCs contribute to polar angle performance differences, but that a full account of these differences will entail additional contribution from cortical representations.
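The model's final stage, a linear classifier reading out orientation from noisy population responses, can be sketched with a simplified stand-in. This toy uses a nearest-class-mean linear read-out rather than the support vector machine in the actual model, and the population responses and all numbers are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_responses(n_trials, n_units, signal, noise_sd):
    """Toy population responses to two orientations: a fixed response
    template with opposite sign per class, plus independent Gaussian noise.
    Larger `signal` stands in for a more informative stage of processing."""
    template = rng.standard_normal(n_units)
    labels = rng.integers(0, 2, n_trials)          # 0 = CW tilt, 1 = CCW tilt
    signs = np.where(labels == 1, 1.0, -1.0)
    noise = rng.normal(0.0, noise_sd, (n_trials, n_units))
    return signs[:, None] * signal * template + noise, labels

def linear_readout_accuracy(resp, labels):
    """Train/test split with a nearest-class-mean linear discriminant
    (a stand-in for the linear SVM used in the published model)."""
    half = len(labels) // 2
    train_r, test_r = resp[:half], resp[half:]
    train_l, test_l = labels[:half], labels[half:]
    w = train_r[train_l == 1].mean(0) - train_r[train_l == 0].mean(0)
    pred = (test_r @ w > 0).astype(int)
    return (pred == test_l).mean()

resp, labels = simulate_responses(400, 50, signal=0.3, noise_sd=1.0)
acc = linear_readout_accuracy(resp, labels)  # well above chance (0.5)
```

Running such a read-out on responses simulated at different polar angle locations, with location-dependent signal strength, is how the model converts front-end asymmetries into predicted behavioral asymmetries.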
Crowding — the inability to recognize objects in clutter — severely limits object recognition and reading. In crowding, a simple target (e.g. a letter) that is recognizable alone cannot be recognized when surrounded by clutter closer than the psychophysical crowding distance (deg). Prior work shows that crowding distance scales linearly with target eccentricity and varies with the direction of crowding: crowding distance is approximately double for flankers placed radially rather than tangentially. Multiplying the psychophysical crowding distance by the cortical magnification factor yields the cortical crowding distance (mm of cortex). In V1, radial cortical crowding distance is a fixed number of mm, conserved across eccentricity but not across orientation (Pelli, 2008). Since crowding distance in V1 is conserved radially across eccentricity, we hypothesized that there might be some downstream area, more involved in crowding, where the crowding distance is isotropic: conserved across both eccentricity and orientation. METHOD: We measured psychophysical crowding distances in 4 observers at eccentricities of ±2.5°, ±5°, and ±10°, radially and tangentially, for letter targets on the horizontal meridian. Results confirmed the well-known dependence on eccentricity and orientation. Using anatomical and functional MRI, we also measured each observer's retinotopic maps and quantified tangential and radial cortical magnification in areas V1-hV4. RESULTS & CONCLUSION: We find that all four areas conserve cortical crowding distance across eccentricity, but only hV4 conserves crowding distance across both eccentricity and orientation. After averaging measurements across observers (n=4), we find that the hV4 crowding distance is 3.0±0.2 mm (mean±rms error across orientation and eccentricity). Across both dimensions, conservation fails in V1-V3, with rms error exceeding 0.7 mm.
The conservation of crowding distance in hV4 suggests that it mediates the receptive field of crowding, i.e. the integration of features to recognize a simple object. Meeting abstract presented at VSS 2018
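The central quantity here, cortical crowding distance, is a simple product. A sketch with illustrative (not measured) values shows how conservation can hold even as both factors change with eccentricity:

```python
def cortical_crowding_distance(crowding_deg, cmf_mm_per_deg):
    """Cortical crowding distance (mm) is the psychophysical crowding
    distance (deg) times the cortical magnification factor (mm/deg)."""
    return crowding_deg * cmf_mm_per_deg

# Illustrative values only: as eccentricity doubles, crowding distance
# doubles while magnification halves, so the cortical product stays fixed.
examples = [
    # (eccentricity deg, crowding distance deg, magnification mm/deg)
    (2.5, 0.6, 5.0),
    (5.0, 1.2, 2.5),
    (10.0, 2.4, 1.25),
]
products = [cortical_crowding_distance(c, m) for _, c, m in examples]
# each product is ~3.0 mm, i.e. conserved across eccentricity
```

The empirical test is whether this product is constant: across eccentricity only (as in V1-V3) or across both eccentricity and radial/tangential orientation (as reported for hV4).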
Crowding, the unwanted perceptual merging of adjacent stimuli, is well studied and easily measured, but its physiological basis is contentious. We explore its link to physiology by combining fMRI retinotopy of cortical area hV4 and psychophysical measurements of crowding in the same observers. Crowding distance (i.e. critical spacing) was measured radially and tangentially at eight equally spaced sites at 5° eccentricity, and ±2.5° and ±10° on the horizontal midline. fMRI mapped the retinotopy of area hV4 in each hemisphere of the 5 observers. From the map we read out cortical magnification, radially and tangentially, at the 12 sites tested psychophysically. We also estimated the area of hV4 in mm2. Combining fMRI with psychophysics, last year we reported conservation of a roughly 1.8 mm crowding distance on the surface of hV4 (the product of cortical magnification in mm/deg and crowding distance in deg) across eccentricity and orientation, in data averaged across observers (Zhou et al. 2018 VSS). Crowding distances were less well preserved in the V1–V3 maps. Conservation of the hV4 crowding distance across individual observers would mean a fixed product of visual crowding distance and cortical magnification, which implies a negative correlation between log crowding distance and log magnification. Separate linear mixed-effects models of log crowding area and log cortical magnification each account for about 98% of the variance. Log areal hV4 cortical magnification shows a trend toward a negative correlation with log areal crowding across 10 hemispheres (r=−0.53, p=0.11); log hV4 surface area and log areal crowding show a similar negative correlation (r=−0.55, p=0.1). The trend toward larger crowding distances in observers with less surface area in hV4 is consistent with the possibility that crowding distances, though highly variable when measured in the visual field, are approximately conserved on the surface of the hV4 map.
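The logic linking conservation of the product to a negative correlation can be sketched numerically. If the product c·M = k is fixed, with c the crowding distance (deg) and M the cortical magnification (mm/deg), then log c = log k − log M: a slope of −1 in log-log space. The 1.8 mm value is taken from the abstract; the simulated across-observer spread and noise are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

k_mm = 1.8                                  # conserved product, as reported
M = rng.uniform(0.5, 4.0, size=10)          # magnification varies across hemispheres
c = (k_mm / M) * np.exp(rng.normal(0.0, 0.05, size=10))  # small measurement noise

slope, intercept = np.polyfit(np.log(M), np.log(c), 1)
r = np.corrcoef(np.log(M), np.log(c))[0, 1]
# slope near -1 and r near -1: the negative log-log correlation that
# conservation of the product predicts across observers
```

With only 10 hemispheres and realistic measurement noise, the correlation in real data can fall short of significance (as in the reported r=−0.53, p=0.11) even when the underlying product is conserved, which is why the abstract describes a trend rather than a confirmed effect.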