Neural and behavioral evidence for cortical reorganization in the adult somatosensory system after loss of sensory input (e.g., amputation) has been well documented. In contrast, evidence for reorganization in the adult visual system is far less clear: neural evidence is the subject of controversy, behavioral evidence is sparse, and studies combining neural and behavioral evidence have not previously been reported. Here, we report converging behavioral and neuroimaging evidence from a stroke patient (B.L.) in support of cortical reorganization in the adult human visual system. B.L.'s stroke spared the primary visual cortex (V1), but destroyed fibers that normally provide input to V1 from the upper left visual field (LVF). As a consequence, B.L. is blind in the upper LVF, and exhibits distorted perception in the lower LVF: stimuli appear vertically elongated, toward and into the blind upper LVF. For example, a square presented in the lower LVF is perceived as a rectangle extending upward. We hypothesized that the perceptual distortion was a consequence of cortical reorganization in V1. Extensive behavioral testing supported our hypothesis, and functional magnetic resonance imaging (fMRI) confirmed V1 reorganization. Together, the behavioral and fMRI data show that loss of input to V1 after a stroke leads to cortical reorganization in the adult human visual system, and provide the first evidence that reorganization of the adult visual system affects visual perception. These findings contribute to our understanding of the adult human brain's capacity to change and have implications for topics ranging from learning to recovery from brain damage.
What is a face? Intuition, along with abundant behavioral and neural evidence, indicates that internal features (e.g., eyes, nose, mouth) are critical for face recognition, yet some behavioral work suggests that external features (e.g., hair, jawline, shoulders) may likewise be processed as part of the face. Here we addressed this question by asking how the brain represents isolated internal and external face features. We tested three predictions in particular. First, if a "face" includes both internal and external face features, then these features should activate similar neural systems. Consistent with this prediction, we found highly overlapping activation for internal and external face features within face-selective cortex. Second, if a "face" includes both internal and external face features, then face-selective regions should respond strongly and selectively to both internal and external face features. Consistent with this prediction, we found strong and selective responses to both internal and external features in four face-selective regions, including the occipital face area (OFA), fusiform face area (FFA), posterior superior temporal sulcus (pSTS), and anterior temporal lobe (ATL). Third, if a face includes both internal and external features, then face-selective regions should perform the same computations across both features. Consistent with this prediction, we found that OFA and pSTS extract the "parts" of both internal and external face features, while FFA and ATL represent the coherent arrangement of both internal and external face parts. Taken together, these results provide strong neural evidence that external features, like internal features, constitute a face. Meeting abstract presented at VSS 2018
Decades of research in the cognitive and neural sciences have shown that shape perception is crucial for object recognition. However, it remains unknown how object shape is represented to accomplish recognition. Here we used behavioral and neural techniques to test whether human object representations are well described by a model of shape based on an object’s skeleton when compared with other computational descriptors of visual similarity. Skeletal representations may be an ideal model for object recognition because they (1) provide a compact description of a shape’s structure by describing the relations between contours and component parts, and (2) provide a metric by which to compare the visual similarity between shapes. In a first experiment, we tested whether a model of skeletal similarity was predictive of human behavioral similarity judgments for novel objects. We found that the skeletal model explained the greatest amount of unique variance in participants’ judgments (33.13%) when compared with other models of visual similarity (Gabor-jet, GIST, HMAX, AlexNet), suggesting that skeletal descriptions uniquely contribute to object recognition. In a second experiment, we used fMRI and representational similarity analyses to examine whether object-selective regions (LO, pFs), or even early-visual regions, code for an object’s skeleton. We found that skeletal similarity explained the greatest amount of unique variance in LO (19.32%) and V3 (18.74%) in the right hemisphere (rLO; rV3), but not in other regions. That a skeletal description was most predictive of rLO is consistent with its role in specifying object shape via the relations between component parts. Moreover, our findings may shed new light on the functional role of V3 in using skeletons to integrate contours into complete shapes. Together, our results highlight the importance of skeletal descriptors for human object recognition and the computation of shape in the visual system.
Rodent lesion studies have revealed the existence of two causally dissociable spatial memory systems, localized to the hippocampus and striatum, that are preferentially sensitive to environmental boundaries and landmark objects, respectively. Here we test whether these two memory systems are causally dissociable in humans by examining boundary- and landmark-based memory in typical and atypical development. Adults with Williams syndrome (WS), a developmental disorder with known hippocampal abnormalities, and typical children and adults performed a navigation task that involved learning locations relative to a boundary or a landmark object. We found that boundary-based memory was severely impaired in WS compared to typically developing mental-age-matched (MA) children and chronological-age-matched (CA) adults, whereas landmark-based memory was similar in all groups. Furthermore, landmark-based memory matured earlier in typical development than boundary-based memory, consistent with the idea that the WS cognitive phenotype arises from developmental arrest of late-maturing cognitive systems. Together, these findings provide causal and developmental evidence for dissociable spatial memory systems in humans.
Human replicas highly resembling people tend to elicit eerie sensations—a phenomenon known as the uncanny valley. To test whether this effect is attributable to people’s ascription of mind to androids (i.e., the mind perception hypothesis) or subtraction of mind from them (i.e., the dehumanization hypothesis), in Study 1, we examined the effect of face exposure time on the perceived animacy of human, android, and mechanical-looking robot faces. In Study 2, in addition to exposure time, we also manipulated the spatial frequency of faces, by preserving either their fine (high spatial frequency) or coarse (low spatial frequency) information, to examine its effect on faces’ perceived animacy and uncanniness. We found that perceived animacy decreased as a function of exposure time for android faces, but not for human or mechanical-looking robot faces (Study 1). In addition, the manipulation of spatial frequency eliminated the decrease in android faces’ perceived animacy and reduced their perceived uncanniness (Study 2). These findings link perceived uncanniness in androids to the temporal dynamics of face animacy perception. We discuss these findings in relation to the dehumanization hypothesis and alternative hypotheses of the uncanny valley phenomenon.
We report on the cloning and expression of hKv4.3, a fast inactivating, transient, A-type potassium channel found in both heart and brain that is 91% homologous to the rat Kv4.3 channel. Two isoforms of hKv4.3 were cloned. One is full length (hKv4.3 long), and the other has a 19 amino acid deletion (hKv4.3 short). RT-PCR shows that the brain contains both forms of the channel RNA, whereas the heart predominantly has the longer version. Both versions of the channel were expressed in Xenopus oocytes, and both exhibit a significant window (noninactivating) current near potentials of -30 to -40 mV. The inactivation curve for hKv4.3 short is shifted 10 mV positive relative to hKv4.3 long. This causes the peak window current for the short version to occur near -30 mV and the peak for the longer version to be at -40 mV. There was little difference in the recovery from inactivation or in the kinetics of inactivation between the two isoforms of the channel.
There is a rift in the human scene processing literature. While several neuroimaging studies have argued that the parahippocampal place area (PPA) is involved in landmark recognition (i.e., recognizing a particular place or stable object in the environment), evidence from many other studies suggests otherwise. Based on results from the latter studies, we propose that PPA is not well suited to recognize landmarks in the environment, but rather is involved in recognizing the category membership of scenes (e.g., recognizing a scene as a coffee shop). We used fMRI multi-voxel pattern analysis to test this hypothesis. We scanned participants after they learned the layout of a virtual town that consisted of a park square surrounded by eight buildings. There were two buildings on each corner of the town. Each building belonged to a particular category: two coffee shops, two hardware stores, two gyms, and two dentist offices. Importantly, the locations of any two buildings belonging to the same category were dissociable from the category information (e.g., one gym was in the northeast corner of the town, while the other was in the southwest corner). If PPA represents landmark information, then it must be able to discriminate between two places of the same category, but in different locations of town. By contrast, if PPA represents general category information, then it will not represent the location of a particular place, but only the category of the place. As predicted, we found that PPA represents two buildings from the same category, but in different locations, as more similar than two buildings from different categories, but in the same location, while another scene-selective region of cortex, the retrosplenial complex (RSC), showed the opposite pattern of results. Such a double dissociation suggests distinct neural systems selectively involved in navigation and categorization of scenes. Meeting abstract presented at VSS 2018
In adult animals, regions of primary visual cortex deprived of normal input show “reorganization” (Kaas et al., 1990): they begin responding to stimuli that normally activate adjacent cortex only. However, it is unknown how quickly this cortical reorganization can happen, and some studies have failed to find it at all, spawning considerable controversy (e.g., Smirnakis et al., 2005). We investigated the existence and speed of reorganization in the adult human visual system, using a novel perceptual test. Specifically, we patched one eye, thus depriving input to the cortical region corresponding to the natural blind spot (BS) in the unpatched eye. To ask whether and how quickly deprivation produces reorganization, we tested for perceptual distortions that have recently been shown to reflect cortical reorganization in retinotopic cortex following stroke (Dilks et al., 2007). Within only one minute of eye patching, participants perceived rectangles placed adjacent to the BS to be elongated toward the BS, exactly as expected if deprived cortex starts to respond to stimuli adjacent to the BS. These findings further document the existence of cortical reorganization in the adult human visual system, show that this reorganization can occur very rapidly, and implicate unmasking of horizontal connections in early visual cortex as the underlying mechanism.