An unusual, but common, aversion to images with clusters of holes is known as trypophobia. Recent research suggests that trypophobic reactions are caused by visual spectral properties that are also present in aversive images of evolutionarily threatening animals (e.g., snakes and spiders). However, despite these shared spectral properties, it remains unknown whether holes and threatening animals elicit the same emotional response. Whereas snakes and spiders are known to elicit a fear reaction, associated with the sympathetic nervous system, anecdotal reports from self-described trypophobes suggest reactions more consistent with disgust, which is associated with activation of the parasympathetic nervous system. Here, we used pupillometry in a novel attempt to uncover the distinct emotional response associated with a trypophobic reaction to holes. Across two experiments, images of holes elicited greater pupillary constriction than images of threatening animals and neutral images. Moreover, this effect held when controlling for level of arousal and accounting for the pupil grating response. This pattern of pupillary response is consistent with involvement of the parasympathetic nervous system and suggests a disgust, not a fear, response to images of holes. Although general aversion may be rooted in shared visual-spectral properties, we propose that the specific emotion is determined by cognitive appraisal of the distinct image content.
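The core analysis step behind such a pupillometry comparison, expressing each trial's pupil trace as percent change from its pre-stimulus baseline so that constriction and dilation can be compared across conditions, can be sketched as follows. This is a minimal illustration on synthetic traces, not the authors' analysis pipeline; all numbers (sampling, effect sizes) are hypothetical.

```python
import numpy as np

def baseline_corrected(trace, baseline_samples=50):
    """Express a pupil-diameter trace as percent change from its pre-stimulus
    baseline. Negative values indicate constriction; positive, dilation."""
    baseline = trace[:baseline_samples].mean()
    return 100.0 * (trace - baseline) / baseline

rng = np.random.default_rng(0)
t = np.arange(300)  # samples: 50 pre-stimulus + 250 post-stimulus (hypothetical)

# Synthetic traces: holes drive constriction, threat animals drive dilation.
holes = 4.0 - 0.4 * (t > 50) * (1 - np.exp(-(t - 50) / 60.0)) + rng.normal(0, 0.01, t.size)
threat = 4.0 + 0.3 * (t > 50) * (1 - np.exp(-(t - 50) / 60.0)) + rng.normal(0, 0.01, t.size)

holes_pc = baseline_corrected(holes)
threat_pc = baseline_corrected(threat)

# Mean post-stimulus change: constriction appears as a negative percent change.
print(round(holes_pc[50:].mean(), 2), round(threat_pc[50:].mean(), 2))
```

Baseline correction of this general kind is what allows a constriction effect to be separated from between-trial differences in resting pupil size; controlling for arousal and the grating response, as in the experiments above, requires additional covariates not shown here.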
Research with human and non-human primates suggests specialized visual processing of evolutionary-based threats (e.g., Öhman & Mineka, 2001). For instance, human adults and infants underestimate the arrival time of looming animals that are evolutionarily threatening (e.g., snakes and spiders) to a greater degree than that of nonthreatening animals (e.g., rabbits and butterflies) (Ayzenberg, Longo, & Lourenco, 2015; Vagnoni, Lourenco, & Longo, 2012). However, it is unclear what accounts for this relationship between threat and visual perception. In the current study, we tested the possibility that human infants misperceive the speed of evolutionarily threatening animals. More specifically, we used a predictive tracking paradigm to determine whether the perceived speed of laterally moving images differed depending on their threat value. Twenty-six infants (8- to 11-month-olds) were presented with horizontally moving images of threatening (i.e., snakes, spiders) and non-threatening (i.e., rabbits, butterflies) animals at two velocities (50 mm/s and 100 mm/s). A portion of the movement trajectory was covered in the center by an occluder, thereby requiring infants to anticipate the images' reappearance. As in previous studies (von Hofsten et al., 2007), infants predictively tracked the images through the occluder (p < .05), and the speed of anticipatory looks to the exit side of the occluder scaled with velocity (p < .05). Finally, and critically, preliminary results reveal that, among slow trials, infants' anticipatory looks were earlier for threatening than nonthreatening stimuli (p = .078, η2 = .205). These data suggest that infants' perception of speed may be modulated by the threat value of the animal, providing evidence for specialized spatiotemporal perceptual processing of evolutionary-based threat. Meeting abstract presented at VSS 2016.
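Arrival-time estimation of the kind probed in these looming studies is classically modeled with the optical variable tau: remaining time-to-contact is approximated by an object's angular size divided by its rate of angular expansion. The sketch below illustrates that computation under hypothetical values (object size, distance, and speed are all invented for the example); it is not the paradigm's actual stimulus code.

```python
import math

def angular_size(size_m, distance_m):
    # Visual angle (radians) subtended by an object of a given physical size.
    return 2.0 * math.atan(size_m / (2.0 * distance_m))

def time_to_contact(theta_now, theta_prev, dt):
    # Tau approximation: remaining time ~ theta / (rate of angular expansion).
    theta_dot = (theta_now - theta_prev) / dt
    return theta_now / theta_dot

# Hypothetical approach: a 0.2 m object, 2.0 m away, closing at 1 m/s.
dt = 0.1
theta_prev = angular_size(0.2, 2.0 + 1.0 * dt)  # one frame earlier, slightly farther
theta_now = angular_size(0.2, 2.0)
tau = time_to_contact(theta_now, theta_prev, dt)
print(round(tau, 1))  # close to the true 2.0 s remaining
```

On this formulation, an observer whose visual system inflates the expansion rate for threatening stimuli would return a smaller tau, i.e., an earlier predicted arrival, which is one way to connect threat-modulated speed perception to the earlier anticipatory looks reported above.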
Leibovich et al. claim that number representations are non-existent early in life and that the associations between number and continuous magnitudes reside in stimulus confounds. We challenge both claims, positing instead that number is represented independently of continuous magnitudes as early as infancy, but is nonetheless more deeply connected to other magnitudes through adulthood than the "sense of magnitude" theory acknowledges.
Although the dorsal visual pathway is traditionally associated with visuospatial processing to enable action, accumulating evidence indicates that dorsal visual areas also contribute to object perception (Freud et al., 2017). In particular, recent fMRI data suggest that the dorsal pathway contributes to shape perception by computing the spatial arrangement of object parts – a descriptor of global form (Ayzenberg & Behrmann, 2021). However, a full understanding of these computations is limited by the spatial and temporal resolution of fMRI. Here, we leverage the superior spatial and temporal resolution of stereotactic electroencephalography (sEEG), a technique that allows for direct measurement of neural activity, in a pediatric patient with 18 electrodes implanted across bilateral parietal cortex (256 channels). To assess whether dorsal regions display sensitivity to the spatial arrangement of object parts, the patient viewed displays in which the spatial arrangement of object parts varied while the features of the parts were held constant, or in which the features of the parts changed while the spatial arrangement was held constant. Consistent with fMRI findings, multiple dorsal channels responded more to the spatial arrangement of parts than to the features of the parts. This finding provides converging evidence, at much finer granularity, that dorsal visual regions compute representations crucial for shape perception, namely a representation of the spatial arrangement of object parts. Future analyses will build on these findings to test whether, for example, there are hemispheric differences in these representations and whether the global shape of object categories (e.g., guitars independent of the specific type of guitar) can be decoded from these regions. Moreover, we will leverage the temporal resolution of sEEG to examine the time course of object processing in dorsal cortex.
Together, these findings improve our understanding of functional specificity in visual regions and suggest an important role for dorsal cortex in facilitating object recognition.
Although there is mounting evidence that input from the dorsal visual pathway is crucial for object processes in the ventral pathway, the specific functional contributions of dorsal cortex to these processes remain poorly understood. Here, we hypothesized that dorsal cortex computes the spatial relations among an object's parts, a process crucial for forming global shape percepts, and transmits this information to the ventral pathway to support object categorization. Using fMRI with human participants (females and males), we discovered regions in the intraparietal sulcus (IPS) that were selectively involved in computing object-centered part relations. These regions exhibited task-dependent functional and effective connectivity with ventral cortex, and were distinct from other dorsal regions, such as those representing allocentric relations, 3D shape, and tools. In a subsequent experiment, we found that the multivariate response of posterior (p)IPS, defined on the basis of part-relations, could be used to decode object category at levels comparable to ventral object regions. Moreover, mediation and multivariate effective connectivity analyses further suggested that IPS may account for representations of part relations in the ventral pathway. Together, our results highlight specific contributions of the dorsal visual pathway to object recognition. We suggest that dorsal cortex is a crucial source of input to the ventral pathway and may support the ability to categorize objects on the basis of global shape. SIGNIFICANCE STATEMENT Humans categorize novel objects rapidly and effortlessly. Such categorization is achieved by representing an object's global shape structure, that is, the relations among object parts. Yet, despite their importance, it is unclear how part relations are represented neurally. Here, we hypothesized that object-centered part relations may be computed by the dorsal visual pathway, which is typically implicated in visuospatial processing.
Using fMRI, we identified regions selective for the part relations in dorsal cortex. We found that these regions can support object categorization, and even mediate representations of part relations in the ventral pathway, the pathway typically thought to support object categorization. Together, these findings shed light on the broader network of brain regions that support object categorization.
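The decoding logic behind reading out object category from a region's multivariate response can be sketched with a leave-one-run-out nearest-centroid classifier, one of the simplest MVPA schemes. The patterns below are synthetic (a stable per-category signature plus run-by-run noise), not the study's data, and this is not necessarily the classifier the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_cats, n_voxels = 8, 4, 50

# Hypothetical ROI patterns: each category has a stable multivoxel signature
# plus independent noise in every run.
signatures = rng.normal(0, 1, (n_cats, n_voxels))
patterns = signatures[None] + rng.normal(0, 1.0, (n_runs, n_cats, n_voxels))

correct = 0
for test_run in range(n_runs):
    train = np.delete(patterns, test_run, axis=0)  # leave one run out
    centroids = train.mean(axis=0)                 # mean pattern per category
    for cat in range(n_cats):
        # Assign the held-out pattern to the nearest training centroid.
        dists = np.linalg.norm(centroids - patterns[test_run, cat], axis=1)
        correct += int(dists.argmin() == cat)

accuracy = correct / (n_runs * n_cats)
print(accuracy)  # well above the 1/4 chance level for these noise settings
```

Comparing such cross-validated accuracies between a part-relations-defined pIPS ROI and ventral object regions is the kind of analysis the decoding result above describes.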
Looming objects afford threat of collision across the animal kingdom. Defensive responses to looming and neural computations for looming detection are strikingly conserved across species. In mammals, information about rapidly approaching threats is conveyed from the retina to the midbrain superior colliculus, where variables that indicate the position and velocity of approach are computed to enable defensive behavior. Although neuroscientific theories posit that midbrain representations contribute to emotion through connectivity with distributed brain systems, it remains unknown whether a computational system for looming detection can predict both defensive behavior and phenomenal experience in humans. Here, we show that a shallow convolutional neural network based on the Drosophila visual system predicts defensive blinking to looming objects in infants and superior colliculus responses to optical expansion in adults. Further, the responses of the convolutional network to a broad array of naturalistic video clips predict self-reported emotion largely on the basis of subjective arousal. Our findings illustrate how motor and experiential components of human emotion relate to species-general systems for survival in unpredictable environments.
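One classic formalization of how approach variables combine in invertebrate looming-sensitive neurons is the eta model: angular velocity multiplied by a negative exponential of angular size, which produces a response that peaks a fixed interval before collision. The sketch below implements that textbook model on a simulated approach; it is not the shallow CNN described in the abstract, and the object size, speed, and alpha are hypothetical.

```python
import math

def eta(theta, theta_dot, alpha=3.0):
    """Classic looming variable: angular velocity gated by an exponential of
    angular size, so the response peaks before collision rather than at it."""
    return theta_dot * math.exp(-alpha * theta)

# Object of half-size l = 0.1 m approaching at v = 1 m/s; collision at t = 0.
# theta(t) = 2 * atan(l / (v * |t|)); sample the pre-collision approach.
l, v, dt = 0.1, 1.0, 0.01
times = [-(2.0 - k * dt) for k in range(190)]  # from t = -2.0 s to t = -0.11 s
etas = []
for t in times:
    d = v * -t                                  # current distance
    theta = 2.0 * math.atan(l / d)
    d_next = v * -(t + dt)                      # distance one step later
    theta_next = 2.0 * math.atan(l / d_next)
    theta_dot = (theta_next - theta) / dt       # discrete angular velocity
    etas.append(eta(theta, theta_dot))

peak_time = times[max(range(len(etas)), key=etas.__getitem__)]
print(round(peak_time, 2))  # eta peaks before collision (t = 0), not at it
```

The early peak is what makes such a variable useful for triggering defensive responses like blinking with time to spare, and it is the kind of computation the superior colliculus results above are interpreted against.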
Categorization of everyday objects requires that humans form representations of shape that are tolerant to variations among exemplars. Yet, how such invariant shape representations develop remains poorly understood. By comparing human infants (6–12 months; N=82) to computational models of vision using comparable procedures, we shed light on the origins and mechanisms underlying object perception. Following habituation to a never-before-seen object, infants classified other novel objects across variations in their component parts. Comparisons to several computational models of vision, including models of high-level and low-level vision, revealed that infants’ performance was best described by a model of shape based on the skeletal structure. Interestingly, infants outperformed a range of artificial neural network models, selected for their massive object experience and biological plausibility, under the same conditions. Altogether, these findings suggest that robust representations of shape can be formed with little language or object experience by relying on the perceptually invariant skeletal structure.
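A skeletal (medial-axis-style) description of shape can be illustrated with a toy computation: take a binary shape, measure each interior pixel's distance to the boundary, and keep the ridge of that distance map. This is only a minimal sketch of the general idea, using a brute-force distance computation on a tiny grid; it is not the probabilistic skeletal model evaluated in the study.

```python
import numpy as np

def boundary_distance(mask):
    """Brute-force distance from each inside pixel to the nearest outside
    pixel (a stand-in for a proper distance transform; fine for tiny grids)."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    d = np.zeros(mask.shape)
    for r, c in inside:
        d[r, c] = np.sqrt(((outside - (r, c)) ** 2).sum(axis=1)).min()
    return d

def skeleton_points(mask):
    """Approximate the medial axis as ridge points of the distance map:
    pixels at least as far from the boundary as all of their 8 neighbours."""
    d = boundary_distance(mask)
    pts = set()
    rows, cols = mask.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if mask[r, c] and d[r, c] >= d[r - 1:r + 2, c - 1:c + 2].max():
                pts.add((r, c))
    return pts

# An elongated rectangle: its medial axis should run along the long midline.
mask = np.zeros((7, 15), dtype=bool)
mask[1:6, 1:14] = True
skel = skeleton_points(mask)
print(sorted(skel)[:3])  # midline points of the rectangle
```

Because the ridge depends on the shape's global structure rather than on local contour features, descriptors of this kind remain stable when component parts are perturbed, which is the property that makes a skeletal model a plausible account of the invariant classification infants showed here.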