Effects of Category Labels on Induction and Visual Processing: Support or Interference?

Anna V. Fisher (fisher.449@osu.edu)
Department of Psychology & Center for Cognitive Science, The Ohio State University
208B Ohio Stadium East, 1961 Tuttle Park Place, Columbus, OH 43210 USA

Vladimir M. Sloutsky (sloutsky.1@osu.edu)
Center for Cognitive Science, The Ohio State University
208C Ohio Stadium East, 1961 Tuttle Park Place, Columbus, OH 43210 USA

Abstract

Linguistic labels have been demonstrated to promote inductive generalizations even early in development; however, the mechanism by which labels contribute to induction remains unknown. According to one theoretical position, even young children realize that labels denote categories. Therefore, labels enable categorization of presented entities and thus contribute to category-based induction. According to the alternative proposal, early in development labels are features of objects that promote induction through their contribution to the overall similarity of compared entities. The goal of the experiments presented below was to distinguish between these positions.

Keywords: Induction, Categorization, Cognitive Development, Language.

Introduction

The ability to make inductive generalizations is crucial for acquiring new knowledge. For instance, upon learning that a particular cat uses serotonin for neural transmission, one can generalize this knowledge to other felines and possibly other mammals. The ability to perform inductive generalizations appears very early in development (Gelman & Markman, 1986; Sloutsky & Fisher, 2004a, 2004b; Welder & Graham, 2001); however, the mechanisms underlying early induction remain unknown.

Two theoretical positions have emerged in the course of the study of early induction: a similarity-based and a knowledge-based approach. Proponents of the knowledge-based position argue that even early in development induction is driven by "theory-like" knowledge, implemented as a set of conceptual assumptions. These assumptions include, among others, the category assumption and the linguistic assumption. The category assumption is the belief that individual entities belong to more general categories and that members of the same category share many important properties. The linguistic assumption is the belief that linguistic labels presented as count nouns denote categories (for a review of these assumptions, see Gelman, 2003; Keil et al., 1998; Murphy, 2002). Therefore, according to the knowledge-based approach, when presented with entities that share the same name (i.e., both are called Cats), people, including young children, first infer (by the linguistic assumption) that the entities belong to the same category. Then (by the category assumption) they infer that things that belong to the same category share important properties, thus performing category-based induction.

Proponents of the alternative similarity-based approach argue that early in development generalizations are performed on the basis of multiple commonalities among presented entities (French et al., 2004; Mareschal, Quinn, & French, 2002; McClelland & Rogers, 2003; Sloutsky, 2003; Sloutsky & Fisher, 2004a, 2004b). Members of a category often happen to be perceptually similar to each other and different from non-members; therefore, young children are more likely to generalize properties to members of a category than to non-members. Under this view, conceptual knowledge (i.e., knowledge that members of the same category share many important properties) is a product rather than a prerequisite of learning.

The similarity-based approach to early induction is exemplified by SINC (short for Similarity-Induction-Categorization), a model proposed recently by Sloutsky and colleagues (Sloutsky et al., 2001; Sloutsky & Fisher, 2004a). Unlike the knowledge-based approach, which assumes that linguistic labels denote categories, SINC assumes that for young children labels are features of objects contributing to the overall similarity of compared entities. Support for this assumption comes from the finding that when two entities share the same name, young children, but not adults, perceive these entities as looking more similar (Sloutsky & Fisher, 2004a). Furthermore, attentional weights of linguistic attributes are assumed to be greater than weights of other attributes early in development. In particular, it has been demonstrated that auditory input often overshadows (or attenuates processing of) the visual input for infants and young children, whereas this effect disappears by adulthood (Sloutsky & Napolitano, 2003; Napolitano & Sloutsky, 2004; Robinson & Sloutsky, 2004).

In sum, according to the knowledge-based approach, even early in development people realize that labels denote categories.
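The similarity-based mechanism described above can be illustrated with a short sketch: a label is treated as just another feature of an object, but one that carries a larger attentional weight early in development. The feature names, weight values, and matching rule below are invented for exposition; they are not the published parameterization of SINC.

```python
# Illustrative sketch of a similarity-based account of induction (in the
# spirit of SINC): the label is one feature among many, but a child-like
# observer gives it a larger attentional weight. All feature names and
# weight values here are made up for exposition, not model parameters.

def weighted_similarity(obj_a, obj_b, weights):
    """Weighted proportion of matching features between two objects."""
    total = sum(weights.values())
    matched = sum(w for f, w in weights.items() if obj_a.get(f) == obj_b.get(f))
    return matched / total

# Hypothetical objects: visual features plus a linguistic label.
cat_1 = {"shape": "round", "tail": "long", "ears": "pointy", "label": "cat"}
cat_2 = {"shape": "round", "tail": "short", "ears": "pointy", "label": "cat"}

# A child-like observer weights the shared label heavily; an adult-like
# observer treats it like any other feature (purely illustrative values).
child_weights = {"shape": 1.0, "tail": 1.0, "ears": 1.0, "label": 3.0}
adult_weights = {"shape": 1.0, "tail": 1.0, "ears": 1.0, "label": 1.0}

print(weighted_similarity(cat_1, cat_2, child_weights))  # 5/6, about 0.83
print(weighted_similarity(cat_1, cat_2, adult_weights))  # 3/4 = 0.75
```

On this sketch, the same pair of objects looks more similar to the child-like observer than to the adult-like one solely because of the shared label, which is the qualitative pattern the model is meant to capture.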
Young children can generalize from the known to the novel, but the underlying mechanism is still debated. Some argue that from an early age generalization is category-based and undergoes little development, whereas others hold that early generalization is similarity-based and that the use of categories emerges over time. The current research brings new evidence to this debate. In Experiment 1 (N = 118), we presented 3- to 5-year-olds and adults with a category learning task followed by an exemplar generation task. In Experiment 2 (N = 126), we presented the same tasks but provided participants with additional conceptual information about the category members. Our results indicate that early reasoning undergoes dramatic development: whereas young children rely mostly on salient features, adults rely on category information. These results challenge category-based accounts of early generalization while supporting similarity-based accounts.
One of the lawlike regularities of psychological science is that of developmental progression: an increase in sensorimotor, cognitive, and social functioning from childhood to adulthood. Here, we report a rare violation of this law, a developmental reversal in attention. In Experiment 1, 4- to 5-year-olds (n = 34) and adults (n = 35) performed a change-detection task that included externally cued and uncued shapes. Whereas the adults outperformed the children on the cued shapes, the children outperformed the adults on the uncued shapes. In Experiment 2, the same participants completed a visual search task, and their memory for search-relevant and search-irrelevant information was tested. The young children outperformed the adults with respect to search-irrelevant features. This demonstration of a paradoxical property of early attention deepens current understanding of the development of attention. It also has implications for understanding early learning and cognitive development more broadly.
Linguistic labels play an important role in young children's conceptual organization: when 2 entities share a label, people expect these entities to share many other properties. Two classes of explanations of the importance of labels seem plausible: a language-specific and a general auditory explanation. The general auditory explanation argues that the importance of labels stems from a privileged processing status of auditory input (as compared with visual input) for young children. This hypothesis was tested and supported in 4 experiments. When auditory and visual stimuli were presented separately, 4-year-olds were likely to process both kinds of stimuli, whereas when auditory and visual stimuli were presented simultaneously, 4-year-olds were more likely to process auditory stimuli than visual stimuli.
Evidence for auditory dominance in a passive oddball task

Christopher W. Robinson (robinson.777@osu.edu)
Center for Cognitive Science, The Ohio State University
208F Ohio Stadium East, 1961 Tuttle Park Place, Columbus, OH 43210, USA

Nayef Ahmar (ahmar.1@osu.edu)
Center for Cognitive Science, The Ohio State University
208F Ohio Stadium East, 1961 Tuttle Park Place, Columbus, OH 43210, USA

Vladimir M. Sloutsky (sloutsky.1@osu.edu)
Center for Cognitive Science, The Ohio State University
208C Ohio Stadium East, 1961 Tuttle Park Place, Columbus, OH 43210, USA

Abstract

Simultaneous presentation of auditory and visual input can often lead to visual dominance. Most studies supporting visual dominance require participants to make an explicit response; therefore, it is unclear whether visual input disrupts encoding/discrimination of auditory input or results in a response bias. The current study begins to address this issue by examining how multimodal presentation affects discrimination of auditory and visual stimuli, using a passive oddball task that does not require an explicit response. Participants in the current study ably discriminated auditory and visual stimuli in all unimodal and multimodal conditions. Furthermore, there was no evidence that visual stimuli attenuated auditory processing. Rather, multimodal presentation sped up auditory processing (shorter latency of P300) and slowed down visual processing (longer latency of P300). These findings are consistent with research examining modality dominance in young children and suggest that visual dominance effects may be restricted to tasks that require an explicit response.

Keywords: Attention, Cross-modal Processing, Electroencephalography (EEG), Neurophysiology, Psychology.

Introduction

Most of our experiences are multimodal in nature. The objects and events that we encounter in the environment can be seen, touched, heard, and smelled. The fact that the brain can integrate this input into a coherent experience is amazing, given that each modality simultaneously receives different types of input and this information is processed, at least in the early stages of processing, by dedicated sensory systems. While multimodal presentation can sometimes facilitate learning, there are many occasions when presenting information to one sensory modality interferes with learning in a second modality. These modality dominance effects can occur on detection tasks and on more complex discrimination tasks, with auditory input often attenuating visual processing in young children (Sloutsky & Napolitano, 2003; Robinson & Sloutsky, 2004) and visual input often attenuating auditory processing in adults (Colavita, 1974; Colavita & Weisberg, 1979).

Support for visual dominance in adults comes from a long history of research examining how multimodal stimuli affect the detection of auditory and visual input (Colavita, 1974; Colavita & Weisberg, 1979; Klein, 1977; Posner, Nissen, & Klein, 1976; see also Sinnett, Spence, & Soto-Faraco, 2007; Spence, Shore, & Klein, 2001, for reviews). For example, in a classic study, Colavita (1974) presented adults with a tone, a light, or the tone and light paired together. Participants had to press one button when they heard the tone and a different button when they saw the light. While participants were accurate when the tone and light were presented unimodally, they often responded only to the visual stimulus when the stimuli were paired together, with many adults failing to detect the auditory stimulus. This finding has been replicated using a variety of stimuli and procedures, with little evidence demonstrating that auditory input attenuates visual processing in adults (see Sinnett, Spence, & Soto-Faraco, 2007, for a review).

There appears to be an attentional component underlying visual dominance (Posner, Nissen, & Klein, 1976). In particular, the underlying idea is that the auditory and visual modalities share the same pool of attentional resources. While auditory stimuli
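The P300-latency measure used in the study above can be made concrete with a small sketch: given an averaged ERP waveform sampled at a known rate, peak latency is simply the time of the maximum amplitude inside a search window. The sampling rate, window bounds, and toy waveform below are invented for illustration and are not taken from the study.

```python
# Minimal sketch of extracting P300 peak latency from an averaged ERP
# waveform: find the time of the maximum amplitude within a search
# window. Sampling rate, window, and the toy waveform are all invented
# for illustration, not taken from the study.

def p300_latency_ms(erp, srate_hz, window_ms=(250, 500)):
    """Return latency (ms) of the peak amplitude inside the window."""
    lo = int(window_ms[0] * srate_hz / 1000)
    hi = int(window_ms[1] * srate_hz / 1000)
    window = erp[lo:hi]
    peak_index = lo + max(range(len(window)), key=window.__getitem__)
    return 1000 * peak_index / srate_hz

# Toy waveform: 1 s at 250 Hz with a peak at sample 80 (i.e., 320 ms).
srate = 250
erp = [-(i - 80) ** 2 for i in range(srate)]  # parabola peaking at i == 80

print(p300_latency_ms(erp, srate))  # 320.0
```

In this framing, the study's finding amounts to the extracted latency being shorter for auditory stimuli, and longer for visual stimuli, in multimodal conditions than in the corresponding unimodal conditions.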
Two fundamental difficulties when learning novel categories are deciding (a) what information is relevant and (b) when to use that information. Although previous theories have specified how observers learn to attend to relevant dimensions over time, those theories have largely remained silent about how attention should be allocated on a within-trial basis, which dimensions of information should be sampled, and how the temporal order of information sampling influences learning. Here, we use the adaptive attention representation model (AARM) to demonstrate that a common set of mechanisms can be used to specify (a) how the distribution of attention is updated between trials over the course of learning and (b) how attention dynamically shifts among dimensions within a trial. We validate our proposed set of mechanisms by comparing AARM's predictions to observed behavior in four case studies, which collectively encompass different theoretical aspects of selective attention. We use both eye-tracking and choice response data to provide a stringent test of how attention and decision processes dynamically interact during category learning. Specifically, how does attention to selected stimulus dimensions give rise to decision dynamics, and in turn, how do decision dynamics influence which dimensions are attended to via gaze fixations?
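The between-trial half of this picture can be illustrated with a deliberately simplified sketch: dimensions whose values reliably point toward the correct category accumulate attention across trials. The update rule, the "predictiveness" coding, and all parameter values below are stand-ins chosen for exposition; they are not AARM's published equations.

```python
# Illustrative sketch of trial-by-trial attention learning in the spirit
# of adaptive-attention models: dimensions that reliably predict the
# category gain attention weight across trials. The update rule and all
# parameters are simplified stand-ins, not the model's actual equations.

def update_attention(weights, stimulus, correct, learning_rate=0.2):
    """Nudge attention toward dimensions that agreed with the outcome.

    stimulus: dict mapping dimension -> 1 if its value pointed toward the
    chosen category on this trial, else 0 (a crude predictiveness signal).
    correct: whether the categorization response was correct.
    """
    error = 1.0 if correct else -1.0
    for dim, predictive in stimulus.items():
        weights[dim] += learning_rate * error * predictive
        weights[dim] = max(weights[dim], 0.0)  # attention stays non-negative
    return weights

# One hypothetical dimension is consistently predictive; the other is not.
weights = {"color": 1.0, "size": 1.0}
trials = [({"color": 1, "size": 0}, True),
          ({"color": 1, "size": 1}, True),
          ({"color": 1, "size": 0}, True)]
for stim, correct in trials:
    update_attention(weights, stim, correct)

print(weights)  # "color" ends with more attention weight than "size"
```

The within-trial half of the model, in which attention shifts dynamically among dimensions as a decision unfolds, is not captured by this sketch.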