This study investigated whether sequential perceptual-motor learning develops contextual dependencies on static features of the learning environment. In three experiments we assessed the effect of manipulating task-irrelevant static context features in a serial reaction-time task. Experiment 1 demonstrated impaired performance after simultaneously changing display color, placeholder shape, and placeholder location. Experiment 2 showed that this effect was mainly caused by changing placeholder shape. Finally, Experiment 3 indicated that changing context affected both the application of sequence knowledge and the selection of individual responses. It is proposed either that incidental stimulus features are integrated with a global sequence representation, or that the changed context causes participants to strategically inhibit sequence skills.
Optimal decision-making is based on integrating information from several dimensions of decisional space (e.g., reward expectation, cost estimation, effort exertion). Despite considerable empirical and theoretical efforts, the computational and neural bases of such multidimensional integration have remained largely elusive. Here we propose that the current theoretical stalemate may be broken by considering the computational properties of a cortical-subcortical circuit involving the dorsal anterior cingulate cortex (dACC) and the brainstem neuromodulatory nuclei: the ventral tegmental area (VTA) and locus coeruleus (LC). From this perspective, the dACC optimizes decisions about stimuli and actions, and, using the same computational machinery, it also modulates cortical functions (meta-learning) via neuromodulatory control (VTA and LC). We implemented this theory in a novel neuro-computational model, the Reinforcement Meta Learner (RML). We outline how the RML captures critical empirical findings from an unprecedented range of theoretical domains, and parsimoniously integrates various previous proposals on dACC functioning.
Embodied cognition postulates that perceptual and motor processes serve higher-order cognitive faculties like language. A major challenge for embodied cognition concerns the grounding of abstract concepts. Here we zoom in on abstract spatial concepts and ask to what extent the sensorimotor system is involved in processing these. Most of the empirical support in favor of an embodied perspective on (abstract) spatial information has derived from so-called compatibility effects, in which a task-irrelevant feature either facilitates (on compatible trials) or hinders (on incompatible trials) responding to the task-relevant feature. This type of effect has been interpreted in terms of (task-irrelevant) feature-induced response activation. The problem with such an approach is that incompatible features generate an array of task-relevant and -irrelevant activations [e.g., in primary motor cortex (M1)], and lateral hemispheric interactions render it difficult to assign credit to the task-irrelevant feature per se in driving these activations. Here, we aim to obtain a cleaner indication of response activation on the basis of abstract spatial information. We employed transcranial magnetic stimulation (TMS) to probe response activation of effectors in response to semantic, task-irrelevant stimuli (i.e., the words left and right) that did not require an overt response. Results revealed larger motor evoked potentials (MEPs) for the right (left) index finger when the word right (left) was presented. Our findings provide support for the grounding of abstract spatial concepts in the sensorimotor system.
Keywords: sequence learning, implicit learning, sensory redundancy, serial reaction time task

In daily life we encounter multiple sources of sensory information at any given moment. It is unknown whether such sensory redundancy in some way affects implicit learning of a sequence of events. In the current paper we explored this issue in a serial reaction time task. Our results indicate that redundant sensory information does not enhance sequence learning when all sensory information is presented at the same location (responding to the position and/or color of the stimuli; Experiment 1), even when the distinct sensory sources provide more or less similar baseline response latencies (responding to the shape and/or color of the stimuli; Experiment 2). These findings support the claim that sequence learning does not (necessarily) benefit from sensory redundancy. Moreover, transfer was observed between various sets of stimuli, indicating that learning was predominantly response-based.