Viewing our interlocutor facilitates speech perception, unlike, for instance, when we telephone. Several neural routes and mechanisms could account for this phenomenon. Using magnetoencephalography, we show that when seeing the interlocutor, latencies of auditory responses (M100) shorten as speech becomes more predictable from visual input, whether or not the auditory signal is congruent. Incongruence of auditory and visual input affected auditory responses approximately 20 ms after latency shortening was detected, indicating that an initial content-dependent auditory facilitation by vision is followed by a feedback signal reflecting the error between expected and received auditory input (prediction error). We then used functional magnetic resonance imaging and confirmed that distinct routes of visual information to auditory processing underlie these two functional mechanisms. Functional connectivity between visual motion and auditory areas depended on the degree of visual predictability, whereas connectivity between the superior temporal sulcus and both auditory and visual motion areas was driven by audiovisual (AV) incongruence. These results establish two distinct mechanisms by which the brain uses potentially predictive visual information to improve auditory perception. A fast, direct corticocortical pathway conveys visual motion parameters to auditory cortex, and a slower, indirect feedback pathway signals the error between visual prediction and auditory input.
The ability to precisely anticipate the timing of upcoming events at the time-scale of seconds is essential for predicting objects' trajectories or selecting relevant sensory information. What neurophysiological mechanism underlies the temporal precision in anticipating the occurrence of events? In a recent article,1 we demonstrated that the sensori-motor system predictively controls neural oscillations in time to optimize sensory selection. However, whether and how the same oscillatory processes can be used to keep track of elapsing time and evaluate short durations remains unclear. Here, we aim to test the hypothesis that the brain tracks durations by converting (external, objective) elapsing time into an (internal, subjective) oscillatory phase-angle. To test this, we measured magnetoencephalographic oscillatory activity while participants performed a delayed-target detection task. In the delayed condition, we observe that trials perceived as longer are associated with faster delta-band oscillations. This suggests that the subjective indexing of time is reflected in the range of phase-angles covered by delta oscillations during the pre-stimulus period. This result provides new insights into how we predict and evaluate temporal structure and supports models in which the active entrainment of sensori-motor oscillatory dynamics is exploited to track elapsing time.
Abstract Background Recent studies have suggested that prodromal stages of Alzheimer’s Disease (AD) are accompanied by central auditory system dysfunction, which may be used as an early indicator of disease onset and progression. In AD patients aged 60 years and older and in APOE4 carriers, atypical patterns of oscillatory entrainment to repetitive sound transients have been reported and suggested as potential neuromarkers of AD. Whether such alterations of auditory function relate to a genetic risk factor for AD (APOE4) at an early age (<30) is unknown. Method We used EEG recordings to measure auditory responses to repetitive sounds (1-second click trains presented at various frequencies, 10‐250 Hz) in 32 young neurotypical participants. To test whether auditory responsivity is affected by AD risk factor, we compared auditory brain responses from 17 APOE3 carriers (age mean = 21.6, sd = 1.8) and 17 APOE4 carriers (age mean = 23.6, sd = 4.9). Result Comparing the magnitude of auditory event-related potentials (ERP), we observed that APOE4 carriers exhibit slightly attenuated P2 and P3 ERP responses as well as delayed N1 and P2 ERP responses compared to APOE3 carriers. Focusing on Auditory Steady State Response (ASSR) power across frequencies (10‐90 Hz), we observed that APOE3 carriers exhibit reliably larger neural entrainment than APOE4 carriers (Cohen’s d = 0.8, ‘large’ effect size). This difference was sustained across the peristimulus time course and varied across stimulus frequencies. Conclusion Overall, these results suggest that central auditory differences can be detected very early in at‐risk populations. Studying these signals could help identify early AD pathology and provide an entry point for therapeutic interventions against neurodegeneration.
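The group difference above is quantified with Cohen's d (d = 0.8, a "large" effect). As a reference for how that statistic is computed, a minimal sketch using the pooled standard deviation of two independent groups; the numbers below are made up for illustration and are not data from the study:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Unbiased (n - 1) sample variances
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical ASSR power values (arbitrary units), one per participant
apoe3 = [2.0, 4.0, 6.0]
apoe4 = [1.0, 3.0, 5.0]
print(cohens_d(apoe3, apoe4))  # → 0.5
```

By the conventional thresholds (0.2 small, 0.5 medium, 0.8 large), the reported d = 0.8 sits at the "large" boundary.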
Objective Lipreading plays a major role in communication for the hearing impaired, yet no standardised French tool existed to assess it. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST). Design Video recordings were created by dubbing the existing audio files. Sample Thirty-five young, normal-hearing participants were tested in auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet and in noise, with open- and closed-set response formats. Results Lipreading ability (Vo) varied from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated. Conclusions The French video material achieved AV benefits similar to those described in the literature for AV MST in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in a randomised order, in open- or closed-set format.
The dataset contains EEG recordings pre-sleep (with the presleep tag), during sleep (with the sleep tag) and post-sleep (with the postsleep tag), collected during an evening-to-morning experimental session. During the EEG recordings, participants were exposed to sounds of varying emotional intensity. Half of the stimuli were pseudo-words uttered with an angry or neutral voice. Details about these stimuli can be found here: https://doi.org/10.1093/texcom/tgac003. The other half of the stimuli were vocalizations that were loudly shouted or screamed. For more information, see https://doi.org/10.1016/j.cub.2015.06.043
Abstract Being able to produce sounds that capture attention and elicit rapid reactions is the prime goal of communication. One strategy, exploited by alarm signals, consists in emitting fast but perceptible amplitude modulations in the roughness range (30–150 Hz). Here, we investigate the perceptual and neural mechanisms underlying aversion to such temporally salient sounds. By measuring subjective aversion to repetitive acoustic transients, we identify a nonlinear pattern of aversion restricted to the roughness range. Using human intracranial recordings, we show that rough sounds do not merely affect local auditory processes but instead synchronise large-scale, supramodal, salience-related networks in a steady-state, sustained manner. Rough sounds synchronise activity throughout superior temporal regions, subcortical and cortical limbic areas, and the frontal cortex, a network classically involved in aversion processing. This pattern correlates with subjective aversion in all these regions, consistent with the hypothesis that roughness enhances auditory aversion through spreading of neural synchronisation.
Age-related hearing loss, presbycusis, is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and dementia. It is generally considered a natural consequence of inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2200 cochlear implant (CI) users and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a detrimental effect at 24 months post implantation. Furthermore, older subjects (>67 years old) were, for each year increase in age, significantly more likely than younger patients to show degraded performance after 2 years of CI use. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation that may account for these disparities: Awakening, a reversal of deafness-specific changes; Counteracting, a stabilization of additional cognitive impairments; or Decline, independent deleterious processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions needs to be considered to potentiate the (re)activation of auditory brain networks.