The present study examined the effect of glycerol ingestion on aspects of auditory performance in listeners with Ménière’s disease. It was hypothesized that Ménière’s disease may be associated with abnormal firing in the auditory nerve, and that this should result in a reduced ability to code auditory temporal fine structure. Psychoacoustical measures of interaural time discrimination and quasi-frequency-modulation rate discrimination were used as measures of temporal coding, and performance on these tasks was examined both before and after glycerol ingestion. Pre- and postglycerol measures of speech recognition and audiometric thresholds were also obtained. In agreement with previous results, glycerol-related changes in audiometric thresholds were modest or absent, whereas improvements in speech recognition were relatively reliable. Improvements in interaural time discrimination and quasi-frequency-modulation rate discrimination were also observed. The results provide limited support for the hypothesis that Ménière’s disease may be associated with a reduced ability to code the temporal fine structure of sound.
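As an illustration of the kind of interaural-time-difference stimulus that such a discrimination task relies on, the sketch below (not the study's code; the frequency, delay, duration, and ramp values are arbitrary examples) generates a dichotic tone pair in which the fine structure at one ear lags the other by a specified ITD.

```python
# Illustrative sketch: a pure tone with an interaural time difference (ITD)
# carried by the ongoing fine structure. All parameter values are examples.
import numpy as np

def itd_tone(freq_hz=500.0, itd_us=100.0, dur_s=0.4, fs=44100, ramp_s=0.02):
    """Return (left, right) channels of a tone whose right-ear fine structure lags by itd_us."""
    t = np.arange(int(dur_s * fs)) / fs
    itd_s = itd_us * 1e-6
    left = np.sin(2 * np.pi * freq_hz * t)
    right = np.sin(2 * np.pi * freq_hz * (t - itd_s))  # fine structure delayed in the right ear
    # Identical raised-cosine ramps at the two ears, so the delay is carried by
    # the ongoing fine structure rather than by the envelope onset.
    n_ramp = int(ramp_s * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env = np.ones_like(t)
    env[:n_ramp] = ramp
    env[-n_ramp:] = ramp[::-1]
    return left * env, right * env

left, right = itd_tone(itd_us=50.0)  # e.g., a 50-microsecond interaural delay
```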
The present study sought to clarify the role of non-simultaneous masking in the binaural masking-level difference for maskers that fluctuate in level. In the first experiment the signal was a brief 500-Hz tone, and the masker was a bandpass noise (100–2000 Hz), with the initial and final 200-ms bursts presented at 40-dB spectrum level and the inter-burst gap presented at 20-dB spectrum level. Temporal windows were fitted to thresholds measured for a range of gap durations and signal positions within the gap. In the second experiment, individual differences in out-of-phase (NoSπ) thresholds were compared for a brief signal in a gapped bandpass masker, a brief signal in a steady bandpass masker, and a long signal in a narrowband (50-Hz-wide) noise masker. The third experiment measured brief-tone detection thresholds in forward, simultaneous, and backward masking conditions for 50- and 1900-Hz-wide noise maskers centered on the 500-Hz signal frequency. Results are consistent with comparable temporal resolution in the in-phase (NoSo) and NoSπ conditions, and with no effect of temporal resolution on individual observers’ ability to utilize binaural cues in narrowband noise. The large masking release observed for a narrowband noise masker may be due to binaural masking release from non-simultaneous, informational masking.
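For readers unfamiliar with the NoSo/NoSπ notation, the sketch below (an illustrative construction, not the study's code; the brief-signal duration, onset time, and filter order are example values) builds a diotic bandpass-noise masker with a brief 500-Hz signal added either in phase at the two ears (So) or inverted in one ear (Sπ).

```python
# Illustrative sketch: NoSo / NoSpi trial construction. Masker band edges
# follow the abstract (100-2000 Hz); other parameter values are examples.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100  # sampling rate (Hz)

def bandpass_noise(dur_s, lo_hz=100.0, hi_hz=2000.0):
    """Gaussian noise bandpass filtered to the masker band."""
    noise = np.random.randn(int(dur_s * fs))
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

def brief_tone(freq_hz=500.0, dur_s=0.02):
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def nos_trial(condition="NoSpi", masker_dur_s=0.6, signal_onset_s=0.3):
    """No masker (identical noise at both ears) plus a brief 500-Hz signal, So or Spi."""
    masker = bandpass_noise(masker_dur_s)
    signal = brief_tone()
    start = int(signal_onset_s * fs)
    left, right = masker.copy(), masker.copy()
    left[start:start + len(signal)] += signal
    sign = -1.0 if condition == "NoSpi" else 1.0  # Spi: signal inverted in one ear
    right[start:start + len(signal)] += sign * signal
    return left, right

left, right = nos_trial("NoSpi")  # antiphasic-signal trial
```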
Children must learn in classrooms that contain multiple sources of competing sound. While there are national standards aimed at creating classroom environments that optimize speech intelligibility (e.g., ANSI/ASA 2010), these standards are voluntary, and many unoccupied classrooms fail to meet the acceptable levels they specify. Moreover, little attention has been given to measuring and understanding the effects of competing speech on children’s performance in the classroom. Data will be presented that describe typical noise levels in the classroom. Results from experiments investigating the consequences of competing noise and speech for speech perception at different time points during childhood will be presented. Findings from experiments investigating potential benefits of manipulating acoustic cues thought to aid in separating target from background speech will also be discussed.
Cochlear implant (CI) recipients demonstrate variable speech recognition when listening with a CI-alone or electric-acoustic stimulation (EAS) device, which may be due in part to electric frequency-to-place mismatches created by the default mapping procedures. Performance may be improved if the filter frequencies are aligned with the cochlear place frequencies, known as place-based mapping. Performance with default maps versus an experimental place-based map was compared for participants with normal hearing when listening to CI-alone or EAS simulations to observe potential outcomes prior to initiating an investigation with CI recipients. A noise vocoder simulated CI-alone and EAS devices, mapped with default or place-based procedures. The simulations were based on an actual 24-mm electrode array recipient, whose insertion angles for each electrode contact were used to estimate the respective cochlear place frequency. The default maps used the filter frequencies assigned by the clinical software. The filter frequencies for the place-based maps aligned with the cochlear place frequencies for individual contacts in the low- to mid-frequency cochlear region. For the EAS simulations, low-frequency acoustic information was filtered to simulate aided low-frequency audibility. Performance was evaluated for the AzBio sentences presented in a 10-talker masker at +5 dB signal-to-noise ratio (SNR), +10 dB SNR, and asymptote. Performance was better with the place-based maps as compared with the default maps for both CI-alone and EAS simulations. For instance, median performance at +10 dB SNR for the CI-alone simulation was 57% correct for the place-based map and 20% for the default map. For the EAS simulation, those values were 59% and 37% correct. Adding acoustic low-frequency information resulted in a similar benefit for both maps. Reducing frequency-to-place mismatches, such as with the experimental place-based mapping procedure, produces a greater benefit in speech recognition than maximizing bandwidth for CI-alone and EAS simulations. Ongoing work is evaluating the initial and long-term performance benefits in CI-alone and EAS users. https://doi.org/10.23641/asha.19529053
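One common way to estimate a cochlear place frequency from a contact's position along the cochlea is the Greenwood (1990) frequency-place function; the sketch below assumes that approach purely for illustration. It is not the authors' mapping code, and the contact positions are invented example values, not the study's insertion data.

```python
# Illustrative sketch: estimating per-contact place frequencies with the
# Greenwood (1990) function, the kind of estimate a place-based map would use
# to set analysis-filter frequencies. Contact positions are hypothetical.
import numpy as np

def greenwood_freq(rel_place_from_apex):
    """Greenwood (1990) map: proportional distance from the apex (0-1) -> frequency (Hz)."""
    return 165.4 * (10.0 ** (2.1 * rel_place_from_apex) - 0.88)

# Hypothetical relative places (fraction of cochlear length from the apex)
# for a few contacts on an electrode array.
contact_places = np.array([0.35, 0.45, 0.55, 0.65, 0.75, 0.85])
place_freqs = greenwood_freq(contact_places)

for contact, freq in enumerate(place_freqs, start=1):
    # In a place-based map, the analysis filter for this contact would be
    # centered near its estimated place frequency.
    print(f"contact {contact}: estimated place frequency ~ {freq:6.0f} Hz")
```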
This study tested the hypothesis that word recognition in a complex, two-talker masker is more closely related to real-world speech perception for children with hearing loss than testing performed in quiet or steady-state noise. Sixteen school-age hearing aid users were tested on aided word recognition in noise or two-talker speech. Unaided estimates of speech perception in quiet were retrospectively obtained from the clinical record. Ten parents completed a questionnaire regarding their children's ease of communication and understanding in background noise. Unaided performance in quiet was correlated with aided performance in competing noise, but not in two-talker speech. Only results in the two-talker masker were correlated with parental reports of their children's functional hearing abilities. Speech perception testing in a complex background such as two-talker speech may provide a more accurate predictor of the communication challenges of children with hearing loss than testing in steady noise or quiet.
Monaural envelope correlation perception is the ability to discriminate between stimuli composed of two or more bands of noise based on envelope correlation. Sensitivity decreases as stimulus bandwidth is reduced below 100 Hz. The present study manipulated stimulus bandwidth (25–100 Hz) and duration (25–800 ms) to evaluate whether performance of highly trained listeners is limited by the number of inherent modulation periods in each presentation. Stimuli were two bands of noise, separated by a 500-Hz gap centered on 2250 Hz. Performance improved reliably with increasing numbers of envelope modulation periods, although there were substantial individual differences.
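One standard way to construct such stimuli is with multiplied noise, in which a low-pass noise modulator is imposed on a sinusoidal carrier; using the same modulator for both carriers yields correlated envelopes, and independent modulators yield uncorrelated envelopes. The sketch below uses that construction purely as an illustration (it is not necessarily the study's method, and the duration and bandwidth values are examples), with the two bands placed so that their inner edges straddle a 500-Hz gap centered on 2250 Hz.

```python
# Illustrative sketch: two multiplied-noise bands with correlated or
# uncorrelated envelopes. Band placement follows the abstract (500-Hz gap
# centered on 2250 Hz); other parameter values are examples.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100  # sampling rate (Hz)

def lowpass_noise(dur_s, cutoff_hz):
    """Low-pass Gaussian noise used as a band modulator."""
    noise = np.random.randn(int(dur_s * fs))
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

def two_band_stimulus(correlated=True, bandwidth_hz=100.0, dur_s=0.4,
                      gap_hz=500.0, gap_center_hz=2250.0):
    """Sum of two multiplied-noise bands whose inner edges straddle the spectral gap."""
    t = np.arange(int(dur_s * fs)) / fs
    lo_center = gap_center_hz - gap_hz / 2 - bandwidth_hz / 2
    hi_center = gap_center_hz + gap_hz / 2 + bandwidth_hz / 2
    mod_lo = lowpass_noise(dur_s, bandwidth_hz / 2)  # band width = 2 x modulator cutoff
    mod_hi = mod_lo if correlated else lowpass_noise(dur_s, bandwidth_hz / 2)
    band_lo = mod_lo * np.sin(2 * np.pi * lo_center * t)
    band_hi = mod_hi * np.sin(2 * np.pi * hi_center * t)
    return band_lo + band_hi

correlated_interval = two_band_stimulus(correlated=True)
uncorrelated_interval = two_band_stimulus(correlated=False)
```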
The purpose of this study was to evaluate the ability to discriminate yes/no questions from statements in three groups of children: bilateral cochlear implant (CI) users, nontraditional CI users with aidable hearing preoperatively in the ear to be implanted, and controls with normal hearing. Half of the nontraditional CI users had sufficient postoperative acoustic hearing in the implanted ear to use electric-acoustic stimulation, and half used a CI alone.
Objective To investigate the influence of cochlear implant (CI) use on subjective benefits in quality of life in cases of asymmetric hearing loss (AHL). Study Design Prospective clinical trial. Setting Tertiary academic center. Subjects and Methods Subjects included CI recipients with AHL (n = 20), defined as moderate-to-profound hearing loss in the affected ear and mild-to-moderate hearing loss in the contralateral ear. Quality of life was assessed with the Speech, Spatial, and Qualities of Hearing Scale (SSQ) pragmatic subscales, which assess binaural benefits. Subjective benefit on the pragmatic subscales was compared to word recognition in quiet and spatial hearing abilities (ie, masked sentence recognition and localization). Results Subjects demonstrated an early, significant improvement (P < .01) in abilities with the CI as compared to preoperative abilities on the SSQ pragmatic subscales by the 1-month interval. Perceived abilities were either maintained or continued to improve over the study period. There were no significant correlations between results on the Speech in Quiet subscale and word recognition in quiet, the Speech in Speech Contexts subscale and masked sentence recognition, or the Localization subscale and sound field localization. Conclusions CI recipients with AHL report a significant improvement in quality of life as measured by the SSQ pragmatic subscales over preoperative abilities. Reported improvements are observed as early as 1 month postactivation, which likely reflect the binaural benefits of listening with bimodal stimulation (CI and contralateral hearing aid). The SSQ pragmatic subscales may provide a more in-depth insight into CI recipient experience as compared to behavioral sound field measures alone.