Objective: To determine speech perception in quiet and in noise of adult cochlear implant listeners who retain a hearing aid contralaterally, and to investigate the influence of contralateral hearing thresholds and speech perception on bimodal hearing. Patients and methods: Sentence recognition with hearing aid alone, cochlear implant alone, and bimodally was assessed in 148 postlingually deafened adults 6 months after cochlear implantation. Data were analyzed for bimodal summation using measures of speech perception in quiet and in noise. Results: Most subjects showed improved sentence recognition in quiet and in noise in the bimodal condition compared to the hearing-aid-only or cochlear-implant-only mode. The large variability of the bimodal benefit in quiet can be partially explained by the degree of pure-tone loss. Moreover, subjects with better hearing on the acoustic side experienced a significant benefit from the additional electrical input. Conclusions: Bimodal summation shows different characteristics in quiet and in noise. The bimodal benefit in quiet depends on hearing thresholds at higher frequencies as well as in the lower- and middle-frequency ranges. For the bimodal benefit in noise, no correlation with hearing threshold in any frequency range was found.
Objective: The primary objective of this study was to compare speech perception of younger and older (>75 yr) CI recipients in quiet and in competing continuous and fluctuating noise. Study design: Prospective, comparative clinical study. Setting: University hospital. Patients: Fifty patients with postlingually acquired profound hearing loss, 25 older and 25 younger than 75 years, who had received a cochlear implant at least 1 year before study start were enrolled. Intervention: Cochlear implantation. Main outcome measures: We measured speech perception in quiet using monosyllable (Freiburg monosyllables) and sentence (Göttingen sentences) materials. In addition, speech perception for sentences was measured under two noise conditions: a continuous, speech-simulating noise signal (CCITT noise) and a fluctuating noise (FASTL noise). Results: We did not find a significant difference between the younger and the older cohort on speech perception tasks in quiet, either for Freiburg monosyllables (63.4% ± 20% and 61.7% ± 18.1%, respectively) or for Göttingen sentences (73.5% ± 24.3% and 75% ± 25%, respectively). Nor was a significant difference observed between the two age groups when listening in continuous CCITT noise (18.9% ± 24.0% and 29.5% ± 25.2% perception score, respectively) or in FASTL noise (27.8% ± 24.2% and 34.4% ± 27.8% perception score, respectively). Conclusions: Our evaluations of word and sentence perception in quiet and noise provide no evidence that elderly CI users older than 75 years perform more poorly than those younger than 75 years.
In practice, the unilateral monosyllabic word recognition score with hearing aid (WRS65(HA)) is often below the maximum word recognition score with headphones (WRSmax), in particular for subjects with severe hearing loss. The aim of this study was to evaluate the efficiency factor Q of hearing aid provision, the ratio WRS65(HA)/WRSmax, in patients with severe to profound hearing loss. Data from real-ear measurements (REM), pure-tone and speech audiograms, and speech recognition with and without hearing aid were examined for 93 ears in 64 patients. The patients visited the authors' hearing center for hearing aid evaluation in 2019. Deviations of the real-ear-measured, frequency-dependent output levels from the prescription targets NAL-NL2 and DSL v5.0 were analyzed. Spearman correlation coefficients with the speech intelligibility index (SII) were calculated for the parameters WRS65(HA) and Q. In more than 67% of the hearing aid fittings, output levels matched the target curves of NAL-NL2 or DSL v5.0 within ±5 dB for frequencies from 0.5 to 4 kHz at 65 dB SPL. Nevertheless, WRSmax was not achieved with hearing aid at a conversational speech level of 65 dB SPL (mean deviation: 34.4%). However, WRS65(HA) and Q were best when the target values for DSL v5.0 were achieved at 65 dB SPL, which is associated with a higher SII. For patients with severe to profound hearing loss, the prescription targets of NAL-NL2 and DSL v5.0 thus do not provide sufficient amplification for WRSmax to be achieved at a normal speech level of 65 dB SPL. It remains to be investigated whether alternative prescriptions with better audibility for input levels of 50 and 65 dB SPL might improve the effectiveness of hearing aid provision.
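The efficiency factor described above is a plain ratio, and the target-matching criterion is a per-frequency ±5 dB comparison. As a minimal sketch of both computations (the function names and all patient values below are hypothetical illustrations, not data from the study):

```python
def efficiency_factor(wrs65_ha: float, wrs_max: float) -> float:
    """Efficiency factor Q of a hearing aid fitting: aided word recognition
    at 65 dB SPL divided by the maximum unaided score from the speech
    audiogram (both in percent correct)."""
    if wrs_max <= 0:
        raise ValueError("WRSmax must be positive")
    return wrs65_ha / wrs_max


def matches_targets(measured_db, target_db, tol_db=5.0):
    """True if every measured output level is within ±tol_db of the
    prescription target (e.g. NAL-NL2 or DSL v5.0) at each audiometric
    frequency considered (here 0.5-4 kHz)."""
    return all(abs(m - t) <= tol_db for m, t in zip(measured_db, target_db))


# Hypothetical patient: WRSmax = 80%, WRS65(HA) = 45%
q = efficiency_factor(45.0, 80.0)  # 0.5625: the fitting recovers only
                                   # about 56% of the attainable score

# Hypothetical output levels vs. targets at 0.5, 1, 2, 4 kHz (dB SPL)
ok = matches_targets([62, 65, 68, 60], [60, 64, 70, 63])  # all within 5 dB
```

A Q near 1 would indicate that the fitting delivers the patient's maximum attainable word recognition at conversational level; the study's point is that Q stayed well below 1 even when `matches_targets` would hold.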
Cochlear implants (CI) are fairly successful in improving speech perception by CI recipients (Lin et al., 2009). However, current CI devices are limited in encoding music and other melodic sounds a...
To determine the influence of the spectrotemporal properties of naturally produced consonant-vowel syllables on speech-evoked auditory event-related potentials (ERPs) for stimuli with very similar or even identical wide-band envelopes. Speech-evoked ERPs may be useful for validating the neural representation of speech. Speech-evoked ERPs were obtained from 10 normal-hearing young adults in response to the syllables /da/ and /ta/. Both monosyllables were taken from ongoing speech; they have quite similar wide-band envelopes and differ mainly in the spectrotemporal content of their consonant parts. Additionally, three derivatives of each stimulus were investigated: (1) the isolated consonant part ("consonant stimulus"), (2) the isolated vowel part ("vowel stimulus"), and (3) a version with removed spectral information but identical wide-band envelope. Latencies and amplitudes of the N1 and P2 components were determined and analyzed. ERPs in response to the naturally produced /ta/ syllable had significantly shorter N1 and P2 latencies and larger amplitudes than ERPs in response to /da/. Similar differences were observed for the ERPs evoked by the consonant stimuli alone. For the vowel stimuli and the stimuli with removed spectral information, no significant differences were observed. In summary, the differences between the ERPs to /da/ and /ta/ corresponded to the distinct spectrotemporal content in the consonant parts of the original consonant-vowel (CV) syllables. The study shows that even small differences in the spectrotemporal features of speech may evoke different ERPs, despite very similar or even identical wide-band envelopes. The results are consistent with a model in which ERPs evoked by short CVs are an onset response to the consonant merged with an acoustic change complex evoked by the vowel part; however, all components appear as one P1-N1-P2 complex. The results may be explained by differences in the narrow-band envelopes of the stimuli.
Therefore, this study underlines the limitations of the wide-band envelope in explaining speech-evoked ERPs. Additionally, the results are of special interest for clinical application, since some of the ERP parameter differences, such as the N1 latency, are present not only in the ERPs of each individual subject but also in the group mean of all N1 latencies. Thus, the presented ERP measurements in response to CVs might be used to identify potential problems in phoneme differentiation caused by deficits in spectrotemporal analysis.
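Determining N1 and P2 latencies and amplitudes from an averaged ERP waveform typically amounts to locating the extremum within a component-specific time window. A minimal sketch of that step (the window bounds, sampling rate, and synthetic waveform below are illustrative assumptions, not the study's analysis parameters):

```python
def peak_in_window(erp, fs, t0_ms, t1_ms, polarity):
    """Return (latency_ms, amplitude) of the extremum of an averaged ERP
    waveform within the window [t0_ms, t1_ms]. Use polarity=-1 for
    negative components such as N1, +1 for positive ones such as P2."""
    i0 = int(fs * t0_ms / 1000)
    i1 = int(fs * t1_ms / 1000)
    seg = erp[i0:i1]
    if polarity < 0:
        idx = min(range(len(seg)), key=lambda i: seg[i])
    else:
        idx = max(range(len(seg)), key=lambda i: seg[i])
    return ((i0 + idx) * 1000.0 / fs, seg[idx])


# Synthetic averaged waveform at fs = 1000 Hz with a negative deflection
# at 100 ms and a positive one at 200 ms
erp = [0.0] * 300
erp[100] = -5.0
erp[200] = 4.0
n1 = peak_in_window(erp, 1000, 80, 150, -1)   # (100.0, -5.0)
p2 = peak_in_window(erp, 1000, 150, 250, +1)  # (200.0, 4.0)
```

In practice the windows are chosen from the grand-average waveform, and latency shifts such as the /da/-/ta/ N1 difference are then read directly off the per-subject peak latencies.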
Background: In unilateral hearing loss (UHL), the question arises of whether amplifying hearing devices should be fitted at all. Besides conventional CROS fittings, bone-anchored CROS devices (Baha) are also an option. In minimal bilateral hearing loss, the resulting impairment of speech understanding can be addressed either with bilateral hearing aids or with other assistive devices, such as an FM system.
<i>Objective:</i> This study examined the role of central auditory completion in speech understanding by investigating the perception of periodically interrupted speech. For this purpose, gaps were inserted into speech signals by silencing defined intervals. The main hypothesis was that word recognition increases with shorter gaps and is less influenced by the total proportion of gaps. <i>Patients and Methods:</i> Seventeen normal-hearing young adults took part in this study. Phrases from the German HSM speech recognition test were used as speech material. The examination comprised 220 modulated sentences presented binaurally at 65 dB. Intervals with durations ranging from 50 to 700 ms were used to silence 50, 65 and 80% of each sentence, respectively. <i>Results:</i> Mean speech perception was in the range of 65–92%, 35–92% and 35–95% correct answers for gap ratios of 50, 65 and 80%, respectively. For a given interval duration, word recognition was better at smaller gap ratios. Both gap ratio and gap duration had a significant influence on identification performance. <i>Conclusions:</i> Speech can be understood even when very large proportions of it are blanked out. A significant decrease in perception is observed when the gap duration exceeds more than half of the syllable length.
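The interruption paradigm above can be sketched in code: given a gap duration and a target gap ratio, one interval per cycle is silenced, with the cycle length following from the ratio. A minimal sketch (the function name, sample handling, and the constant test signal are illustrative assumptions, not the study's stimulus-generation software):

```python
def silence_periodic_gaps(samples, fs, gap_ms, gap_ratio):
    """Zero out periodic intervals of gap_ms milliseconds so that a
    fraction gap_ratio of the signal is silenced. The cycle length
    follows from the ratio: cycle = gap / ratio."""
    gap_len = int(round(fs * gap_ms / 1000.0))
    cycle_len = int(round(gap_len / gap_ratio))
    out = list(samples)
    for start in range(0, len(out), cycle_len):
        for i in range(start, min(start + gap_len, len(out))):
            out[i] = 0.0
    return out


# 200 ms of a constant "signal" at fs = 1000 Hz; 50-ms gaps at ratio 0.5
# give 100-ms cycles with half of each cycle silenced.
interrupted = silence_periodic_gaps([1.0] * 200, 1000, 50, 0.5)
```

With this parameterization, a condition like "80% silenced with 700-ms gaps" is simply `gap_ms=700, gap_ratio=0.8`, which makes the study's two factors (gap duration and gap ratio) independently controllable.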