How do blind people use semantic and phonological cues for spoken word recognition? An ERP study

2020 
Spoken word recognition is affected by phonemic and semantic details of speech at pre-lexical and lexical levels. The current study examined how blind people use semantic and phonological cues for spoken word recognition. Thirty blind and twenty-nine age-matched sighted people participated. We manipulated the semantic and phonological similarity between primes and targets in three experimental conditions: semantically related with different phonology (S+P-), semantically unrelated with different phonology (S-P-), and semantically unrelated with the same phonology (S-P+). Results showed that blind participants had higher accuracy than sighted participants in both the S-P- and S-P+ conditions. Both groups exhibited larger N400 amplitudes in the S-P- than in the S+P- condition, but this semantic N400 effect was stronger in blind than in sighted participants, suggesting more sensitive processing of semantic information in blind people. Moreover, sighted participants showed a stronger N400 effect in the S-P+ than in the S-P- condition, whereas blind participants did not, indicating that phonological similarity interferes with spoken word recognition for sighted listeners only. In summary, blind people are more sensitive to semantic cues and less susceptible to phonological-similarity interference during spoken word recognition than sighted people.