Use of Automated Scoring in Spoken Language Assessments for Test Takers With Speech Impairments

2017 
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses on one type of scoring technology, automatic speech scoring (the SpeechRater℠ automated scoring engine); one type of assessment, spontaneous spoken English by nonnative adults (six TOEFL iBT® test speaking items per test taker); and one category of disability, speech impairments. The results show discrepancies between human and SpeechRater scores for speakers with documented speech or hearing impairments who receive accommodations and for speakers whose responses were deferred to the scoring leader by human raters because the responses exhibited signs of a speech impairment. SpeechRater scores for these studied groups tended to be higher than the human scores. Based on a smaller subsample, the word error rate was higher for these groups relative to the control group, suggesting that the automatic speech recognition system contributed to the discrepancies between SpeechRater and human scores.