Machine Learning of Controller Command Prediction Models from Recorded Radar Data and Controller Speech Utterances

2017 
Recently, the project AcListant®, which investigates automatic speech recognition for air traffic control, has achieved command recognition error rates below 1.7% based on Assistant Based Speech Recognition (ABSR). One main obstacle to transferring ABSR from the laboratory to the ops room is its cost of deployment. Currently, each ABSR model must be manually adapted to the local environment, due to, e.g., different accents and different models for predicting possible controller commands. The Horizon 2020 funded project MALORCA (Machine Learning of Speech Recognition Models for Controller Assistance) proposes a general, cheap and effective solution to automate this re-learning, adaptation and customization process for new environments by taking advantage of the large amount of speech data available in the ATM world. This paper presents an algorithm which automatically learns a model to predict controller commands from recorded untranscribed controller utterances and the corresponding radar data. The trained model is validated against transcribed controller commands for Vienna and Prague approach. Command error rates are reduced from 4.1% to 0.9% for Prague approach and from 10.9% to 2.0% for Vienna approach.
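The abstract reports command error rates before and after training the prediction model. As a minimal sketch of how such a metric might be computed, assuming it counts the fraction of actually issued commands that are missing from the predicted command set for the corresponding radar situation (the function and data below are illustrative, not taken from the paper):

```python
from typing import List, Set


def command_error_rate(predicted_sets: List[Set[str]],
                       issued_commands: List[str]) -> float:
    """Fraction of issued commands not contained in the predicted set
    for their radar situation (assumed definition, for illustration)."""
    assert len(predicted_sets) == len(issued_commands)
    misses = sum(cmd not in preds
                 for preds, cmd in zip(predicted_sets, issued_commands))
    return misses / len(issued_commands)


# Toy example: two radar situations, each with a predicted command set
predicted = [
    {"DLH4AB DESCEND 5000 ft", "DLH4AB REDUCE 220 kt"},
    {"AUA123 TURN_LEFT HEADING 240", "AUA123 DESCEND 4000 ft"},
]
issued = ["DLH4AB DESCEND 5000 ft", "AUA123 REDUCE 180 kt"]

print(f"Command error rate: {command_error_rate(predicted, issued):.1%}")
# -> 50.0%: the second issued command was not in its predicted set
```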