Improved cross-task recognition using MMIE training

2002 
This paper investigates the cross-task recognition and adaptation performance of HMMs trained using either conventional maximum likelihood estimation (MLE) or the discriminative maximum mutual information estimation (MMIE) criterion. Initial experiments used models trained on the low-noise North American Business News corpus of read speech. Cross-task testing on Broadcast News data showed that the MMIE models yielded lower error rates both across-task and within-task. This result was confirmed using models trained on the Switchboard corpus and tested on Voicemail (VM) data. This setup was also used to investigate task-adaptation performance when using a limited amount of VM data for both acoustic and language modelling. The best performance on the VM test data was obtained with Switchboard models trained using MMIE and then adapted to the VM data using maximum a posteriori (MAP) adaptation techniques.
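As background for the two techniques named above, the standard MMIE objective and MAP mean update can be sketched as follows; the notation here is the conventional textbook form, not necessarily the exact formulation used in the paper. For training observations \(\mathcal{O}_r\) with reference transcriptions \(w_r\), MMIE maximises the posterior of the correct transcription rather than the likelihood alone:

\[
\mathcal{F}_{\mathrm{MMIE}}(\lambda) \;=\; \sum_{r} \log \frac{p_{\lambda}(\mathcal{O}_r \mid w_r)\, P(w_r)}{\sum_{w} p_{\lambda}(\mathcal{O}_r \mid w)\, P(w)},
\]

where the denominator sums over competing hypotheses \(w\) (in practice approximated by a lattice) and \(P(w)\) is the language-model probability. MAP adaptation of a Gaussian mean \(\mu\) then interpolates the prior (Switchboard-trained) mean with the adaptation-data statistics:

\[
\hat{\mu} \;=\; \frac{\tau\,\mu_{\mathrm{prior}} + \sum_{t} \gamma(t)\, o_t}{\tau + \sum_{t} \gamma(t)},
\]

with \(\gamma(t)\) the occupation probability of the Gaussian for VM observation \(o_t\) and \(\tau\) a prior weight, so components with little adaptation data stay close to the source-task model.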