Building DNN acoustic models for large vocabulary speech recognition

2017 
Highlights

    • Empirical investigation of neural network acoustic models for speech recognition.
    • Evaluation of DNNs up to ten times larger than those used in previous work.
    • Comparison of densely connected, convolutional, and locally-connected untied neural networks.
    • Results on Switchboard and a combined 2100-hour corpus.
    • Description of the combined-corpus baseline training recipe, which is now part of the Kaldi speech toolkit.

Abstract

Understanding architectural choices for deep neural networks (DNNs) is crucial to improving state-of-the-art speech recognition systems. We investigate which aspects of DNN acoustic model design matter most for speech recognition performance, focusing on feed-forward networks. We study the effects of design choices such as model size (number of layers, total parameters), architecture (convolutional networks), and training details (loss function, regularization methods) on DNN classifier performance and speech recognizer word error rates. On the Switchboard benchmark corpus we compare standard DNNs to convolutional networks, and present the first experiments using locally-connected, untied neural networks for acoustic modeling. Using a much larger 2100-hour training corpus (combining Switchboard and Fisher), we examine the performance of very large DNN models with up to ten times more parameters than those typically used in speech recognition systems. The results suggest that a relatively simple DNN architecture and optimization technique give strong performance, and we offer intuitions about architectural choices such as favoring network depth over breadth. Our findings extend previous work toward a set of best practices for building DNN hybrid speech recognition systems, and constitute an important first step toward analyzing more complex recurrent, sequence-discriminative, and HMM-free architectures.
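The distinction between convolutional layers (tied weights) and locally-connected untied layers is the abstract's most technical point, so a minimal NumPy sketch of the difference may help. This is an illustration, not the paper's implementation: the 1-D filters over a single 40-dimensional filterbank frame, and all names and shapes, are assumptions for the example.

```python
import numpy as np

def conv1d(x, w, b):
    """Convolutional (tied) layer: one shared filter w of width k slides
    over every position of input x, so local spectral patterns are assumed
    to behave identically at all frequencies."""
    k = w.shape[0]
    return np.array([x[i:i + k] @ w + b for i in range(x.shape[0] - k + 1)])

def locally_connected1d(x, W, b):
    """Locally-connected (untied) layer: each output position i has its own
    filter W[i], keeping the local receptive field of a convolution while
    letting the learned filters differ across frequency positions."""
    n_out, k = W.shape
    return np.array([x[i:i + k] @ W[i] + b[i] for i in range(n_out)])

rng = np.random.default_rng(0)
x = rng.standard_normal(40)        # e.g. one frame of 40 log-mel features
w = rng.standard_normal(8)         # shared width-8 filter: 8 weights total
W = rng.standard_normal((33, 8))   # one width-8 filter per position: 33*8 weights
b = rng.standard_normal(33)

print(conv1d(x, w, 0.0).shape)             # (33,)
print(locally_connected1d(x, W, b).shape)  # (33,)
```

With the same receptive field, the tied layer here has 8 filter weights while the untied layer has 264; untying weights inflates parameter counts, which is the kind of size-versus-architecture trade-off the paper's experiments explore.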