Stochastic data sweeping for fast DNN training

2014 
Context-dependent deep neural networks (CD-DNNs) have been successfully used in large vocabulary continuous speech recognition (LVCSR). However, the immense computational cost of mini-batch based back-propagation (BP) training has become a major obstacle to utilizing massive speech data for DNN training. Previous work on BP training acceleration has mainly focused on parallelization with multiple GPUs. In this paper, a novel stochastic data sweeping (SDS) framework is proposed from a different perspective to speed up DNN training on a single GPU. A portion of the training data is randomly selected from the whole set, and the quantity is gradually reduced at each training epoch. SDS uses less data over the entire process and consequently saves substantial training time. Since SDS works at the data level, it is complementary to parallel training strategies and can be integrated with them to form an even faster training framework. Experiments showed that combining SDS with asynchronous stochastic gradient descent (ASGD) achieves an almost 3.0x speed-up on 2 GPUs with no loss of recognition accuracy.
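The abstract only states that a random subset is drawn each epoch and that its size is gradually reduced; the concrete schedule is not given. The following Python sketch illustrates one plausible realization under those assumptions, with a linear decay of the retained fraction (the function name, the fraction bounds, and `train_one_epoch` are hypothetical, not from the paper).

```python
import random

def sds_epoch_subset(dataset, epoch, num_epochs,
                     start_fraction=1.0, end_fraction=0.3):
    """Randomly sample a shrinking subset of the training data for one epoch.

    The linear decay from start_fraction to end_fraction is an illustrative
    assumption; the paper's abstract only says the selected quantity is
    gradually reduced as training progresses.
    """
    # Interpolate the fraction of data to keep at this epoch.
    t = epoch / max(num_epochs - 1, 1)
    fraction = start_fraction + t * (end_fraction - start_fraction)
    subset_size = max(1, int(fraction * len(dataset)))
    # Draw the subset uniformly at random from the full training set.
    return random.sample(dataset, subset_size)


# Usage sketch: each epoch trains on a freshly sampled, shrinking subset.
# `train_one_epoch` stands in for a hypothetical mini-batch BP routine.
# for epoch in range(num_epochs):
#     subset = sds_epoch_subset(train_utterances, epoch, num_epochs)
#     train_one_epoch(dnn, subset)
```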