L0 Regularization based Fine-grained Neural Network Pruning Method

2019 
Deep neural networks have achieved remarkable results on many tasks. However, such successful but heavy models cannot be deployed directly on mobile devices due to their limited power and computing capacity. An obvious solution is to compress neural networks by pruning useless weights. The key question is how to remove these redundancies while maintaining the performance of the network. In this work, we propose a novel neural network pruning method: guiding the weights of a neural network toward sparsity by introducing L0 regularization during the training stage, which effectively resists the damage to performance caused by pruning and dramatically reduces the time overhead of the retraining stage. Experimental results on MNIST with LeNet and CIFAR-10 with VGG-16 demonstrate the effectiveness of this method compared to the classic method.
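The idea sketched in the abstract, penalizing the number of nonzero weights during training and then removing small-magnitude weights (fine-grained, i.e. unstructured, pruning), can be illustrated roughly as follows. This is a toy sketch, not the paper's implementation: since the true L0 norm is non-differentiable, a smooth surrogate is assumed here, and the `sigma` sharpness parameter and the pruning threshold are illustrative choices, not values from the paper.

```python
import numpy as np

np.random.seed(0)

def l0_surrogate(w, sigma=0.05):
    # Smooth stand-in for the L0 norm (assumed form, not the paper's):
    # each term 1 - exp(-w^2 / sigma^2) approaches 1 for |w| >> sigma
    # and 0 for w near 0, so the sum approximates the nonzero count.
    return np.sum(1.0 - np.exp(-(w ** 2) / sigma ** 2))

def prune(w, threshold=0.05):
    # Fine-grained (unstructured) pruning: zero out individual weights
    # whose magnitude falls below the threshold.
    mask = np.abs(w) >= threshold
    return w * mask, mask

# Toy "trained" weight matrix; in practice these would come from a
# network trained with the L0 penalty added to the task loss.
w = np.random.randn(4, 4) * 0.1
pruned, mask = prune(w)
sparsity = 1.0 - mask.mean()
print(f"surrogate L0 before pruning: {l0_surrogate(w):.2f}")
print(f"sparsity after pruning: {sparsity:.2%}")
```

Because the surrogate penalty already pushes many weights toward zero during training, thresholding afterward removes weights the network has learned to do without, which is what lets pruning proceed with little damage to accuracy and a shorter retraining stage.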