Pruning Deep Convolutional Neural Networks via Gradient Support Pursuit

2020 
In this paper, we propose a filter pruning method, Filter Pruning via Gradient Support Pursuit (FPGraSP), which accelerates and compresses very deep Convolutional Neural Networks effectively in an iterative way. Previous work has shown that Gradient Support Pursuit (GraSP) works well for sparsity-constrained optimization in machine learning. We modify GraSP so that it can be applied to structured pruning of deep CNNs. Specifically, we select the filters with the largest gradient values and merge their indices with the indices of the filters with the largest weights. We then update the parameters over this union. Finally, we dynamically select the filters with the largest magnitudes. Unlike some previous methods, which remove filters with smaller weights but neglect the influence of gradients, we exploit gradient information. Experimental results on MNIST, CIFAR-10 and CIFAR-100 clearly demonstrate the efficiency of our FPGraSP algorithm. For example, when pruning ResNet-56 on CIFAR-10, FPGraSP without fine-tuning incurs only a 0.04% accuracy drop while achieving a 52.63% FLOPs reduction.
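The abstract outlines four steps per iteration: top-gradient filter selection, union with top-weight filters, a parameter update restricted to that union, and a final magnitude-based selection. The following is a minimal sketch of one such step under our reading of the abstract; the function name `fpgrasp_step`, the use of PyTorch, the per-filter L2-norm criterion, and the plain gradient-descent update are our assumptions, not the authors' implementation.

```python
# Sketch of one FPGraSP-style pruning step, based only on the abstract.
# Assumptions (not from the paper's code): PyTorch tensors, L2 norms as the
# "magnitude" of a filter, and a plain SGD update on the selected union.
import torch

def fpgrasp_step(weight: torch.Tensor, grad: torch.Tensor,
                 k: int, lr: float = 0.01) -> torch.Tensor:
    """One iteration over a conv weight of shape (out_channels, in_ch, h, w).

    1. Pick the k filters whose gradients have the largest norms.
    2. Union their indices with those of the k largest-weight filters.
    3. Update parameters only over that union.
    4. Keep the k largest-magnitude filters; zero out the rest.
    """
    n = weight.shape[0]
    # Per-filter L2 norms of the gradients and of the weights.
    grad_norms = grad.flatten(1).norm(dim=1)
    weight_norms = weight.flatten(1).norm(dim=1)

    top_grad = torch.topk(grad_norms, min(k, n)).indices
    top_weight = torch.topk(weight_norms, min(k, n)).indices
    union = torch.unique(torch.cat([top_grad, top_weight]))

    # Gradient step restricted to the union of selected filter indices.
    new_weight = weight.clone()
    new_weight[union] -= lr * grad[union]

    # Dynamic filter selection: retain the k largest-magnitude filters.
    keep = torch.topk(new_weight.flatten(1).norm(dim=1), min(k, n)).indices
    mask = torch.zeros(n, dtype=torch.bool, device=weight.device)
    mask[keep] = True
    new_weight[~mask] = 0.0
    return new_weight

# Toy usage: prune a 16-filter layer down to 10 surviving filters.
w = torch.randn(16, 8, 3, 3)
g = torch.randn_like(w)
pruned = fpgrasp_step(w, g, k=10)
print((pruned.flatten(1).norm(dim=1) > 0).sum())  # -> tensor(10)
```

Note that this sketch only masks filters to zero; a full structured-pruning pipeline would physically remove the zeroed filters (and the matching input channels of the next layer) to realize the reported FLOPs reduction.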