Input-Dependable Feature-Map Pruning

2018 
Deep neural networks are an accurate tool for solving, among other things, vision tasks. The computational cost of these networks is often high, preventing their adoption in many real-time applications. Thus, there is a constant need for computational savings in this research domain. In this paper we suggest trading accuracy for computation using a gated version of Convolutional Neural Networks (CNNs). The gated network selectively activates only a portion of its feature maps, depending on the given example to be classified. The network's gates indicate which feature maps are necessary for the task and which are not. Specifically, full feature maps are considered for omission, enabling computational savings in a manner compliant with GPU hardware constraints. The network is trained using a combination of back-propagation for the standard weights, minimizing an error-related loss, and reinforcement learning for the gates, minimizing a loss related to the number of feature maps used. We trained and evaluated a gated version of DenseNet on the CIFAR-10 dataset [1]. Our results show that, with a slight impact on network accuracy, a potential acceleration of up to \( \times 3 \) may be obtained.
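To make the gating mechanism concrete, the following is a minimal PyTorch-style sketch, not the authors' code, of a convolutional block whose output feature maps are switched on or off per input example, with the gate policy trained by REINFORCE against a penalty on the number of active maps. The class name `GatedConvBlock`, the pooling-based gating head, and the penalty weight 0.01 are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    """Conv block whose output feature maps are gated per input example.

    Illustrative sketch: the gate head (pool + linear) and its placement
    are assumptions, not the paper's exact architecture.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Gating head: one Bernoulli logit per output feature map,
        # computed from a global summary of the input.
        self.gate_fc = nn.Linear(in_ch, out_ch)

    def forward(self, x):
        # Gate logits depend on x, so feature-map selection is input-dependent.
        logits = self.gate_fc(F.adaptive_avg_pool2d(x, 1).flatten(1))
        probs = torch.sigmoid(logits)
        gates = torch.bernoulli(probs).detach()       # hard 0/1 decisions
        y = F.relu(self.conv(x)) * gates[:, :, None, None]
        # Log-probability of the sampled gate vector, needed for REINFORCE.
        log_prob = (gates * torch.log(probs + 1e-8)
                    + (1 - gates) * torch.log(1 - probs + 1e-8)).sum(dim=1)
        return y, gates, log_prob

# Usage sketch: back-propagation for the weights, REINFORCE for the gates.
block = GatedConvBlock(3, 16)
head = nn.Linear(16, 10)                              # toy classifier head
x = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

y, gates, log_prob = block(x)
task_loss = F.cross_entropy(head(F.adaptive_avg_pool2d(y, 1).flatten(1)), labels)
# Reward: negative task loss minus a cost proportional to the number of
# active feature maps (a baseline would normally be subtracted to reduce
# variance; omitted here for brevity).
reward = -(task_loss.detach() + 0.01 * gates.sum(dim=1))
rl_loss = -(log_prob * reward).mean()                 # REINFORCE objective
(task_loss + rl_loss).backward()
```

Because the gates are hard 0/1 decisions, an entire feature map is either computed or skipped, which is what makes the savings compatible with GPU hardware constraints, as opposed to fine-grained weight sparsity.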