Linear Regularized Compression of Deep Convolutional Neural Networks

2017 
In recent years, deep neural networks have revolutionized machine learning tasks. However, the design of deep neural network architectures is still largely based on trial-and-error procedures, and the resulting models are usually complex, with high computational cost. This motivates the efforts in the deep learning community to create small, compact models with accuracy comparable to current deep neural networks. Different methods to reach this goal have been presented in the literature; among them, techniques based on low-rank factorization are used to compress pre-trained models, with the aim of providing a more compact version without losing effectiveness. Despite their promising results, these techniques introduce auxiliary structures between network layers; this work shows that it is possible to overcome the need for such elements by using simple regularization techniques. We tested our approach on the VGG16 model, obtaining a model four times faster, with no loss in accuracy and no supplementary structures between the network layers.
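To make the compression technique the abstract refers to concrete, below is a minimal sketch (not the paper's method) of the standard low-rank factorization baseline: a layer's weight matrix W is approximated via truncated SVD as the product of two smaller factors. Replacing the layer with those two factors is precisely the "auxiliary structure between layers" the paper aims to avoid; the matrix sizes and rank here are illustrative assumptions.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate a weight matrix W (m x n) as U_r @ V_r, where
    U_r is m x rank and V_r is rank x n, via truncated SVD.
    This cuts the parameter count from m*n to rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * np.sqrt(s[:rank])            # m x rank
    V_r = np.sqrt(s[:rank])[:, None] * Vt[:rank]     # rank x n
    return U_r, V_r

# Hypothetical 4096 x 4096 fully connected layer (VGG16-like size).
W = np.random.randn(4096, 4096).astype(np.float32)
U_r, V_r = low_rank_factorize(W, rank=256)

original_params = W.size
factored_params = U_r.size + V_r.size
print(f"params: {original_params} -> {factored_params} "
      f"({original_params / factored_params:.1f}x fewer)")
print("relative error:", np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W))
```

In a deployed network, U_r and V_r become two consecutive linear layers in place of the original one; the paper's contribution, per the abstract, is a regularization scheme that achieves the compression without introducing such intermediate layers.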