The Shallow End: Empowering Shallower Deep-Convolutional Networks through Auxiliary Outputs

2016 
Convolutional neural networks (CNNs) with very deep architectures, such as the residual network (ResNet) [6], have shown encouraging results in various computer vision and machine learning tasks. Depth has been one of the key factors behind the great success of CNNs, and the associated gradient vanishing issue has been largely addressed by ResNet. Increased depth brings other issues, however. First, when networks become very deep, supervision information may vanish along the long backpropagation path, so intermediate layers receive less training information and the model becomes redundant. Second, as the model grows more complex and redundant, inference becomes more expensive. Third, very deep models require larger volumes of training data. We instead propose AuxNet, together with a new training method that propagates not only gradients but also supervision information from multiple auxiliary outputs at intermediate layers. The proposed AuxNet yields a more compact network that outperforms its much deeper counterpart (i.e., ResNet). For example, a 44-layer AuxNet outperforms the original 110-layer ResNet on several benchmark data sets, namely CIFAR-10, CIFAR-100, and SVHN.
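The abstract describes the core idea of training with auxiliary outputs attached to intermediate layers, so that supervision reaches those layers directly rather than only through a long backpropagation path. The sketch below illustrates this idea in PyTorch under stated assumptions: it is not the authors' implementation, the plain three-stage backbone stands in for the paper's ResNet-style architecture, and the class `AuxNetSketch`, the auxiliary-loss weight of 0.3, and the optimizer settings are illustrative choices, not values from the paper.

```python
# Minimal sketch (not the authors' code) of a CNN with auxiliary outputs
# at intermediate layers and a combined loss that injects supervision there.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxHead(nn.Module):
    """Small classifier attached to an intermediate feature map."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, x):
        return self.fc(self.pool(x).flatten(1))


class AuxNetSketch(nn.Module):
    """Plain CNN backbone with an auxiliary output after each stage
    (hypothetical stand-in for the paper's ResNet-based AuxNet)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                    nn.BatchNorm2d(16), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
                                    nn.BatchNorm2d(32), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                    nn.BatchNorm2d(64), nn.ReLU())
        self.aux1 = AuxHead(16, num_classes)   # auxiliary output 1
        self.aux2 = AuxHead(32, num_classes)   # auxiliary output 2
        self.head = AuxHead(64, num_classes)   # main output

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Return the main prediction plus the auxiliary predictions so the
        # loss can supervise the intermediate layers directly.
        return self.head(f3), [self.aux1(f1), self.aux2(f2)]


def aux_loss(main_logits, aux_logits, targets, aux_weight=0.3):
    """Main loss plus a down-weighted loss per auxiliary output
    (the 0.3 weight is an assumed value, not from the paper)."""
    loss = F.cross_entropy(main_logits, targets)
    for logits in aux_logits:
        loss = loss + aux_weight * F.cross_entropy(logits, targets)
    return loss


if __name__ == "__main__":
    model = AuxNetSketch(num_classes=10)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    x = torch.randn(8, 3, 32, 32)              # CIFAR-sized dummy batch
    y = torch.randint(0, 10, (8,))
    main_logits, aux_logits = model(x)
    loss = aux_loss(main_logits, aux_logits, y)
    loss.backward()   # gradients carry supervision from all outputs
    opt.step()
```

In this reading, each auxiliary head shortens the path between the loss and the intermediate features it supervises, which is the mechanism the abstract credits for allowing a shallower network to match or exceed a much deeper one.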