Shallow and wide fractional max-pooling network for image classification

2019 
Convolutional networks (ConvNets) have been shown to improve performance as their depth increases. Very deep networks, however, are still imperfect: they suffer from vanishing/exploding gradients, and some weights learn nothing during training. To avoid this, can we keep the depth shallow and simply make the network wide enough to achieve similar or better performance? To answer this question, we empirically investigate the architectures of popular ConvNet models and widen the network as much as possible at a fixed depth. Following this approach, we carefully design a shallow and wide ConvNet configured with the fractional max-pooling operation and a reasonable number of parameters. With this design, we achieve a 6.43% test error on the CIFAR-10 classification dataset. Competitive performance is also achieved on the benchmark datasets MNIST (0.25% test error) and CIFAR-100 (25.79% test error) compared with related methods.
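The fractional max-pooling operation the abstract refers to downsamples a feature map by a non-integer ratio (e.g. 1.5 instead of the usual 2) using randomly placed pooling regions. A minimal 1-D sketch of the idea is below; the function names, the boundary formula `ceil(alpha * (i + u))`, and the clamping detail are our own illustration under the standard fractional max-pooling formulation, not code from the paper:

```python
import math
import random

def fmp_boundaries(n_in, n_out, rng):
    """Random pooling-region boundaries for a pooling ratio alpha = n_in / n_out.

    Interior boundaries follow a_i = ceil(alpha * (i + u)) with a random
    offset u in (0, 1); each boundary is clamped so every region stays
    non-empty. Illustrative sketch only, not the authors' implementation.
    """
    alpha = n_in / n_out          # non-integer pooling ratio, e.g. 1.5
    u = rng.uniform(0, 1)         # random offset makes the regions stochastic
    bounds = [0]
    for i in range(1, n_out):
        b = math.ceil(alpha * (i + u))
        bounds.append(min(b, n_in - (n_out - i)))  # leave room for later regions
    bounds.append(n_in)
    return bounds

def fractional_max_pool_1d(x, n_out, rng):
    """Max over each randomly sized pooling region of the 1-D input x."""
    bounds = fmp_boundaries(len(x), n_out, rng)
    return [max(x[bounds[i]:bounds[i + 1]]) for i in range(n_out)]

# Example: pool 9 values down to 6, i.e. a pooling ratio of 1.5
rng = random.Random(0)
x = [3, 1, 4, 1, 5, 9, 2, 6, 5]
print(fractional_max_pool_1d(x, 6, rng))
```

Because the regions are randomized, repeated applications pool over slightly different windows, which acts as a regularizer; a 2-D version applies the same boundary construction independently to rows and columns.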