Additive Attention for CNN-based Classification

2021 
Attention has proved effective in many areas of Computer Vision. The Squeeze-and-Excitation Network (SE-Net) is a classic attention module for CNNs and has been shown to be effective across many tasks. However, some problems remain in SE-Net. We find that the weight parameters of the fully connected layers in a trained SE-Net are not very important compared to the biases. Based on this observation, we propose a lightweight attention module named the Additive Attention module, which reduces the computation cost while achieving performance similar to SE-ResNet-18.
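To make the comparison concrete, the following sketch contrasts a classic SE block with one plausible reading of the proposed idea: since the FC weights matter less than the biases, the two FC layers are dropped and the channel gate is computed from the pooled features plus a learned per-channel bias. The exact design of the Additive Attention module is not specified in this abstract, so `additive_attention` below is an assumed illustration, not the authors' definition.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def squeeze_excitation(x, w1, b1, w2, b2):
    """Classic SE block on a single feature map x of shape (C, H, W):
    global average pool, FC + ReLU (channel reduction), FC + sigmoid,
    then channel-wise rescaling of the input."""
    z = x.mean(axis=(1, 2))                   # squeeze: (C,)
    h = np.maximum(0.0, w1 @ z + b1)          # excitation FC 1 + ReLU
    s = sigmoid(w2 @ h + b2)                  # excitation FC 2 + sigmoid: (C,)
    return x * s[:, None, None]               # scale each channel

def additive_attention(x, b):
    """Hypothetical Additive Attention sketch (assumed design): the FC
    weight matrices are removed entirely, and the gate is just a sigmoid
    of the pooled features shifted by a learned per-channel bias b.
    This costs O(C) parameters instead of O(C^2 / r) for the SE block."""
    z = x.mean(axis=(1, 2))                   # squeeze: (C,)
    s = sigmoid(z + b)                        # additive gate: (C,)
    return x * s[:, None, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8, 8))        # C=4 channels
    w1 = rng.standard_normal((2, 4)); b1 = np.zeros(2)   # reduction ratio r=2
    w2 = rng.standard_normal((4, 2)); b2 = np.zeros(4)
    print(squeeze_excitation(x, w1, b1, w2, b2).shape)   # same shape as x
    print(additive_attention(x, np.zeros(4)).shape)
```

Both variants preserve the feature-map shape and only rescale channels; the additive variant avoids the two matrix multiplies of the SE excitation path, which is the source of the claimed computation savings.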