DFA: Improving Convolutional Networks with Dual Fusion Attention Module

2020 
The attention mechanism in Convolutional Neural Networks (CNNs) adaptively recalibrates the feature distribution of processed objects by modeling corresponding attention masks. These masks are typically applied multiplicatively to the feature maps of each layer, representing different feature responses. However, such masks are usually developed independently for either the spatial module or the channel module, with little connection between the two, which yields relatively limited feature activation and localization. To this end, we present a Dual Fusion Attention (DFA) module that tunes the feature distribution by producing an attention mask based on the dual fusion of spatial location and channel information, so that each feature representation can adaptively enrich its discriminative regions and suppress the influence of background noise. Because the attention masks are computed only from a compressed combination of the spatial and channel descriptors, the DFA module is lightweight, adding negligible computational complexity and few extra parameters. Image classification experiments with modern CNN models demonstrate the superiority of the DFA module. Specifically, with ResNet50 as the backbone, our method achieves a 1.16% improvement on the CIFAR100 benchmark and a 0.93% improvement on the ImageNet-200 benchmark.
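The abstract describes a mask computed from fused spatial and channel descriptors and applied multiplicatively to the features. As a rough illustration only (the paper's exact fusion operation is not given here), the sketch below assumes the simplest choices: a global-average-pooled channel descriptor, a channel-averaged spatial descriptor, additive fusion, and a sigmoid gate broadcast over the feature map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_fusion_attention(x):
    """Illustrative sketch of a dual-fusion attention mask.

    x: a single feature map of shape (C, H, W).
    Assumptions (not from the paper): the channel descriptor is a
    global average pool, the spatial descriptor is a mean over
    channels, and the two are fused by broadcast addition.
    """
    chan_desc = x.mean(axis=(1, 2), keepdims=True)   # (C, 1, 1): channel information
    spat_desc = x.mean(axis=0, keepdims=True)        # (1, H, W): spatial location information
    mask = sigmoid(chan_desc + spat_desc)            # fused mask, broadcast to (C, H, W)
    return x * mask                                  # multiplicative recalibration

feat = np.random.randn(8, 4, 4)
out = dual_fusion_attention(feat)
print(out.shape)  # same shape as the input: (8, 4, 4)
```

Since the sigmoid gate lies in (0, 1), the mask can only attenuate responses, which matches the abstract's framing of suppressing background noise; the descriptors themselves add no learned parameters in this toy version, consistent with the claimed lightweight design.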