A Lightweight Gated Global Module for Global Context Modeling in Neural Networks.

2020 
Global context modeling has been used to achieve better performance in various computer vision tasks, such as classification, detection, segmentation, and multimedia retrieval. However, most existing global mechanisms suffer from convergence problems during training. In this paper, we propose a novel gated global module (GGM) that is lightweight yet effective at integrating global information into feature representations. Treating the original structure of the network as a local block, our module infers global information in parallel with local information; a gate function then generates global guidance, which is applied to the output of the local block to capture representative information. The proposed GGM can be easily integrated with common CNN architectures and is training friendly. We used a classification task as an example to verify the effectiveness of the proposed GGM, and extensive experiments on ImageNet and CIFAR demonstrated that our method is widely applicable and conducive to integrating global information into common networks.
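The gating idea described above can be illustrated with a minimal NumPy sketch: a global branch summarizes the local block's output, a gate function maps that summary to per-channel guidance in (0, 1), and the guidance modulates the local features. The use of global average pooling, a single learned gate matrix `w_gate`, and a sigmoid gate are assumptions for illustration; the paper's actual parameterization of the global branch and gate may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_global_module(local_feat, w_gate):
    """Hypothetical sketch of a gated global module.

    local_feat: (C, H, W) output of the local block.
    w_gate:     (C, C) learned gate weights (an assumption).
    """
    # Global branch: summarize spatial context per channel.
    # Global average pooling is an assumed choice of global descriptor.
    global_desc = local_feat.mean(axis=(1, 2))        # shape (C,)
    # Gate function: produce channel-wise guidance in (0, 1).
    gate = sigmoid(w_gate @ global_desc)              # shape (C,)
    # Apply the global guidance to the local block's output.
    return local_feat * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # toy feature map, C=8
w = rng.standard_normal((8, 8))
out = gated_global_module(feat, w)
print(out.shape)
```

Because the gate values lie strictly in (0, 1), the module rescales each channel of the local output rather than replacing it, which is one reason such gated designs tend to be easy to train when inserted into an existing network.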