Cross-Collaborative Fusion-Encoder Network for Robust RGB-Thermal Salient Object Detection

2022 
With the prevalence of thermal cameras, RGB-thermal (RGB-T) multi-modal data have become more available for salient object detection (SOD) in complex scenes. Most RGB-T SOD works first extract RGB and thermal features individually from two separate encoders and then directly integrate them, paying little attention to the issue of defective modalities. Such an indiscriminate feature extraction strategy may produce contaminated features and thus lead to poor SOD performance. To address this issue, we propose a novel cross-collaborative fusion-encoder network (CCFENet) from the perspective of performing robust and accurate multi-modal feature encoding. First, we propose an essential cross-collaboration enhancement strategy (CCE), which facilitates interactions across the encoders and encourages the two modalities to complement each other during encoding. This cross-collaborative-encoder paradigm induces the network to collaboratively suppress the negative feature responses of defective modality data and to effectively exploit modality-informative features. Moreover, we embed several CCEs at successive depths of the encoder, enabling more representative and robust feature generation as the network goes deeper. Second, benefiting from the proposed robust encoding paradigm, we design a simple yet effective cross-scale cross-modal decoder (CCD) that aggregates multi-level complementary multi-modal features, enabling efficient and accurate RGB-T SOD. Extensive experiments show that CCFENet outperforms state-of-the-art models on three RGB-T datasets with a fast inference speed of 62 FPS. In addition, its advantages in complex scenarios (e.g., bad weather, motion blur) and on RGB-D SOD further verify its robustness and generality. The source code will be publicly available via our project page: https://git.openi.org.cn/OpenVision/CCFENet .
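To make the cross-collaborative-encoder idea concrete, below is a minimal PyTorch sketch of a CCE-style block. It assumes a simple gating-based interaction in which each modality's features are modulated by a spatial gate predicted from the other modality; the class name, gate design, and residual formulation are illustrative assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn

class CrossCollaborationEnhancement(nn.Module):
    """Hypothetical sketch of a CCE-style block: each stream predicts a
    gate from the other stream's features, so a defective modality's
    unreliable responses can be suppressed before re-entering its encoder."""

    def __init__(self, channels: int):
        super().__init__()
        # Gates computed from the opposite modality (design is an assumption).
        self.gate_from_rgb = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.gate_from_thermal = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f_rgb: torch.Tensor, f_t: torch.Tensor):
        # Each stream is residually refined by a gate derived from the
        # other stream, letting the two encoders collaborate at this stage.
        f_rgb_out = f_rgb + f_rgb * self.gate_from_thermal(f_t)
        f_t_out = f_t + f_t * self.gate_from_rgb(f_rgb)
        return f_rgb_out, f_t_out

# Usage: embed one such block after each encoder stage, as the abstract
# describes, so the interaction deepens along with the network.
cce = CrossCollaborationEnhancement(channels=64)
rgb_feat = torch.randn(1, 64, 56, 56)      # stage-1 RGB features
thermal_feat = torch.randn(1, 64, 56, 56)  # stage-1 thermal features
rgb_feat, thermal_feat = cce(rgb_feat, thermal_feat)
```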