Classification Saliency-Based Rule for Visible and Infrared Image Fusion

2021 
Existing image fusion methods typically rely on hand-crafted fusion rules because deep feature maps are uninterpretable, which restricts network performance and causes distortion. To address these limitations, this paper realizes, for the first time, an interpretable importance evaluation of feature maps in a deep learning manner. This importance-oriented fusion rule helps preserve valuable feature maps and thus reduces distortion. In particular, we propose a pixel-wise classification saliency-based fusion rule. First, we employ a classifier to classify the two types of source images, capturing the differences and unique characteristics of the two classes. Then, the importance of each pixel is quantified as its contribution to the classification result, expressed in the form of classification saliency maps. Finally, the feature maps are fused according to the saliency maps to generate the fusion results. Moreover, because there is no need to manually decide which characteristics to retain, the method is unsupervised and requires little human involvement. Both qualitative and quantitative experiments demonstrate the superiority of our method over state-of-the-art fusion methods, even when using a simple network.
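The abstract does not specify implementation details, so the following is a minimal PyTorch sketch of the general idea, not the paper's exact formulation: per-pixel importance is approximated here with gradient-based classification saliency, and the two feature maps are blended with softmax weights derived from those saliency maps. The `classifier` model, the two-class (visible vs. infrared) setup, and the softmax weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def classification_saliency(classifier, image, target_class):
    """Gradient-based saliency: |d score / d pixel|, a common proxy for
    each pixel's contribution to the classification result."""
    image = image.clone().requires_grad_(True)
    score = classifier(image)[:, target_class].sum()
    score.backward()
    # Per-pixel importance: absolute gradient, max over color channels.
    return image.grad.abs().amax(dim=1, keepdim=True)


def saliency_weighted_fusion(feat_vis, feat_ir, sal_vis, sal_ir):
    """Fuse visible and infrared feature maps using softmax weights
    derived from their classification saliency maps."""
    # Resize saliency maps to the spatial resolution of the feature maps.
    size = feat_vis.shape[-2:]
    sal_vis = F.interpolate(sal_vis, size=size, mode="bilinear", align_corners=False)
    sal_ir = F.interpolate(sal_ir, size=size, mode="bilinear", align_corners=False)
    # Pixel-wise weights that sum to 1 across the two modalities.
    weights = torch.softmax(torch.cat([sal_vis, sal_ir], dim=1), dim=1)
    w_vis, w_ir = weights[:, :1], weights[:, 1:]
    return w_vis * feat_vis + w_ir * feat_ir
```

Under this sketch, a fusion network's encoder would produce `feat_vis` and `feat_ir`, the saliency maps would be computed from the source images against their respective classes, and the fused feature map would be passed to a decoder to reconstruct the fused image.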