Using Taylor Expansion and Convolutional Sparse Representation for Image Fusion

2020 
Abstract Image decomposition and sparse representation (SR) based methods have achieved great success in multi-source image fusion. However, their performance degrades for two reasons: (i) decomposition-based methods are limited by their image descriptions; (ii) SR-based methods preserve detail poorly because the images are divided into overlapping patches. To address these deficiencies, a novel method based on Taylor expansion and convolutional sparse representation (TE-CSR) is proposed for image fusion. First, Taylor expansion theory is, to the best of our knowledge, introduced for the first time to decompose each source image into intrinsic components: one deviation component and several energy components. Second, a convolutional sparse representation with gradient penalties (CSRGP) model is built to fuse the deviation components, while the average rule is employed to combine the energy components. Finally, the inverse Taylor expansion is used to reconstruct the fused image. The proposed method narrows the gap in image description found in existing decomposition-based algorithms and improves the limited detail preservation caused by patch-wise sparse coding in SR-based approaches. Extensive experimental results demonstrate the effectiveness of the TE-CSR method.
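The decompose-fuse-reconstruct pipeline described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual TE-CSR algorithm: a simple box blur stands in for the extraction of an energy component, the residual stands in for the deviation component, and a max-absolute selection rule stands in for the CSRGP fusion of deviation components; all function names are hypothetical.

```python
import numpy as np

def smooth(img, k=5):
    # Box blur as a stand-in low-pass operator producing an "energy" component.
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(a, b, k=5):
    # Decompose each source into an energy (smooth) component and a
    # deviation (residual) component -- a crude analogue of the paper's
    # Taylor-expansion decomposition.
    ea, eb = smooth(a, k), smooth(b, k)
    da, db = a - ea, b - eb
    # Average rule for the energy components (as in the abstract);
    # max-absolute selection stands in for the CSRGP model on deviations.
    e = 0.5 * (ea + eb)
    d = np.where(np.abs(da) >= np.abs(db), da, db)
    # Reconstruction: recombine the fused components (the analogue of the
    # inverse Taylor expansion step).
    return e + d
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition and reconstruction are consistent.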