Towards Perceptual Image Fusion: A Novel Two-layer Framework

2020 
Abstract Recent studies in neuroscience indicate that the human visual system perceives regular and irregular contents separately: the former mainly convey primary visual information such as image structures, while the latter are generally messy and statistically independent, and appear less important in perception. However, without reference to this perceptual theory, existing image fusion algorithms treat the two types of content equally and may fail to preserve the most perceptually important information in the source images. In this work, we propose a new two-layer image fusion framework designed for consistency with human perception. The main contributions are as follows: (1) We first exploit the recently revealed perceptual theory and characterize the perceptual significance of regular and irregular image contents in image fusion. This makes it possible to develop fusion algorithms consistent with human perception and to preserve more desirable information in the fused result. (2) Following the concept of active inference in perception, we present a perceptual image decomposition model with a local regression method that separates images into regular and irregular layers. In this way, the two kinds of image content can be treated discriminatively, with carefully selected fusion strategies based on sparse representation and local energy. We conduct extensive experiments including subjective evaluation, objective evaluation, and perceptual assessment, and the results demonstrate the superiority of the proposed model.
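The following is a minimal sketch of the two-layer pipeline outlined in the abstract. It is illustrative only: the local-regression decomposition is approximated here by Gaussian smoothing, and the sparse-representation rule for the regular layer is replaced by a simple local-activity choice; the function names and parameters are assumptions, not the authors' implementation.

```python
# Illustrative two-layer fusion sketch (not the paper's exact algorithms).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def decompose(img, sigma=2.0):
    """Split an image into a regular (structural) layer and an irregular
    (residual) layer. Gaussian smoothing stands in for the paper's
    local-regression decomposition (an assumption)."""
    regular = gaussian_filter(img, sigma)
    irregular = img - regular
    return regular, irregular


def fuse_regular(r1, r2, win=7):
    """Fuse regular layers: per pixel, keep the source with larger local
    gradient activity (a stand-in for the sparse-representation rule)."""
    def activity(x):
        gy, gx = np.gradient(x)
        return uniform_filter(np.abs(gx) + np.abs(gy), win)
    return np.where(activity(r1) >= activity(r2), r1, r2)


def fuse_irregular(d1, d2, win=7):
    """Fuse irregular layers by local energy: keep the coefficient from the
    source whose windowed energy is larger, as the abstract suggests."""
    e1 = uniform_filter(d1 ** 2, win)
    e2 = uniform_filter(d2 ** 2, win)
    return np.where(e1 >= e2, d1, d2)


def two_layer_fusion(img_a, img_b):
    """Decompose both sources, fuse each layer separately, and recombine."""
    ra, da = decompose(img_a)
    rb, db = decompose(img_b)
    return fuse_regular(ra, rb) + fuse_irregular(da, db)


if __name__ == "__main__":
    a = np.random.rand(128, 128)  # placeholder source images
    b = np.random.rand(128, 128)
    print(two_layer_fusion(a, b).shape)
```

The design choice reflected here is the abstract's core idea: structural (regular) content and residual (irregular) content are fused with different rules rather than a single uniform strategy.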