Exposure Fusion for Dynamic Scenes Combining Retinex Theory and Low-Rank Matrix Completion

2019 
Exposure fusion (EF) directly generates a low dynamic range (LDR) image with image quality comparable to that of a high dynamic range (HDR) image by combining images taken at different exposures, thereby overcoming the limited dynamic range of common digital cameras. EF for dynamic scenes is challenging, not only because of the difficulty of removing various ghost artifacts but also because of the need to preserve details, especially in saturated regions. Recently, low-rank matrix completion (LRMC) has been shown to be effective at separating the latent background from sparse motion in the irradiance domain. However, the performance of LRMC models strongly depends on the estimated irradiance images. To address this problem, this paper proposes a novel EF method that combines Retinex theory with the LRMC model. The proposed method consists of four steps. First, according to Retinex theory, each image is decomposed into illumination and reflection components. Second, the LRMC model is applied to the reflection component to generate the background reflection component and the sparse error. Third, the motion map is modeled as a Markov random field (MRF) that integrates the sparse error with an ordering constraint across all the illumination components. Finally, all the illumination components are fused via a pyramid-based method, where the weight maps are defined from the obtained motion map and the illumination. Experimental results show that the proposed method outperforms state-of-the-art methods, particularly in preserving details in saturated regions.
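
The following is a minimal sketch of the first two steps only, under stated assumptions: the Retinex split I = L * R is computed with a Gaussian-blur illumination estimate, and an RPCA-style low-rank + sparse decomposition (inexact ALM) stands in for the paper's LRMC model. The function names (retinex_decompose, lowrank_sparse_split), the blur-based illumination estimator, and all parameters are illustrative assumptions, not the authors' implementation; the MRF motion map and pyramid fusion stages are not reproduced.

import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=15.0, eps=1e-6):
    # Retinex model: I = L * R. The smooth illumination L is estimated here
    # by Gaussian blurring (an assumption); reflectance is then R = I / L.
    illumination = gaussian_filter(img.astype(np.float64), sigma) + eps
    reflectance = img / illumination
    return illumination, reflectance

def soft_threshold(x, tau):
    # Element-wise shrinkage operator used by both sub-problems below.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse_split(D, lam=None, tol=1e-7, max_iter=500):
    # Inexact-ALM robust PCA: D ~ A (low-rank background reflectance)
    # + E (sparse error from scene motion). A generic stand-in for the
    # paper's LRMC model; D holds one vectorized reflectance component
    # per column (one column per exposure).
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, "fro")
    spectral = np.linalg.norm(D, 2)
    Y = D / max(spectral, np.max(np.abs(D)) / lam)  # dual variable init
    mu, rho = 1.25 / spectral, 1.5
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update via singular value thresholding.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * soft_threshold(s, 1.0 / mu)) @ Vt
        # Sparse update via element-wise shrinkage.
        E = soft_threshold(D - A + Y / mu, lam / mu)
        residual = D - A - E
        Y = Y + mu * residual
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(residual, "fro") / norm_D < tol:
            break
    return A, E

In a pipeline of this shape, each exposure would be passed through retinex_decompose, the resulting reflectances stacked column-wise into D, and lowrank_sparse_split would return the background component A and the sparse error E that the later motion-map and fusion stages consume.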