Visual Attention Dehazing Network with Multi-level Features Refinement and Fusion

2021 
Abstract

Image dehazing is important for many computer vision tasks. However, typical CNN-based methods learn a direct mapping from a hazy image to a clear image, ignoring haze-relevant priors and multi-level features. In this paper, a new Visual Attention Dehazing Network (VADN) with multi-level feature refinement and fusion is proposed, which leverages a haze attention map as a haze-relevant prior and learns complementary haze information across multi-level features. The VADN consists of a feature extraction network, a recurrent refinement network, and an encoder-decoder network. The feature extraction network captures multi-level features. The recurrent refinement network generates and refines the haze attention map by alternately taking low-level and high-level features as input. The haze attention map is then injected into the encoder-decoder network, which recovers the clear image with the help of complementary information learned from the informative multi-level features. Experimental results show that VADN achieves an average PSNR of 32.50 dB, outperforming most state-of-the-art methods by up to 5.14 dB. In addition, the run time of VADN is 0.067 s, only 55% of that of the recent enhanced pix2pix dehazing network.
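The three-stage pipeline described in the abstract (feature extraction, recurrent refinement of a haze attention map, and attention-guided encoder-decoder restoration) can be sketched in PyTorch as follows. This is a minimal illustrative sketch only: the class names (FeatureExtractor, RecurrentRefinement, EncoderDecoder, VADN), channel widths, number of refinement steps, and the concatenation-based injection of the attention map are assumptions for demonstration, not the paper's actual layer configuration.

```python
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    """Captures low-level and high-level features from the hazy input (hypothetical widths)."""
    def __init__(self, channels=32):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.high = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        low = self.low(x)      # low-level features
        high = self.high(low)  # high-level features
        return low, high


class RecurrentRefinement(nn.Module):
    """Generates and refines a single-channel haze attention map, alternating between
    low-level and high-level features as input at each recurrent step."""
    def __init__(self, channels=32, steps=4):
        super().__init__()
        self.steps = steps
        self.refine = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, low, high):
        b, _, h, w = low.shape
        attn = torch.zeros(b, 1, h, w, device=low.device)  # initial haze attention map
        for t in range(self.steps):
            feat = low if t % 2 == 0 else high  # alternate feature levels
            attn = self.refine(torch.cat([feat, attn], dim=1))
        return attn


class EncoderDecoder(nn.Module):
    """Restores the clear image; here the haze attention map is injected by simple
    concatenation with the hazy input (one possible injection scheme)."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1))

    def forward(self, hazy, attn):
        return self.decoder(self.encoder(torch.cat([hazy, attn], dim=1)))


class VADN(nn.Module):
    """End-to-end sketch: extract features -> refine haze attention map -> restore image."""
    def __init__(self):
        super().__init__()
        self.extract = FeatureExtractor()
        self.refine = RecurrentRefinement()
        self.restore = EncoderDecoder()

    def forward(self, hazy):
        low, high = self.extract(hazy)
        attn = self.refine(low, high)
        return self.restore(hazy, attn), attn


if __name__ == "__main__":
    model = VADN()
    dehazed, attn_map = model(torch.randn(1, 3, 256, 256))
    print(dehazed.shape, attn_map.shape)  # [1, 3, 256, 256] and [1, 1, 256, 256]
```

The sketch keeps the spatial resolution unchanged in the feature extractor so that the alternating low-level/high-level inputs to the recurrent refinement share the same size; the actual VADN may instead fuse features across different scales.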