Double-Channel Guided Generative Adversarial Network for Image Colorization

2021 
Image colorization has found widespread application in video and image restoration over the past few years. Recently, automatic colorization methods based on deep learning have shown impressive performance. However, these methods map the grayscale input directly to a multi-channel output, and detailed information is often lost during feature extraction, resulting in abnormal colors in local areas of the colorized image. To suppress these abnormal colors and improve colorization quality, we propose a novel Double-Channel Guided Generative Adversarial Network (DCGGAN). It comprises two modules: a reference component matching module and a double-channel guided colorization module. The reference component matching module selects suitable reference color components as auxiliary information for the input. The double-channel guided colorization module learns the mapping from the grayscale image to each color channel with the assistance of the reference color components. Experimental results show that the proposed DCGGAN outperforms existing methods on different quality metrics and achieves state-of-the-art performance.
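The two-module pipeline described above can be illustrated with a minimal sketch. The abstract does not specify how reference components are matched or how each channel mapping is implemented, so everything below is an assumption for illustration only: the matching step is approximated here by comparing luminance histograms, the color space is assumed to be Lab (grayscale L plus two chroma channels), and `map_a`/`map_b` stand in for the learned per-channel generators.

```python
import numpy as np

def match_reference(gray, reference_bank):
    # Hypothetical reference component matching: pick the reference whose
    # luminance histogram is closest (L1 distance) to the input grayscale.
    # The paper's actual matching criterion is not specified in the abstract.
    hist = lambda img: np.histogram(img, bins=16, range=(0.0, 1.0), density=True)[0]
    g = hist(gray)
    dists = [np.abs(hist(ref_l) - g).sum() for ref_l, _, _ in reference_bank]
    return reference_bank[int(np.argmin(dists))]

def colorize(gray, reference_bank, map_a, map_b):
    # Double-channel guidance (assumed structure): each chroma channel is
    # produced by its own mapping from the grayscale image, conditioned on
    # the matched reference's corresponding color component.
    _, ref_a, ref_b = match_reference(gray, reference_bank)
    a = map_a(gray, ref_a)
    b = map_b(gray, ref_b)
    return np.stack([gray, a, b], axis=-1)  # (H, W, 3) Lab-style output
```

In the actual DCGGAN these per-channel mappings would be adversarially trained networks; the sketch only shows how the matched reference components flow into two separate channel predictions.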