Consistency Guided Network for Degraded Image Classification

2020 
Although clear image classification and degraded image restoration have been extensively studied, the degraded image classification problem has been largely overlooked. Degraded images usually yield low classification performance due to the presence of blur, noise, and other imperfections. Existing methods improve the classification performance of degraded images through simple restoration, fine-tuning, or data augmentation techniques. However, they ignore the useful guidance provided by clear images, which leads to a performance gap between degraded and clear images. In this article, we find that the category distribution, feature distribution, and visual attention of degraded images are usually inconsistent with those of clear images. Motivated by this observation, we propose an end-to-end Consistency Guided Network, named CG-Net, for degraded image classification. More specifically, we first propose a Category Consistency Loss (CCL) to guide the model to learn a category distribution that is more consistent with that of clear images. Second, we propose a Semantic Consistency Loss (SCL) to enforce the model to learn a more robust feature representation guided by clear images. Third, we propose a Visual Attention Alignment Loss (VAAL), which aligns the semantically informative regions of clear and degraded images to improve performance on degraded images. In addition, our method is general and applicable to various kinds of degradation. Extensive experiments on various degraded images show that the proposed method significantly outperforms baselines, demonstrating its effectiveness.
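The three consistency losses described above can be sketched as simple distance terms between the outputs of a degraded-image branch and a clear-image branch. The exact formulations belong to the paper; the NumPy functions below are only one plausible instantiation (KL divergence for category consistency, mean-squared error for feature and attention consistency), and all function and variable names are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def category_consistency_loss(logits_deg, logits_clear):
    # One plausible CCL: KL divergence pulling the degraded-image
    # category distribution toward the clear-image distribution.
    p = softmax(logits_clear)
    q = softmax(logits_deg)
    return float(np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8))))

def semantic_consistency_loss(feat_deg, feat_clear):
    # One plausible SCL: mean-squared distance between intermediate
    # feature maps of the two branches.
    return float(np.mean((feat_deg - feat_clear) ** 2))

def visual_attention_alignment_loss(att_deg, att_clear):
    # One plausible VAAL: mean-squared distance between spatially
    # normalized attention maps, aligning informative regions.
    def normalize(a):
        return a / (a.sum() + 1e-8)
    return float(np.mean((normalize(att_deg) - normalize(att_clear)) ** 2))
```

All three terms vanish when the degraded branch matches the clear branch exactly; in training they would typically be weighted and added to a standard cross-entropy classification loss.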