You Only Need The Image: Unsupervised Few-Shot Semantic Segmentation With Co-Guidance Network

2020 
Few-shot semantic segmentation has recently attracted attention for its ability to segment unseen-class images with only a few annotated support samples. Yet existing methods not only need to be trained with large-scale pixel-level annotations on certain seen classes, but also require a few annotated support image-mask pairs to guide segmentation on each unseen class. In this paper, we propose the Co-guidance Network (CGNet) for unsupervised few-shot segmentation, which eliminates annotation requirements on both seen and unseen classes. Specifically, CGNet segments unseen-class images with only unlabeled support images via the newly designed co-guidance mechanism. Moreover, CGNet is trained on seen classes with a novel co-existence recognition loss, which further removes the need for pixel-level annotations. Extensive experiments on the PASCAL-$5^{i}$ dataset show that the unsupervised CGNet performs comparably with state-of-the-art fully-supervised few-shot methods, while largely alleviating the annotation requirement.
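To make the task setup concrete, the following is a minimal sketch of a generic 1-way 1-shot segmentation episode using masked average pooling and cosine similarity, a common baseline scheme in this literature. It is purely illustrative: it is not CGNet's co-guidance mechanism (which uses no support mask at all), and all function names and shapes are assumptions for the example.

```python
import numpy as np

def masked_average_prototype(support_feat, support_mask):
    """Class prototype via masked average pooling over support features.

    support_feat: (C, H, W) feature map; support_mask: (H, W) binary mask.
    """
    mask = support_mask.astype(support_feat.dtype)
    denom = mask.sum() + 1e-8
    # Weight each spatial feature vector by the mask, then average.
    return (support_feat * mask[None]).sum(axis=(1, 2)) / denom

def cosine_similarity_map(query_feat, prototype):
    """Score every query pixel against the prototype with cosine similarity."""
    q = query_feat / (np.linalg.norm(query_feat, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return np.einsum('chw,c->hw', q, p)

# Toy episode: 3-channel features on a 4x4 grid (hypothetical data).
rng = np.random.default_rng(0)
support_feat = rng.standard_normal((3, 4, 4))
support_mask = np.zeros((4, 4))
support_mask[1:3, 1:3] = 1  # foreground region in the support image
query_feat = rng.standard_normal((3, 4, 4))

proto = masked_average_prototype(support_feat, support_mask)
sim = cosine_similarity_map(query_feat, proto)
pred_mask = (sim > 0).astype(int)  # threshold similarity into a binary mask
```

The annotation cost the abstract targets is visible here: the baseline needs `support_mask` at test time, whereas an unsupervised approach like CGNet must derive its guidance from the unlabeled support image alone.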