One-Shot Summary Prototypical Network Toward Accurate Unpaved Road Semantic Segmentation

2021 
Recent studies of driving scene understanding based on image semantic segmentation have achieved dramatic advances in speed and accuracy. Large-scale public datasets for semantic segmentation of paved-road driving scenes have driven these advances, but no comparable large-scale public dataset exists for unpaved road environments. Building a large-scale image semantic segmentation dataset for unpaved roads is very expensive, and domain gaps between geographically distributed locations and across seasonal changes hinder building a training dataset adequate to train a convolutional neural network model. In this paper, to resolve the data insufficiency problem, we adopt a one-shot learning setting for unpaved road driving scene understanding. Our One-shot Summary Prototypical Network (OSPNet) is trained on paved-road driving scenes, and it identifies drivable regions on unpaved roads given only a single support image and its unpaved-road mask. OSPNet improves on previous two-branch few-shot segmentation approaches by introducing a summary branch that enables channel-wise weighting of important features in the feature maps of the support and query branches. Our experiments show that our model quantitatively and qualitatively outperforms recent supervised and few-shot segmentation models.
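The abstract's description of the summary branch, channel-wise weighting applied to both the support and query feature maps, can be illustrated with a short sketch. The code below is not the authors' implementation: the module name SummaryBranch, the masked average pooling step, and the two-layer MLP with a sigmoid gate are assumptions made only to show how per-channel weights derived from a single support image and its mask could re-weight both branches.

```python
# Hedged sketch of channel-wise weighting driven by a support image and mask.
# All design choices here (pooling, MLP shape, reduction ratio) are assumptions,
# not the OSPNet architecture as published.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SummaryBranch(nn.Module):
    """Produces per-channel weights from masked support features (assumed design)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # weights in (0, 1) used to re-weight feature channels
        )

    def forward(self, support_feat, support_mask):
        # Masked average pooling over the support foreground region -> (B, C)
        mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
        pooled = (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
        weights = self.mlp(pooled)                  # (B, C) channel weights
        return weights.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1) for broadcasting

# Usage: apply the same summary weights to support and query feature maps.
if __name__ == "__main__":
    B, C, H, W = 1, 256, 32, 32
    support_feat = torch.randn(B, C, H, W)
    query_feat = torch.randn(B, C, H, W)
    support_mask = torch.randint(0, 2, (B, 1, 64, 64)).float()

    summary = SummaryBranch(C)
    w = summary(support_feat, support_mask)
    support_weighted = support_feat * w
    query_weighted = query_feat * w
    print(support_weighted.shape, query_weighted.shape)
```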