Few-Shot Scene Classification Using Auxiliary Objectives and Transductive Inference

2022 
Few-shot learning is the ability to generalize from very few examples. To achieve few-shot scene classification of optical remote sensing images, we propose a two-stage framework that first learns a general-purpose representation and then propagates knowledge in a transductive paradigm. Concretely, the first stage jointly trains a semantic class prediction task and two auxiliary objectives in a multitask model: rotation prediction estimates the 2-D rotation applied to an input, and contrastive prediction pulls positive pairs together while pushing negative pairs apart. The second stage seeks an expected prototype with minimal distance to all samples of the same class. In particular, label propagation (LP) is applied to jointly predict labels for both labeled and unlabeled data; the labeled set is then expanded with the pseudo-labeled samples, forming a rectified prototype that supports better nearest-neighbor classification. Extensive experiments on standard benchmarks, including NWPU-RESISC45 (a 45-class remote sensing scene classification dataset published by Northwestern Polytechnical University), the Aerial Image Dataset (AID), and WHU-RS-19 (a 19-class remote sensing scene classification dataset published by Wuhan University), demonstrate that our method is effective and significantly outperforms many state-of-the-art approaches.
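The rotation-prediction auxiliary objective can be sketched as below. This is a minimal illustration, not the authors' code: the function name and array layout (N, H, W, C) are assumptions. Each image yields four rotated copies, and the model is trained to predict the 4-way rotation label.

```python
import numpy as np

def make_rotation_task(images):
    """Build the self-supervised rotation-prediction task.

    Each image is rotated by 0/90/180/270 degrees and paired with a
    4-way label k (the rotation is k * 90 degrees). `images` is assumed
    to be an array of shape (N, H, W, C); a sketch, not the paper's code.
    """
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))  # rotate in the (H, W) plane
            ys.append(k)                 # rotation class to predict
    return np.stack(xs), np.array(ys)
```

A classifier head on the shared backbone would then be trained with cross-entropy on these 4-way labels alongside the semantic classification loss.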
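The transductive second stage can be sketched roughly as follows. This is a minimal NumPy illustration under assumed choices (pre-extracted embeddings, an RBF affinity graph, and the closed-form propagation F = (I - alpha*S)^-1 Y); the function name and hyperparameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def label_propagation_prototypes(support_x, support_y, query_x,
                                 n_classes, alpha=0.99, sigma=1.0):
    """Label propagation over an episode, then prototype rectification.

    A sketch: inputs are pre-extracted embedding vectors. Queries receive
    pseudo-labels via propagation, the labeled set is expanded, and the
    rectified prototypes classify the queries by nearest neighbor.
    """
    x = np.vstack([support_x, query_x])               # all episode samples
    n = len(x)
    # RBF affinity matrix W with zeroed diagonal
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    # symmetric normalization S = D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(w.sum(1) + 1e-12)
    s = d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]
    # one-hot labels for support rows, zeros for unlabeled query rows
    y = np.zeros((n, n_classes))
    y[np.arange(len(support_y)), support_y] = 1.0
    # closed-form propagation: F = (I - alpha * S)^{-1} Y
    f = np.linalg.solve(np.eye(n) - alpha * s, y)
    pseudo = f[len(support_x):].argmax(1)             # query pseudo-labels
    # expand the labeled set and form rectified prototypes
    all_y = np.concatenate([support_y, pseudo])
    protos = np.stack([x[all_y == c].mean(0) for c in range(n_classes)])
    # nearest-prototype classification of the queries
    dist = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dist.argmin(1)
```

The closed-form solve replaces the usual iterative propagation; since every class has at least one support sample, each rectified prototype is always well defined.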