Generating and Sifting Pseudo Labeled Samples for Improving the Performance of Remote Sensing Image Scene Classification

2020 
Deep learning-based methods are the current mainstream for remote sensing image scene classification, and their performance depends heavily on having enough labeled samples. Because manual labeling of samples requires high labor and time costs, many methods have been proposed to automatically generate pseudosamples from real samples; however, existing methods cannot directly sift the pseudosamples from the perspective of model training. To address this problem, a scheme for generating and sifting pseudolabeled samples is proposed in this article. First, the existing SinGAN is used to generate multiple groups of pseudosamples from the real samples. Then, a proposed quantitative sifting measure, which evaluates both authenticity and diversity from the perspective of model training, is employed to select the best group among the generated pseudosamples. Finally, the selected pseudosamples and the real samples are used to pretrain and fine-tune the deep scene classification network (DSCN), respectively. Moreover, the focal loss, originally proposed for object detection, is adopted in place of the traditional cross-entropy loss. A designed quantitative evaluation shows that the value of the proposed sifting measure is proportional to the overall accuracy, which validates its effectiveness. Comprehensive quantitative comparisons on the AID and NWPU-RESISC45 datasets, in terms of overall accuracy and confusion matrices, demonstrate that incorporating the pseudosamples selected by the proposed sifting measure together with the focal loss improves the performance of the DSCN.