A multi-level context-guided classification method with object-based convolutional neural network for land cover classification using very high resolution remote sensing images

2020 
Abstract Classification of very high resolution imagery (VHRI) is challenging due to the difficulty of mining complex spatial and spectral patterns from rich image details. Various object-based Convolutional Neural Networks (OCNN) for VHRI classification have been proposed to overcome the drawbacks of redundant pixel-wise CNNs, owing to their low computational cost and fine contour preservation. However, the classification performance of OCNN is still limited by geometric distortions, insufficient feature representation, and a lack of contextual guidance. In this paper, an innovative multi-level context-guided classification method with the OCNN (MLCG-OCNN) is proposed. A feature-fusing OCNN, including an object contour-preserving mask strategy supplemented by an object deformation coefficient, is developed for accurate object discrimination by simultaneously learning high-level features from independent spectral patterns, geometric characteristics, and object-level contextual information. Pixel-level contextual guidance is then used to further improve the per-object classification results. The MLCG-OCNN method is deliberately tested on two validated small image datasets with limited training samples to assess its performance in land cover classification applications where, as is very common in practice, a trade-off must be found between the time consumed in sample training and overall accuracy. Compared with traditional benchmark methods including the patch-based per-pixel CNN (PBPP), the patch-based per-object CNN (PBPO), the pixel-wise CNN with object segmentation refinement (PO), semantic segmentation U-Net (U-NET), and DeepLabV3+ (DLV3+), the MLCG-OCNN method achieves remarkable classification performance (>80 %). Compared with the state-of-the-art DeepLabV3+ architecture, the MLCG-OCNN method demonstrates high computational efficiency for VHRI classification (4–5 times faster).
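To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of an object-based CNN that classifies a contour-masked object patch and fuses the learned spectral/spatial features with a scalar geometric descriptor (standing in for the object deformation coefficient). It assumes PyTorch; the class name FeatureFusingOCNN, the 4-band input, the layer sizes, and the toy data are all illustrative assumptions.

# Hypothetical sketch of a feature-fusing object-based CNN (not the paper's code).
import torch
import torch.nn as nn


class FeatureFusingOCNN(nn.Module):
    def __init__(self, in_bands=4, n_classes=6):
        super().__init__()
        # Convolutional branch: learns spectral/spatial features from the
        # contour-masked object patch.
        self.conv = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fusion head: concatenates CNN features with the geometric
        # descriptor (e.g. a deformation coefficient) before classification.
        self.head = nn.Sequential(
            nn.Linear(64 + 1, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, patch, mask, deform_coef):
        # Zero out pixels outside the segmented object so the network sees
        # only the object's own spectral pattern (contour-preserving mask).
        masked = patch * mask
        feat = self.conv(masked).flatten(1)            # (B, 64)
        fused = torch.cat([feat, deform_coef], dim=1)  # append geometry
        return self.head(fused)                        # per-object logits


# Toy usage: one 4-band 32x32 object patch, a binary object mask, and a
# scalar deformation coefficient for that object.
model = FeatureFusingOCNN()
patch = torch.rand(1, 4, 32, 32)
mask = (torch.rand(1, 1, 32, 32) > 0.5).float()
logits = model(patch, mask, torch.tensor([[0.7]]))
print(logits.shape)  # torch.Size([1, 6])

The per-object predictions produced this way would then, in the paper's full pipeline, be refined with pixel-level contextual guidance; that post-processing step is not shown here.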