Semantic Segmentation of Land Use / Land Cover (LU/LC) Types Using F-CNNS on Multi-Sensor (Radar-Ir-Optical) Image Data

2021 
Land Use/Land Cover (LU/LC) segmentation is a widely studied topic in the field of remote sensing. Past work has focused either on color (RGB) imagery and the Normalized Difference Vegetation Index (NDVI) or on Polarimetric Synthetic Aperture Radar (PolSAR) data independently. In this paper, we explore the potential of fusing RGB images with additional SAR and Near-Infrared (NIR) images for enhanced LU/LC segmentation through Fully-Convolutional Neural Networks (F-CNNs). F-CNNs have been extensively studied for semantic segmentation problems, with U-Net and SegNet being two well-known F-CNN architectures; both were used as references for this study. High-resolution RGB, SAR and NIR images were acquired through Google Earth (GE), the German Aerospace Center (DLR) and The Planet Laboratories, respectively. NIR was converted to NDVI for its higher potential in segmenting vegetated areas. Four multi-sensor configurations as input channels to the networks were studied after precise co-registration of these images, and the results were compared to individual channels for both architectures. Simon Fraser University (SFU), Burnaby Campus and its surrounding area were selected for this study due to their diverse land types. The area was divided into five classes: Roads, Buildings, Forest, Water and No Class (unclassified). An overall best accuracy of ~86% was achieved for the five-channel configuration (R+G+B+SAR+NDVI). We show that the inclusion of SAR and NDVI channels in an RGB-based network can significantly improve the performance of LU/LC segmentation.
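The five-channel input configuration described above can be sketched as follows. This is a minimal illustration only, assuming co-registered NumPy arrays of identical spatial dimensions; the function names and array conventions are hypothetical and not taken from the paper. NDVI is computed with its standard definition, (NIR − Red) / (NIR + Red).

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    A small epsilon guards against division by zero over dark pixels.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

def stack_five_channels(rgb, sar, nir):
    """Build a five-channel (R+G+B+SAR+NDVI) input tensor.

    rgb: (H, W, 3) array; sar, nir: (H, W) arrays, all co-registered.
    Returns an (H, W, 5) array suitable as network input.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = ndvi(nir, r)
    return np.stack([r, g, b, sar, v], axis=-1)
```

The same pattern extends to the other multi-sensor configurations (e.g. R+G+B+SAR or R+G+B+NDVI) by stacking the corresponding subset of channels.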