Object-Based Mangrove Mapping Using Submeter Superspectral WorldView-3 Imagery and Deep Convolutional Neural Network

2021 
Mangroves provide numerous ecosystem services but suffer degradation from multiple sources. Mapping them globally at fine scale requires efficient classification of very high resolution (VHR) satellite imagery. Visible (VIS) and near-infrared (NIR) imagery is a great asset for mapping when classified with supervised machine learning. However, such imagery commonly provides only three VIS bands and one NIR band, which hampers the discrimination of diverse objects, and mangrove classification has so far relied largely on shallow learners. This research produced the first object-based mangrove classification using VHR superspectral (16-band) WorldView-3 imagery supervised by convolutional neural networks. Four spectral datasets were segmented and classified with a U-Net architecture: red-green-blue (RGB); RGB plus the yellow and coastal bands (VIS); VIS plus the red edge, NIR1, and NIR2 bands (VIS-NIR); and VIS-NIR plus eight mid-infrared bands (VIS-NIR-MIR). Although very satisfactory results were obtained for all four segmented datasets (averages ranging from 0.92 to 0.96), the best model was derived from the full 16-band dataset.
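To make the band-stacking idea concrete, the sketch below shows a minimal U-Net style encoder-decoder whose input layer accepts an arbitrary number of spectral bands (e.g., 3 for RGB, 5 for VIS, 8 for VIS-NIR, 16 for VIS-NIR-MIR). This is an illustrative assumption using PyTorch, not the authors' implementation; channel counts, network depth, patch size, and class count are placeholders.

```python
# Minimal U-Net sketch for multi-band patch segmentation (assumed PyTorch).
# The in_bands argument is the only thing that changes between the RGB,
# VIS, VIS-NIR, and VIS-NIR-MIR experiments described in the abstract.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_bands=16, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_bands, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(self.pool(e1))                          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)                                   # per-pixel scores

# Example: one 16-band 128x128 patch (hypothetical VIS-NIR-MIR input).
model = SmallUNet(in_bands=16, n_classes=2)
logits = model(torch.randn(1, 16, 128, 128))  # -> shape (1, 2, 128, 128)
```

The design choice illustrated here is simply that only the first convolution depends on the number of input bands, so the same architecture can be trained on each of the four spectral stacks and compared fairly.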