From Supervised to Unsupervised Learning for Land Cover Analysis of Sentinel-2 Multispectral Images

2020 
Sentinel-2 provides a large volume of multi-spectral, multi-resolution data. Training a deep convolutional architecture on such data remains challenging for land cover classification due to the absence of ground truth, and selecting an appropriate deep architecture for the Sentinel-2 data is a further challenge. In this paper, we propose a convolutional neural network (CNN) architecture to extract information from various combinations of bands in Sentinel-2 imagery. We use a loss function, proposed in an earlier work, to train our model in an unsupervised manner. Recent advances in deep learning allow knowledge to be transferred from one dataset to another. Thus, in our study, we analyze the "transfer learning" capabilities of our proposed network for land cover classification in Sentinel-2 images. Pre-trained weights of a deep neural architecture, trained on high-resolution hyperspectral imagery from the AVIRIS sensor, are transferred to a network of almost similar architecture for processing Sentinel-2 data. We used Salinas, a publicly available hyperspectral dataset, to train this architecture in a supervised fashion and, finally, applied transfer learning and fine-tuning to the architecture handling Sentinel-2 data for clustering. Experiments show that combining bands of 60 m resolution with bands of 20 m resolution improves the distinct segregation of water bodies, urban settlements, and tree canopies.
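The weight-transfer step described above, moving pre-trained weights into an "almost similar" architecture, could be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer names, shapes, and band counts (224 AVIRIS bands for Salinas, 13 Sentinel-2 bands) are assumptions used only to show that layers with mismatched shapes, such as the input convolution, must be skipped and re-learned during fine-tuning.

```python
import numpy as np

def transfer_matching_weights(source_state, target_state):
    """Copy pre-trained weights from a source network into a target
    network of almost similar architecture: only layers whose names
    and shapes match are transferred; the rest keep the target's
    initialization and are learned during fine-tuning."""
    transferred = []
    for name, w in target_state.items():
        if name in source_state and source_state[name].shape == w.shape:
            target_state[name] = source_state[name].copy()
            transferred.append(name)
    return transferred

# Hypothetical state dicts. The first convolution cannot be transferred
# because the number of input bands differs between the two sensors.
source = {
    "conv0.weight": np.zeros((32, 224, 3, 3)),  # Salinas: 224 AVIRIS bands
    "conv1.weight": np.ones((64, 32, 3, 3)),
}
target = {
    "conv0.weight": np.zeros((32, 13, 3, 3)),   # Sentinel-2: 13 bands
    "conv1.weight": np.zeros((64, 32, 3, 3)),
}
moved = transfer_matching_weights(source, target)
# only "conv1.weight" is copied; "conv0.weight" shapes differ
```

In frameworks such as PyTorch, the same effect is commonly achieved by loading a state dict non-strictly, so that unmatched layers are left at their initialization.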