Capsulenet-Based Spatial–Spectral Classifier for Hyperspectral Images

2019 
In this paper, a Capsulenet-based framework is proposed for extracting spectral and spatial features to improve hyperspectral image classification. Unlike conventional strategies, the proposed framework simultaneously optimizes both feature extraction and classification. The spectral features/patterns derived at different levels of the hierarchy are remodeled as spectral-feature capsules. Consequently, unlike conventional convolutional neural network-based approaches, the relative locations as well as other properties such as depth, width, and position of the spectral patterns are taken into consideration. In addition to learning spectral features/patterns, a convolutional long short-term memory (conv-LSTM) network is employed to sequentially integrate the spatial features learned from each band. The integrated spatial-feature representation, obtained from the final hidden state of the conv-LSTM, forms the spatial-feature capsules. This capsule-level integration of spatial and spectral features/patterns yields better convergence and accuracy than both ensemble-based and kernel-level integration. Along with the margin loss, a spectral-angle-based reconstruction loss is minimized to regularize the learning of the network weights. Experiments on several standard datasets indicate that the proposed approach outperforms other prominent hyperspectral classifiers. Furthermore, compared with recent deep learning models, our approach is less sensitive to the network parameters and achieves better accuracy even with smaller network depth.
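The spectral-angle-based reconstruction loss mentioned above can be illustrated with a minimal sketch. This is an assumption of how such a regularizer is typically computed (the spectral angle mapper between each original spectrum and its reconstruction, averaged over pixels); the function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def spectral_angle_loss(x, x_hat, eps=1e-8):
    """Mean spectral angle (in radians) between original and reconstructed spectra.

    x, x_hat: arrays of shape (n_pixels, n_bands).
    The angle for each pixel is arccos(<x, x_hat> / (||x|| * ||x_hat||));
    identical spectra give an angle of 0, orthogonal spectra give pi/2.
    """
    dot = np.sum(x * x_hat, axis=1)
    norms = np.linalg.norm(x, axis=1) * np.linalg.norm(x_hat, axis=1)
    # Clip to the valid domain of arccos to guard against rounding error.
    cos_theta = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos_theta)))
```

Because the spectral angle is invariant to per-pixel scaling, this loss penalizes differences in spectral shape rather than magnitude, which is why it is a natural fit for hyperspectral reconstruction.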