Spectral and Spatial Feature Fusion for Hyperspectral Image Classification

2022 
Compared with traditional images, hyperspectral images (HSIs) contain not only spatial information but also rich spectral information. However, mainstream hyperspectral image classification (HIC) methods are based on convolutional neural networks (CNNs), which excel at extracting spatial features but have limitations in handling continuous spectral sequence information. Therefore, the Transformer, which is well suited to processing sequences, has gradually been applied to HIC. Moreover, since an HSI is a typical 3-D structure, we believe the correlation among its three dimensions is also important information. Hence, to fully extract the spectral–spatial information as well as the correlation among the three dimensions, we propose a spectral and spatial feature fusion module (i.e., TransCNN) for HIC. TransCNN consists of CNNs and a Transformer: the former mine spatial and spectral information along different dimensions, while the latter not only performs the most critical fusion but also captures deeper relational characteristics. We transpose the data so that three CNN branches extract features, and their correlation, along each dimension. Because the resulting feature maps still contain deep spectral information, we embed them into 1-D vectors and use the Transformer encoder to extract features. However, some information is lost when embedding into 1-D vectors. Therefore, we use the decoder, which has largely been ignored in the vision field, to fuse the features from before the encoder with those extracted by the encoder. The two kinds of features fused by the decoder are finally fed into the classifier. Experimental results on real HSIs show that the proposed architecture achieves competitive performance compared with state-of-the-art methods.
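The core data-handling idea in the abstract, transposing the 3-D HSI cube so that each of three CNN branches treats a different dimension as its channel axis, then flattening feature maps into 1-D token vectors for the Transformer encoder, can be sketched as follows. This is a minimal shape-level illustration, not the authors' implementation; the patch size, band count, and variable names are assumptions, and the CNN branches themselves are omitted.

```python
import numpy as np

# Hypothetical HSI patch: height x width x spectral bands (sizes assumed).
H, W, B = 9, 9, 200
cube = np.random.rand(H, W, B)

# Three transposed views of the same cube, one per CNN branch, so that
# each branch sees a different dimension along its "channel" axis and
# can capture the correlation specific to that dimension.
view_bands = cube                      # (H, W, B): spectral bands as channels
view_rows  = cube.transpose(2, 1, 0)   # (B, W, H): spatial rows as channels
view_cols  = cube.transpose(0, 2, 1)   # (H, B, W): spatial columns as channels

# After the (omitted) CNN branches, feature maps are embedded into
# 1-D vectors: here, one token per pixel with its full spectral vector.
tokens = view_bands.reshape(-1, B)     # (H*W, B) token sequence for the encoder
assert tokens.shape == (H * W, B)
```

The transposes are views in NumPy (no data copy), which mirrors why this three-branch decomposition is cheap: each branch reads the same cube along a different axis ordering.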