Exploring Cross-Domain Pretrained Model for Hyperspectral Image Classification

2022 
A pretrain-finetune strategy is widely used to reduce the overfitting that can occur when data are insufficient for convolutional neural network (CNN) training. The first few layers of a CNN pretrained on a large-scale RGB dataset acquire general image characteristics, which transfer remarkably well to tasks targeting different RGB datasets. However, in the hyperspectral domain, where each domain has its own spectral properties, the pretrain-finetune strategy can no longer be deployed in the conventional way, owing to three major issues: 1) inconsistent spectral characteristics among domains (e.g., frequency range); 2) inconsistent numbers of data channels among domains; and 3) the absence of a large-scale hyperspectral dataset. We seek to train a universal cross-domain model that can later be deployed for various spectral domains. To achieve this, we physically furnish the model with multiple inlets alongside a universal portion designed to handle the inconsistent spectral characteristics among different domains; only the universal portion is used in the finetune process. This design naturally enables the model to learn from multiple domains simultaneously, which serves as an effective workaround for the absence of a large-scale dataset. We have carried out a study extensively comparing models trained with the cross-domain approach against ones trained from scratch; our approach proved superior in both accuracy and training efficiency. In addition, we verified that our approach effectively reduces overfitting, enabling us to deepen the model from 9 to 13 layers without compromising accuracy.
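For concreteness, the multi-inlet architecture described in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the names (CrossDomainHSINet, make_finetune_model), the 1x1-convolution inlets, the layer widths, and the benchmark band counts are assumptions made for the example.

```python
import torch
import torch.nn as nn

class CrossDomainHSINet(nn.Module):
    """One inlet per spectral domain plus a shared ("universal") backbone.
    Hypothetical sketch; not the paper's code."""

    def __init__(self, domain_channels, feat_dim=64,
                 num_classes_per_domain=None, depth=9):
        super().__init__()
        # Domain-specific inlets: map each domain's band count to a common width,
        # handling the inconsistent channel counts among domains.
        self.inlets = nn.ModuleDict({
            name: nn.Conv2d(bands, feat_dim, kernel_size=1)
            for name, bands in domain_channels.items()
        })
        # Universal portion shared by all domains; per the abstract, this is
        # the only part reused at finetune time.
        blocks = []
        for _ in range(depth):
            blocks += [nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
                       nn.BatchNorm2d(feat_dim),
                       nn.ReLU(inplace=True)]
        self.universal = nn.Sequential(*blocks)
        # Per-domain classification heads used during joint pretraining.
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, n)
            for name, n in num_classes_per_domain.items()
        })

    def forward(self, x, domain):
        h = self.inlets[domain](x)   # route through the matching inlet
        h = self.universal(h)        # shared cross-domain feature extractor
        h = h.mean(dim=(2, 3))       # global average pooling
        return self.heads[domain](h)

def make_finetune_model(pretrained, in_channels, num_classes, feat_dim=64):
    """Build a target-domain model that reuses only the universal portion."""
    net = CrossDomainHSINet({"target": in_channels}, feat_dim,
                            {"target": num_classes})
    net.universal.load_state_dict(pretrained.universal.state_dict())
    return net

# Joint pretraining across domains with different band counts
# (commonly cited band counts for three benchmark scenes).
bands = {"indian_pines": 200, "pavia_university": 103, "salinas": 204}
classes = {"indian_pines": 16, "pavia_university": 9, "salinas": 16}
model = CrossDomainHSINet(bands, num_classes_per_domain=classes)

x = torch.randn(8, 103, 9, 9)                  # a batch of Pavia patches
logits = model(x, domain="pavia_university")   # -> shape (8, 9)
```

The design point mirrored from the abstract is that each domain keeps its own inlet and head during joint pretraining, while only the universal portion carries over to a new target domain at finetune time, where a fresh inlet absorbs that domain's channel count.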