Pretraining for Hyperspectral Convolutional Neural Network Classification

2018 
Convolutional neural networks (CNNs) have been shown to be a powerful tool for image classification. Recently, they have been adopted by the remote sensing community for material classification from hyperspectral images. However, CNNs are time-consuming to train and often require large amounts of labeled training data. The widespread use of CNNs in the image processing and computer vision communities has been facilitated by networks that have already been trained on large amounts of data. These pretrained networks can be used to initialize networks for new tasks. This transfer of knowledge makes it far less time-consuming to train a new classifier and reduces the need for a large labeled data set. The concept of transfer learning has not yet been fully explored by those using CNNs to train material classifiers from hyperspectral data. This paper provides insight into training hyperspectral CNN classifiers by transferring knowledge from well-labeled data sets to data sets that are less well labeled. It is shown that these CNNs can transfer between completely different domains and sensing platforms and still improve classification performance. The application of this work is the training of material classifiers on data acquired from field-based platforms, by transferring knowledge from publicly accessible airborne data sets. Factors such as training set size, CNN architecture, and the impact of filter width and wavelength interval are studied.
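The transfer step described above, initializing a new network from pretrained weights and replacing only the task-specific classification head, can be sketched as follows. This is a minimal, hypothetical illustration in NumPy, not the authors' implementation: the one-convolutional-layer network, the filter counts, and the class counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_spectral_cnn(n_filters, filter_width, n_classes, rng):
    """Random weights for a toy 1-D spectral CNN:
    one conv layer over the wavelength axis plus a linear classifier.
    (Hypothetical architecture for illustration only.)"""
    return {
        "conv": rng.standard_normal((n_filters, filter_width)) * 0.1,
        "fc": rng.standard_normal((n_filters, n_classes)) * 0.1,
    }

# 1. Stand-in for a network pretrained on a well-labeled (e.g. airborne)
#    source data set with 10 material classes.
source_net = init_spectral_cnn(n_filters=16, filter_width=9,
                               n_classes=10, rng=rng)

# 2. Transfer: copy the learned spectral filters into the target network,
#    but reinitialize the classification layer for the target task,
#    here assumed to have 5 material classes.
target_net = {
    "conv": source_net["conv"].copy(),           # transferred knowledge
    "fc": rng.standard_normal((16, 5)) * 0.1,    # fresh 5-class head
}

# The transferred filters match the source exactly; only the head is new.
assert np.array_equal(target_net["conv"], source_net["conv"])
print(target_net["conv"].shape, target_net["fc"].shape)
```

In practice the transferred layers would then be fine-tuned (or frozen) while the new head is trained on the smaller labeled target set; that training loop is omitted here.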