Superpixel-level dictionary learning for hyperspectral image classification

2017 
This paper presents a superpixel-level dictionary learning model for hyperspectral data. The idea is to divide the hyperspectral image into a number of superpixels by means of a superpixel segmentation method. Each superpixel forms a spatial neighborhood called a contextual group. Each pixel is represented as a linear combination of a few dictionary atoms learned from the training data; since the pixels inside a superpixel often consist of the same materials, their linear combinations are constrained to use common dictionary atoms. To this end, the sparse coefficients of a contextual group are forced to share a common sparsity pattern by applying a joint sparse regularizer during dictionary learning. The sparse coefficients are then used for classification with a linear support vector machine. The validity of the proposed method is experimentally verified on real hyperspectral images.
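The shared-sparsity idea for a contextual group can be illustrated with a greedy joint sparse coder. The sketch below is not the paper's algorithm; it uses a simultaneous orthogonal matching pursuit (SOMP)-style solver as a stand-in for the joint sparse regularizer, and the dictionary, dimensions, and data are synthetic, illustrative assumptions. Each column of `X` is one pixel's spectrum in a superpixel, and all columns are coded with the same small set of dictionary atoms, yielding a row-sparse coefficient matrix.

```python
import numpy as np

def somp(D, X, k):
    """SOMP-style joint sparse coding: code every pixel in a contextual
    group (the columns of X) with the SAME k dictionary atoms, so the
    coefficient matrix is row-sparse (a shared sparsity pattern)."""
    n_atoms = D.shape[1]
    residual = X.copy()
    support = []
    A = np.zeros((n_atoms, X.shape[1]))
    for _ in range(k):
        # Score each atom by its total correlation with all residuals
        # in the group, so one atom is chosen jointly for all pixels.
        scores = np.linalg.norm(D.T @ residual, axis=1)
        scores[support] = -np.inf          # do not reselect chosen atoms
        support.append(int(np.argmax(scores)))
        # Least-squares refit of the whole group onto the selected atoms.
        A = np.zeros((n_atoms, X.shape[1]))
        A[support], *_ = np.linalg.lstsq(D[:, support], X, rcond=None)
        residual = X - D @ A
    return A

# Synthetic example: 5 pixels in one superpixel, 100-atom dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
coef = np.zeros((100, 5))
coef[[3, 17, 42]] = rng.standard_normal((3, 5))  # common 3-atom support
X = D @ coef                               # group with shared materials
A = somp(D, X, k=3)
print("atoms used by the group:",
      sorted(np.flatnonzero(np.abs(A).sum(axis=1) > 1e-8)))
```

The row-sparse matrix `A` is what would then be fed to a linear SVM as the per-pixel feature vectors; in the paper the dictionary itself is also learned, whereas here it is fixed and random for brevity.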