Superpixel-Based Graphical Model for Remote Sensing Image Mapping

2015 
Object-oriented remote sensing image classification is increasingly popular because it integrates spatial information from neighboring regions of different shapes and sizes into the classification procedure, improving mapping accuracy. However, object identification itself is challenging. Superpixels, groups of spatially connected similar pixels, lie at a scale between the pixel level and the object level and can be generated by oversegmentation. In this paper, we establish a new classification framework built on a superpixel-based graphical model. Superpixels, rather than pixels, serve as the basic unit of the graphical model, capturing contextual information and the spatial dependence between superpixels; this makes the classification less sensitive to noise and to the segmentation scale. The contribution of this paper, the application of a graphical model to remote sensing image semantic segmentation, is threefold: 1) gradient fusion is applied to multispectral images before the watershed segmentation algorithm is used for superpixel generation; 2) a probabilistic fusion method is designed to derive the node potentials in the superpixel-based graphical model, addressing the problem of insufficient training samples at the superpixel level; and 3) a boundary penalty between superpixels is introduced into the edge potential evaluation. Experiments on three real data sets show that the proposed method outperforms the related state-of-the-art methods tested.
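The abstract names three pipeline steps but gives no implementation details. The following Python sketch (assuming NumPy, SciPy, and scikit-image) illustrates one plausible reading of them: per-band gradient fusion followed by watershed oversegmentation, node potentials obtained by fusing per-pixel class posteriors within each superpixel, and contrast-sensitive edge weights that penalize label changes less across strong boundaries. The max-over-bands fusion rule, the mean-posterior fusion, the exp(-beta * gradient) boundary penalty, and all function names and parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed


def fused_gradient(image):
    """Fuse per-band Sobel gradient magnitudes of an (H, W, B) float image.
    The max-over-bands fusion rule is an assumed, illustrative choice."""
    grads = np.stack([sobel(image[..., b]) for b in range(image.shape[-1])], axis=-1)
    return grads.max(axis=-1)


def watershed_superpixels(image, marker_threshold=0.05):
    """Oversegment the image into superpixels with watershed on the fused
    gradient. Markers are connected low-gradient regions; the threshold
    is a hypothetical tuning parameter."""
    grad = fused_gradient(image)
    markers, _ = ndi.label(grad < marker_threshold)
    return watershed(grad, markers), grad


def node_potentials(labels, pixel_probs):
    """Probabilistic fusion of per-pixel class posteriors (H, W, C) into one
    node potential per superpixel: negative log of the mean posterior over
    the superpixel's pixels (an assumed fusion rule)."""
    n_sp, n_classes = labels.max(), pixel_probs.shape[-1]
    pot = np.zeros((n_sp + 1, n_classes))
    for s in range(1, n_sp + 1):
        mask = labels == s
        pot[s] = -np.log(pixel_probs[mask].mean(axis=0) + 1e-12)
    return pot


def edge_weights(labels, grad, beta=10.0):
    """Contrast-sensitive edge potentials between adjacent superpixels: the
    penalty for differing labels decays with the mean gradient along the
    shared boundary (a common boundary-penalty form, assumed here)."""
    acc = {}
    for shift in ((0, 1), (1, 0)):  # horizontal and vertical neighbours
        a = labels[: labels.shape[0] - shift[0], : labels.shape[1] - shift[1]]
        b = labels[shift[0]:, shift[1]:]
        g = 0.5 * (grad[: a.shape[0], : a.shape[1]] + grad[shift[0]:, shift[1]:])
        boundary = a != b
        for la, lb, gv in zip(a[boundary], b[boundary], g[boundary]):
            key = (min(la, lb), max(la, lb))
            s, c = acc.get(key, (0.0, 0))
            acc[key] = (s + gv, c + 1)
    # Edge potential falls off with the average gradient on the boundary.
    return {k: np.exp(-beta * s / c) for k, (s, c) in acc.items()}
```

Under these assumptions, the node and edge potentials would feed a standard pairwise energy over the superpixel adjacency graph, which can then be minimized with any off-the-shelf inference routine (e.g., graph cuts or belief propagation).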