Bidirectional Grid Fusion Network for Accurate Land Cover Classification of High-Resolution Remote Sensing Images

2020 
Land cover classification has achieved significant advances by employing deep convolutional network (ConvNet) based methods. Following the paradigm of learning deep models, land cover classification is modeled as semantic segmentation of very-high-resolution remote sensing images. To obtain accurate segmentation results, high-level categorical semantics and low-level spatial details must be effectively fused. To this end, we propose a novel bidirectional grid fusion network to aggregate multilevel features across the ConvNet. Specifically, the proposed model is characterized by a bidirectional fusion architecture, which enriches the diversity of feature interaction by encouraging bidirectional information flow. In this way, our model gains mutual benefits between top-down and bottom-up information flows. A grid fusion architecture then follows for further feature refinement in a dense, hierarchical fusion manner. Finally, because effective feature upsampling is critical for the multiple fusion operations, a content-aware feature upsampling kernel is incorporated for further improvement. Our full model consistently achieves significant improvement over state-of-the-art methods on two major datasets, ISPRS and GID.
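The bidirectional fusion idea described above can be illustrated with a minimal sketch: a top-down pass propagates coarse semantics to fine levels, a bottom-up pass propagates fine spatial detail to coarse levels, and the two flows are merged level-wise. This is an illustrative NumPy toy, not the paper's implementation; the function names and the nearest-neighbor resampling (a stand-in for the paper's learned content-aware upsampling kernel) are assumptions for demonstration.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor upsampling; a stand-in for the paper's
    # learned content-aware upsampling kernel
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(x):
    # strided subsampling as a simple downsampling stand-in
    return x[::2, ::2]

def bidirectional_fuse(feats):
    """feats: list of 2-D feature maps, finest (largest) first.
    Returns one fused map per level, combining both flows."""
    n = len(feats)
    # top-down flow: coarse semantics enrich finer levels
    td = [None] * n
    td[-1] = feats[-1]
    for i in range(n - 2, -1, -1):
        td[i] = feats[i] + upsample2x(td[i + 1])
    # bottom-up flow: fine spatial detail enriches coarser levels
    bu = [None] * n
    bu[0] = feats[0]
    for i in range(1, n):
        bu[i] = feats[i] + downsample2x(bu[i - 1])
    # merge the two flows level-wise
    return [t + b for t, b in zip(td, bu)]

# toy pyramid of three levels (8x8, 4x4, 2x2)
feats = [np.ones((8, 8)), np.ones((4, 4)), np.ones((2, 2))]
fused = bidirectional_fuse(feats)
```

In the full model, the element-wise sums above would be replaced by learned fusion modules, and the grid fusion architecture would further refine these per-level outputs through dense cross-level connections.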