D-CrossLinkNet for Automatic Road Extraction from Aerial Imagery

2020 
Road extraction is of great importance to remote sensing image analysis. Compared to traditional road extraction methods that rely on handcrafted features, Convolutional Neural Networks (CNNs) learn features automatically through deep hierarchical structures and achieve superior road extraction performance. In this paper, we propose D-CrossLinkNet, a novel encoder-decoder architecture for extracting roads from aerial images. It consists of LinkNet, cross-resolution connections, and two dilated convolution blocks. As one of the representative semantic segmentation models, LinkNet uses an encoder-decoder architecture with skip connections for road extraction. However, its downsampling operations reduce the resolution of the feature maps, leading to a loss of spatial information. We therefore use cross-resolution connections to supply spatial information to the decoder. In addition, the dilated convolution blocks are introduced to enlarge the receptive field of feature points while preserving short-distance spatial information. Experimental results on two benchmark datasets (Beijing and Shanghai, and DeepGlobe) demonstrate that the proposed method outperforms other CNN-based methods such as DeepLab, LinkNet, and D-LinkNet.
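The abstract does not spell out the exact layer configuration of the dilated convolution blocks. A minimal sketch in the spirit of D-LinkNet's dilated center part is shown below; the channel count, dilation rates, and residual-style fusion are illustrative assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Cascade of 3x3 convolutions with increasing dilation rates.

    Each stage enlarges the receptive field without changing the
    feature-map resolution; summing the intermediate outputs keeps the
    short-distance spatial cues captured at the smaller dilations.
    """
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        out = x
        feat = x
        for stage in self.stages:
            feat = stage(feat)   # progressively larger receptive field
            out = out + feat     # fuse multi-scale context with the input
        return out

if __name__ == "__main__":
    block = DilatedBlock(channels=512)
    x = torch.randn(1, 512, 32, 32)
    print(block(x).shape)  # torch.Size([1, 512, 32, 32]); resolution is preserved
```

Because padding equals the dilation rate for a 3x3 kernel, every stage keeps the spatial size of its input, which is what allows the block to sit between encoder and decoder without further downsampling.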