CDANet: Contextual Detail-Aware Network for High-Spatial-Resolution Remote-Sensing Imagery Shadow Detection

2022 
Shadow detection automatically labels shadow pixels in high-spatial-resolution (HSR) imagery based on meaningful colorimetric features. Accurate shadow mapping is crucial for interpreting images and recovering radiometric information. Recent studies have demonstrated the superiority of deep learning for shadow detection in very-high-resolution satellite imagery. However, previous methods usually stack convolutional layers, which causes loss of spatial information. Moreover, shadows vary in scale and shape, and small, irregular shadows are challenging to detect. Furthermore, the unbalanced distribution of foreground and background biases the common binary cross-entropy loss function, which seriously hampers model training. To remedy these issues, a contextual detail-aware network (CDANet), a novel framework for extracting accurate and complete shadows, is proposed. In CDANet, a double-branch module is embedded in the encoder–decoder structure to alleviate the loss of low-level local information during convolution. A contextual semantic fusion connection with a residual dilation module is proposed to provide multiscale contextual information for diverse shadows. A hybrid loss function is designed to retain the detailed information of tiny shadows; it computes the shadow distribution per pixel and improves the robustness of the model. The performance of the proposed method is validated on two distinct shadow detection datasets, where CDANet shows higher portability and robustness than other methods.
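The abstract does not give the exact form of the hybrid loss, but a common way to combine a per-pixel term with an imbalance-robust term is binary cross-entropy plus Dice loss. The sketch below is an illustrative assumption, not the paper's actual formulation; the function name `hybrid_loss` and the weighting parameter `bce_weight` are hypothetical.

```python
import numpy as np

def hybrid_loss(pred, target, bce_weight=0.5, eps=1e-7):
    """Illustrative hybrid loss (assumed form, not from the paper):
    per-pixel binary cross-entropy plus a Dice term that counters
    foreground/background imbalance.

    pred   : sigmoid probabilities in [0, 1]
    target : ground-truth shadow mask with values in {0, 1}
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Per-pixel BCE, averaged over all pixels; dominated by the majority
    # (background) class when shadows are rare.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Dice loss measures overlap with the foreground only, so tiny shadow
    # regions still contribute strongly to the gradient.
    inter = np.sum(pred * target)
    dice = 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return bce_weight * bce + (1.0 - bce_weight) * dice
```

A perfect prediction drives both terms toward zero, while the Dice term keeps small shadow regions from being drowned out by the abundant background pixels.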