Spatial and Spectral Extraction Network With Adaptive Feature Fusion for Pansharpening

2022 
Pansharpening methods based on deep neural networks (DNNs) have attracted great attention due to their powerful representation capabilities. In this article, to combine the feature maps from different subnetworks efficiently, we propose a novel pansharpening method based on a spatial and spectral extraction network (SSE-Net). Unlike other DNN-based methods that directly concatenate the features from different subnetworks, we design adaptive feature fusion modules (AFFMs) to merge these features according to their information content. First, spatial and spectral features are extracted by the subnetworks from the low spatial resolution multispectral (LR MS) and panchromatic (PAN) images. Then, by fusing the features at different levels, the desired high spatial resolution MS (HR MS) images are generated by a fusion network consisting of AFFMs. In the fusion network, the features from different subnetworks are integrated adaptively, and the redundancy among them is reduced. Moreover, a spectral ratio loss and a gradient loss are defined to ensure the effective learning of spectral and spatial features, respectively. The spectral ratio loss captures the nonlinear relationships among the bands of the MS image to reduce spectral distortion in the fusion result. Extensive experiments were conducted on QuickBird and GeoEye-1 satellite datasets. Visual and numerical results demonstrate that the proposed method produces better fusion results than techniques from the literature. The source code is available at https://github.com/RSMagneto/SSE-Net.
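
The abstract does not spell out how the AFFMs or the two losses are implemented; the authors' code is in the linked repository. The following is only a minimal sketch of the ideas as described above, assuming a channel-attention-style weighting for the AFFM, a band-to-mean ratio for the spectral ratio loss, and first-order finite differences against the PAN image for the gradient loss. The class and function names (AFFM, spectral_ratio_loss, gradient_loss) and all hyperparameters are illustrative, not taken from the paper.

```python
# Hedged sketch (PyTorch) of an adaptive feature fusion module and the two losses
# summarized in the abstract. The weighting scheme and loss formulas below are
# assumptions; see https://github.com/RSMagneto/SSE-Net for the actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AFFM(nn.Module):
    """Fuse spatial (PAN-branch) and spectral (MS-branch) feature maps adaptively.

    Assumption: "merging according to information content" is modeled as
    channel-wise gating predicted from globally pooled statistics of both inputs.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_spatial: torch.Tensor, feat_spectral: torch.Tensor) -> torch.Tensor:
        # Global average pooling summarizes the channel statistics of each branch.
        stats = torch.cat(
            [feat_spatial.mean(dim=(2, 3)), feat_spectral.mean(dim=(2, 3))], dim=1
        )
        w = self.gate(stats).unsqueeze(-1).unsqueeze(-1)  # shape (B, C, 1, 1)
        # A convex combination keeps the fused features from duplicating both branches.
        return w * feat_spatial + (1.0 - w) * feat_spectral


def spectral_ratio_loss(fused: torch.Tensor, reference: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Penalize deviations in band-to-band ratios (assumed proxy for spectral fidelity)."""
    fused_ratio = fused / (fused.mean(dim=1, keepdim=True) + eps)
    ref_ratio = reference / (reference.mean(dim=1, keepdim=True) + eps)
    return F.l1_loss(fused_ratio, ref_ratio)


def gradient_loss(fused: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
    """Match horizontal and vertical gradients of the fused intensity to the PAN image."""
    fused_gray = fused.mean(dim=1, keepdim=True)  # assumption: band average as intensity

    def grads(x):
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

    fx, fy = grads(fused_gray)
    px, py = grads(pan)
    return F.l1_loss(fx, px) + F.l1_loss(fy, py)
```

In use, one AFFM would sit at each fusion level of the network, and the two losses would be added to a standard reconstruction loss with weighting coefficients chosen on a validation set; the abstract does not state those weights.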