A Robust Descriptor Based on Modality-Independent Neighborhood Information for Optical-SAR Image Matching

2022 
Due to intensity differences and speckle noise, automatic optical-synthetic aperture radar (SAR) image matching remains a challenging task. This letter addresses the problem by proposing a novel descriptor (MaskMIND) with three different modes that exploit modality-independent neighborhood information. The descriptor is designed to sample relative structural information to improve accuracy and precision. In addition, gradient maps of both images are computed as a preprocessing step to suppress noise. A matching metric that accounts for positional uncertainty increasing with distance is then defined using the sum of squared differences (SSD), accelerated by the fast Fourier transform (FFT). Our methods are effective because they rely on relative, abstract structural information rather than absolute intensities. Experimental results on five optical-SAR image pairs show that our methods perform well. Compared with channel features of orientated gradients (CFOG), a state-of-the-art method, the accuracy of our sMaskMIND-grids variant is improved by 12% on average.
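The abstract mentions an SSD similarity metric accelerated by the FFT. Below is a minimal sketch (not the authors' code) of how FFT-accelerated SSD template matching is commonly implemented, assuming the descriptor maps have already been reduced to a single feature channel; the function name `ssd_map_fft` and all variable names are illustrative only.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def ssd_map_fft(image_feat, template_feat):
    """Dense SSD between a template and every template-sized window of an image.

    Uses the expansion SSD = sum(window^2) - 2 * cross_correlation + sum(template^2),
    where the cross-correlation and sliding-window sums are evaluated with FFTs.
    """
    H, W = image_feat.shape
    h, w = template_feat.shape

    # Cross-correlation via FFT (flip the template so convolution becomes correlation).
    corr = np.real(ifft2(fft2(image_feat) * fft2(template_feat[::-1, ::-1], s=(H, W))))

    # Sliding-window sum of image_feat**2, computed as convolution with a box kernel.
    box = np.ones((h, w))
    win_sq = np.real(ifft2(fft2(image_feat ** 2) * fft2(box, s=(H, W))))

    tmpl_sq = np.sum(template_feat ** 2)
    ssd = win_sq - 2.0 * corr + tmpl_sq
    # Keep only the valid (non-wrapped) positions; index (u, v) is the window origin.
    return ssd[h - 1:, w - 1:]            # shape (H - h + 1, W - w + 1)

if __name__ == "__main__":
    # Usage: the minimum of the SSD map gives the estimated template offset.
    rng = np.random.default_rng(0)
    img = rng.standard_normal((128, 128))
    tpl = img[40:72, 50:82].copy()
    ssd = ssd_map_fft(img, tpl)
    print(np.unravel_index(np.argmin(ssd), ssd.shape))   # expected: (40, 50)
```

The FFT formulation makes the cost of evaluating the metric at every candidate offset roughly independent of the template size, which is what makes dense descriptor matching of this kind practical.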