Deep Learning Based Landmark Matching For Aerial Geolocalization

2020 
Visual odometry has gained increasing attention due to the proliferation of unmanned aerial vehicles, self-driving cars, and other autonomous robotic systems. Landmark detection and matching are critical for visual localization. While current methods rely on point-based image features or descriptor mappings, we consider landmarks at the object level. In this paper, we propose LMNet, a deep-learning-based landmark matching pipeline for city-scale aerial images of urban scenes. LMNet consists of a Siamese network, extended with a multi-patch matching scheme, to handle off-center landmarks, varying landmark scales, and occlusions by surrounding structures. While a number of landmark recognition benchmark datasets exist for ground-based and nadir aerial or satellite imagery, there is a lack of datasets and results for oblique aerial imagery. We use a unique unsupervised multi-view landmark image generation pipeline for training and testing the proposed matching pipeline on over 0.5 million real landmark patches. Experiments on aerial landmark matching across four cities show promising results.
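The multi-patch idea described above can be sketched in a minimal form: a shared embedding function (standing in for one branch of the Siamese network) is applied to several crops of each image, and the match score is the best similarity over all patch pairs, which tolerates off-center landmarks. The embedding below is a toy hand-crafted descriptor, not the paper's learned network; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def embed(patch):
    # Toy stand-in for the shared Siamese branch: a fixed 3-D
    # descriptor (mean, std, vertical gradient energy), L2-normalized.
    # LMNet uses a learned CNN; this is purely illustrative.
    g = np.abs(np.diff(patch, axis=0)).mean()
    v = np.array([patch.mean(), patch.std(), g])
    return v / (np.linalg.norm(v) + 1e-8)

def patches(img, size, stride):
    # Slide a window over the image so an off-center landmark in one
    # view can still align with some crop of the other view.
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield img[y:y + size, x:x + size]

def match_score(img_a, img_b, size=16, stride=8):
    # Multi-patch matching: embed every crop of both images with the
    # shared function and keep the best cosine similarity.
    ea = [embed(p) for p in patches(img_a, size, stride)]
    eb = [embed(p) for p in patches(img_b, size, stride)]
    return max(float(a @ b) for a in ea for b in eb)
```

In a learned setting, `embed` would be the trained Siamese branch and the max-over-patches aggregation gives the scheme its robustness to scale changes and partial occlusion.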