Robust Multisensor Fusion for Reliable Mapping and Navigation in Degraded Visual Conditions

2021 
We address the problem of robust simultaneous localization and mapping in visually degraded conditions using low-cost, off-the-shelf radar sensors. Current methods often rely on high-end radars or are tightly coupled to specific sensors, which limits their applicability to new robot platforms. In contrast, we present a sensor-agnostic processing pipeline that combines a novel forward sensor model, enabling accurate updates of signed distance function-based maps, with robust optimization techniques that yield reliable and accurate pose estimates. Our evaluation demonstrates accurate mapping and pose estimation in indoor environments under poor visual conditions, as well as higher accuracy than existing methods on publicly available benchmark data.
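To illustrate the kind of signed distance function (SDF) map update the abstract refers to, here is a minimal, hypothetical sketch: a weighted truncated-SDF fusion of range measurements along a single 1-D ray. The function name, truncation scheme, and parameters are illustrative assumptions, not the paper's actual sensor model.

```python
import numpy as np

def integrate_ray(sdf, weight, measured_range, resolution=0.05, truncation=0.2):
    """Fuse one range measurement into a 1-D truncated SDF along a ray.

    NOTE: toy sketch, not the paper's forward sensor model.
    sdf, weight : per-cell signed distance values and fusion weights.
    """
    n = sdf.shape[0]
    r = (np.arange(n) + 0.5) * resolution        # cell centers along the ray
    # Signed distance to the measured surface, clamped to the truncation band.
    sd = np.clip(measured_range - r, -truncation, truncation)
    # Only update cells in free space or just behind the surface.
    mask = (measured_range - r) > -truncation
    new_w = weight[mask] + 1.0
    sdf[mask] = (sdf[mask] * weight[mask] + sd[mask]) / new_w
    weight[mask] = new_w
    return sdf, weight
```

Fusing several noisy range readings this way averages out measurement noise, and the map surface is recovered at the zero crossing of the stored SDF values.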