Dynamic Visual SLAM Based on Semantic Information and Multi-View Geometry

2020 
Visual Simultaneous Localization and Mapping (Visual SLAM) is considered one of the key foundations for intelligent mobile robots, giving them the ability to localize autonomously and build maps in unknown environments. Over the past decades, great progress has been made in visual SLAM, and relatively mature algorithm systems and program architectures have gradually been developed. However, most current research on visual SLAM assumes that the surrounding environment is static, which greatly limits the application of SLAM systems in the real world. To address the urgent need of mobile robots for precise localization and map construction in dynamic environments, this paper proposes a method that combines deep-learning-based image semantic segmentation with multi-view geometry to recognize and segment moving objects. Only background features are used for camera tracking, avoiding the influence of moving objects. Furthermore, an experimental platform is built using an RGB-D camera and a motion capture device, and the effectiveness of the algorithm is verified on public datasets and real-scene data.
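The combination described above can be illustrated with a minimal sketch: a feature point is treated as static background only if it does not fall on a semantically dynamic object *and* its location in the current frame agrees with the multi-view geometric prediction from the reference frame. The function below is a simplified illustration, not the paper's implementation; the names (`dynamic_mask`, `err_thresh`) and the RGB-D reprojection-error test are assumptions standing in for the paper's geometric check.

```python
import numpy as np

def dynamic_mask(pts_ref, depth_ref, pts_cur, K, R, t, sem_dynamic, err_thresh=3.0):
    """Flag matched feature points as dynamic.

    pts_ref, pts_cur : (N, 2) matched pixel coordinates in two frames
    depth_ref        : (N,) depths of the reference keypoints (RGB-D)
    K                : (3, 3) camera intrinsics
    R, t             : relative pose from the reference to the current camera
    sem_dynamic      : (N,) bool, True if the point lies on a semantically
                       dynamic object (e.g. a person) in the segmentation mask
    """
    K_inv = np.linalg.inv(K)
    # Back-project reference keypoints to 3D using their depths.
    homo = np.hstack([pts_ref, np.ones((len(pts_ref), 1))])   # (N, 3)
    P_ref = (K_inv @ homo.T).T * depth_ref[:, None]           # (N, 3)
    # Transform into the current camera frame and project to pixels.
    P_cur = (R @ P_ref.T).T + t
    proj = (K @ P_cur.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    # A large reprojection error means the point moved inconsistently
    # with the camera motion, i.e. it belongs to a moving object.
    err = np.linalg.norm(proj - pts_cur, axis=1)
    return sem_dynamic | (err > err_thresh)
```

Camera tracking would then use only the features where this mask is `False`; the semantic term catches known object classes even when they are momentarily still, while the geometric term catches moving points the segmentation network misses.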