Indoor Simultaneous Localization and Mapping for Lego Ev3

2018 
Simultaneous localization and mapping (SLAM) is a key capability for indoor robots. Many researchers use Laser Imaging Detection and Ranging (LIDAR) or depth cameras to enhance the mapping process, but these sensors are expensive, so image-based SLAM has become a popular alternative. Visual SLAM relies only on image information to detect objects, but the process is time-consuming and usually yields low precision. In this paper, we propose a SLAM algorithm that uses ultrasonic data, gyroscope data, and an ordinary camera on an Android phone. Although the camera does not provide any depth information for mapping, we combine the ultrasonic and gyroscope data with the image information to enhance the mapping process. Moreover, we use a deep learning algorithm to detect objects in the image and then use this information to refine the map. Experimental results show that the proposed algorithm effectively improves the quality of the map.
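The abstract describes fusing ultrasonic range readings with gyroscope headings to support mapping on inexpensive hardware. The sketch below is not the authors' implementation; it is a minimal illustration of one plausible fusion step, updating a 2D occupancy grid from a single ultrasonic measurement and the current gyroscope heading. All names and parameters (grid size, resolution, update weights) are illustrative assumptions.

```python
import math
import numpy as np

GRID_SIZE = 200          # 200 x 200 cells (assumed map extent)
RESOLUTION = 0.05        # metres per cell -> 10 m x 10 m map
grid = np.zeros((GRID_SIZE, GRID_SIZE))   # log-odds-style occupancy values

def update_grid(robot_x, robot_y, heading_deg, range_m):
    """Mark the cell hit by one ultrasonic reading as more likely occupied,
    and the cells along the beam as more likely free."""
    heading = math.radians(heading_deg)          # heading from the gyroscope
    hit_x = robot_x + range_m * math.cos(heading)
    hit_y = robot_y + range_m * math.sin(heading)

    # Trace the beam in small steps and lower the occupancy of free cells.
    steps = int(range_m / RESOLUTION)
    for k in range(steps):
        fx = robot_x + k * RESOLUTION * math.cos(heading)
        fy = robot_y + k * RESOLUTION * math.sin(heading)
        i, j = int(fx / RESOLUTION), int(fy / RESOLUTION)
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i, j] -= 0.4                    # free-space evidence

    # Raise the occupancy of the cell at the measured range.
    i, j = int(hit_x / RESOLUTION), int(hit_y / RESOLUTION)
    if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
        grid[i, j] += 0.9                        # occupied evidence

# Example: robot at (5 m, 5 m), gyroscope reads 30 degrees, ultrasonic sensor reads 1.2 m.
update_grid(5.0, 5.0, 30.0, 1.2)
```

In the paper's pipeline, detections from the deep learning model on the camera image would additionally be used to refine such a map; that step is omitted here since the abstract gives no details of the refinement rule.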