Edge Assisted Mobile Semantic Visual SLAM

2020 
Localization and navigation play a key role in many location-based services and have attracted numerous research efforts from both the academic and industrial communities. In recent years, visual SLAM has become prevalent for robots and autonomous vehicles. However, the ever-growing computational resources demanded by SLAM impede its application to resource-constrained mobile devices. In this paper, we present the design, implementation, and evaluation of edgeSLAM, an edge-assisted real-time semantic visual SLAM service running on mobile devices. edgeSLAM leverages a state-of-the-art semantic segmentation algorithm to enhance localization and mapping accuracy, and speeds up the computation-intensive SLAM and semantic segmentation algorithms through computation offloading. The key innovations of edgeSLAM include an efficient computation offloading strategy, an opportunistic data sharing mechanism, and an adaptive task scheduling algorithm. We fully implement edgeSLAM on an edge server and several types of mobile devices (two types of smartphones and a development board). Extensive experiments conducted on three datasets show that edgeSLAM runs on mobile devices at a 35 fps frame rate and achieves 5 cm localization accuracy, outperforming existing solutions by more than 15%. We also demonstrate the usability of edgeSLAM through two case studies: pedestrian localization and robot navigation. To the best of our knowledge, edgeSLAM is the first real-time semantic visual SLAM system for mobile devices.
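The abstract describes an adaptive task scheduler that decides when the computation-intensive SLAM work is offloaded to the edge server. The paper's actual algorithm is not given here; the following is a minimal hypothetical sketch of one plausible policy, where a frame is offloaded only if the estimated round-trip latency fits within the real-time frame budget (the 35 fps target from the abstract). All names, sizes, and thresholds are illustrative assumptions, not the authors' design.

```python
from dataclasses import dataclass

# Hypothetical sketch of an edgeSLAM-style offloading decision:
# offload a frame to the edge server only when the estimated
# transmission + server processing latency meets the per-frame budget;
# otherwise fall back to lightweight local tracking.
# Frame size, server time, and bandwidth values are assumptions.

FRAME_BUDGET_MS = 1000 / 35  # ~28.6 ms per frame for a 35 fps target

@dataclass
class Frame:
    index: int

def estimate_offload_latency_ms(uplink_mbps: float,
                                frame_kb: float = 60.0,
                                server_ms: float = 10.0) -> float:
    """Uplink transmission time plus assumed server-side processing time."""
    tx_ms = frame_kb * 8 / uplink_mbps  # KB -> kilobits, Mbps -> kb/ms
    return tx_ms + server_ms

def schedule(frames, uplink_mbps):
    """Return (frame index, 'edge' | 'local') decisions per frame."""
    decisions = []
    for f in frames:
        latency = estimate_offload_latency_ms(uplink_mbps)
        target = "edge" if latency <= FRAME_BUDGET_MS else "local"
        decisions.append((f.index, target))
    return decisions
```

Under these assumed numbers, a 50 Mbps uplink (about 19.6 ms total latency) keeps every frame on the edge, while a 20 Mbps uplink (about 34 ms) pushes tracking back onto the device.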