Retrieving Scale on Monocular Visual Odometry Using Low Resolution Range Sensors

2020 
We present a flexible sensor fusion approach to retrieve scale information in monocular visual odometry (VO) by integrating range measurements from a wide variety of depth sensors, from low-resolution time-of-flight (ToF) cameras to 2-D and potentially 3-D LiDARs. While many algorithms exist in the literature for range-enhanced monocular VO, most are tailored to a specific sensor choice, limiting their integration on generic mobile systems. Our monocular VO algorithm builds on a standard front end, where camera tracking is performed relative to a map of triangulated landmarks. The inherent scale ambiguity and drift in monocular perception are resolved by optimizing both the camera poses and the landmark map against the depth information provided by the range sensor. Performance was evaluated on custom data sets created with an experimental platform comprising a stereo camera, a low-resolution ToF camera, and a 2-D LiDAR. We present a detailed overview of the extrinsic calibration procedures, including an ad hoc solution applicable to very low-resolution depth sensors. The proposed system is tested on short- and long-range motions, showing that: 1) performance in each of the tested configurations is on par with or better than state-of-the-art stereo systems; and 2) a reduced number of range measurements suffices to recover an accurately scaled trajectory.
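The core idea of resolving monocular scale ambiguity with sparse range data can be illustrated with a minimal sketch: given up-to-scale landmark depths from monocular triangulation and a few metric depths from the range sensor at corresponding points, a least-squares scale factor aligns the two. This is a simplified, assumed model for illustration only; the paper jointly optimizes camera poses and the landmark map, which is more general than a single global scale.

```python
import numpy as np

def estimate_scale(mono_depths, range_depths):
    """Least-squares scale s minimizing ||s * mono - range||^2.

    mono_depths: up-to-scale depths of landmarks from monocular triangulation.
    range_depths: metric depths of the same points from the range sensor
                  (e.g., a low-resolution ToF camera or 2-D LiDAR).
    Closed-form solution: s = (mono . range) / (mono . mono).
    """
    mono = np.asarray(mono_depths, dtype=float)
    rng = np.asarray(range_depths, dtype=float)
    return float(np.dot(mono, rng) / np.dot(mono, mono))

# Example: monocular depths off by a factor of 2 relative to metric depths.
s = estimate_scale([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# s == 2.0; scaling the trajectory and map by s recovers metric units.
```

In practice only a handful of such correspondences per frame is needed, which is consistent with the paper's finding that a reduced number of range measurements is sufficient for accurate scale recovery.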