Into Darkness: Visual Navigation Based on a Lidar-Intensity-Image Pipeline

2016 
Visual navigation of mobile robots has become a core capability that enables many interesting applications, from planetary exploration to self-driving cars. While systems built on passive cameras have been shown to be robust in well-lit scenes, they cannot handle the range of conditions associated with a full diurnal cycle. Lidar, which is fairly invariant to ambient lighting conditions, offers one possible remedy to this problem. In this paper, we describe a visual navigation pipeline that exploits lidar's ability to measure both range and intensity (a.k.a., reflectance) information. In particular, we use lidar intensity images (from a scanning-laser rangefinder) to carry out tasks such as visual odometry (VO) and visual teach and repeat (VT&R) in real time, from full-light to full-dark conditions. This lighting invariance comes at the price of coping with motion distortion, owing to the scanning-while-moving nature of laser-based imagers. We present our results and lessons learned from the last few years of research in this area.
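To make the idea of a lidar intensity image concrete, the sketch below bins per-return (azimuth, elevation, intensity) samples from a scanning rangefinder into a 2D camera-like image on which standard sparse-feature VO front-ends could then operate. This is a minimal illustration, not the paper's actual pipeline; the function name, image resolution, and simple nearest-bin projection are all assumptions for demonstration.

```python
import numpy as np

def lidar_intensity_image(azimuth, elevation, intensity, shape=(64, 360)):
    """Bin scanning-lidar returns into a 2D intensity image.

    azimuth, elevation: per-return beam angles in radians (1D arrays).
    intensity: per-return reflectance values (1D array).
    shape: (rows, cols) = (elevation bins, azimuth bins) -- hypothetical
           resolution, chosen for illustration only.
    """
    rows, cols = shape
    # Normalize each angle to its observed span and map to a pixel index.
    r = np.clip(((elevation - elevation.min())
                 / (np.ptp(elevation) + 1e-9) * (rows - 1)).astype(int),
                0, rows - 1)
    c = np.clip(((azimuth - azimuth.min())
                 / (np.ptp(azimuth) + 1e-9) * (cols - 1)).astype(int),
                0, cols - 1)
    img = np.zeros(shape, dtype=np.float32)
    # Last return in each cell wins; a real pipeline might average or
    # compensate for motion distortion during the scan.
    img[r, c] = intensity
    return img
```

Note that this static binning ignores the scanning-while-moving distortion the abstract highlights; correcting for it requires interpolating the sensor pose across the scan's acquisition time.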