Monocular Visual-inertial Localization in a Point Cloud Map Using Feature-to-Distribution Registration

2021 
In this paper, a visual-inertial localization system that reuses a prior map built by Lidar is proposed. Relying exclusively on a monocular camera and an IMU, the point and line features detected in the images are reconstructed and used to geometrically estimate the relative pose of the robot with respect to the prior 3D point cloud map. To align the body frame with the map frame, a modified normal distributions transform (NDT) algorithm is tightly coupled into the bundle adjustment (BA). We extract a dual-layered grid-cell map from the raw Lidar-built map for both point-to-distribution and line-to-distribution registration. By utilizing line features, the proposed method achieves competitive performance in low-texture environments. Evaluations in different real-world environments, including tests on both a benchmark dataset and a self-collected one, are presented.
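
The abstract does not spell out the registration residuals, but NDT-style feature-to-distribution matching typically reduces to a Mahalanobis distance between a transformed feature and the Gaussian stored in the map cell it falls into. Below is a minimal Python sketch under that assumption; the function names, the `cell_lookup` interface, and the sampling-based line cost are illustrative, not the authors' implementation.

```python
import numpy as np

# Hypothetical names for illustration; the paper does not publish its API.

def point_to_distribution_cost(p_body, R, t, mu, cov, eps=1e-6):
    """Mahalanobis cost of one reconstructed point feature against the
    Gaussian (mu, cov) stored in an NDT grid cell of the prior map."""
    p_map = R @ p_body + t                        # body frame -> map frame
    d = p_map - mu
    info = np.linalg.inv(cov + eps * np.eye(3))   # regularized information matrix
    return float(d @ info @ d)

def line_to_distribution_cost(e0_body, e1_body, R, t, cell_lookup, n_samples=5):
    """Cost of one reconstructed 3D line segment: sample points along the
    segment and accumulate their point-to-distribution costs against the
    cells they land in (cell_lookup maps a map-frame point to (mu, cov),
    or None if the point falls outside the occupied cells)."""
    total = 0.0
    for s in np.linspace(0.0, 1.0, n_samples):
        p_body = (1.0 - s) * e0_body + s * e1_body
        cell = cell_lookup(R @ p_body + t)
        if cell is not None:
            mu, cov = cell
            total += point_to_distribution_cost(p_body, R, t, mu, cov)
    return total
```

In a tightly coupled system of the kind the abstract describes, such costs would presumably enter the BA as additional factors alongside the visual reprojection and IMU preintegration terms, with (R, t) optimized jointly over the sliding window.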