Estimation of camera pose with respect to terrestrial LiDAR data

2013 
In this paper, we present an algorithm that estimates the pose of a hand-held camera with respect to terrestrial LiDAR data. The input is a set of 3D range scans with intensities and one or more uncalibrated 2D camera images of the scene. The algorithm, which automatically registers the range scans and 2D images, consists of the following steps. First, we project the terrestrial LiDAR data onto 2D images from several preselected viewpoints. Intensity-based features such as SIFT are extracted from these projected images and projected back onto the LiDAR data to obtain their 3D positions. Second, we estimate an initial pose for the given 2D images from the feature correspondences. Third, we refine this coarse camera pose through an iterative matching and optimization process. We present results from experiments in several different urban settings.
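The sketch below is a minimal illustration (not the authors' implementation) of the core 2D-3D correspondence and coarse-pose step described above: SIFT features are matched between the camera image and a synthetic LiDAR intensity image, matched LiDAR features are looked up in 3D, and a pose is recovered with PnP + RANSAC. It assumes OpenCV, an approximate intrinsic matrix K, and a hypothetical pixel_to_3d lookup standing in for the paper's projection step.

```python
# Sketch only: assumes OpenCV (cv2), an approximate camera matrix K, and a
# hypothetical pixel_to_3d map from projected-LiDAR pixels to 3D coordinates.
import cv2
import numpy as np

def estimate_coarse_pose(camera_img, projected_lidar_img, pixel_to_3d, K):
    """Match SIFT features between a camera image and a projected LiDAR
    intensity image, then recover a coarse camera pose with PnP + RANSAC."""
    sift = cv2.SIFT_create()
    kp_cam, des_cam = sift.detectAndCompute(camera_img, None)
    kp_lidar, des_lidar = sift.detectAndCompute(projected_lidar_img, None)

    # Ratio-test matching of intensity-based features.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_cam, des_lidar, k=2)
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    # Pair 2D camera points with the 3D positions of matched LiDAR features.
    pts_2d, pts_3d = [], []
    for m in good:
        u, v = map(int, kp_lidar[m.trainIdx].pt)
        if (u, v) in pixel_to_3d:
            pts_2d.append(kp_cam[m.queryIdx].pt)
            pts_3d.append(pixel_to_3d[(u, v)])

    # Coarse pose from 2D-3D correspondences; this is the starting point
    # that the paper's iterative matching/optimization would then refine.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts_3d), np.float32(pts_2d), K, None)
    return (rvec, tvec) if ok else None
```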