Time-of-flight depth datasets for indoor semantic SLAM

2017 
This paper introduces a medium-scale point cloud dataset for semantic SLAM (Simultaneous Localization and Mapping), acquired with a SwissRanger time-of-flight camera. An indoor environment with relatively stable lighting conditions is used for mapping and localization. The camera is mounted on a mobile tripod and captures scans at prearranged locations in the environment. These prearranged locations serve two purposes: they provide ground truth for measuring the error of poses estimated by SLAM, and they supply initial pose estimates for the ICP (Iterative Closest Point) algorithm. Notably, no inertial measurement unit or visual odometry is used, since data from time-of-flight cameras is noisy and sensitive to external conditions such as lighting, transparent surfaces, and parallel overlapping surfaces. In addition, point clouds of a large collection of household objects are recorded so that scenes can be labeled with semantic information. The complete SLAM dataset, including pose files and the household-object point clouds, is a major contribution of the paper, alongside mapping and plane detection performed with a publicly available toolkit. The paper also presents a novel metric for evaluating SLAM algorithms: a context-based similarity score.
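The role of the prearranged tripod locations as ICP seeds can be illustrated with a minimal point-to-point ICP sketch. This is not the paper's implementation; it is a generic outline in NumPy, where `init_R`/`init_t` stand in for the coarse pose taken from a prearranged location:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """SVD (Kabsch) solution for R, t minimizing ||src @ R.T + t - dst||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(source, target, init_R=None, init_t=None, iters=20):
    """Align source to target; init_R/init_t model a coarse initial pose
    (e.g. a prearranged tripod location, as in the paper's setup)."""
    R = np.eye(3) if init_R is None else init_R
    t = np.zeros(3) if init_t is None else init_t
    for _ in range(iters):
        moved = source @ R.T + t
        # brute-force nearest-neighbour correspondences (fine for small clouds)
        d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(source, matched)
    return R, t
```

With a reasonable initial guess, the nearest-neighbour correspondences start out mostly correct and the iteration converges quickly; with a poor guess, ICP can lock onto a wrong local minimum, which is exactly why known starting locations are valuable for noisy time-of-flight data.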