A Unified Spatio-Temporal Description Model of Environment for Intelligent Vehicles

2021 
Benefiting from the fusion of various external sensors such as LiDAR and cameras, intelligent vehicles can be informed of their surroundings in detail and in real time. The characteristics of each type of sensor determine the specificity of its raw data, leading to inconsistencies in content, precision, range and timing, which ultimately increase the complexity of environment descriptions and of extracting useful information for further applications. In order to describe the environment efficiently while retaining the diversity of necessary information, we propose a unified description model for intelligent vehicle environment perception that is independent of specific sensors and contains 3D positions, semantics and time. The spatio-temporal relationships between the different types of collected data are established so that all elements are expressed in the same system. As a potential implementation scheme, we take advantage of LiDAR point clouds and color images to acquire the positions and semantics of the environment components. Semantic segmentation based on convolutional neural networks (CNN) provides rich semantics for the model, which can be further enhanced by fusion with the LiDAR point cloud. We validate the effectiveness of our proposed model on the KITTI dataset and on our own collected data, and exhibit qualitative results.
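The abstract does not give implementation details, but the described fusion step amounts to projecting each LiDAR point into the segmented camera image and attaching the pixel's class label, a position and a timestamp to a single model element. The following is a minimal sketch under those assumptions, using a KITTI-style calibration (a 4x4 LiDAR-to-camera extrinsic matrix and a 3x4 projection matrix); the class and function names (`EnvironmentElement`, `fuse_point`, `label_map`) are hypothetical and not taken from the paper.

```python
# Sketch: populate one unified spatio-temporal element (position, semantics, time)
# from a LiDAR point and a semantically segmented camera image.
from dataclasses import dataclass
import numpy as np

@dataclass
class EnvironmentElement:
    position: np.ndarray   # 3D position in the LiDAR/vehicle frame (x, y, z) [m]
    semantic: int          # class id from the CNN segmentation output
    timestamp: float       # acquisition time [s] on a common clock for all sensors

def fuse_point(point_xyz, label_map, T_velo_to_cam, P, timestamp):
    """Project a LiDAR point into the image and attach the pixel's semantic label."""
    p_h = np.append(point_xyz, 1.0)          # homogeneous LiDAR point
    p_cam = T_velo_to_cam @ p_h              # transform into the camera frame
    if p_cam[2] <= 0:                        # point behind the camera: no label
        return None
    uvw = P @ p_cam                          # 3x4 projection to pixel coordinates
    u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
    h, w = label_map.shape
    if not (0 <= u < w and 0 <= v < h):      # projection falls outside the image
        return None
    return EnvironmentElement(position=np.asarray(point_xyz, dtype=float),
                              semantic=int(label_map[v, u]),
                              timestamp=timestamp)
```

Iterating this over a full scan yields a set of elements that already carry the three components named in the abstract (3D position, semantic class, time), independent of which specific sensors produced them.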