Using a Combination of LiDAR, RADAR, and Image Data for 3D Object Detection in Autonomous Vehicles

2020 
Object detection is one of the most actively researched topics in artificial intelligence and machine learning, and it is especially important for autonomous vehicles. Methods for detecting objects rely on different types of data, including image, radar, and lidar. Operating directly on point clouds is one of the approaches to 3D object detection proposed in recent work. One efficient, recently presented method is the PointPillars network: an encoder that learns from the data available in a point cloud and organizes it into a representation of vertical columns (pillars), which can then be used for 3D object detection. In this work, we develop a high-performance model for 3D object detection for autonomous vehicle perception, based on the PointPillars network and exploiting a combination of lidar, radar, and image data. We use the lidar, radar, and image data in the nuScenes dataset to predict 3D boxes for three object classes: car, pedestrian, and bus. To measure and compare results, we use the nuScenes detection score (NDS), a combined metric for the detection task. Results show that increasing the number of lidar sweeps and combining them with radar and image data significantly improves the performance of the 3D object detector. We also propose a method to combine the different types of input data (lidar, radar, image) using a weighting system, whose output serves as the input to the encoder.
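As a minimal sketch of the kind of weighted multi-sensor fusion the abstract describes (the sensor weights, array layout, and function name here are hypothetical illustrations, not the paper's implementation), the per-sensor point sets could be tagged with modality weights and concatenated into a single cloud before pillar encoding:

```python
import numpy as np

# Hypothetical per-sensor confidence weights (illustrative only, not from the paper).
SENSOR_WEIGHTS = {"lidar": 1.0, "radar": 0.5, "image": 0.3}

def fuse_points(lidar_pts, radar_pts, image_pts):
    """Concatenate points from several sensors into one weighted point cloud.

    Each input is an (N, 4) array of [x, y, z, feature]. The output appends a
    per-point weight column so a downstream pillar encoder can down-weight
    noisier modalities.
    """
    fused = []
    for name, pts in (("lidar", lidar_pts), ("radar", radar_pts), ("image", image_pts)):
        if pts is None or len(pts) == 0:
            continue
        w = np.full((len(pts), 1), SENSOR_WEIGHTS[name], dtype=pts.dtype)
        fused.append(np.hstack([pts, w]))
    return np.vstack(fused)  # (N_total, 5): x, y, z, feature, weight
```

The weighted cloud would then be voxelized into vertical pillars and passed to the PointPillars encoder in the usual way; the exact weighting scheme used in the paper is not specified in this abstract.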