MF-Net: Meta Fusion Network for 3D object detection

2021 
3D object detection has attracted significant attention from both academia and industry due to its indispensable role in understanding 3D environments. Fusing camera and LiDAR sensors is expected to improve both the accuracy and robustness of 3D object detection. However, existing fusion approaches are either limited by cascading processing or susceptible to interference from noisy multi-sensor information. To this end, this paper incorporates meta learning to fuse camera and LiDAR data. Specifically, we first extract meta knowledge from images and then use it to generate the parameter weights of a set of convolution kernels, which are further exploited for feature extraction on LiDAR point clouds. Building on this, we propose a meta fusion network (MF-Net) that enables accurate and robust 3D object detection. The superiority and effectiveness of MF-Net have been demonstrated by extensive experiments on the KITTI 3D object detection dataset.
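The sketch below illustrates the core idea described in the abstract: pooling image features into a compact "meta knowledge" vector, mapping that vector to the weights of a convolution kernel with a small weight generator, and applying the generated kernel to LiDAR bird's-eye-view features. This is a minimal illustration under assumed module names, channel sizes, and a BEV representation, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MetaKernelFusion(nn.Module):
    """Hypothetical meta-fusion block: image features generate per-sample
    convolution kernels that are applied to LiDAR BEV features."""

    def __init__(self, img_channels=256, lidar_channels=64, out_channels=64, k=3):
        super().__init__()
        self.out_channels = out_channels
        self.lidar_channels = lidar_channels
        self.k = k
        # Pool image features into a compact meta-knowledge vector.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Weight generator: maps the meta vector to conv-kernel parameters.
        self.weight_gen = nn.Sequential(
            nn.Linear(img_channels, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, out_channels * lidar_channels * k * k),
        )

    def forward(self, img_feat, lidar_feat):
        # img_feat:   (B, C_img, H_i, W_i) camera feature map
        # lidar_feat: (B, C_lidar, H_l, W_l) LiDAR BEV feature map
        b = img_feat.size(0)
        meta = self.pool(img_feat).flatten(1)  # (B, C_img)
        kernels = self.weight_gen(meta).view(
            b, self.out_channels, self.lidar_channels, self.k, self.k
        )
        # Apply each sample's generated kernel to its own LiDAR features
        # via a grouped convolution with the batch folded into channels.
        x = lidar_feat.reshape(1, b * self.lidar_channels, *lidar_feat.shape[2:])
        w = kernels.reshape(b * self.out_channels, self.lidar_channels, self.k, self.k)
        out = F.conv2d(x, w, padding=self.k // 2, groups=b)
        return out.reshape(b, self.out_channels, *lidar_feat.shape[2:])


if __name__ == "__main__":
    # Random tensors stand in for camera and LiDAR backbone outputs.
    fusion = MetaKernelFusion()
    img = torch.randn(2, 256, 48, 156)
    bev = torch.randn(2, 64, 200, 176)
    fused = fusion(img, bev)
    print(fused.shape)  # torch.Size([2, 64, 200, 176])
```

Because the kernels are predicted from the image rather than learned as fixed parameters, the LiDAR feature extraction is conditioned on camera content at inference time; the grouped-convolution trick shown here is one common way to apply a different generated kernel to each sample in a batch.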