LiDAR Depth Completion Using Color-Embedded Information via Knowledge Distillation

2022 
Depth completion is the task of reconstructing dense depth images from sparse LiDAR data. LiDAR depth completion, in which LiDAR data is the only input, is an ill-posed and challenging problem owing to the underlying properties of LiDAR data: extremely few points, presence of discontinuities, and absence of texture information. Accordingly, most approaches depend heavily on guided color images, which leads to unsatisfactory results when the color images are degraded. To alleviate the dependency on color images while still leveraging this information during training, we present a deep convolutional neural network (CNN) consisting of depth and edge CNNs trained via knowledge transfer. To compensate for the limitations of LiDAR data, we design the edge CNN to learn a gradient depth image from a powerful teacher network through knowledge distillation. Since the teacher network is trained with color images, color-embedded information can be obtained in the test phase even when color images are not used as input. We further propose a self-distillation method for transferring the color-embedded features from the edge CNN to the depth CNN. Enforcing the depth features to contain edge information hardly observed in LiDAR data enables the depth CNN to generate more edge-attentive and structure-preserving results. Our methods show remarkable results in outdoor and indoor environments on the KITTI and NYU-Depth-V2 datasets. Experiments performed with low-channel LiDAR data on KITTI and few depth points on NYU-Depth-V2 show that our method is robust to data sparsity and applicable in various scenarios.
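The abstract describes two transfer mechanisms: knowledge distillation from a color-trained teacher to the edge CNN, and self-distillation from the edge CNN's features to the depth CNN. A minimal sketch of the common feature-matching form of such a distillation loss is shown below; this is an illustrative stand-in (the paper's actual architectures, feature dimensions, and loss terms are not specified in this abstract), using a simple mean-squared error between flattened student and teacher features.

```python
# Illustrative sketch of feature-level distillation, assuming an L2
# matching loss between student and (frozen) teacher features.
# NOT the paper's exact formulation -- a generic example only.

def l2_distillation_loss(student_feats, teacher_feats):
    """Mean squared error between student and teacher feature vectors."""
    assert len(student_feats) == len(teacher_feats)
    n = len(student_feats)
    return sum((s - t) ** 2 for s, t in zip(student_feats, teacher_feats)) / n

# Toy usage: flattened features from the student (e.g. the edge CNN)
# and the color-guided teacher network.
student = [0.2, 0.5, 0.1, 0.9]
teacher = [0.0, 0.5, 0.3, 1.0]
loss = l2_distillation_loss(student, teacher)  # -> 0.0225
```

In training, this term would be added to the task loss so the student's features are pulled toward the teacher's color-embedded features; at test time the teacher (and the color input) is discarded.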