Self-Paced Knowledge Distillation for Real-Time Image Guided Depth Completion

2022 
Image guided depth completion aims to generate a dense depth map from a sparse one with the guidance of a color image. Previous high-accuracy methods often rely on complex networks that are large in size and expensive in computational cost, making them inapplicable to real-time platforms. In this letter, we propose a self-paced knowledge distillation method, which obtains a lightweight but accurate depth completion model by distilling knowledge from a complex teacher network. Specifically, by taking advantage of the easy-to-hard learning curriculum in deep networks, we first design a groundtruth-free hard-pixel mining module to identify hard and noisy pixels in the teacher’s output. Then, we design two self-paced distillation losses, which gradually introduce hard pixels to distill the depth and structure knowledge from the teacher to the compact student network. Experiments on the KITTI benchmark show that the proposed method improves the original student model by a considerable margin. The distilled compact and real-time student model outperforms all previous lightweight networks, narrowing the performance gap with state-of-the-art high-accuracy but complex models.
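The core self-paced idea in the abstract, gradually admitting harder pixels into the distillation loss over training, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual losses: the function name, the use of the student–teacher residual as an "easiness" proxy, and the linear pace schedule are all assumptions for illustration.

```python
import numpy as np

def self_paced_distill_loss(student_depth, teacher_depth, pace):
    """Hypothetical self-paced distillation loss (illustrative sketch).

    pace in (0, 1]: the fraction of pixels, easiest first, that contribute
    to the loss. Easiness is approximated here by the student-teacher
    residual: pixels where the student already agrees with the teacher are
    treated as easy, and harder (possibly noisy) pixels are introduced only
    as pace grows toward 1.
    """
    resid = np.abs(student_depth - teacher_depth).ravel()
    k = max(1, int(pace * resid.size))      # number of pixels admitted
    easiest = np.sort(resid)[:k]            # keep the k smallest residuals
    return float(easiest.mean())

def pace_schedule(epoch, total_epochs, start=0.3):
    """Assumed linear curriculum: ramp pace from `start` up to 1.0."""
    return min(1.0, start + (1.0 - start) * epoch / max(1, total_epochs - 1))
```

During training one would call the loss each step with the current `pace_schedule(epoch, total_epochs)` value, so that early epochs fit only the teacher's reliable predictions and later epochs also distill the hard pixels.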