A learning method to optimize depth accuracy and frame rate for Time of Flight camera

2019 
Time-of-Flight (ToF) cameras are attracting increasing attention for their ability to robustly predict the depth of a scene. Because they work by actively emitting modulated light and sampling the returned signal, motion-blur artifacts seriously degrade imaging quality, and the light source dominates power consumption. Shortening the exposure time reduces motion blur but leaves the raw measurements with a lower signal-to-noise ratio, and reducing the four frames of raw measurements normally used for depth reconstruction to two makes it difficult to suppress noise from background infrared light and from capacitive components. Nevertheless, a shorter exposure time combined with two-frame reconstruction would greatly reduce the motion blur of a ToF camera while delivering a higher frame rate and lower power consumption. We propose a deep-learning-based method that reliably recovers depth even under an extremely low signal-to-noise ratio and electronic-component noise, thereby alleviating the motion-blur problem. Experimental results demonstrate the reliability of our approach. Our algorithms and data will be released in the future.
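For context, the conventional four-frame pipeline the abstract refers to is the standard four-phase demodulation used in amplitude-modulated continuous-wave ToF cameras: four correlation samples taken at 0°, 90°, 180°, and 270° phase offsets are combined differentially (which cancels the background-light offset) to recover the phase shift, and hence the depth. The sketch below is a generic illustration of that textbook formula, not the paper's method; the 20 MHz modulation frequency and the sampling convention are assumptions.

```python
import numpy as np

C = 299792458.0  # speed of light in m/s


def tof_depth_four_phase(q0, q1, q2, q3, f_mod=20e6):
    """Textbook four-phase ToF depth reconstruction (illustrative sketch).

    q0..q3 are raw correlation measurements at 0, 90, 180, and 270 degree
    phase offsets; f_mod is the modulation frequency (20 MHz assumed here).
    """
    # Differential pairs cancel the constant background-infrared offset.
    phase = np.arctan2(q1 - q3, q0 - q2)    # wrapped phase in (-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)      # map to [0, 2*pi)
    # Round-trip phase delay converted to one-way distance.
    return C * phase / (4.0 * np.pi * f_mod)
```

A two-frame variant, as discussed in the abstract, would drop two of these samples; the differential cancellation of the background term is then no longer available, which is exactly why noise suppression becomes harder at low SNR.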