Denoising of event-based sensors with deep neural networks

2021 
As a novel asynchronous imaging sensor, the event camera features low power consumption, low temporal latency, and high dynamic range, but suffers from abundant noise. In real applications, it is essential to suppress the noise in the output event sequences before subsequent analysis. However, the event camera uses an address-event representation (AER), which requires developing new denoising techniques rather than applying conventional frame-based image denoising methods. In this paper, we propose two learning-based methods for denoising event-based sensor measurements: a convolutional denoising auto-encoder (ConvDAE) and a sequence-fragment recurrent neural network (SeqRNN). The former converts the event sequence into 2D images before denoising, which makes it compatible with existing deep denoisers and high-level vision tasks. The latter utilizes the recurrent neural network's advantages in dealing with time series to realize online denoising while keeping the events' original AER representation. Experiments based on real data demonstrate the effectiveness and flexibility of the proposed methods.
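To make the frame-conversion step of the ConvDAE pipeline concrete, the sketch below accumulates AER events into a 2D image that a conventional convolutional denoiser could consume. This is a minimal illustration, not the authors' exact encoding: the `events_to_frame` helper, the `(x, y, t, p)` event layout, and signed per-pixel counting are all assumptions for exposition.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate AER events into a 2D frame.

    Hypothetical helper illustrating the event-to-image conversion
    step described for ConvDAE; the paper's actual encoding may differ.
    `events` is an iterable of (x, y, timestamp, polarity) tuples,
    with polarity in {+1, -1}.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x] += p  # signed event count per pixel
    return frame

# Toy event stream: two ON events at (0, 0), one OFF event at (1, 2).
events = [(0, 0, 0.01, +1), (0, 0, 0.02, +1), (1, 2, 0.03, -1)]
frame = events_to_frame(events, height=3, width=3)
```

Once events are binned into such frames, any off-the-shelf 2D denoising network can be applied, which is the compatibility advantage the abstract attributes to ConvDAE; SeqRNN instead operates on the raw event stream directly.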