How to Learn a Domain-Adaptive Event Simulator?

2021 
The low-latency streams captured by event cameras have shown impressive potential for vision tasks such as video reconstruction and optical flow estimation. However, these tasks often require massive training event streams, which are expensive to collect; recently proposed event camera simulators largely sidestep this cost. To align the statistics of synthetic events with those of target event cameras, existing simulators typically must be tuned heuristically with elaborate manual effort, and thus cannot automatically adapt to various domains. To address this issue, this work proposes one of the first learning-based, domain-adaptive event simulators. Given a specific domain, the proposed simulator learns pixel-wise distributions of event contrast thresholds that, after stochastic sampling and parallel rendering, generate event representations well aligned with those of real event cameras. To achieve such domain-specific alignment, we design a novel divide-and-conquer discrimination scheme that adaptively evaluates the synthetic-to-real consistency of event representations according to the local statistics of images and events. Trained on data synthesized by the proposed simulator, state-of-the-art event-based video reconstruction and optical flow estimation approaches improve by up to 22.9% and 2.8%, respectively. In addition, we show significantly improved domain adaptation capability over existing event simulators and tuning strategies, consistently across three real event datasets.
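To make the contrast-threshold mechanism concrete, below is a minimal sketch of the standard event-camera model the abstract builds on: a pixel emits an event when its log-intensity change crosses a contrast threshold, with the threshold stochastically sampled per pixel from a learned distribution. The Gaussian parameterization, the names theta_mu/theta_sigma, and the per-frame-pair simplification are illustrative assumptions; the abstract does not specify the paper's actual learned distribution or renderer.

```python
# Minimal sketch of contrast-threshold event generation. The Gaussian
# per-pixel threshold distribution (theta_mu, theta_sigma) is an
# assumption for illustration, not the paper's exact formulation.
import numpy as np

def simulate_events(log_prev, log_curr, theta_mu, theta_sigma, rng):
    """Emit per-pixel event polarities for one pair of frames.

    log_prev, log_curr : HxW log-intensity frames
    theta_mu, theta_sigma : HxW threshold distribution parameters
    Returns an HxW int array in {-1, 0, +1}.
    """
    # Stochastically sample a contrast threshold at every pixel.
    theta = rng.normal(theta_mu, theta_sigma)
    theta = np.clip(theta, 1e-3, None)  # thresholds must stay positive

    diff = log_curr - log_prev
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff >= theta] = 1    # ON event: brightness increased
    events[diff <= -theta] = -1  # OFF event: brightness decreased
    return events

# Usage: two random frames stand in for consecutive video frames.
rng = np.random.default_rng(0)
H, W = 4, 6
log_prev = np.log1p(rng.random((H, W)))
log_curr = np.log1p(rng.random((H, W)))
theta_mu = np.full((H, W), 0.2)      # assumed per-pixel mean threshold
theta_sigma = np.full((H, W), 0.05)  # assumed per-pixel threshold std
print(simulate_events(log_prev, log_curr, theta_mu, theta_sigma, rng))
```

In this reading, domain adaptation amounts to learning theta_mu and theta_sigma per pixel so that the rendered events statistically match a target camera, rather than hand-tuning a single global threshold.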