Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather.

2020 
The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information under good conditions, they fail to do so in adverse weather, where the sensory streams can be asymmetrically distorted. These rare "edge-case" scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this data challenge, we present a novel multimodal dataset acquired in over 10,000 km of driving in northern Europe. Although this dataset is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated NIR sensors, it does not facilitate training, as extreme weather is rare. To this end, we present a deep fusion network for robust fusion without a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy. We validate the proposed method, trained on clean data, on our extensive validation dataset. The dataset and all models will be published.
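The abstract's key idea is entropy-steered fusion: each sensor's contribution is modulated by the information content of its measurement, so a stream degraded by fog contributes less. The following is a minimal, hypothetical PyTorch sketch of that idea, not the paper's actual SSD-style architecture; the names `measurement_entropy` and `EntropyFusion` are illustrative assumptions.

```python
# Illustrative sketch of entropy-weighted multimodal feature fusion.
# Not the authors' implementation; the paper's model is a single-shot
# detector with learned, entropy-steered feature exchange.
import torch
import torch.nn as nn


def measurement_entropy(x: torch.Tensor, num_bins: int = 256, eps: float = 1e-8) -> torch.Tensor:
    """Normalized Shannon entropy of a sensor measurement in [0, 1].

    Returns a (B, 1, 1, 1) tensor so it broadcasts over feature maps.
    """
    b = x.shape[0]
    flat = x.reshape(b, -1)
    # Per-sample intensity histogram -> probability distribution -> entropy.
    hist = torch.stack([torch.histc(f, bins=num_bins, min=0.0, max=1.0) for f in flat])
    p = hist / (hist.sum(dim=1, keepdim=True) + eps)
    h = -(p * torch.log2(p + eps)).sum(dim=1)
    h = h / torch.log2(torch.tensor(float(num_bins)))  # scale to [0, 1]
    return h.view(b, 1, 1, 1)


class EntropyFusion(nn.Module):
    """Fuses per-sensor feature maps, each weighted by its measurement entropy.

    A low-entropy stream (e.g. a camera image washed out by fog) is down-weighted
    before the streams are mixed by a 1x1 convolution.
    """

    def __init__(self, channels: int, num_sensors: int):
        super().__init__()
        self.mix = nn.Conv2d(channels * num_sensors, channels, kernel_size=1)

    def forward(self, features, measurements):
        weighted = [f * measurement_entropy(m) for f, m in zip(features, measurements)]
        return self.mix(torch.cat(weighted, dim=1))


if __name__ == "__main__":
    cam_feat = torch.randn(2, 64, 40, 40)
    lidar_feat = torch.randn(2, 64, 40, 40)
    cam_img = torch.rand(2, 3, 320, 320)        # clear image: high entropy
    foggy_img = torch.full((2, 3, 320, 320), 0.8)  # uniform fog: near-zero entropy
    fused = EntropyFusion(channels=64, num_sensors=2)(
        [cam_feat, lidar_feat], [cam_img, foggy_img]
    )
    print(fused.shape)  # torch.Size([2, 64, 40, 40])
```

In this sketch the fused features degrade gracefully when one modality carries little information, which mirrors the abstract's motivation for adaptive, entropy-driven fusion under asymmetric sensor distortions.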