Multilevel Sensor Fusion With Deep Learning

2019 
In the context of deep learning, this article presents an original deep network, CentralNet, for fusing information coming from different sensors. The approach is designed to efficiently and automatically balance the tradeoff between early and late fusion (i.e., between fusing low-level and high-level information). More specifically, at each level of abstraction—the successive layers of the deep networks—unimodal representations of the data are fed to a central neural network that combines them into a common embedding. In addition, a multiobjective regularization is introduced, which helps optimize both the central network and the unimodal networks. Experiments on four multimodal datasets not only show state-of-the-art performance but also demonstrate that CentralNet can actually choose the best fusion strategy for a given problem.
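The abstract describes the fusion scheme only at a high level. The sketch below illustrates one plausible reading of it in PyTorch: each modality keeps its own branch, and at every layer a central network receives a learned weighted sum of the previous central state and the current unimodal hidden states, so the learned weights can push the model toward early or late fusion. All names, dimensions, and the use of plain MLP branches are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CentralNetSketch(nn.Module):
    """Hypothetical CentralNet-style fusion (illustrative, not the paper's exact model).

    At each layer the central hidden state is a learned weighted sum of the
    previous central state and the unimodal hidden states, letting the network
    interpolate between early and late fusion.
    """

    def __init__(self, in_dims, hidden_dim, num_classes, num_layers=3):
        super().__init__()
        self.num_layers = num_layers
        # One unimodal branch per modality (plain MLPs here, as an assumption).
        self.branches = nn.ModuleList([
            nn.ModuleList([
                nn.Linear(d if i == 0 else hidden_dim, hidden_dim)
                for i in range(num_layers)
            ]) for d in in_dims
        ])
        # Central layers that process the fused representation at each level.
        self.central = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        # Learnable fusion weights: one scalar per (layer, source), where the
        # sources are the previous central state plus each modality.
        self.alphas = nn.Parameter(torch.ones(num_layers, len(in_dims) + 1))
        # Prediction heads: one per modality plus the central one, used for
        # the multiobjective (auxiliary) losses mentioned in the abstract.
        self.unimodal_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in in_dims]
        )
        self.central_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, shape (batch, in_dims[m]).
        hidden = list(inputs)
        central = None
        for layer in range(self.num_layers):
            # Advance each unimodal branch independently.
            hidden = [torch.relu(self.branches[m][layer](hidden[m]))
                      for m in range(len(hidden))]
            # Weighted sum of previous central state and unimodal states.
            fused = self.alphas[layer, 0] * central if central is not None else 0
            for m, h in enumerate(hidden):
                fused = fused + self.alphas[layer, m + 1] * h
            central = torch.relu(self.central[layer](fused))
        # Central prediction plus auxiliary unimodal predictions.
        unimodal_logits = [head(h) for head, h in zip(self.unimodal_heads, hidden)]
        return self.central_head(central), unimodal_logits
```

Under this reading, the multiobjective regularization would correspond to training with a loss that sums the classification loss of the central head and of each unimodal head, so every branch remains individually discriminative while the fusion weights are learned.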