AutoFuse: A Semi-supervised Autoencoder based Multi-Sensor Fusion Framework

2021 
The performance of existing methods for multi-sensor fusion is severely affected by the lack of a significant amount of labeled data. In most practical scenarios, the amount of unlabeled data is huge in comparison to labeled data. To address this problem, a novel autoencoder-based multi-sensor fusion framework for semi-supervised learning is proposed in this work. Here, both labeled and unlabeled data are used to learn a latent representation from each sensor. Subsequently, the latent representations of all the sensors are combined to perform classification. A joint optimization formulation is presented for learning the sensor-specific latent representations, their encoder and decoder weights, and the classification weights together. This ensures that discriminative features are learnt from individual sensors, which aids classification. The requisite solution steps and the closed-form updates for the joint learning of all the parameters are given. Experimental results on two datasets from different domains demonstrate the generalizability and superior performance of the proposed AutoFuse compared to state-of-the-art methods, with relatively low complexity and the ability to work with partially annotated data.
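The abstract does not give the paper's exact objective or its closed-form updates, but the overall idea — per-sensor autoencoders trained on all data, with a classifier on the fused latent representations trained jointly on the labeled subset — can be illustrated with a minimal NumPy sketch. All dimensions and the data here are hypothetical, linear encoders/decoders stand in for the actual autoencoders, and plain gradient steps replace the paper's closed-form updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2 sensors, 100 labeled + 400 unlabeled samples,
# 20 features per sensor, 5-dimensional latents, 3 classes.
n_lab, n_unlab, d, k, n_cls = 100, 400, 20, 5, 3
n = n_lab + n_unlab
X = [rng.normal(size=(n, d)) for _ in range(2)]   # one data matrix per sensor
y = rng.integers(0, n_cls, size=n_lab)            # labels exist only for the first n_lab rows
Y = np.eye(n_cls)[y]                              # one-hot targets

# Per-sensor linear encoder E and decoder D, plus a classifier W on fused latents.
E = [rng.normal(scale=0.1, size=(d, k)) for _ in range(2)]
D = [rng.normal(scale=0.1, size=(k, d)) for _ in range(2)]
W = rng.normal(scale=0.1, size=(2 * k, n_cls))

lr, lam, losses = 1e-4, 0.5, []  # lam weighs the supervised term (assumed form)
for _ in range(300):
    Z = [x @ e for x, e in zip(X, E)]                    # sensor latents, all data
    Zf = np.concatenate([z[:n_lab] for z in Z], axis=1)  # fused latents, labeled rows
    err = Zf @ W - Y                                     # classification residual
    recon = sum(np.sum((z @ dec - x) ** 2) for z, dec, x in zip(Z, D, X))
    losses.append(recon + lam * np.sum(err ** 2))        # joint objective (sketch)
    W_new = W - lr * 2 * lam * Zf.T @ err                # classifier gradient step
    for s in range(2):
        R = Z[s] @ D[s] - X[s]                           # reconstruction residual
        gE = 2 * X[s].T @ (R @ D[s].T)                   # d(recon)/dE, all samples
        # supervised gradient flows into the encoder through the fused latents,
        # but only for the labeled rows:
        gE += 2 * lam * X[s][:n_lab].T @ (err @ W[s * k:(s + 1) * k].T)
        D[s] -= lr * 2 * Z[s].T @ R
        E[s] -= lr * gE
    W = W_new
```

The point of the joint loss is that the supervised term shapes each sensor's encoder (not just the classifier), so the learnt latents are discriminative rather than purely reconstructive; the unlabeled rows still contribute through the reconstruction term.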