Multimodal Time Series Data Fusion Based on SSAE and LSTM

2021 
In recent years, the fusion of multimodal time series data from sensors has attracted widespread attention. A key challenge is to extract features from multimodal data that form a shared representation and, combined with the temporal characteristics of the data, further improve prediction performance. To address this problem, we propose SSAE-LSTM, a multimodal time series data fusion model based on the Stacked Sparse Auto-Encoder (SSAE) and Long Short-Term Memory (LSTM). The SSAE mines the inherent correlations among modalities to extract a shared representation, which is then fed into an LSTM network for temporal fusion and prediction. Experiments on real time series datasets demonstrate that SSAE-LSTM learns a useful shared representation of multimodal data for predicting future trends, and it outperforms other neural networks in precision, accuracy, recall, and F-score.
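To make the described pipeline concrete, here is a minimal sketch (not the authors' code) of an SSAE whose per-step encoding is fed to an LSTM classifier, assuming PyTorch. The layer sizes, the L1 sparsity surrogate, and the binary prediction head are illustrative assumptions; in practice the autoencoder layers would typically be pretrained layer-wise by reconstruction with a sparsity penalty before end-to-end fine-tuning.

```python
# Minimal sketch of an SSAE-LSTM-style model (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

class SparseAutoEncoder(nn.Module):
    """One autoencoder layer; stacking several gives the 'stacked' SSAE."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        recon = self.decoder(code)
        return code, recon

class SSAELSTM(nn.Module):
    """Encode each time step of the concatenated multimodal vector with the SSAE,
    then model the encoded sequence with an LSTM and predict from the last step."""
    def __init__(self, input_dim, hidden_dims=(64, 32), lstm_hidden=32, num_classes=2):
        super().__init__()
        dims = (input_dim,) + tuple(hidden_dims)
        self.sae_layers = nn.ModuleList(
            SparseAutoEncoder(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )
        self.lstm = nn.LSTM(dims[-1], lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, num_classes)

    def encode(self, x):
        # x: (batch, seq_len, input_dim) -> shared representation per time step
        for sae in self.sae_layers:
            x, _ = sae(x)
        return x

    def forward(self, x):
        shared = self.encode(x)
        out, _ = self.lstm(shared)
        return self.head(out[:, -1])

def sparsity_l1(codes, weight=1e-3):
    # Simple L1 surrogate for the sparsity constraint (a KL-divergence penalty is also common).
    return weight * sum(c.abs().mean() for c in codes)

if __name__ == "__main__":
    model = SSAELSTM(input_dim=20)   # e.g. 20 concatenated sensor features per step (assumed)
    x = torch.randn(8, 50, 20)       # batch of 8 sequences, 50 time steps each
    logits = model(x)
    print(logits.shape)              # torch.Size([8, 2])
```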