Imaging and fusing time series for wearable sensor-based human activity recognition

2020 
Abstract

To facilitate data-driven and informed decision making, a novel deep neural network architecture for human activity recognition based on multiple sensor data is proposed in this work. Specifically, the proposed architecture encodes each time series of sensor data as an image (i.e., one time series becomes a two-channel image) and leverages these transformed images to retain the features necessary for human activity recognition. In other words, by imaging the time series, wearable sensor-based human activity recognition can be cast as an image recognition problem and addressed with computer vision techniques. In particular, to enable heterogeneous sensor data to be trained cooperatively, a fusion residual network is adopted that fuses two networks and trains the heterogeneous data with pixel-wise correspondence. Moreover, deep residual networks of different depths are used to accommodate differences in dataset size. The proposed architecture is then extensively evaluated on two human activity recognition datasets (i.e., the HHAR and MHEALTH datasets), which comprise various heterogeneous mobile device sensor combinations (i.e., acceleration, angular velocity, and magnetic field orientation). The findings demonstrate that the proposed approach outperforms competing approaches in terms of accuracy and F1-score.
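The abstract does not name the specific image encoding; a common transform that produces a two-channel image from a single time series is the Gramian Angular Field, whose summation and difference fields form the two channels. The sketch below, in plain NumPy, illustrates one such encoding under that assumption, not necessarily the exact transform used in the paper.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series of length T as a (2, T, T) image.

    Channel 0: Gramian Angular Summation Field (GASF).
    Channel 1: Gramian Angular Difference Field (GADF).
    """
    # Rescale the series to [-1, 1] so arccos is well defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    # Polar encoding: each value becomes an angle.
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # Pairwise angular sums and differences yield the two channels.
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return np.stack([gasf, gadf])

# Example: a 128-sample sensor axis becomes a (2, 128, 128) image.
series = np.sin(np.linspace(0, 4 * np.pi, 128))
image = gramian_angular_field(series)
print(image.shape)  # (2, 128, 128)
```

Similarly, a minimal sketch of the fusion idea in PyTorch: two convolutional branches with residual blocks, one per sensor modality, whose feature maps are fused by pixel-wise (element-wise) addition before a shared classification head. The layer widths, depths, and class count here are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block with an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # residual (identity) connection

class FusionResNetSketch(nn.Module):
    """Two branches fused by pixel-wise addition of their feature maps."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        def branch() -> nn.Sequential:
            # Stem conv lifts the 2-channel image to 16 feature maps,
            # followed by one residual block (depth is illustrative).
            return nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1),
                nn.BatchNorm2d(16),
                nn.ReLU(inplace=True),
                ResidualBlock(16),
            )
        self.branch_a = branch()  # e.g., acceleration images
        self.branch_b = branch()  # e.g., angular-velocity images
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # Pixel-wise correspondence: matching spatial size, element-wise sum.
        fused = self.branch_a(img_a) + self.branch_b(img_b)
        return self.head(fused)

# Example: a batch of 4 samples, each modality a (2, 128, 128) image.
model = FusionResNetSketch()
logits = model(torch.randn(4, 2, 128, 128), torch.randn(4, 2, 128, 128))
print(logits.shape)  # torch.Size([4, 6])
```

Element-wise addition is one plausible reading of "pixel-wise correspondence"; channel-wise concatenation followed by a 1x1 convolution would be an alternative fusion choice.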