Multi-Sensor Fusion Based Robot Self-Activity Recognition

2018 
Robots play increasingly important roles in our daily life. To better complete assigned tasks, robots need the ability to recognize their self-activities in real time. To perceive the environment, robots are usually equipped with rich sensors, which can also be used to recognize their self-activities. However, because the intrinsics of sensors such as the accelerometer, servomotor, and gyroscope may differ significantly, an individual sensor usually exhibits weak performance in perceiving the environment. Multi-sensor fusion therefore becomes a promising technique for achieving better performance. In this paper, addressing the issue of robot self-activity recognition, we propose a framework that fuses information from multiple sensory streams. Our framework uses a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units to model the temporal information conveyed in the sensory streams. In the architecture, a hierarchical structure learns sensor-specific features, and a shared layer fuses the features extracted from the multiple streams. We collect a dataset on the PKU-HR6.0 robot to evaluate the proposed framework, and the experimental results demonstrate its effectiveness.
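The abstract describes per-sensor feature extractors feeding a shared fusion layer. The paper itself gives no implementation details here, so the following is only a minimal sketch of that idea: one LSTM branch per sensory stream plus a shared fusion layer, written in PyTorch. All layer sizes, names, and the framework choice are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class MultiSensorLSTM(nn.Module):
    """Hypothetical sketch: sensor-specific LSTM branches + shared fusion layer."""
    def __init__(self, sensor_dims, hidden_size=64, num_classes=6):
        super().__init__()
        # One LSTM per sensory stream learns sensor-specific temporal features.
        self.branches = nn.ModuleList(
            nn.LSTM(d, hidden_size, batch_first=True) for d in sensor_dims
        )
        # Shared layer fuses the per-sensor features into one representation.
        self.fusion = nn.Linear(hidden_size * len(sensor_dims), hidden_size)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, streams):
        # streams: list of tensors, each shaped (batch, time, sensor_dim)
        feats = []
        for lstm, x in zip(self.branches, streams):
            _, (h, _) = lstm(x)   # final hidden state summarizes the stream
            feats.append(h[-1])   # (batch, hidden_size)
        fused = torch.relu(self.fusion(torch.cat(feats, dim=1)))
        return self.classifier(fused)  # activity logits

# Usage with three hypothetical streams (e.g. accelerometer, gyroscope,
# servomotor readings); dimensions are illustrative only.
model = MultiSensorLSTM(sensor_dims=[3, 3, 12])
batch = [torch.randn(8, 50, d) for d in (3, 3, 12)]
logits = model(batch)  # shape (8, 6)
```

Keeping a separate branch per stream lets sensors with very different dynamics (e.g. high-rate accelerometer vs. slow servomotor feedback) be modeled independently before the shared layer combines them, which is the intuition the abstract attributes to the hierarchical design.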