A hierarchical parallel fusion framework for egocentric ADL recognition based on discernment frame partitioning and belief coarsening
2020
Recently, egocentric activity recognition has become a major research area in pattern recognition and artificial intelligence owing to its significance for potential applications in medical care, rehabilitation, smart homes/offices, etc. In this study, we develop a hierarchical parallel multimodal fusion framework for the recognition of egocentric activities of daily living (ADL). The framework is based on Dezert–Smarandache theory and is built around three modalities: location, motion, and vision data from a wearable hybrid sensor system. The reciprocal distance and a trained support vector machine classifier are used to form the basic belief assignments (BBAs) for location and motion. For the vision data, composed of egocentric photo streams, a well-trained convolutional neural network produces a set of textual tags, and entropy-based statistics over these tags are used to construct the vision BBA. Discernment frame partitioning and belief coarsening theory are adopted for the hierarchical fusion of the three BBA functions across different ADL levels. Experimental results on real-life multimodal egocentric activity datasets show that the proposed fusion method achieves significantly higher recognition accuracy than methods based on a single modality or on other modality combinations, while also offering greater adaptability and scalability.
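The fusion step described in the abstract rests on two standard belief-function operations: combining per-modality BBAs and coarsening a BBA from a fine discernment frame to a coarser one. Below is a minimal Python sketch of both, using classical Dempster's rule of combination as a simple stand-in for the DSmT rules the paper actually employs; the frame elements, the partition, and all mass values are illustrative assumptions, not taken from the paper.

```python
# Sketch of the two belief-function operations named in the abstract:
# per-modality BBA combination and belief coarsening from a fine
# discernment frame to a coarser one. Dempster's rule is used here as a
# stand-in for the paper's DSmT combination rules; all hypothesis names
# and mass values below are illustrative assumptions.

def dempster_combine(m1, m2):
    """Combine two BBAs (dicts: frozenset of hypotheses -> mass)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass falling on the empty set
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

def coarsen(m_fine, partition):
    """Project a BBA on a fine frame onto a coarse frame.

    `partition` maps each coarse hypothesis to the subset of fine
    hypotheses it covers; each focal element maps to every coarse
    class whose cell it intersects (the outer reduction).
    """
    m_coarse = {}
    for focal, w in m_fine.items():
        image = frozenset(c for c, cell in partition.items() if focal & cell)
        m_coarse[image] = m_coarse.get(image, 0.0) + w
    return m_coarse

# Hypothetical fine frame of low-level ADLs and a coarse partition.
fine = frozenset({"cook", "eat", "walk", "run"})
partition = {"kitchen": frozenset({"cook", "eat"}),
             "exercise": frozenset({"walk", "run"})}

# Hypothetical BBAs from the location and motion modalities.
m_loc = {frozenset({"cook"}): 0.6, fine: 0.4}
m_mot = {frozenset({"cook", "eat"}): 0.5, frozenset({"run"}): 0.2, fine: 0.3}

m_fused = dempster_combine(m_loc, m_mot)
print(coarsen(m_fused, partition))  # coarse-level BBA over kitchen/exercise
```

In this sketch, hierarchical fusion amounts to combining the modality BBAs on the fine (low-level ADL) frame and then coarsening the result onto the coarse (high-level ADL) frame defined by the partition; the paper's actual rules and frames may differ.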