Better Performance in Human Action Recognition from Spatiotemporal Depth Information Features Classification

2020 
The recent revolution in sensor-based depth information opens attractive avenues for human activity recognition. Human activities are of great interest in every real-life domain where people are a major factor, and activity recognition is of key importance owing to its applications in areas such as airport surveillance, patient monitoring, and elderly care. Variation in spatial and temporal parameters can represent any activity efficiently; natural color vision alone cannot provide complete information, because it represents every portion of an image as flat. The objective of this work is to recognize daily-life human activities from spatiotemporal depth information. The work comprises three phases: preprocessing, feature extraction, and action classification. Actions may be performed by a single person or by several people at a time; a Kinect sensor is used for data collection. Spatiotemporal depth features are computed and recognized with a support vector machine classifier. The experiments were run on an Intel i5 processor (3.1 GHz clock speed) under Windows 8, with processing performed in the commercial software MATLAB 2015b. Nine classes of human actions are drawn from the RGB-D human activity recognition and video database, the Cornell activity datasets, and the Berkeley multimodal human action database. The overall accuracy across the nine actions is 90.38%. The results show that, with the proposed approach, the research community and organizations can obtain performance that is hard to achieve from normal video frames of human activities.
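The three-phase pipeline described above (preprocessing, feature extraction, action classification with an SVM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the specific descriptors here (per-frame depth statistics plus frame-difference motion statistics) and the synthetic stand-in for Kinect depth clips are assumptions for demonstration only.

```python
# Minimal sketch of a depth-based action recognition pipeline.
# The feature choices below are illustrative assumptions, not the
# descriptors used in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(depth_video):
    """Simple spatiotemporal statistics from a depth clip of shape
    (frames, height, width): spatial depth summary + motion summary."""
    frame_means = depth_video.mean(axis=(1, 2))                 # spatial: mean depth per frame
    motion = np.abs(np.diff(depth_video, axis=0)).mean(axis=(1, 2))  # temporal: frame-to-frame change
    return np.array([frame_means.mean(), frame_means.std(),
                     motion.mean(), motion.std()])

# Synthetic stand-in for Kinect depth clips: nine action classes,
# each given a distinct depth signature for demonstration.
X, y = [], []
for label in range(9):
    for _ in range(20):
        clip = rng.normal(loc=label, scale=0.3, size=(30, 16, 16))
        X.append(extract_features(clip))
        y.append(label)
X, y = np.array(X), np.array(y)

# Classify the spatiotemporal features with a support vector machine.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In a real system, `extract_features` would be replaced by the paper's spatiotemporal depth descriptors computed from actual Kinect depth maps, and the SVM hyperparameters would be tuned on the training split.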