Automatic Multi-view Action Recognition with Robust Features

2017 
This paper proposes view-invariant features to address multi-view action recognition, where different actions are performed from different viewpoints. The view-invariant features are obtained by extracting holistic features from interest-point clouds at varying temporal scales, modeled to explicitly exploit the global spatial and temporal distribution of the interest points. The proposed features remain highly discriminative and robust for recognizing actions as the viewpoint changes. The paper also proposes a mechanism for real-world application that follows a person's actions through an image sequence and segments them according to the given training data: the beginning and end of each action sequence are labeled automatically, with no manual annotation required. The approach does not need to be re-trained when the scenario changes, so the trained database can be applied in a wide variety of environments. Experimental results show that the proposed approach outperforms existing methods on the KTH and Weizmann datasets.
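The abstract does not give the exact feature definition, but the idea of a holistic descriptor built from the global distribution of spatio-temporal interest points can be sketched as follows. This is a hypothetical, minimal illustration (the function name `holistic_descriptor`, the histogram binning, and the per-axis normalization are all assumptions, not the paper's actual method): normalizing each axis of the (x, y, t) point cloud before binning makes the descriptor depend only on the relative distribution of points, a crude form of the scale robustness the abstract describes.

```python
import numpy as np

def holistic_descriptor(points, bins=4):
    """Hypothetical holistic feature: an L1-normalized 3-D histogram of
    spatio-temporal interest-point locations (x, y, t).

    Each axis is rescaled to [0, 1] before binning, so the descriptor
    captures only the global relative distribution of the points."""
    pts = np.asarray(points, dtype=float)
    mins = pts.min(axis=0)
    spans = np.ptp(pts, axis=0)                 # per-axis extent
    spans = np.where(spans > 0, spans, 1.0)     # avoid divide-by-zero
    norm = (pts - mins) / spans                 # normalize to [0, 1]
    hist, _ = np.histogramdd(norm, bins=(bins, bins, bins),
                             range=[(0.0, 1.0)] * 3)
    return (hist / hist.sum()).ravel()          # L1-normalized vector

# Two point clouds that differ only by a spatial rescaling should map
# to identical descriptors under this normalization.
cloud = np.random.default_rng(0).random((50, 3))
d1 = holistic_descriptor(cloud)
d2 = holistic_descriptor(cloud * [2.0, 2.0, 1.0])  # spatially rescaled
print(np.allclose(d1, d2))  # True
```

A real implementation would build such statistics at several temporal scales and feed them to a classifier, but the sketch shows why a distribution-based holistic feature can be insensitive to changes that preserve the relative layout of interest points.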