Feature and decision level fusion for action recognition

2012 
Classification of actions performed by human actors in video enables new technologies in diverse areas such as surveillance and content-based retrieval. We propose and evaluate two alternative models, one based on feature-level fusion and the other on decision-level fusion. Both models employ direct classification, inferring the nature of the action from low-level features. Interest points are assumed to have salient 3D (spatial plus temporal) gradients that distinguish them from their neighborhoods; they are identified using three distinct 3D interest-point detectors. Each detected interest-point set is represented as a bag-of-words descriptor. The feature-level fusion model concatenates these descriptors, which are then used as input to a classifier. The decision-level fusion model uses an ensemble and a majority-voting scheme. Public data sets consisting of hundreds of action videos were used in testing. Within the test videos, multiple actors performed various actions, including walking, running, jogging, handclapping, boxing, and waving. Performance comparison showed very high classification accuracy for both models, with feature-level fusion having an edge. For feature-level fusion, the novelty is the fused histogram of visual words derived from the different sets of interest points detected by the different saliency detectors. For decision-level fusion, besides AdaBoost, a majority-voting scheme is also applied to ensemble classifiers based on support vector machines, k-nearest neighbors, and decision trees. The main contribution, however, is the comparison between the two models and, drilling down, of the performance of different base classifiers and different interest-point detectors for human motion recognition.
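The two fusion schemes described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the toy histograms, the vocabulary size of four visual words, and the example labels are hypothetical, and the real system would obtain the histograms from the three 3D interest-point detectors and the labels from trained base classifiers.

```python
import numpy as np
from collections import Counter

def feature_level_fusion(histograms):
    """Concatenate per-detector bag-of-words histograms into one descriptor.

    The fused descriptor is then fed to a single classifier (SVM, k-NN, ...).
    """
    return np.concatenate(histograms)

def decision_level_fusion(predictions):
    """Majority vote over the labels predicted by the base classifiers."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical bag-of-words histograms from three interest-point detectors,
# each over a toy vocabulary of 4 visual words.
h1 = np.array([0.1, 0.4, 0.3, 0.2])
h2 = np.array([0.5, 0.1, 0.2, 0.2])
h3 = np.array([0.2, 0.2, 0.3, 0.3])

fused = feature_level_fusion([h1, h2, h3])  # 12-dimensional fused descriptor
vote = decision_level_fusion(["walking", "running", "walking"])  # -> "walking"
```

Feature-level fusion lets a single classifier exploit correlations across detectors at the cost of a higher-dimensional input, while decision-level fusion keeps the base classifiers independent and combines only their outputs.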