Video event classification with temporal partitioning

2015 
This paper addresses the problem of temporally pruning noisy video parts to improve event recognition performance. We present a new technique based on temporally partitioning the processed videos according to their motion patterns and then analyzing the resulting time segments. For each event type, we automatically learn which types of segments are discriminative and which perturb the classification. This process does not require detailed annotation of the actions within an event type. A video is described with a set of quantized features, and the final classification uses only the features that fall within the discriminative segments. Experimental results show increased classification performance on the NIST MED11 dataset using two types of local features.
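The following is a minimal sketch, not the authors' implementation, of the idea the abstract describes: partition a video into temporal segments by its motion pattern, then build a bag-of-words descriptor only from local features that fall inside segment types learned to be discriminative. All function names, the quantization into motion levels, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def partition_by_motion(motion, n_levels=3):
    """Split a video into temporal segments by quantizing per-frame
    motion magnitude into n_levels and cutting at level changes.
    Returns a list of (start, end, level) tuples (end exclusive)."""
    # Coarse motion levels stand in for the segment "types" learned per event.
    edges = np.quantile(motion, np.linspace(0, 1, n_levels + 1)[1:-1])
    levels = np.digitize(motion, edges)
    segments, start = [], 0
    for t in range(1, len(levels) + 1):
        if t == len(levels) or levels[t] != levels[start]:
            segments.append((start, t, int(levels[start])))
            start = t
    return segments

def pruned_bow(words, frames, codebook_size, segments, keep_levels):
    """Bag-of-words histogram built only from quantized features whose
    frame lies in a segment type deemed discriminative (keep_levels)."""
    hist = np.zeros(codebook_size)
    for word, frame in zip(words, frames):
        for s, e, lvl in segments:
            if s <= frame < e and lvl in keep_levels:
                hist[word] += 1
                break
    return hist / max(hist.sum(), 1)

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
motion = rng.random(200)                  # per-frame motion magnitude
words = rng.integers(0, 50, size=500)     # quantized local features (visual words)
frames = rng.integers(0, 200, size=500)   # frame index of each local feature
segments = partition_by_motion(motion)
descriptor = pruned_bow(words, frames, 50, segments, keep_levels={1, 2})
print(descriptor.shape, descriptor.sum())
```

In practice the set of discriminative segment types (here `keep_levels`) would be learned per event class from training data, and the resulting histograms fed to a standard classifier.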