Exploiting Semantic and Visual Context for Effective Video Annotation

2013 
We propose a new method to refine the results of video annotation by exploiting the semantic and visual context of video. On one hand, semantic context mining is performed in a supervised way, using the manual concept labels of the training set. Because semantic context is learned from labels given by people, it reflects human intention and is very useful for boosting video annotation performance. In this paper, we model the spatial and temporal context in video using conditional random fields with different structures. Compared with existing methods, our method captures concept relationships in video more accurately and improves video annotation performance more effectively. On the other hand, visual context mining is performed in a semi-supervised way based on the visual similarities among video shots. It reflects the natural visual properties of video and can be regarded as compensation for semantic context, which generally cannot be modeled perfectly. In this paper, we construct a graph based on the visual similarities among shots. A semi-supervised learning approach is then adopted on this graph to propagate the probabilities of reliable shots to other shots with similar visual features. Extensive experimental results on the widely used TRECVID datasets demonstrate the effectiveness of our method for improving video annotation accuracy.
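The visual-context step described above amounts to graph-based semi-supervised score propagation. The following is a minimal sketch of one standard formulation of this idea (label propagation on a normalized similarity graph, not necessarily the authors' exact algorithm); the function and variable names (`propagate_scores`, `features`, `initial_scores`) and the parameter values are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of graph-based score propagation over video shots,
# assuming each shot is represented by a visual feature vector and that
# initial detector probabilities are available for the reliable shots.
import numpy as np

def propagate_scores(features, initial_scores, sigma=1.0, alpha=0.5):
    """Refine per-shot concept probabilities on a visual-similarity graph.

    features       : (n, d) array, one visual feature vector per shot.
    initial_scores : (n,) array, detector probabilities; entries for
                     unreliable shots can be set to 0 so they mainly
                     receive mass from visually similar reliable shots.
    """
    # Pairwise affinities with a Gaussian (RBF) kernel on visual distance.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops

    # Symmetrically normalized affinity matrix S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

    # Closed-form solution of the propagation fixed point:
    #   F = alpha * S @ F + (1 - alpha) * initial_scores
    n = features.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * initial_scores)
    return F

# Toy usage: 5 shots with 3-D features; only the first two have reliable scores.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(5, 3))
    scores = np.array([0.9, 0.8, 0.0, 0.0, 0.0])
    print(propagate_scores(feats, scores))
```

In this sketch, `alpha` trades off how much each shot trusts its graph neighbors versus its initial detector score, which mirrors the paper's intent of letting reliable shots correct visually similar but poorly scored ones.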