Refining video annotation by exploiting inter-shot context
2010
This paper proposes a new approach to refining video annotation by exploiting inter-shot context. Our method is novel in two main ways. First, to refine the annotation result for a target concept, we model the sequence of shots in a video as a chain-structured conditional random field, which allows us to capture different kinds of concept relationships in the inter-shot context and thereby improve annotation accuracy. Second, to exploit inter-shot context for the target concept, we classify shots into different types according to their correlation with the target concept; these types are then used to represent the different kinds of concept relationships in the inter-shot context. Experiments on the widely used TRECVID 2006 dataset show that our method is effective for refining video annotation, achieving a significant performance improvement over several state-of-the-art methods.
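The abstract does not include code, but the core idea of chain-structured CRF refinement can be illustrated with a minimal sketch. Assuming (these are illustrative assumptions, not the authors' implementation) that each shot's initial detector score supplies a unary log-potential over shot types and that a learned transition matrix between adjacent shots' types supplies the pairwise log-potential, Viterbi decoding recovers the most probable label sequence for a video:

```python
# Minimal sketch (not the authors' code): Viterbi decoding on a
# chain-structured CRF over the shots of one video. Assumes:
#   unary[t, s]       -- log-potential that shot t has shot type s
#                        (e.g. derived from a per-shot concept detector score)
#   transition[s, s'] -- learned log-potential for type s followed by s'
import numpy as np

def viterbi_chain_crf(unary: np.ndarray, transition: np.ndarray) -> np.ndarray:
    """Return the most probable shot-type sequence for one video.

    unary:      (T, S) log unary potentials for T shots and S shot types.
    transition: (S, S) log pairwise potentials between adjacent shots.
    """
    T, S = unary.shape
    score = np.empty((T, S))          # best log-score of a path ending in each type
    backptr = np.zeros((T, S), int)   # argmax predecessors for backtracking
    score[0] = unary[0]
    for t in range(1, T):
        # cand[i, j] = score of ending shot t-1 with type i, then moving to j
        cand = score[t - 1][:, None] + transition
        backptr[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + unary[t]
    labels = np.empty(T, int)
    labels[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):    # follow back-pointers to recover the path
        labels[t] = backptr[t + 1, labels[t + 1]]
    return labels

# Toy usage: 5 shots, 2 shot types (e.g. relevant / not relevant to the
# target concept); the transition matrix favors temporally coherent labels.
detector_logits = np.array([[0.2, -0.1], [0.9, -0.5], [0.1, 0.0],
                            [-0.4, 0.6], [-0.3, 0.5]])
coherence = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
print(viterbi_chain_crf(detector_logits, coherence))
```

The chain structure mirrors the temporal order of shots, so the pairwise term is what lets context from neighboring shots correct a weak or noisy per-shot detection.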