Ontology-driven annotation and access of educational video data
2006
The tremendous growth in multimedia data calls for efficient and flexible access mechanisms. In this dissertation, we propose an ontology-driven framework for video annotation and access. The goal is to integrate ontology into video systems to improve users' video access experience.
To realize ontology-driven video annotation, the first and foremost step is video segmentation. Current research in video segmentation has mainly focused on the visual and/or auditory modalities. In this dissertation, we investigate how to combine visual, auditory, and textual information in the segmentation of educational video data. Experiments show that text-based segmentation generally decomposes videos into semantic segments, which facilitates video content understanding and video annotation data extraction.
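To make the idea of text-based segmentation concrete, the following is a minimal sketch, not the dissertation's actual method: a timed transcript is split where lexical cohesion between adjacent caption windows drops. The `Caption` class, the `segment_transcript` helper, and the window/threshold values are all illustrative assumptions.

```python
# Illustrative text-based video segmentation: place a segment boundary where
# lexical overlap between adjacent transcript windows falls below a threshold.
from dataclasses import dataclass
from typing import List


@dataclass
class Caption:
    start: float   # caption start time, seconds
    end: float     # caption end time, seconds
    text: str      # transcribed speech for this interval


def _cohesion(left: List[Caption], right: List[Caption]) -> float:
    """Jaccard overlap between the word sets of two caption windows."""
    a = {w.lower() for c in left for w in c.text.split()}
    b = {w.lower() for c in right for w in c.text.split()}
    return len(a & b) / len(a | b) if (a | b) else 0.0


def segment_transcript(captions: List[Caption],
                       window: int = 5,
                       threshold: float = 0.1) -> List[List[Caption]]:
    """Split a timed transcript into semantic segments at low-cohesion points."""
    segments, current = [], []
    for i, cap in enumerate(captions):
        current.append(cap)
        left = captions[max(0, i - window + 1): i + 1]
        right = captions[i + 1: i + 1 + window]
        if right and _cohesion(left, right) < threshold:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments
```

Each returned segment carries its own start and end times (from the first and last caption it contains), so it can be aligned back to the corresponding video interval for annotation.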
To extract annotation data from videos and video segments, and to organize it in a way that facilitates video access, we propose a multi-ontology based multimedia annotation model. In this model, a domain-independent multimedia ontology is integrated with multiple domain-dependent ontologies. Preliminary evaluation suggests that multi-ontology based multimedia annotation provides multiple, domain-specific views of the same multimedia content and, thus, better meets different users' information needs.
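The sketch below illustrates one plausible shape for such an annotation record; the `Annotation` class, its field names, and the example ontologies are assumptions for illustration, not the dissertation's schema. The point is that a single segment carries a domain-independent core description plus concept references into several domain ontologies, so each user community sees the view drawn from its own ontology.

```python
# Illustrative multi-ontology annotation record for one video segment.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Annotation:
    video_id: str
    start: float                     # segment start, seconds
    end: float                       # segment end, seconds
    core: Dict[str, str]             # domain-independent multimedia ontology fields
    domain_views: Dict[str, List[str]] = field(default_factory=dict)
    # maps a domain ontology name to concept IDs drawn from that ontology


lecture_seg = Annotation(
    video_id="cs101_lec03",
    start=315.0,
    end=540.0,
    core={"title": "Binary search trees", "mediaType": "lecture video"},
    domain_views={
        "computer_science": ["cs:BinarySearchTree", "cs:TreeTraversal"],
        "education": ["edu:WorkedExample", "edu:UndergraduateLevel"],
    },
)

# Projecting the annotation through one domain ontology yields a
# domain-specific view of the same segment.
print(lecture_seg.domain_views.get("education", []))
```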
With the extracted annotation data, ontology-driven video access exploits domain knowledge embedded in the domain ontologies and tailors video access to the specific needs of individual users from different domains. Our experience shows that ontology-driven video access can improve video retrieval relevancy and, thus, enhance users' video access experience.
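One common way domain knowledge can improve retrieval relevancy is concept expansion over the ontology's subclass hierarchy. The following is a minimal sketch under that assumption; the toy hierarchy and the `expand_query`/`retrieve` helpers are illustrative, not the dissertation's algorithm.

```python
# Illustrative ontology-driven retrieval: expand a query concept with its
# descendants in the domain ontology, then match annotated segments.
from typing import Dict, List, Set

# Toy domain ontology as a parent -> children map.
ONTOLOGY: Dict[str, List[str]] = {
    "cs:DataStructure": ["cs:Tree", "cs:HashTable"],
    "cs:Tree": ["cs:BinarySearchTree", "cs:Heap"],
}


def expand_query(concept: str) -> Set[str]:
    """Return the concept plus all of its descendants in the ontology."""
    expanded, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in expanded:
            expanded.add(c)
            stack.extend(ONTOLOGY.get(c, []))
    return expanded


def retrieve(query_concept: str, index: Dict[str, Set[str]]) -> List[str]:
    """index maps a segment id to its annotated concepts; return matches."""
    wanted = expand_query(query_concept)
    return [seg for seg, concepts in index.items() if concepts & wanted]


index = {"cs101_lec03#2": {"cs:BinarySearchTree"},
         "cs101_lec05#1": {"cs:HashTable"}}
print(retrieve("cs:Tree", index))   # matches the BST segment via expansion
```

A query for the broader concept thus retrieves segments annotated only with its narrower subconcepts, which a plain keyword match would miss.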