Semantic video search by automatic video annotation using TensorFlow

2016 
The paper presents a tool for video structure analysis, feature extraction, classification, and semantic querying that scales to very large video datasets. The tool first analyses video structure to detect shot boundaries, identifying the shots in each video using image-duplication techniques. A single frame from each shot is passed to a deep learning model, implemented in TensorFlow, that is trained for feature extraction and classification of the objects in the frame. An automatic textual annotation is then generated for each video, and finally, with the aid of an ontology, semantic searching is performed using NLP, yielding results more efficiently than manual annotation of a large-scale dataset. With automatic video content analysis, annotation, and semantic search achieving an accuracy of around 74%, the tool is useful for video tagging and annotation.
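The pipeline described above (shot-boundary detection, one keyframe per shot, per-frame classification, textual annotation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the histogram-difference boundary test stands in for the unspecified image-duplication technique, and `classify` is a hypothetical callable that would wrap the TensorFlow classifier in the paper's setting.

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.4):
    # Compare normalized grayscale histograms of consecutive frames;
    # a large change marks a shot boundary. This is a stand-in for the
    # paper's image-duplication technique, which is not detailed here.
    boundaries = [0]
    for i in range(1, len(frames)):
        h_prev, _ = np.histogram(frames[i - 1], bins=32, range=(0, 255))
        h_curr, _ = np.histogram(frames[i], bins=32, range=(0, 255))
        h_prev = h_prev / max(h_prev.sum(), 1)
        h_curr = h_curr / max(h_curr.sum(), 1)
        if np.abs(h_prev - h_curr).sum() > threshold:
            boundaries.append(i)
    return boundaries

def annotate_video(frames, classify, threshold=0.4):
    # One keyframe (here, the middle frame) per shot is passed to the
    # classifier; the labels together form the video's textual annotation.
    starts = detect_shot_boundaries(frames, threshold)
    ends = starts[1:] + [len(frames)]
    return [classify(frames[(s + e) // 2]) for s, e in zip(starts, ends)]
```

In the paper's setup, `classify` would be a trained TensorFlow image classifier; any callable that maps a frame to a label works for the sketch.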