Real-time summarization of user-generated videos based on semantic recognition

2014 
User-generated content plays an important role in Internet video-sharing activities. Techniques for summarizing user-generated videos (UGVs) into short representative clips are useful in many applications. This paper introduces an approach for UGV summarization based on semantic recognition. Unlike other types of videos such as movies or broadcast news, where the semantic content may vary greatly across shots, most UGVs consist of a single long shot with relatively consistent high-level semantics. Therefore, a few semantically representative segments are generally sufficient for a UGV summary, and these can be selected based on the distribution of semantic recognition scores. In addition, because many UGVs suffer from poor shooting quality, factors such as camera shake and lighting conditions are also considered to produce more pleasant summaries. Experiments on over 100 UGVs with both subjective and objective evaluations show that our approach clearly outperforms several alternative methods and is highly efficient. Using a regular laptop, it can produce a summary for a 2-minute video in just 10 seconds.
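The abstract does not spell out the scoring model, but the selection idea can be sketched in a few lines. The snippet below is a minimal illustration under assumed inputs, not the paper's actual method: it presumes fixed-length segments, a per-segment semantic recognition confidence, and simple camera-shake and darkness estimates, then greedily keeps the top-scoring segments within a summary time budget. All names and weights are hypothetical.

```python
# Hypothetical sketch: rank segments of a single-shot UGV by a semantic
# recognition score discounted by shooting-quality penalties (camera shake,
# poor lighting), then keep the top-scoring segments as the summary.
from dataclasses import dataclass
from typing import List


@dataclass
class Segment:
    start_sec: float        # segment start time in the video
    end_sec: float          # segment end time
    semantic_score: float   # confidence of the recognized concept, in [0, 1]
    shake_level: float      # estimated camera shake, in [0, 1] (higher = shakier)
    darkness: float         # estimated under-exposure, in [0, 1] (higher = darker)


def summary_segments(segments: List[Segment],
                     budget_sec: float,
                     w_shake: float = 0.5,
                     w_dark: float = 0.3) -> List[Segment]:
    """Greedily pick the highest-scoring segments until the time budget is used."""
    def score(s: Segment) -> float:
        # Semantic relevance discounted by quality penalties (weights are assumptions).
        return s.semantic_score - w_shake * s.shake_level - w_dark * s.darkness

    ranked = sorted(segments, key=score, reverse=True)
    chosen, used = [], 0.0
    for seg in ranked:
        length = seg.end_sec - seg.start_sec
        if used + length <= budget_sec:
            chosen.append(seg)
            used += length
    # Re-order chronologically so the summary plays back naturally.
    return sorted(chosen, key=lambda s: s.start_sec)


if __name__ == "__main__":
    segs = [
        Segment(0, 5, 0.9, 0.1, 0.0),
        Segment(5, 10, 0.4, 0.6, 0.2),
        Segment(10, 15, 0.8, 0.2, 0.5),
        Segment(15, 20, 0.7, 0.1, 0.1),
    ]
    for s in summary_segments(segs, budget_sec=10):
        print(f"{s.start_sec:.0f}s-{s.end_sec:.0f}s")
```

Because the segments come from a single shot with consistent semantics, a greedy selection over per-segment scores like this avoids the shot-boundary detection and clustering stages needed for multi-shot videos, which is consistent with the real-time performance reported above.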