Visual Summarization of Lecture Video Segments for Enhanced Navigation

2020 
Lecture video is an increasingly important learning resource. However, the difficulty of quickly finding content of interest in a long lecture video is a critical limitation of this format. This paper introduces visual summarization of lecture video segments to improve navigation. A lecture video is divided into segments based on the frame-to-frame similarity of content. The user navigates a lecture video assisted by single-frame visual and textual summaries of the segments. The paper presents a novel methodology for generating the visual summary of a lecture video segment by estimating the importance of each image in the segment, computing similarities between the images, and employing a graph-based algorithm to identify the most representative images. The summarization framework developed is integrated into a real-world lecture video management portal called Videopoints. An evaluation against ground truth from human experts established that the algorithms presented are significantly superior to both random selection and clustering-based selection, and only modestly inferior to human selection. Over 65% of the automatically generated summaries were rated Good or better by users. Overall, the methodology introduced in this paper was shown to produce good-quality visual summaries that are practically useful for lecture video navigation.
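The abstract's pipeline (per-frame importance estimation, pairwise similarity, graph-based selection of representative frames) can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: it scores frames with an importance-biased, PageRank-style walk over a frame-similarity graph, with toy importance and similarity values.

```python
# Hypothetical sketch of graph-based summary-frame selection: frames are
# nodes, edges are weighted by pairwise similarity, and an importance-biased
# PageRank-style score picks the most representative frame. The importance
# and similarity values below are toy numbers, not outputs of the paper's
# actual estimators.

def select_representative(importance, similarity, damping=0.85, iters=50):
    """Return the index of the most representative frame in a segment.

    importance: list of per-frame importance estimates.
    similarity: n x n symmetric matrix of pairwise frame similarities.
    """
    n = len(importance)
    total_imp = sum(importance)
    bias = [w / total_imp for w in importance]  # importance as teleport bias
    score = bias[:]                             # start from the bias
    for _ in range(iters):
        new = []
        for i in range(n):
            # Mass flowing into frame i from each neighbour j, proportional
            # to similarity(j, i) relative to j's total outgoing weight.
            inflow = 0.0
            for j in range(n):
                if j == i:
                    continue
                out = sum(similarity[j][k] for k in range(n) if k != j)
                if out > 0:
                    inflow += score[j] * similarity[j][i] / out
            new.append((1 - damping) * bias[i] + damping * inflow)
        score = new
    return max(range(n), key=score.__getitem__)

# Toy segment: frame 1 is both the most important and the most similar
# to the other frames, so it is chosen as the visual summary.
imp = [0.2, 0.6, 0.2]
sim = [[0.0, 0.9, 0.1],
       [0.9, 0.0, 0.8],
       [0.1, 0.8, 0.0]]
print(select_representative(imp, sim))  # → 1
```

In this style of formulation, the teleport bias injects the per-frame importance estimates while the walk over similarity edges rewards frames that are central to the segment's content; selecting the top-k scores instead of the maximum would yield a multi-image summary.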