V-LESS: A Video from Linear Event Summaries

2018 
In this paper, we propose a novel V-LESS technique for generating event summaries from monocular videos. We employ Linear Discriminant Analysis (LDA) as the machine learning approach. First, the video is broken into frames and features are extracted from each frame. These features are then fed to the model, which uses LDA to classify the frames as active or inactive. The inactive frames are discarded, and clusters are formed from the remaining active frames. Finally, the events are obtained from the key-frames, under the assumption that a key-frame is either the centroid of an event or the frame nearest to that centroid. Users can freely choose the number of key-frames without incurring additional computational overhead. Experimental results on two benchmark datasets show that our model outperforms state-of-the-art models on Precision and F-measure. It also successfully condenses the video content while retaining the interesting information as events. Its computational complexity indicates that the V-LESS model meets the requirements of real-time applications.
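The pipeline described above can be sketched as follows. This is a minimal illustration only, assuming scikit-learn's LDA classifier and k-means clustering; the feature representation, the training data for the active/inactive classifier, and the choice of k-means are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the V-LESS pipeline: LDA classifies frames as
# active/inactive, active frames are clustered into events, and each
# event's key-frame is the frame nearest its cluster centroid.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def vless_keyframes(frame_features, train_features, train_labels, n_events):
    """Return indices of one key-frame per event.

    frame_features : (n_frames, d) feature vectors of the video frames
    train_features : (m, d) labelled examples for the LDA classifier
    train_labels   : (m,) labels, 1 = active, 0 = inactive (assumed encoding)
    n_events       : number of events (user-chosen key-frame count)
    """
    # Step 1: classify frames as active (1) or inactive (0) with LDA.
    lda = LinearDiscriminantAnalysis()
    lda.fit(train_features, train_labels)
    active_mask = lda.predict(frame_features) == 1
    active = frame_features[active_mask]
    active_idx = np.flatnonzero(active_mask)

    # Step 2: cluster the remaining active frames into events.
    km = KMeans(n_clusters=n_events, n_init=10, random_state=0)
    assignments = km.fit_predict(active)

    # Step 3: key-frame = active frame nearest to its event centroid.
    keyframes = []
    for c in range(n_events):
        members = np.flatnonzero(assignments == c)
        dists = np.linalg.norm(
            active[members] - km.cluster_centers_[c], axis=1)
        keyframes.append(int(active_idx[members[np.argmin(dists)]]))
    return sorted(keyframes)
```

Because the key-frames are read off the cluster centroids, changing the requested number of key-frames only changes the number of clusters, which is consistent with the abstract's claim that users can vary the key-frame count without extra overhead.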