Visual Saliency Detection Based on Full Convolution Neural Networks and Center Prior

2019 
Video saliency detection aims to mimic the human visual attention system, which perceives the world by extracting the most attractive regions or objects from the input video. Traditional video saliency-detection models have achieved good performance in many applications, but exploiting the consistency of spatiotemporal information remains challenging. To tackle this challenge, this paper proposes a video saliency-detection model based on the human attention mechanism and fully convolutional neural networks. First, visual features are extracted from video frames by a fully convolutional network. Second, attention features are propagated to another layer (i.e., the fifth layer) of the fully convolutional network via a weight-sharing strategy. Finally, the saliency map produced by the network is refined by incorporating the spatial location of the salient object through a center prior. Experimental results on a widely used video saliency-detection data set show that the proposed algorithm outperforms other state-of-the-art methods.
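As a rough illustration of the center-prior refinement step mentioned in the abstract, the sketch below blends a network-produced saliency map with a Gaussian weight map centered on the frame. The Gaussian form of the prior, the `sigma` and `alpha` parameters, and the function names are assumptions for illustration; the paper's exact formulation may differ.

import numpy as np


def gaussian_center_prior(height: int, width: int, sigma: float = 0.3) -> np.ndarray:
    """Build a 2-D Gaussian weight map peaking at the frame center.

    `sigma` is expressed as a fraction of the frame diagonal (assumed choice).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    # Squared distance of every pixel from the frame center.
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
    scale = (sigma * np.hypot(height, width)) ** 2
    return np.exp(-dist2 / (2.0 * scale))


def refine_with_center_prior(saliency: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Blend an FCN-produced saliency map with the center prior.

    `alpha` controls how much of the raw network output is kept; the simple
    convex blending rule used here is an assumption, not the paper's method.
    """
    prior = gaussian_center_prior(*saliency.shape)
    refined = alpha * saliency + (1.0 - alpha) * saliency * prior
    # Re-normalize to [0, 1] for display / evaluation.
    return (refined - refined.min()) / (refined.max() - refined.min() + 1e-8)


if __name__ == "__main__":
    # Stand-in for a saliency map predicted by the fully convolutional network.
    fcn_output = np.random.rand(240, 320)
    print(refine_with_center_prior(fcn_output).shape)  # (240, 320)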