Disparity tuning guided stereoscopic saliency detection for eye fixation prediction

2018 
Abstract The development of emerging 3D displays has brought increasing attention to stereoscopic techniques such as quality assessment, re-targeting, and compression of 3D images, which require new saliency detection methods that can handle stereoscopic data. In this paper, we present a disparity tuning guided stereoscopic saliency (DTSS) model, which applies the disparity tuning mechanism of visual cortical neurons to visual attention modeling. First, we investigate the rationale for converting features from physical quantities into perceptual quantities before computing saliency. Second, we discuss the biological principles by which depth affects visual attention and consider both absolute and relative depth tuning mechanisms in modeling visual attention. Then, we propose a diffusion saliency feature that combines depth and RGB features. Specifically, we use RGB contrast as primitive labels to diffuse a saliency map for the depth map, and depth contrast as primitive labels to diffuse a saliency map for the RGB image. Finally, an adaptively weighted fusion method is proposed to integrate the feature maps. Experiments demonstrate that the proposed model performs well against state-of-the-art methods on the task of 3D eye fixation prediction.
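
The abstract outlines a pipeline of cross-modal diffusion followed by adaptive fusion but does not include an implementation. Below is a minimal Python sketch of that outline, under loud assumptions: Gaussian-guided smoothing stands in for the paper's diffusion process, center-surround differencing stands in for its contrast features, and mean activation stands in for its adaptive weights. All function names and parameters here are hypothetical illustrations, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_contrast(feature, sigma_center=2.0, sigma_surround=16.0):
    """Coarse center-surround contrast: |fine-scale - coarse-scale| response."""
    center = gaussian_filter(feature, sigma_center)
    surround = gaussian_filter(feature, sigma_surround)
    return np.abs(center - surround)

def diffuse(labels, guide, sigma=8.0, eps=1e-8):
    """Guided-smoothing stand-in for diffusion: labels weighted by the guide
    channel before blurring, so regions similar in the guide share saliency."""
    weighted = gaussian_filter(labels * guide, sigma)
    norm = gaussian_filter(guide, sigma) + eps
    return weighted / norm

def normalize(m, eps=1e-8):
    return (m - m.min()) / (m.max() - m.min() + eps)

def dtss_sketch(rgb, depth):
    """Cross-modal diffusion + adaptive fusion, following the abstract's outline."""
    gray = rgb.mean(axis=2)

    # Contrast features in each modality act as primitive saliency labels.
    rgb_labels = normalize(center_surround_contrast(gray))
    depth_labels = normalize(center_surround_contrast(depth))

    # Cross-diffusion: RGB contrast seeds a saliency map on the depth channel,
    # and depth contrast seeds a saliency map on the RGB channel.
    sal_from_rgb = diffuse(rgb_labels, guide=normalize(depth))
    sal_from_depth = diffuse(depth_labels, guide=normalize(gray))

    # Adaptive weights (hypothetical choice: each map's mean activation
    # as a stand-in for the paper's adaptively weighted fusion).
    w1, w2 = sal_from_rgb.mean(), sal_from_depth.mean()
    fused = (w1 * sal_from_rgb + w2 * sal_from_depth) / (w1 + w2 + 1e-8)
    return normalize(fused)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.random((120, 160, 3))    # placeholder RGB image
    depth = rng.random((120, 160))     # placeholder disparity/depth map
    saliency = dtss_sketch(rgb, depth)
    print(saliency.shape, saliency.min(), saliency.max())
```

The cross-diffusion step is the key design choice the abstract describes: each modality's contrast is trusted only as a seed, and the complementary channel decides how that seed spreads, so a salient region must be supported by both depth and appearance to survive fusion.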