Perception-inspired stereoscopic image and video quality assessment.

2018 
Recent developments in 3D media technology have brought to life numerous applications of interactive entertainment, such as 3D cinema, 3DTV and gaming. Due to the data-intensive nature of 3D visual content, Quality of Experience (QoE) has become a major driving factor in optimising the end-to-end content delivery process. Ensuring QoE, however, requires more robust and accurate objective metrics for stereoscopic image and video quality assessment. Existing stereoscopic QoE metrics tend to lack accuracy and robustness compared to their 2D counterparts, as they are either extensions of 2D metrics or are based on simple perceptual models; measuring stereoscopic QoE calls for more perceptually inspired metrics. This research introduces full-reference stereoscopic image and video quality metrics based on a Human Visual System (HVS) model that incorporates important physiological findings on binocular vision. Firstly, a novel HVS model extending existing models in the literature is proposed, incorporating the phenomena of binocular suppression and recurrent excitation for stereoscopic image quality assessment. Secondly, the research is extended to the temporal domain through two distinct temporally inspired stereoscopic video quality metrics: one using temporal pooling of the HVS model outputs for individual frames, and one embedding a spatio-temporal model within the HVS model. Finally, motion sensitivity is introduced into the HVS model, yielding a perception-inspired stereoscopic video quality metric. The proposed QoE metrics are trained, verified and tested on four publicly available stereoscopic image databases and two stereoscopic video datasets. They show an increase in average correlation index from 0.66 (baseline method) to 0.86 for stereoscopic images, and a maximum increase in average correlation index from 0.57 (baseline method) to 0.93 for stereoscopic videos.
These results demonstrate the benefits of using a perceptually inspired approach in this research.
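The temporal-pooling step mentioned above, and the correlation-index style of evaluation against subjective scores, can be sketched as follows. This is a minimal illustration, not the thesis's actual method: the Minkowski pooling exponent, the per-frame distortion scores, the subjective scores, and the function name `temporal_pool` are all hypothetical assumptions.

```python
import numpy as np

def temporal_pool(frame_scores, p=1.0):
    """Minkowski (p-norm) temporal pooling of per-frame scores.

    p = 1 is plain mean pooling; for per-frame distortion scores a
    larger p lets the worst frames dominate the sequence-level score.
    (Illustrative sketch only, not the metric proposed in this work.)
    """
    s = np.asarray(frame_scores, dtype=float)
    return np.mean(s ** p) ** (1.0 / p)

# Hypothetical per-frame distortion scores for three short sequences
sequences = [
    [0.10, 0.12, 0.11, 0.35, 0.10],   # brief quality drop mid-sequence
    [0.05, 0.06, 0.05, 0.05, 0.07],
    [0.30, 0.28, 0.31, 0.29, 0.30],
]
predicted = [temporal_pool(seq, p=4.0) for seq in sequences]

# Hypothetical subjective scores (e.g. DMOS) for the same sequences
subjective = [0.25, 0.06, 0.30]

# Pearson linear correlation, one common "correlation index" used when
# validating objective quality metrics against subjective data
r = np.corrcoef(predicted, subjective)[0, 1]
```

With p = 1 the pooled score is the plain frame average; raising p biases the sequence score toward its worst frames, which is one common way temporal pooling reflects the perceptual impact of short quality drops.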