No-reference video quality evaluation by a deep transfer CNN architecture

2020 
Abstract Standard no-reference video quality assessment (NR-VQA) methods are designed for a specific type of distortion: they quantify the visual quality of a distorted video without access to the reference video. In practice, their results often deviate from human subjective perception. To tackle this problem, we propose a 3D deep convolutional neural network (3D CNN) that evaluates video quality without a reference by generating spatial/temporal deep features from different video clips. The 3D CNN is designed by collaboratively and seamlessly integrating the features output by VGG-Net on video frames. To prevent the adopted VGG-Net from overfitting, its parameters are transferred from a deep architecture trained on the ImageNet dataset. Extensive IQA/VQA experiments on the LIVE, TID, and CSIQ video quality databases demonstrate that the proposed IQA/VQA model performs competitively with conventional methods.
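The pipeline the abstract outlines can be sketched at a high level: per-frame features from a pretrained 2D backbone are stacked over a clip into a spatiotemporal block, which a 3D network then maps to a quality score. The sketch below is illustrative only; the feature dimension, clip length, random-projection "backbone", and averaging head are placeholder assumptions standing in for the transferred VGG-Net and the learned 3D CNN, not the paper's actual implementation.

```python
import numpy as np

# Illustrative assumptions (not taken from the paper):
FEAT_DIM = 512   # per-frame feature size from a VGG-style backbone
CLIP_LEN = 16    # frames per video clip


def extract_frame_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the ImageNet-transferred VGG feature extractor.

    In the real pipeline the backbone's weights come from ImageNet
    pre-training; here we just project the flattened frame with a
    fixed random matrix so the example runs without a deep-learning
    framework.
    """
    rng = np.random.default_rng(0)  # fixed seed: deterministic stand-in
    w = rng.standard_normal((frame.size, FEAT_DIM)) * 1e-3
    return frame.ravel() @ w


def clip_tensor(frames: list) -> np.ndarray:
    """Stack per-frame features into a (CLIP_LEN, FEAT_DIM) block.

    This is the spatial/temporal feature volume a 3D CNN would consume.
    """
    assert len(frames) == CLIP_LEN
    return np.stack([extract_frame_features(f) for f in frames], axis=0)


def video_quality_score(clips: list) -> float:
    """Toy regression head: mean feature response per clip, averaged
    over clips -- a placeholder for the learned 3D CNN quality head."""
    return float(np.mean([c.mean() for c in clips]))


# Usage: one 16-frame clip of dummy 16x16 RGB frames.
frames = [np.zeros((16, 16, 3)) for _ in range(CLIP_LEN)]
clip = clip_tensor(frames)
print(clip.shape)  # (16, 512)
```

The key design point the abstract emphasizes is that the 2D backbone is frozen-from-transfer rather than trained from scratch, which keeps the frame-level feature extractor from overfitting the comparatively small VQA datasets; only the clip-level (3D) stage needs to be fit to quality labels.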