Video Super-Resolution via Pre-Frame Constrained and Deep-Feature Enhanced Sparse Reconstruction
2020
Abstract This paper presents a new video super-resolution (SR) method that can generate high-quality and temporally coherent high-resolution (HR) videos. Starting from the traditional sparse reconstruction framework that works well for image SR, we improve it significantly in the following aspects to obtain an effective video SR method. Firstly, to enhance the temporal coherence between adjacent HR frames, once an HR frame is estimated, we use it to guide the sparse reconstruction of the next low-resolution frame. Secondly, instead of using only hand-crafted features, we further incorporate deep features generated by VGG16 into our sparse-reconstruction-based video SR method. Thirdly, we constantly update the dictionary, which is the core of the sparse reconstruction, by making use of the previously estimated HR frame. Finally, after the HR video is reconstructed, we post-process it with a joint bilateral filter to remove artifacts and transfer image details. Experiments demonstrate that the four proposed strategies effectively improve our final results. In most of the experiments in this paper, our results are better than those produced by the latest deep-learning-based approaches.
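To make the core idea concrete, below is a minimal sketch (not the authors' code) of sparse reconstruction for a single patch with an added previous-HR-frame consistency term, solved by ISTA. The dictionaries D_l / D_h, the weights lam / gamma, and the patch variables are illustrative assumptions rather than quantities taken from the paper.

```python
# Sketch: sparse reconstruction of one HR patch with a previous-frame constraint.
# All names and parameter values are assumptions for illustration only.
import numpy as np


def soft_threshold(x, t):
    """Element-wise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def reconstruct_patch(y_lr, p_prev, D_l, D_h, lam=0.1, gamma=0.5, n_iter=200):
    """Estimate an HR patch from an LR patch feature y_lr.

    Solves  min_a ||D_l a - y_lr||^2 + gamma ||D_h a - p_prev||^2 + lam ||a||_1
    via ISTA, where p_prev is the co-located patch from the previously
    estimated HR frame, then returns D_h a as the reconstructed HR patch.
    """
    # Lipschitz constant of the smooth term's gradient (sets the ISTA step size).
    H = D_l.T @ D_l + gamma * D_h.T @ D_h
    L = np.linalg.eigvalsh(H).max()
    a = np.zeros(D_l.shape[1])
    for _ in range(n_iter):
        grad = D_l.T @ (D_l @ a - y_lr) + gamma * D_h.T @ (D_h @ a - p_prev)
        a = soft_threshold(a - grad / L, lam / L)
    return D_h @ a


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = 64                                # number of dictionary atoms (assumed)
    D_l = rng.standard_normal((36, K))    # LR feature atoms (e.g. 6x6 patches)
    D_h = rng.standard_normal((81, K))    # HR atoms (e.g. 9x9 patches)
    y_lr = rng.standard_normal(36)        # LR patch feature
    p_prev = rng.standard_normal(81)      # co-located patch from previous HR frame
    print(reconstruct_patch(y_lr, p_prev, D_l, D_h).shape)  # (81,)
```

The gamma term is what couples consecutive frames: setting gamma to zero reduces the sketch to plain single-image sparse-coding SR, while a larger gamma pulls each new HR patch toward the previously estimated frame and thus improves temporal coherence.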