Robust Temporal Super-Resolution for Dynamic Motion Videos

2019 
Most video temporal super-resolution methods are difficult to apply to real-world scenes because they are optimized for a narrow range of video characteristics. In this paper, we propose a video temporal super-resolution method that is robust to diverse motion and noise. Our method gains this robustness by fine-tuning a pre-trained SPyNet that was originally trained on videos with simple motion under moderate conditions. Moreover, using a modified DHDN architecture, our network learns to accurately synthesize the two frames generated by a backward warping function without requiring any additional information. This allows our method to synthesize the two warped frames efficiently, saving the computational cost of pre-training and of extracting additional information. Finally, we apply the self-ensemble method, which is widely used in image processing but rarely in video processing; it enables our network to generate stable output frames of improved quality without any additional training. Our network ranked 5th in the AIM 2019 video temporal super-resolution challenge, with only a small performance gap to the 3rd- and 4th-ranked solutions. The source code and pre-trained models are available at https://github.com/BumjunPark/DVTSR.
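The abstract does not spell out the warping step, so the following is a minimal PyTorch sketch of backward warping an input frame toward the intermediate time, assuming pixel-unit optical flows (e.g., from the fine-tuned SPyNet) that map target-frame coordinates back into the source frame. The function name `backward_warp` and the flow convention are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` toward the target time with backward optical flow.

    frame: (N, C, H, W) source frame
    flow:  (N, 2, H, W) flow from the target time to the source frame,
           in pixel units (assumed convention for this sketch)
    """
    n, _, h, w = flow.shape
    # Base sampling grid of pixel coordinates (x, y).
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0)        # (1, 2, H, W)
    coords = grid + flow                                     # displaced coordinates
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=3)   # (N, H, W, 2)
    return F.grid_sample(frame, sample_grid, align_corners=True)
```

Given flows `flow_t0` and `flow_t1` from the intermediate time toward the two input frames, the pair `backward_warp(frame0, flow_t0)` and `backward_warp(frame1, flow_t1)` would then be passed to the DHDN-based synthesis network.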
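The self-ensemble step can likewise be sketched as the standard eight-variant geometric ensemble from image restoration: run the model on flipped and rotated copies of the inputs, invert each transform on the output, and average the results. The `model(f0, f1)` interface and square-input assumption are hypothetical; the released DVTSR code may organize this differently.

```python
import torch

def self_ensemble(model, f0, f1):
    """Average model outputs over flip/rotation variants of the inputs.

    Assumes square spatial dimensions (90-degree rotations swap H and W)
    and a model that maps two (N, C, H, W) frames to one output frame.
    """
    outputs = []
    for rot in range(4):               # 0, 90, 180, 270 degree rotations
        for flip in (False, True):     # optional horizontal flip
            a = torch.rot90(f0, rot, dims=(-2, -1))
            b = torch.rot90(f1, rot, dims=(-2, -1))
            if flip:
                a = torch.flip(a, dims=(-1,))
                b = torch.flip(b, dims=(-1,))
            out = model(a, b)
            # Invert the transforms on the output before averaging.
            if flip:
                out = torch.flip(out, dims=(-1,))
            out = torch.rot90(out, -rot, dims=(-2, -1))
            outputs.append(out)
    return torch.stack(outputs).mean(dim=0)
```

Because each variant is a pure geometric transform, the averaging requires no additional training, which matches the abstract's claim that self-ensembling improves output stability and quality for free at inference time.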