Long-Term Video Generation with Evolving Residual Video Frames
2018
In this paper, we propose a novel long-term video generation algorithm, motivated by recent developments in unsupervised deep learning. The proposed technique learns two ingredients of the internal video representation, i.e., video textures and motions, to reproduce realistic pixels in future video frames. To this end, it uses two convolutional neural network (CNN) encoders to extract spatiotemporal features from the original video frame and a residual video frame, respectively. The use of residual frames facilitates learning with fewer parameters, since videos exhibit high spatiotemporal correlation, and the residual frames efficiently capture the evolving pixel differences in future frames. In the decoder, the future frame is generated by transforming the combination of the two feature vectors back to the original video size. Experimental results demonstrate that the proposed technique provides more robust and accurate long-term video generation than conventional techniques.
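To make the described architecture concrete, below is a minimal PyTorch sketch of a two-encoder/decoder predictor in the spirit of the abstract: one CNN encoder for the current frame (texture), one for the residual frame (motion), features concatenated and decoded back to frame size, with predictions fed back autoregressively for long-term generation. All module names, layer sizes, and the training-free rollout loop are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """CNN encoder mapping a frame to a spatiotemporal feature map (assumed layout)."""
    def __init__(self, in_channels=3, feat_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Transposed-convolution decoder that upsamples the combined features
    back to the original frame size."""
    def __init__(self, feat_channels=128, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class ResidualVideoPredictor(nn.Module):
    """Predicts the next frame from the current frame (texture features)
    and the residual between consecutive frames (motion features)."""
    def __init__(self):
        super().__init__()
        self.frame_encoder = Encoder()     # texture/appearance features
        self.residual_encoder = Encoder()  # motion features from the residual
        self.decoder = Decoder(feat_channels=128)

    def forward(self, frame, residual):
        f_tex = self.frame_encoder(frame)
        f_mot = self.residual_encoder(residual)
        z = torch.cat([f_tex, f_mot], dim=1)  # combine the two feature maps
        return self.decoder(z)

# Long-term generation: feed predictions back in, letting the residual evolve.
model = ResidualVideoPredictor()
prev = torch.zeros(1, 3, 64, 64)
curr = torch.rand(1, 3, 64, 64)
generated = []
for _ in range(10):
    residual = curr - prev        # evolving pixel differences between frames
    nxt = model(curr, residual)
    generated.append(nxt)
    prev, curr = curr, nxt
```

Splitting appearance and motion into separate encoders is the key design choice the abstract describes: the residual input is near zero wherever the scene is static, so the motion pathway only has to model the changing pixels, which is what allows the smaller parameter budget.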