Generating effect animation with conditional GANs

2018 
Effect animations are used in many applications, such as music videos and games, and effect templates are therefore common in authoring software. However, it is difficult for an amateur to create an original animation. In this paper, we propose a deep-learning approach for generating effect animations. The approach uses a next-frame prediction model based on conditional Generative Adversarial Networks (cGAN) [Goodfellow et al. 2014] and lets users create new animations easily by preparing reference effect videos. Moreover, users can create animations that would be difficult even for professional designers to produce. The model is trained with a loss between a ground-truth frame and the predicted frame, and the trained model then repeatedly predicts the next frame from its own generated frames to produce an animation video. In experiments, we show several results and demonstrate that we can partly control which frames are generated.
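The autoregressive generation described in the abstract (repeatedly feeding each predicted frame back into the model) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the trained cGAN generator is replaced by a hypothetical toy callable, and frame shapes are assumed.

```python
import numpy as np

def generate_animation(generator, first_frame, n_frames):
    """Autoregressive rollout: feed each predicted frame back in to
    predict the next one, producing an animation from a single seed frame.
    `generator` stands in for the trained cGAN generator (a hypothetical
    callable mapping one frame to the next)."""
    frames = [first_frame]
    for _ in range(n_frames - 1):
        frames.append(generator(frames[-1]))
    return np.stack(frames)  # shape: (n_frames, H, W, C)

# Toy stand-in generator: fades the effect toward black each step.
def toy_generator(frame):
    return 0.9 * frame

start = np.ones((64, 64, 3), dtype=np.float32)  # a bright "effect" seed frame
video = generate_animation(toy_generator, start, n_frames=5)
```

In the paper's setting, `toy_generator` would be the trained cGAN next-frame predictor conditioned on reference effect videos; the rollout loop itself is the same.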