Synthesis of Shaking Video Using Motion Capture Data and Dynamic 3D Scene Modeling

2018 
Important video processing methods such as video stabilization and deblurring often lack ground-truth data, which poses a great challenge for the development and parameter tuning of such methods. Synthetic shaken video is therefore very useful for generating well-defined ground-truth datasets. Existing shaking video synthesis methods simulate shaky camera motion by performing 2D view warping on a single 2D video, which does not always correspond to realistic 3D motion. In this paper, we introduce a novel shaking video synthesis approach. The proposed framework constructs the camera motion trajectory from human motion information captured in the real world. Moreover, we render the shaken video from man-made dynamic 3D scenes with detailed camera pose information. Our approach provides both accurate 2D visual content and the camera motion trajectory in the 3D scene, which allows evaluating visual distortion as well as the offsets of the recovered camera trajectory. The proposed shaking video synthesis method will benefit and ease future research on 3D-aware video stabilization.
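The abstract does not give implementation details, but the core idea of building a shaken camera trajectory from real-world motion capture and feeding it to a 3D renderer can be illustrated with a minimal sketch. The mocap offset format (per-frame translation and Euler-angle jitter), units, and the synthetic placeholder data below are all assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): perturbing a smooth base camera path
# with motion-capture-derived jitter to obtain a shaken camera trajectory.
# The offset format, units, and placeholder data below are assumptions.
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation from Euler angles (radians), composed as Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_matrix(rotation, translation):
    """Assemble a 4x4 camera-to-world pose from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def shaken_trajectory(base_poses, mocap_offsets):
    """Apply per-frame mocap jitter (dx, dy, dz, rx, ry, rz) to base camera poses.

    base_poses    : (N, 4, 4) smooth camera-to-world poses along the intended path
    mocap_offsets : (N, 6) hand/head jitter captured in the real world
    Returns (N, 4, 4) shaken poses; both trajectories can serve as ground truth.
    """
    shaken = []
    for base, (dx, dy, dz, rx, ry, rz) in zip(base_poses, mocap_offsets):
        jitter = pose_matrix(rotation_matrix(rx, ry, rz), [dx, dy, dz])
        shaken.append(base @ jitter)  # apply jitter in the camera's local frame
    return np.stack(shaken)

if __name__ == "__main__":
    n_frames = 120
    # Placeholder smooth path: camera translating along x with identity rotation.
    base = np.stack([pose_matrix(np.eye(3), [0.01 * i, 0.0, 0.0])
                     for i in range(n_frames)])
    # Placeholder random jitter standing in for real motion capture data.
    rng = np.random.default_rng(0)
    offsets = rng.normal(scale=[0.002, 0.002, 0.001, 0.01, 0.01, 0.005],
                         size=(n_frames, 6))
    poses = shaken_trajectory(base, offsets)
    print(poses.shape)  # (120, 4, 4) -> per-frame poses for rendering the 3D scene
```

In such a setup, rendering the dynamic 3D scene from the shaken poses yields the synthetic shaky video, while the known base and shaken trajectories provide the ground truth needed to evaluate both visual distortion and recovered camera-trajectory offsets.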