VST3D-Net: Video-Based Spatio-Temporal Network for 3D Shape Reconstruction from a Video

2020 
In this paper, we propose the Video-based Spatio-Temporal 3D Network (VST3D-Net), a novel learning approach for viewpoint-invariant 3D shape reconstruction from monocular video. In VST3D-Net, a spatial feature extraction subnetwork encodes the local and global spatial relationships of the object in each image; the extracted latent spatial features implicitly embed both shape and pose information. Although a single view suffices to recover a 3D shape, richer shape information about the dynamic object can be explored and leveraged from video frames. To generate the viewpoint-free 3D shape, we design a temporal correlation feature extractor that handles the temporal consistency of the moving object's shape and pose simultaneously. The network therefore recovers both the canonical 3D shape and the corresponding pose at each frame. We validate our approach on a ShapeNet-based video dataset and the ApolloCar3D dataset. The experimental results show that the proposed VST3D-Net outperforms state-of-the-art approaches in both accuracy and efficiency.
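The data flow the abstract describes (per-frame spatial encoding, temporal fusion into one canonical shape code, plus a per-frame pose estimate) can be sketched as follows. This is a minimal illustrative sketch only: the layer shapes, the tanh encoder, and mean-pooling as a stand-in for the temporal correlation feature extractor are all assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_encoder(frame, W_enc):
    """Encode one frame into a latent vector (stand-in for the paper's
    spatial feature extraction subnetwork; shape and pose are mixed here)."""
    return np.tanh(W_enc @ frame.ravel())

def temporal_fusion(latents):
    """Fuse per-frame latents into a single canonical shape code.
    Mean pooling over time is a simple placeholder for the paper's
    temporal correlation feature extractor."""
    return np.mean(latents, axis=0)

def pose_head(latent, W_pose):
    """Predict a per-frame pose code (e.g., rotation/translation params)."""
    return W_pose @ latent

# Toy video: 5 frames of 8x8 "images" (random stand-in data).
T, H, W = 5, 8, 8
video = rng.standard_normal((T, H, W))

latent_dim, pose_dim = 16, 6          # illustrative sizes
W_enc = rng.standard_normal((latent_dim, H * W)) * 0.1
W_pose = rng.standard_normal((pose_dim, latent_dim)) * 0.1

latents = np.stack([spatial_encoder(f, W_enc) for f in video])
shape_code = temporal_fusion(latents)                       # one canonical shape per clip
poses = np.stack([pose_head(z, W_pose) for z in latents])   # one pose per frame

print(shape_code.shape)  # (16,)
print(poses.shape)       # (5, 6)
```

The key structural point mirrored here is that the shape code is shared across the whole clip while the pose is predicted per frame, which is what makes the recovered shape viewpoint-invariant.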