Point Cloud Deformation for Single Image 3D Reconstruction

2019 
We propose an approach to reconstruct a precise and dense 3D point cloud from a single image. Previous works either employed intermediate constructions to reconstruct complex 3D shapes or directly regressed point locations from the image. However, the former requires the overhead of constructing a 3D shape or is inefficient due to its high computational cost, while the latter does not scale well because the number of trainable parameters depends on the number of output points. In this paper, we explore a method to infer a point cloud representation from an input image. We extract shape information from the input image and embed two kinds of shape features into the point cloud: point-specific and global shape features. We then deform a randomly generated point cloud into the final representation based on the embedded features. Our method requires no overhead construction and is efficient and scalable, since the number of trainable parameters is independent of the point cloud size; to our knowledge, it is the first work able to do so. Thorough experimental results suggest that our proposed method outperforms other state-of-the-art methods in dense and precise point cloud generation.
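The scalability claim above can be illustrated with a minimal sketch: if a shared per-point network deforms a random point cloud conditioned on point-specific coordinates plus a tiled global shape feature, the weight shapes depend only on feature dimensions, never on the number of points. The architecture below (a two-layer ReLU MLP with NumPy, and the feature dimensions) is an assumption for illustration, not the authors' exact network.

```python
import numpy as np

rng = np.random.default_rng(0)

N, G, H = 1024, 128, 256              # points, global-feature dim, hidden dim (assumed)
points = rng.uniform(-1, 1, (N, 3))   # randomly generated initial point cloud
global_feat = rng.normal(size=(G,))   # global shape feature from an image encoder (assumed)

# Shared MLP weights: their shapes depend only on feature dims, never on N.
W1 = rng.normal(scale=0.1, size=(3 + G, H))
W2 = rng.normal(scale=0.1, size=(H, 3))

# Embed: concatenate each point's coordinates (point-specific) with the
# tiled global feature, giving one input row per point.
per_point_input = np.concatenate([points, np.tile(global_feat, (N, 1))], axis=1)

# Deform: predict a per-point offset with the shared ReLU MLP and
# add it to the initial cloud.
offsets = np.maximum(per_point_input @ W1, 0.0) @ W2
deformed = points + offsets

print(deformed.shape)                 # (1024, 3)
print(W1.size + W2.size)              # parameter count, unchanged if N grows
```

Doubling `N` doubles the input rows but leaves `W1` and `W2` untouched, which is the sense in which the parameter count is independent of the point cloud size.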