Representation Learning via Parallel Subset Reconstruction for 3D Point Cloud Generation

2019 
Three-dimensional (3D) point cloud processing has attracted a great deal of attention in the computer vision, robotics, and machine learning communities because of significant progress in deep neural networks on 3D data. Another trend is the learning of generative models based on generative adversarial networks. In this paper, we propose a framework for 3D point cloud generation that combines auto-encoders and generative adversarial networks. The framework first trains auto-encoders to learn latent representations, and then trains generative adversarial networks in the learned latent space. We focus on improving the training of the auto-encoders so that the generated 3D point clouds have higher fidelity and coverage. Specifically, we add parallel sub-decoders that reconstruct subsets of the input point cloud. To construct these subsets, we introduce a point sampling algorithm that samples spatially localized point sets; these local subsets are used to measure local reconstruction losses. Following a multi-task learning approach, the auto-encoders thus learn a latent representation that is effective for both global and local shape reconstruction. Furthermore, we add global and local adversarial losses to generate more plausible point clouds. Quantitative and qualitative evaluations demonstrate that the proposed method outperforms state-of-the-art methods on the task of 3D point cloud generation.
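The following is a minimal sketch, not the authors' code, of the two ideas the abstract describes: building spatially localized subsets and combining a global reconstruction loss with local reconstruction losses on those subsets. It assumes that a localized subset is formed by picking a random seed point and taking its k nearest neighbors, and that each loss term is a symmetric Chamfer distance; the function names (chamfer, sample_local_subset, multi_task_loss) and the weighting scheme are hypothetical.

```python
import numpy as np


def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()


def sample_local_subset(points, k, rng):
    """Pick a random seed point and return its k nearest neighbors
    (one plausible reading of 'spatially localized point sets')."""
    seed = points[rng.integers(len(points))]
    dists = np.linalg.norm(points - seed, axis=1)
    return points[np.argsort(dists)[:k]]


def multi_task_loss(input_pts, global_recon, local_recons, k, rng, w_local=1.0):
    """Global reconstruction loss plus one local loss per parallel sub-decoder."""
    loss = chamfer(input_pts, global_recon)
    for local_recon in local_recons:  # outputs of the parallel sub-decoders
        target = sample_local_subset(input_pts, k, rng)
        loss += w_local * chamfer(target, local_recon)
    return loss


# Usage with random stand-in arrays in place of decoder outputs:
rng = np.random.default_rng(0)
pts = rng.standard_normal((2048, 3))
recon = pts + 0.01 * rng.standard_normal((2048, 3))
local_outs = [pts[:256] for _ in range(4)]  # stand-ins for four sub-decoders
print(multi_task_loss(pts, recon, local_outs, k=256, rng=rng))
```

In the paper's framework the local targets pair with dedicated sub-decoder heads and the full objective also includes global and local adversarial terms; this sketch covers only the reconstruction part of the multi-task loss.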