PVT: Point-Voxel Transformer for 3D Deep Learning.

2021 
In this paper, we present an efficient, high-performance neural architecture for 3D deep learning, termed Point-Voxel Transformer (PVT), which deeply integrates voxel-based and point-based self-attention to learn more discriminative features from 3D data. Specifically, we perform multi-head self-attention (MSA) on voxels to obtain an efficient learning pattern and coarse-grained local features, while performing self-attention on points to provide finer-grained information about the global context. In addition, to reduce the cost of MSA computation, we design a cyclic shifted boxing scheme that limits MSA to non-overlapping local boxes while preserving cross-box connections. Evaluated on the classification benchmark, our method not only achieves a state-of-the-art accuracy of 94.0% (without voting) but also outperforms previous Transformer-based models with a 7x average measured speedup. On part and semantic segmentation, our model also obtains strong performance (86.5% and 68.2% mIoU, respectively). For the 3D object detection task, replacing the primitives in Frustum PointNet with our PVT block yields an improvement of 8.6% AP.
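The cyclic shifted boxing scheme is analogous to shifted-window attention in 2D: MSA is restricted to non-overlapping local boxes, so the attention cost scales with the box volume rather than with the full voxel count, and a cyclic shift of the grid between layers lets information flow across box boundaries. Below is a minimal PyTorch sketch of this idea, not the authors' implementation; the module name, box size, shift amount, and the dense-grid layout are assumptions, and the attention mask for tokens wrapped by the shift is omitted for brevity.

```python
# Minimal sketch (assumed, not the authors' code) of cyclic-shifted
# box attention over a dense 3D voxel feature grid.
import torch
import torch.nn as nn

class ShiftedBoxMSA(nn.Module):
    def __init__(self, dim, heads=4, box=4, shift=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.box, self.shift = box, shift

    def forward(self, x):
        # x: (B, D, H, W, C) voxel features; D, H, W divisible by box.
        B, D, H, W, C = x.shape
        b, s = self.box, self.shift
        # Cyclic shift so the new boxes straddle the previous partition,
        # preserving cross-box connections without overlapping boxes.
        if s:
            x = torch.roll(x, shifts=(-s, -s, -s), dims=(1, 2, 3))
        # Partition into non-overlapping b^3 boxes: (B * nBoxes, b^3, C).
        x = x.view(B, D // b, b, H // b, b, W // b, b, C)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, b ** 3, C)
        # MSA restricted to each box: quadratic in b^3, not in D*H*W.
        x, _ = self.attn(x, x, x)
        # Reverse the box partition and the cyclic shift.
        x = x.view(B, D // b, H // b, W // b, b, b, b, C)
        x = x.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, D, H, W, C)
        if s:
            x = torch.roll(x, shifts=(s, s, s), dims=(1, 2, 3))
        return x

# Example: a 16^3 voxel grid with 32-dim features.
feats = torch.randn(2, 16, 16, 16, 32)
out = ShiftedBoxMSA(dim=32)(feats)
assert out.shape == feats.shape
```

Alternating layers with shift = 0 and shift = box // 2 would mirror the usual shifted-window design, so every voxel eventually attends across its original box boundary.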