Direction-induced convolution for point cloud analysis

2021 
Point cloud analysis is a fundamental but challenging problem in 3D scene understanding. To handle unstructured and unordered point clouds embedded in 3D space, we propose a novel direction-induced convolution (DIConv) that obtains hierarchical representations of point clouds and thereby boosts the performance of point cloud analysis. Specifically, we first construct a direction set as a basis for spatial direction information, whose entries denote the latent direction components of 3D points. For each neighbor point, we project its direction information onto the constructed direction set to obtain an array of direction-dependent weights, and then transform its features into the canonical, ordered direction-set space. After that, a standard image-like convolution can be applied to encode the unordered neighborhood regions of point cloud data. We further develop a residual DIConv (Res_DIConv) module and a farthest-point-sampling residual DIConv (FPS_Res_DIConv) module to jointly capture the hierarchical features of input point clouds. By alternately stacking Res_DIConv and FPS_Res_DIConv modules, a direction-induced convolution network (DICNet) can be built to perform point cloud analysis in an end-to-end fashion. Comprehensive experiments on three benchmark datasets (ModelNet40, ShapeNet Part, and S3DIS) demonstrate that the proposed DIConv achieves encouraging performance on both point cloud classification and semantic segmentation tasks.
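The core idea above can be sketched in a few lines: each neighbor's direction from the center point is projected onto a fixed direction set, the resulting direction-dependent weights scatter the neighbor's features into canonical ordered slots, and a standard kernel is then applied over those slots. The sketch below is a minimal NumPy illustration under stated assumptions (softmax over cosine similarities as the projection weights, a single output channel); the paper's actual weighting scheme and learned parameters may differ.

```python
import numpy as np

def diconv(center, neighbors, feats, dirs, kernel):
    """Hypothetical sketch of one direction-induced convolution step.

    center    : (3,)    coordinates of the center point
    neighbors : (K, 3)  coordinates of its K neighbors
    feats     : (K, C)  features of the K neighbors
    dirs      : (D, 3)  the constructed direction set (unit vectors)
    kernel    : (D, C)  convolution weights over the ordered slots
    """
    # Unit direction from the center point to each neighbor.
    vec = neighbors - center                                   # (K, 3)
    vec = vec / (np.linalg.norm(vec, axis=1, keepdims=True) + 1e-8)
    # Project each neighbor direction onto the direction set
    # (assumption: softmax of cosine similarity gives the weights).
    sim = vec @ dirs.T                                         # (K, D)
    w = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    # Scatter neighbor features into the canonical ordered slots.
    slots = w.T @ feats                                        # (D, C)
    # Standard image-like convolution over the ordered slots
    # reduces to a weighted sum for a single output channel.
    return (kernel * slots).sum()
```

Because the direction set imposes a fixed order on the slots, the final step is permutation-invariant with respect to the neighbor ordering, which is what lets an ordinary convolution kernel operate on unordered neighborhoods.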