A method of generating depth images for view-based shape retrieval of 3D CAD models from partial point clouds

2021 
Laser scanners can easily acquire the geometric data of physical environments in the form of point clouds. Industrial 3D reconstruction processes generally recognize objects from point clouds, producing models that include both geometric and semantic data. However, the recognition process is often a bottleneck in 3D reconstruction because it is labor intensive and requires domain expertise. To address this problem, various methods have been developed to recognize objects by retrieving their corresponding models from a database via geometric queries. In recent years, view-based 3D shape retrieval methods that convert geometric data to images have demonstrated high accuracy. Depth images, which encode depth values as pixel intensities, are frequently used for view-based 3D shape retrieval. However, geometric data collected from objects are often incomplete owing to occlusions and line-of-sight limitations, and images generated from occluded point clouds degrade view-based 3D object retrieval performance owing to the loss of information. In this paper, we propose a viewpoint and image-resolution estimation method for view-based 3D shape retrieval from point cloud queries. The viewpoint and image resolution are selected automatically using data acquisition rate and density calculations over sampled viewpoints and image resolutions. The retrieval performance for images generated by the proposed method is investigated experimentally and compared across various datasets. Additionally, view-based 3D shape retrieval performance with a deep convolutional neural network is investigated using the proposed method.
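To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of rendering a depth image from a point cloud as described in the abstract: points are projected onto an image plane of a chosen resolution, the nearest point per pixel wins (a simple z-buffer, which also models self-occlusion), and depth is encoded as pixel intensity. The function name, the fixed +z viewing direction, and the intensity mapping are illustrative assumptions.

```python
import numpy as np

def depth_image(points, resolution=64):
    """Render a point cloud of shape (N, 3) into a square depth image.

    Illustrative sketch: the cloud is viewed along the +z axis, so x/y
    map to pixel coordinates and z is the depth. For each pixel the
    nearest point (smallest z) wins, as in a z-buffer. Empty pixels
    stay 0; occupied pixels encode depth as intensities in [1, 255],
    with closer points brighter.
    """
    pts = np.asarray(points, dtype=float)

    # Normalize x/y into [0, resolution - 1] pixel coordinates.
    xy = pts[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # avoid division by zero
    px = ((xy - lo) / span * (resolution - 1)).astype(int)

    # Z-buffer: keep the nearest depth that lands on each pixel.
    zbuf = np.full((resolution, resolution), np.inf)
    for (u, v), z in zip(px, pts[:, 2]):
        if z < zbuf[v, u]:
            zbuf[v, u] = z

    # Encode depth as intensity (closer = brighter), empty pixels = 0.
    out = np.zeros((resolution, resolution), dtype=np.uint8)
    occupied = np.isfinite(zbuf)
    if occupied.any():
        z = zbuf[occupied]
        zspan = z.max() - z.min() or 1.0
        out[occupied] = (255 - (z - z.min()) / zspan * 254).astype(np.uint8)
    return out
```

Because only the nearest point per pixel is kept, points hidden behind others from the chosen viewpoint leave no trace in the image, which illustrates why occluded or partial point clouds lose information and why the choice of viewpoint and resolution matters for retrieval.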