Deep Learning Anthropomorphic 3D Point Clouds from a Single Depth Map Camera Viewpoint

2017 
In footwear, fit depends strongly on foot shape, which is not fully captured by shoe size. Scanners can acquire better sizing information and enable more personalized footwear matching; however, reconstructing a scanned object typically requires many images. Semantic knowledge of the kind of object in view can be leveraged to infer the full 3D shape from a single input view. Deep learning methods have been shown to reconstruct 3D shape from limited inputs for highly symmetric objects such as furniture and vehicles. We apply a deep learning approach to the domain of foot scanning and present a method to reconstruct a 3D point cloud from a single input depth map. Anthropomorphic body parts are challenging due to their irregular shapes, difficulty of parameterization, and limited symmetry. We train a view-synthesis-based network and show that our method produces foot scans with an accuracy of 1.55 mm from a single input depth map.
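The abstract describes a network that maps a single input depth map to a 3D point cloud of the foot. The sketch below is illustrative only and not the authors' architecture: it assumes a convolutional encoder over a 128×128 depth map, a fully connected decoder that regresses an N×3 point set, and a symmetric Chamfer distance as the training objective; the resolution, channel counts, point count, and loss are all assumptions.

```python
# Illustrative sketch (not the paper's implementation): single depth map -> point cloud.
# Input resolution, layer sizes, num_points, and the Chamfer loss are assumed for illustration.
import torch
import torch.nn as nn


class DepthToPointCloud(nn.Module):
    def __init__(self, num_points: int = 2048):
        super().__init__()
        self.num_points = num_points
        # Convolutional encoder over an assumed 1 x 128 x 128 depth map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 64 x 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 32 x 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 16 x 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # -> 8 x 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512), nn.ReLU(),
        )
        # Fully connected decoder regressing N x 3 point coordinates.
        self.decoder = nn.Linear(512, num_points * 3)

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, 128, 128) -> points: (B, N, 3)
        code = self.encoder(depth)
        return self.decoder(code).view(-1, self.num_points, 3)


def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between two point sets of shape (B, N, 3)."""
    d = torch.cdist(pred, target)  # (B, N_pred, N_target) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


if __name__ == "__main__":
    model = DepthToPointCloud(num_points=2048)
    depth = torch.rand(4, 1, 128, 128)   # batch of synthetic depth maps
    gt = torch.rand(4, 2048, 3)          # matching ground-truth point clouds
    loss = chamfer_distance(model(depth), gt)
    loss.backward()
    print(f"chamfer loss: {loss.item():.4f}")
```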