A computational model of bounded developable surfaces with application to image‐based three‐dimensional reconstruction

2013 
Developable surfaces have been extensively studied in computer graphics because they are involved in a large body of applications. This type of surface has also been used in computer vision and document processing in the context of three-dimensional (3D) reconstruction for book digitization and augmented reality. Indeed, the shape of a smoothly deformed piece of paper can be very well modeled by a developable surface. Most of the existing developable surface parameterizations do not handle boundaries or are driven by overly large parameter sets. These two characteristics become issues in the context of developable surface reconstruction from real observations. Our main contribution is a generative model of bounded developable surfaces that solves these two issues. Our model is governed by intuitive parameters whose number depends on the actual deformation, and which include the “flat shape boundary”. The vast majority of the existing image-based paper 3D reconstruction methods either require a tightly controlled environment or restrict the set of possible deformations. We propose an algorithm for reconstructing our model's parameters from a general smooth 3D surface interpolating a sparse cloud of 3D points. The latter is assumed to be reconstructed from images of a static piece of paper or any other developable surface. Our 3D reconstruction method is well adapted to the use of keypoint matches over multiple images. In this context, the initial 3D point cloud is reconstructed by structure-from-motion, for which mature and reliable algorithms now exist, and the thin-plate spline is used as a general smooth surface model. After initialization, our model's parameters are refined with model-based bundle adjustment. We experimentally validated our model and 3D reconstruction algorithm for shape capture and augmented reality on seven real datasets. The first six datasets consist of multiple images or videos and a sparse set of 3D points obtained by structure-from-motion. The last dataset is a dense 3D point cloud acquired by structured light. Our implementation has been made publicly available on the authors' web home pages. Copyright © 2012 John Wiley & Sons, Ltd.
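
The intermediate step mentioned in the abstract (fitting a thin-plate spline to the sparse structure-from-motion point cloud before the developable model is initialized) can be sketched as follows. This is not the authors' implementation; it is a minimal illustration assuming SciPy's RBFInterpolator and simplifying the smooth surface to a height field z = f(x, y) over synthetic points, whereas the paper fits a parametric bounded developable surface and refines it by model-based bundle adjustment.

```python
# Minimal sketch of the smooth-surface initialization step described in the
# abstract: interpolate a sparse 3D point cloud with a thin-plate spline.
# Assumptions: SciPy is available; the data below are synthetic stand-ins for
# points triangulated by structure-from-motion; the surface is treated as a
# height field z = f(x, y), not the authors' bounded developable model.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Sparse 3D points: (x, y) positions on the page plus a bent-paper-like depth
# with a little noise, mimicking a triangulated keypoint cloud.
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.1 * np.sin(3.0 * xy[:, 0]) + 0.02 * rng.standard_normal(200)

# Thin-plate spline interpolant; a small smoothing term absorbs triangulation noise.
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)

# Evaluate the smooth surface on a regular grid; a developable-surface model
# would then be initialized from such a surface and refined against the images.
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_grid = tps(grid).reshape(gx.shape)
print(z_grid.shape)  # (50, 50)
```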