Dense Semantic 3D Reconstruction

2017 
Both image segmentation and dense 3D modeling from images are intrinsically ill-posed problems. Strong regularizers are therefore required to keep the solutions from becoming 'too noisy'. These priors generally yield overly smooth reconstructions and/or segmentations in certain regions, while failing to constrain the solution sufficiently in other areas. In this paper, we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. On the one hand, knowing the semantic class of the geometry provides information about the likely surface direction; on the other hand, the surface direction provides information about the likely semantic class. Experimental results on several datasets highlight the advantages of our joint formulation. We show how weakly observed surfaces are reconstructed more faithfully compared to a geometry-only reconstruction. Thanks to the volumetric nature of our formulation, we also infer surfaces which cannot be directly observed, for example the surface between the ground and a building. Finally, our method returns a semantic segmentation that is consistent across the whole dataset.
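The coupling between semantic class and surface direction described above can be made concrete with a small sketch: in a volumetric multi-label setting, the cost of placing a boundary between two semantic labels can depend on the orientation of that boundary, so that, for example, ground/free-space interfaces are cheap when they face upward. The Python snippet below is a minimal illustration of such a class-dependent, anisotropic transition cost; the label names, preferred directions, weights, and cost shape are illustrative assumptions and are not taken from the paper's actual energy or implementation.

```python
# Minimal sketch (not the authors' implementation) of a class-dependent,
# anisotropic transition cost in a volumetric multi-label formulation.
# Labels, preferred normals, and weights below are illustrative assumptions.

import numpy as np

UP = np.array([0.0, 0.0, 1.0])

# Hypothetical class-pair priors: preferred interface normal (pointing from the
# first label's region into the second) and an anisotropy weight.
SURFACE_PRIORS = {
    ("ground", "free_space"): (UP, 4.0),       # ground surfaces tend to face up
    ("building", "free_space"): (None, 1.0),   # facades: no preferred direction
    ("ground", "building"): (-UP, 2.0),        # buildings rest on the ground
}

def transition_cost(label_a, label_b, normal, base_cost=1.0):
    """Cost of an a -> b label transition whose interface has the given
    unit surface normal; cheap when the normal matches the class-pair prior."""
    if label_a == label_b:
        return 0.0
    if (label_a, label_b) in SURFACE_PRIORS:
        preferred, weight = SURFACE_PRIORS[(label_a, label_b)]
    elif (label_b, label_a) in SURFACE_PRIORS:
        preferred, weight = SURFACE_PRIORS[(label_b, label_a)]
        if preferred is not None:
            preferred = -preferred  # orientation flips with the label order
    else:
        return base_cost  # unknown pair: isotropic fallback
    if preferred is None:
        return base_cost  # pair has no preferred direction
    # Penalize deviation of the interface normal from the preferred direction.
    misalignment = 1.0 - float(np.dot(normal, preferred))
    return base_cost + weight * misalignment

# Example: an upward-facing ground/free-space interface is cheap (~1.0),
# while a sideways-facing one is penalized (~5.0).
print(transition_cost("ground", "free_space", np.array([0.0, 0.0, 1.0])))
print(transition_cost("ground", "free_space", np.array([1.0, 0.0, 0.0])))
```

In a full volumetric formulation such costs would enter the smoothness term of the labeling energy, which also lets unobserved interfaces (e.g. ground under a building) be inferred; the sketch only shows the per-face cost, not the global optimization.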