Color-indoor: Incorporating Depth into Room Decoration Visualization

2021 
Combined with computer vision technology, we propose a system, named Color-indoor, to automatically visualize the decoration effect of complex 3D indoor scenes. Given a preferred color and RGB-D images, the Color-indoor system can be used to replace colors, edit textures, and synthesize 3D results for specified semantic regions of the input image. The key idea of Color-indoor is to leverage depth information to guide the entire segmentation process and 3D data synthesis. We propose a depth-fusion criss-cross attention semantic segmentation framework (DFCCN) for parsing indoor semantic scenes, and introduce a depth branch to better extract geometric information from different semantic areas. DFCCN extracts and fuses features from the RGB branch and the depth branch, so that the segmentation network obtains more geometric information and enriches the structural details of its features. Once the specified semantic regions are located, a simple yet effective editing algorithm is applied for color and texture replacement. Combined with the camera parameters, a 3D data synthesis algorithm then generates 3D results from the edited images and depth images. For training and testing, we build a new RGB-D dataset on top of NYUv2 with 6 semantic labels. The experimental and visual results demonstrate that the proposed Color-indoor can generate harmonious 3D results.
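The abstract does not detail how the 3D data synthesis step works; as a rough illustration only, the sketch below shows a standard pinhole-camera back-projection that turns an edited RGB image and its aligned depth map into a colored point cloud, optionally restricted to one semantic region. The function name, the intrinsics arguments, and the semantic mask are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def backproject_rgbd(color, depth, fx, fy, cx, cy, mask=None):
    """Back-project an edited RGB image and its depth map into a colored
    3D point cloud using the pinhole camera model (hypothetical helper).

    color : (H, W, 3) array, the edited RGB image
    depth : (H, W) float array, depth in meters (0 = invalid)
    fx, fy, cx, cy : camera intrinsics
    mask  : optional (H, W) bool array selecting one semantic region
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid

    valid = depth > 0                # drop pixels with no depth measurement
    if mask is not None:
        valid &= mask                # keep only the chosen semantic region

    z = depth[valid]
    x = (u[valid] - cx) * z / fx     # X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy     # Y = (v - cy) * Z / fy

    points = np.stack([x, y, z], axis=1)   # (N, 3) camera-space coordinates
    colors = color[valid].reshape(-1, 3)   # (N, 3) per-point edited color
    return points, colors
```

Under this assumption, the edited image supplies the point colors while the unmodified depth map and camera parameters fix the geometry, which is why the synthesized 3D result preserves the scene structure after color or texture replacement.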