Color and Depth-Based Superpixels for Background and Object Segmentation

2012 
Abstract We present an approach to multimodal semantic segmentation based on both color and depth information. Our goal is to build a semantic map containing high-level information, namely objects and background categories (carpet, parquet, walls, …). This approach was developed for the Panoramic and Active Camera for Object Mapping (PACOM) project in order to participate in a French exploration and mapping contest called CAROTTE. Our method is based on a structured output prediction strategy to detect the various elements of the environment, using both color and depth images from the Kinect camera. The image is first over-segmented into small homogeneous regions named “superpixels”, which are then classified and characterized using a bag-of-features representation. For each superpixel, texture and color descriptors are computed from the color image, and 3D descriptors are computed from the associated depth image. A Markov Random Field (MRF) model then fuses texture, color, depth, and neighborhood information to assign a label to each superpixel extracted from the image. We evaluate different segmentation algorithms for the semantic labeling task and show the benefit of integrating depth information into the superpixel computation.
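
To illustrate the over-segmentation step, the following is a minimal sketch of computing superpixels jointly from color and depth, here using scikit-image's SLIC with the depth map stacked as an extra channel. This is an assumption for illustration only: the abstract does not specify the segmentation algorithm, and the rgbd_superpixels helper and its depth_weight parameter are hypothetical, not the authors' implementation.

    # Illustrative sketch (not the paper's code): RGB-D over-segmentation
    # into superpixels, treating normalized depth as a fourth channel so
    # that cluster boundaries follow both color and depth discontinuities.
    import numpy as np
    from skimage import img_as_float
    from skimage.segmentation import slic

    def rgbd_superpixels(color, depth, n_segments=400,
                         compactness=10.0, depth_weight=1.0):
        """Return an (H, W) integer label map of superpixel indices.

        color: (H, W, 3) RGB image; depth: (H, W) depth map (e.g. Kinect).
        depth_weight (hypothetical parameter) scales depth's influence
        relative to color in the clustering distance.
        """
        color = img_as_float(color)
        d = depth.astype(np.float64)
        d = (d - d.min()) / (np.ptp(d) + 1e-9)       # normalize depth to [0, 1]
        rgbd = np.dstack([color, depth_weight * d])  # stack into a 4-channel image
        # convert2lab=False: the 4-channel input is no longer plain RGB
        return slic(rgbd, n_segments=n_segments, compactness=compactness,
                    convert2lab=False, start_label=0)

Each resulting superpixel would then be described by the color, texture, and 3D descriptors mentioned above and labeled jointly with its neighbors by the MRF model.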