Saliency-based object discovery on RGB-D data with a late-fusion approach

2015 
We present a novel method based on saliency and segmentation to generate generic object candidates from RGB-D data. Our method uses saliency as a cue to roughly estimate the location and extent of the objects present in the scene. Salient regions are used to glue together the segments obtained from over-segmenting the scene with color or depth segmentation algorithms, or with a combination of both. We suggest a late-fusion approach that first extracts segments from color and depth independently before fusing them, in order to exploit the complementary nature of the two modalities. Furthermore, we investigate several mechanisms for ranking the object candidates. We evaluate our method on a publicly available dataset and on a challenging sequence with a high degree of clutter. The results show that we are able to retrieve most objects in real-world indoor scenes and clearly outperform other state-of-the-art methods.
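The following is a minimal sketch of the late-fusion pipeline outlined above, assuming a precomputed saliency map and Felzenszwalb over-segmentation from scikit-image. The function name, parameters, and the mean-saliency ranking are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of saliency-guided late fusion for generic object candidates.
# Assumptions: saliency map is precomputed; Felzenszwalb over-segmentation
# stands in for whichever color/depth segmentation algorithms are used.
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.measure import label, regionprops

def late_fusion_candidates(rgb, depth, saliency, sal_thresh=0.5):
    """Generate ranked object-candidate masks from RGB-D data.

    rgb      : (H, W, 3) float image in [0, 1]
    depth    : (H, W) float depth map
    saliency : (H, W) float saliency map in [0, 1] (assumed precomputed)
    """
    # 1. Over-segment color and depth independently (late fusion).
    color_segs = felzenszwalb(rgb, scale=100, sigma=0.8, min_size=50)
    depth_segs = felzenszwalb(np.dstack([depth] * 3),
                              scale=100, sigma=0.8, min_size=50)

    # 2. Salient regions = connected components of the thresholded saliency map.
    salient_regions = label(saliency > sal_thresh)

    candidates = []
    for region in regionprops(salient_regions):
        region_mask = salient_regions == region.label

        # 3. "Glue" together all color and depth segments that overlap the
        #    salient region; their union forms one object candidate.
        candidate = np.zeros_like(region_mask)
        for segs in (color_segs, depth_segs):
            for seg_id in np.unique(segs[region_mask]):
                candidate |= (segs == seg_id)

        # 4. Rank candidates by mean saliency inside the candidate mask
        #    (one of several possible ranking mechanisms).
        score = saliency[candidate].mean()
        candidates.append((score, candidate))

    candidates.sort(key=lambda sc: sc[0], reverse=True)
    return candidates
```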