Context-based global multi-class semantic image segmentation by wireless multimedia sensor networks

2015 
Using context to aid object detection is becoming more popular among computer vision researchers. Our physical world is structured, and human perception does not neglect contextual information. In this paper, we propose a framework that simultaneously detects and segments objects of different classes using context. Context is incorporated into our model as long-range pairwise interactions between pixels, which impose a prior on the labeling. Long-range interactions have seldom been used in the computer vision literature, and we show how to use them to encode contextual information in our segmentation. Our framework formulates the multi-class image segmentation task as an energy minimization problem and finds a globally optimal solution, under certain conditions, using a single graph cut. We experimentally evaluate the performance of our model on two publicly available datasets, MSRC-1 and CorelB. Our results show the applicability of our model to the multi-class segmentation problem.
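The paper's formulation covers multiple classes with long-range contextual interactions; as an illustrative sketch only, the standard two-label special case of exact energy minimization by a single s-t graph cut can be written as below. The `segment` and `max_flow` helpers, the Edmonds-Karp solver, and the 1-D demo are assumptions of this sketch, not the authors' code or construction.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow. Returns (flow value, flow dict) so the
    residual graph can be inspected afterwards to read off the min cut."""
    flow = defaultdict(lambda: defaultdict(int))
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in set(cap[u]) | set(flow[u]):  # residual neighbours
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total, flow
        # Trace the path back and push the bottleneck amount along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
        total += bottleneck

def segment(unary, edges, lam):
    """Exact binary labeling minimizing
        E(x) = sum_p unary[p][x_p] + lam * sum_{(p,q)} [x_p != x_q]
    with one s-t min cut (the Potts pairwise term is submodular
    for two labels, so the cut is globally optimal)."""
    S, T = 's', 't'
    cap = defaultdict(lambda: defaultdict(int))
    for p, (u0, u1) in unary.items():
        cap[S][p] = u1          # edge cut when p takes label 1
        cap[p][T] = u0          # edge cut when p takes label 0
    for p, q in edges:          # pairwise: pay lam when labels differ
        cap[p][q] += lam
        cap[q][p] += lam
    energy, flow = max_flow(cap, S, T)
    # Pixels still reachable from the source in the residual graph -> label 0.
    seen, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v in set(cap[u]) | set(flow[u]):
            if v not in seen and cap[u][v] - flow[u][v] > 0:
                seen.add(v)
                q.append(v)
    return {p: 0 if p in seen else 1 for p in unary}, energy

# Demo: denoise a 1-D "image" with one flipped pixel.
obs = [0, 1, 0, 0, 1, 1]
unary = {p: (2 * obs[p], 2 * (1 - obs[p])) for p in range(6)}  # cost 2 per mismatch
edges = [(p, p + 1) for p in range(5)]
labels, energy = segment(unary, edges, lam=3)
print([labels[p] for p in range(6)], energy)  # -> [0, 0, 0, 0, 1, 1] 5
```

The smoothing term overrules the single noisy observation at pixel 1, and the flow value equals the minimum energy because the cut cost encodes the energy exactly. Extending this to multiple classes and long-range interactions while keeping a single cut globally optimal is precisely the contribution the abstract claims, under the conditions the paper states.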