Maximum mean discrepancy regularized sparse reconstruction for robust salient regions detection

2017 
Sparse-reconstruction-based saliency detection methods are gaining popularity in object detection and content-based image retrieval due to their simplicity and ease of understanding. They extract concise representations of the stimuli and capture high-level semantics of visual information with only a few active coefficients. Unlike conventional sparse representation techniques, which highlight only the borders of salient objects, we propose a novel regularized sparse coding method that preserves similarity and locality, achieving a smooth sparse representation that evenly highlights the entire salient object. In this study, we propose a novel Maximum Mean Discrepancy (MMD) regularized sparse representation method for salient region detection. First, the resemblance and locality of superpixels are preserved by constructing a graph regularization term, which increases the fidelity of the salient coefficient score of each visual part. Second, the divergence between the distributions of similar regions is reduced by an MMD regularization term. Furthermore, a reconstructive background dictionary is extracted from background pixels enriched with visual and geometric information; saliency scores computed with this dictionary suppress the background more accurately. We evaluate our model on four large benchmark datasets using five evaluation metrics; the results show that the proposed model performs well and compares favorably with existing state-of-the-art schemes.

Highlights
    • We exploit visual and contextual information to precisely extract salient objects.
    • Our extracted background dictionary is very effective in removing background noise.
    • We include a Laplacian term to preserve similarity and locality among salient regions.
    • Our MMD regularization term turns sparse coding into an effective representation.
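The abstract does not spell out the form of the MMD regularizer, but the quantity it penalizes is the standard kernel two-sample statistic: the squared distance between the mean embeddings of two feature sets in an RKHS. A minimal NumPy sketch of the biased (V-statistic) estimator follows; the RBF bandwidth `gamma` and the synthetic region features are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Pairwise squared Euclidean distances, then the Gaussian kernel.
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=0.0625):
    """Biased (V-statistic) estimate of squared MMD under an RBF kernel.

    Equals the squared RKHS distance between the empirical mean
    embeddings of X and Y, so it is always non-negative.
    """
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Synthetic stand-ins for superpixel feature sets (assumed, for illustration):
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 8))   # features of one region
Y = rng.normal(0.0, 1.0, (200, 8))   # similar region: same distribution
Z = rng.normal(3.0, 1.0, (200, 8))   # dissimilar region: shifted mean
assert 0.0 <= mmd2(X, Y) < mmd2(X, Z)
```

Similar regions yield a near-zero discrepancy while dissimilar ones do not, which is why adding this term to the sparse coding objective encourages similar regions to receive similar codes.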