Visual saliency on networks of neurosynaptic cores

2015 
Identifying interesting or salient regions in an image plays an important role in multimedia search, object tracking, active vision, segmentation, and classification. Existing saliency extraction algorithms are implemented using the conventional von Neumann computational model. We propose a bottom-up model of visual saliency, inspired by the primate visual cortex, that is compatible with TrueNorth, a low-power, brain-inspired neuromorphic substrate that runs large-scale spiking neural networks in real time. Our model uses color, motion, luminance, and shape to identify salient regions in video sequences. For a three-color-channel video with 240 $\times$ 136 pixels per frame and 30 frames per second, we demonstrate a model utilizing $\sim$3 million neurons, which achieves competitive detection performance on a publicly available dataset while consuming $\sim$200 mW.
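As a rough illustration of the bottom-up, feature-channel saliency pipeline the abstract describes, the sketch below combines center-surround contrast over luminance and color-opponency channels with a simple frame-difference motion cue (the shape channel is omitted). It is a conventional NumPy implementation for clarity only, not the authors' TrueNorth spiking-network model; the function names, blur radii, and equal-weight channel combination are illustrative assumptions.

```python
import numpy as np


def box_blur(img, radius):
    """Average each pixel over a (2*radius+1)^2 neighborhood (edge-padded)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)


def center_surround(feature, center_radius=2, surround_radius=8):
    """Center-surround contrast: |fine-scale blur - coarse-scale blur|.

    A crude stand-in for the multi-scale differences used in classic
    bottom-up saliency models.
    """
    return np.abs(box_blur(feature, center_radius)
                  - box_blur(feature, surround_radius))


def saliency_map(frame_rgb, prev_frame_rgb=None):
    """Combine luminance, color-opponency, and motion contrast into one map."""
    frame = frame_rgb.astype(float) / 255.0
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]

    luminance = (r + g + b) / 3.0
    red_green = r - g                 # crude red/green opponency
    blue_yellow = b - (r + g) / 2.0   # crude blue/yellow opponency

    maps = [
        center_surround(luminance),
        center_surround(red_green),
        center_surround(blue_yellow),
    ]

    if prev_frame_rgb is not None:
        prev_lum = prev_frame_rgb.astype(float).mean(axis=-1) / 255.0
        maps.append(np.abs(luminance - prev_lum))  # frame-difference "motion"

    # Normalize each feature map to [0, 1] and average them.
    normed = [(m - m.min()) / (m.max() - m.min() + 1e-8) for m in maps]
    return np.mean(normed, axis=0)


# Example on a random 136x240 RGB frame pair (the resolution cited above).
frame0 = np.random.randint(0, 256, (136, 240, 3), dtype=np.uint8)
frame1 = np.random.randint(0, 256, (136, 240, 3), dtype=np.uint8)
print(saliency_map(frame1, frame0).shape)  # (136, 240)
```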