Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

2015 
Since its birth in 1989, with the publication of Carver Mead’s book [1], the field of neuromorphic engineering has aimed at embodying computational principles operating in the nervous system in analog VLSI electronic devices. In a way, this endeavour may be seen as one modern instance of an over three-centuries-long attempt to map forms of intelligent behavior onto a physical substrate reflecting the best technology of the day [2]. The additional twist of neuromorphic engineering is its case for a direct mapping of the dynamics of neurons and synapses onto the physics of corresponding analog circuits. Initial success came mostly in emulating sensory functions (e.g. visual or auditory perception), and important developments are still ongoing in this area [3,4,5]. However, it soon became clear that the agenda should include serious efforts to emulate, alongside such implementations, elements of information processing downstream of the sensory stages, with the ultimate goal of approaching cognitive functions. To make progress in this direction, beyond special-purpose solutions for specific functions, it seems important to identify neural circuitry implementing basic, and hopefully generic, dynamic building blocks, providing reusable computational primitives that may subserve many types of information processing; indeed, this is both a theoretical quest and an item on the agenda of neuromorphic science. Steps in this direction have been taken recently in [6], where ‘soft winner-take-all’ subnetworks provide reliable generic elements for composing finite-state machines capable of context-dependent computation. A review of the electronic circuits involved in such implementations is given in [7].

In recurrent neural populations, synaptic self-excitation can support attractor dynamics; point attractors, the simplest instance, are the ones on which our approach is based. Point attractors are stable configurations of the network dynamics: from any configuration inside the ‘basin’ of one attractor state, the dynamics brings the network towards that attractor, where it remains (possibly up to fluctuations if noise is present). In a system possessing several point attractor states, the dynamic correspondence between each attractor and its basin naturally implements an associative memory, the initial state within the basin being a metaphor for an initial stimulus that elicits (even if removed afterwards) the retrieval of an associated prototypical piece of information (the memory). For a given network size and connectivity graph, the set of available attractor states is determined by the matrix of synaptic efficacies weighting the links of the graph; learning memories is therefore implemented through stimulus-specific changes in the synaptic matrix [8,9]. The attractor-basin correspondence implements a form of dimensionality reduction. Moreover, the stimulus-selective, self-sustained neural activity that follows the removal of the stimulus which elicited it can act as a carrier of selective information across time intervals of unconstrained duration, limited only (in the absence of other intervening stimuli) by the stability of the attractor state against fluctuations [8,9].
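The associative-memory role of point attractors described above can be illustrated with a minimal software sketch. The binary Hopfield-style network below is only an idealized analogue of the spiking VLSI dynamics studied in this work; the network size, the two random prototype patterns and the fraction of corrupted units are arbitrary choices made for the example.

# Minimal point-attractor / associative-memory sketch (binary Hopfield-style network).
# Illustrative software analogue only, not the spiking VLSI dynamics of the chip;
# network size, patterns and corruption level are arbitrary example choices.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # number of +/-1 units
patterns = rng.choice([-1, 1], size=(2, N))    # two prototype memories

# Hebbian synaptic matrix: sum of outer products, no self-connections
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def retrieve(state, steps=20):
    """Run the recurrent dynamics; each step moves the state towards an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Start inside the basin of pattern 0: flip 20% of its units (a 'degraded stimulus')
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1

recovered = retrieve(cue)
print("overlap with stored memory:", recovered @ patterns[0] / N)   # ~1.0 -> full retrieval

Starting the dynamics anywhere inside the basin, including from the corrupted cue, the network settles onto the stored prototype; this is the sense in which the attractor-basin correspondence acts as an associative memory.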
In a previous paper [10] we demonstrated attractor dynamics in a neuromorphic chip in which the synaptic efficacies were chosen and fixed so as to support the desired attractor states. If attractor dynamics is to be considered an interesting generic element of computation and representation for neuromorphic systems, we must address the question of how it can autonomously emerge from the ongoing stimulus-driven neural dynamics and the ensuing synaptic plasticity; this is what we do in the present work. To date, only sparse theoretical efforts have been devoted to this question (see [11,12,13]), and to our knowledge it has never been undertaken in a neuromorphic chip. Here, in line with our previous papers [13,14], and consistently with the above principles, we focus on the autonomous formation of attractor states as associative memories of simple visual objects. Our setting is simple, in that our VLSI network learns two relatively simple, non-overlapping visual objects. Still, it is complex, in that learning proceeds autonomously (that is, without a supervised mechanism to monitor errors and instruct synaptic changes): synapses change under the local (in space and time) guidance of the spiking activities of the neurons they connect, which in turn change their response to stimuli and their average activity because of the synaptic modifications. Such a dynamic loop makes the combined dynamics of neurons and synapses during learning quite complex, and controlling it a tricky business; even more so in a neuromorphic analog chip, with the implied heterogeneities, mismatches and the like.

To gain predictive control over the chip’s learning dynamics, we first characterize the single-neuron input-output gain function. Then, we use the mean-field theory of recurrent neural networks as a compass to navigate the parameter space of a population of neurons endowed with massive positive feedback and to predict attractor states. Finally, we measure the rates of change (potentiation or depression) of the Hebbian, stochastic synapses as a function of the pre- and post-synaptic neural activities. These three characterization measurements let us choose the correct settings for a successful learning trajectory. We then proceed with experiments on the autonomous learning capabilities of the system and, finally, we test the attractor property of the developed internal representations of the learnt stimuli by checking that, when presented with a degraded version of such stimuli, the network dynamically reconstructs the complete representation. To our knowledge this is the first demonstration of a VLSI neuromorphic system implementing online, autonomous learning.
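To make the mean-field step above concrete, the sketch below shows the kind of self-consistency calculation involved: given a single-neuron gain function Phi and an effective recurrent coupling w, the rate of a self-excited population must satisfy nu = Phi(w*nu + I_ext), and the attractor states are the stable solutions of this equation. The sigmoidal Phi, the coupling values and the external current used here are illustrative placeholders, not the quantities measured on the chip.

# Schematic mean-field self-consistency check: the population rate of a recurrently
# connected, self-excited population must satisfy nu = Phi(w * nu + I_ext).
# Phi, w and I_ext below are illustrative placeholders, not measured chip values.
import numpy as np

def phi(current, nu_max=200.0, gain=0.1, threshold=100.0):
    """Placeholder sigmoidal gain function: output rate (Hz) vs. input current (a.u.)."""
    return nu_max / (1.0 + np.exp(-gain * (current - threshold)))

def fixed_points(w, I_ext, grid=np.linspace(0.0, 200.0, 200001)):
    """Locate rates where the recurrent map crosses the identity (sign changes of F)."""
    F = phi(w * grid + I_ext) - grid
    idx = np.where(np.sign(F[:-1]) != np.sign(F[1:]))[0]
    return grid[idx]

# Weak positive feedback: only the low-rate 'spontaneous' state exists.
# Strong positive feedback: a high-rate 'memory' state appears as a second stable
# fixed point, separated from the spontaneous one by an unstable fixed point.
for w in (0.5, 1.0):
    print(f"w = {w}: fixed points near {np.round(fixed_points(w, I_ext=10.0), 1)} Hz")

Sweeping the coupling in this way is, schematically, how a mean-field picture can guide the choice of network parameters: one looks for a regime in which a stimulus-selective, high-rate attractor coexists with the low-rate spontaneous state.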