In-memory Reservoir Computing: exploiting memristive devices in Spiking Neural Networks

2019 
Neuromorphic computing has often drawn on the analysis of biological systems to improve its performance. One of the key properties of the Nervous System is Plasticity, i.e. the capacity of its components (Neurons and Synapses) to modify themselves, functionally and structurally, in response to experience and injury. In the brain, plasticity is mainly attributed to Synapses, which control the flow of ions between Neurons through neurotransmitters with a weight that varies over time. Neurons also play a role in plasticity mechanisms, since their excitability and the leakage of internalized ions can be adjusted to keep a healthy firing regime, minimizing energy consumption and maximizing information transfer. Applied to Neural Networks, these plasticity mechanisms not only increase biological plausibility but also improve the performance of the system on several tasks. This is particularly true for Liquid State Machines, a computing paradigm based on Spiking Neural Networks within the framework of Reservoir Computing.

Different forms of plasticity are present in the Brain, and in turn in Brain-inspired Neural Networks: the most popular is Spike-Timing-Dependent Plasticity (STDP), a form of synaptic Hebbian plasticity; Synaptic Normalization (SN) is also a common homeostatic feature; Intrinsic Plasticity (IP) is instead a less investigated property for Neuromorphic systems, probably because of the difficulty of implementing it in hardware devices. The co-action of these mechanisms has been shown to boost performance in the framework of Reservoir Computing for Artificial Neural Networks (SORN, Lazar 2009), while it remains to be investigated for the more biologically plausible Spiking Neural Networks (Liquid State Machines).

From the hardware standpoint, conventional CMOS-based Neuromorphic hardware struggles to implement such plasticity mechanisms, particularly Intrinsic Plasticity: updating the parameters of the network requires operating external memory, transferring the modified biases back and forth between the registers and the computational units. This leads to the well-known von Neumann bottleneck, in which the transfer of information across the device limits the speed of the device itself. The rise of memristors has allowed new architectures able to store memory within the Neuron's scheme, creating advanced circuits embedding both the Neuron's circuit and its biases. By modifying the Neuron's properties at the circuit level, learning is enabled in situ. Exploiting the programmability of memristors, the resistances of the Neuron can be set to a target value and automatically updated every cycle, without the need to store their state in external memories, allowing a true non-von Neumann architecture to be conceived.

This work aims to show that technologically plausible In-Memory Mixed-Signal architectures allow for the development of algorithms, implementing plasticity mechanisms, that improve the performance of Liquid State Machines in temporal tasks.
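As a rough illustration of how the three mechanisms named above could co-act in a reservoir, the following minimal sketch (not the implementation described in this work) simulates a small recurrent network of leaky integrate-and-fire neurons in NumPy, with pair-based STDP, synaptic normalization of each neuron's incoming weights, and an intrinsic-plasticity rule that adapts firing thresholds toward a target rate. All parameter values, the external drive, and the specific update rules are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                       # number of reservoir neurons
dt = 1.0                      # time step (ms)
tau_m = 20.0                  # membrane time constant (ms)
tau_tr = 20.0                 # STDP trace time constant (ms)
a_plus, a_minus = 0.01, 0.012 # STDP potentiation / depression amplitudes
rho_target = 0.05             # target firing probability per step (IP set point)
eta_ip = 1e-3                 # intrinsic-plasticity learning rate

W = rng.random((N, N)) * 0.1  # recurrent weights (analogue of memristor conductances)
np.fill_diagonal(W, 0.0)
target_in = W.sum(axis=1, keepdims=True).copy()  # per-neuron incoming-weight budget (SN)

v = np.zeros(N)               # membrane potentials
theta = np.ones(N)            # adaptive firing thresholds (target of IP)
spikes = np.zeros(N)          # spikes emitted at the previous step
x = np.zeros(N)               # exponential spike traces used by STDP

for t in range(1000):
    i_ext = rng.random(N) * 0.5                   # placeholder external drive

    # Leaky integrate-and-fire dynamics with recurrent and external input.
    v += dt / tau_m * (-v + W @ spikes + i_ext)
    spikes = (v >= theta).astype(float)
    v[spikes > 0] = 0.0                           # reset membrane after a spike

    # STDP: potentiate pre->post pairs where pre fired before post, depress the reverse.
    x = x * np.exp(-dt / tau_tr) + spikes
    W += a_plus * np.outer(spikes, x) - a_minus * np.outer(x, spikes)
    np.clip(W, 0.0, 1.0, out=W)
    np.fill_diagonal(W, 0.0)

    # Synaptic normalization: rescale rows so each neuron's total incoming weight stays fixed.
    row = W.sum(axis=1, keepdims=True)
    W *= target_in / np.maximum(row, 1e-9)

    # Intrinsic plasticity: nudge each threshold so its neuron's activity approaches
    # rho_target (in an in-memory architecture this update would be applied directly
    # to the memristive element setting the neuron's bias, not to external memory).
    theta += eta_ip * (spikes - rho_target)
    np.clip(theta, 0.1, None, out=theta)
```

In an in-memory mixed-signal realization of this kind of loop, the weight and threshold updates would correspond to reprogramming device conductances in place each cycle, rather than shuttling the modified parameters between computational units and external memory.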