Self-repairing Learning Rule for Spiking Astrocyte-Neuron Networks
Junxiu Liu, Liam McDaid, Jim Harkin, John Wade, Shvan Karim, Anju P. Johnson, Alan G. Millard, David M. Halliday, Andy M. Tyrrell, Jon Timmis
10 Citations · 19 References · 10 Related Papers
Keywords: Learning rule, Spike-timing-dependent plasticity
A Spiking Neural Network (SNN) is a network of neurons that communicate through voltage or current spikes and are connected by synapses. Spike-Timing-Dependent Plasticity (STDP) is a learning rule that governs how strongly two neurons are connected based on the relative timing of their spikes. In this work, we propose a 32 nm analog CMOS implementation of an STDP-based synapse. Simulation results demonstrate that the circuit effectively emulates STDP behavior, and the proposed circuit can serve as a synapse in the construction of a complete SNN architecture.
Keywords: Spike-timing-dependent plasticity, Learning rule
Citations: 0
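For reference, the pair-based STDP rule that such a circuit emulates can be written in a few lines of Python. The amplitudes and 20 ms time constants below are illustrative values, not the parameters of the proposed 32 nm circuit.

import math

def pair_stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.

    delta_t_ms = t_post - t_pre. A pre-spike shortly before a post-spike
    (delta_t_ms > 0) potentiates the synapse; the reverse order depresses it.
    """
    if delta_t_ms >= 0.0:
        return a_plus * math.exp(-delta_t_ms / tau_plus)
    return -a_minus * math.exp(delta_t_ms / tau_minus)

# Example: a 5 ms pre-before-post pairing strengthens, post-before-pre weakens.
print(pair_stdp_dw(5.0))    # > 0 (LTP)
print(pair_stdp_dw(-5.0))   # < 0 (LTD)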
Spike-Timing-Dependent Plasticity (STDP), wherein synaptic weights are modified based on the temporal correlation between a pair of pre- and post-synaptic spikes, is widely used to implement unsupervised learning in Spiking Neural Networks (SNNs). In general, STDP-based learning models disregard the information embedded in the post-neuronal spiking frequency. We observe that updating the synaptic weights at every post-neuronal spike while ignoring the spiking frequency could cause them to learn overlapping representations of multiple input patterns that share common features. We present STDP-based enhanced plasticity mechanisms that account for the spiking frequency to achieve efficient synaptic learning. First, we use the low-pass-filtered neuronal membrane potential as an estimate of the spiking frequency and perform STDP-driven weight updates at a post-spike only if the filtered potential exceeds a defined threshold. This ensures that plasticity is applied to the dominantly firing neuron, which indicates a strong bias toward learning the input pattern, while synaptic updates are restrained under sporadic spiking activity, which implies a weak correlation with the input pattern. This enhances the quality of the features encoded by the synapses, yielding a 5.8% improvement in the classification accuracy of an SNN of 100 neurons trained for digit recognition. Our simulations further show that the enhanced scheme provides a 2× reduction in the number of weight updates, which improves energy efficiency in event-driven SNN implementations. Second, we explore a spike-count-based enhanced plasticity mechanism, in which synapses are modified at a post-spike only if the neuron has fired a certain number of spikes since the preceding update. This scheme performs delayed updates at suitable spiking instants to learn improved synaptic representations; with this technique, the classification accuracy increased by 4% with a 5.2× reduction in the number of weight updates.
Keywords: Spike-timing-dependent plasticity, Learning rule, Synaptic weight
Citations: 25
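A minimal sketch of the frequency-gated update described in the abstract above, assuming a first-order low-pass filter of the membrane potential and an illustrative gating threshold (both hypothetical choices rather than the authors' exact settings):

def lowpass(v_filtered_prev, v_now, alpha=0.1):
    """First-order low-pass filter of the membrane potential."""
    return (1.0 - alpha) * v_filtered_prev + alpha * v_now

def gated_stdp_update(w, dw, v_filtered, gate_threshold=0.5):
    """Apply an STDP-driven weight change only if the filtered membrane
    potential indicates sustained (dominant) firing of the post-neuron."""
    if v_filtered >= gate_threshold:
        return w + dw   # dominant firing: commit the update
    return w            # sporadic firing: suppress the update

The gate is evaluated only at post-spike instants, so sporadic activity leaves the synapse untouched and the number of committed updates drops accordingly.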
Spiking neural networks (SNNs) offer many advantages over traditional artificial neural networks (ANNs), such as biological plausibility, fast information processing, and energy efficiency. Although SNNs have been used to solve a variety of control tasks with the modulated Spike-Timing-Dependent Plasticity (STDP) learning rule, existing solutions usually rely on hard-coded network architectures built for specific tasks rather than solving tasks in the generic way traditional ANNs do. This neglects one of the biggest advantages of ANNs: being general-purpose and easy to use thanks to a simple architecture, usually an input layer, one or more hidden layers, and an output layer. This paper addresses the problem by introducing an end-to-end learning approach for spiking neural networks constructed with one hidden layer and R-STDP synapses connected in an all-to-all fashion. We use the supervised reward-modulated STDP (R-STDP) learning rule to train two SNN-based sub-controllers to replicate desired obstacle-avoiding and goal-approaching behaviours provided by pre-generated datasets. Together they form a target-reaching controller used to steer a simulated mobile robot to a target area while avoiding obstacles in its path. We demonstrate the performance and effectiveness of the trained SNNs on target-reaching tasks in different unknown scenarios.
Keywords: Spike-timing-dependent plasticity, Learning rule, Supervised Learning
Citations: 32
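The reward-modulated rule referred to above is commonly written as an eligibility trace that accumulates STDP events and is turned into a weight change by a reward signal. The sketch below follows that generic form with illustrative time constants and learning rate, not the paper's controller parameters.

def r_stdp_step(w, eligibility, stdp_dw, reward, dt=1.0,
                tau_e=25.0, learning_rate=0.01):
    """One R-STDP step for a single synapse: pair-based STDP events feed a
    decaying eligibility trace, and the reward signal converts that trace
    into an actual weight change."""
    eligibility += -eligibility * (dt / tau_e) + stdp_dw   # leaky accumulation
    w += learning_rate * reward * eligibility              # reward-modulated update
    return w, eligibility

In a supervised setting such as the one described here, the "reward" can simply be an error-derived signal computed from the desired behaviour in the pre-generated dataset.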
Hardware implementations of Artificial Neural Network (ANN) algorithms, which are currently widely used by the data-science community, offer advantages that software implementations lack: tight coupling of memory and computation, high speed, and low energy dissipation. In this paper, we simulate a spintronic hardware implementation of a third-generation neural network: a Spiking Neural Network (SNN) with Spike-Timing-Dependent Plasticity (STDP) learning, which is closer to the functioning of the brain than most other ANNs. A spin-orbit-torque-driven skyrmionic device, controlled by a transistor-based circuit to enable STDP, is used as the synapse. We combine micromagnetic simulations, transistor-circuit simulations, and an implementation of the SNN algorithm in a numerical package to simulate our skyrmionic SNN. We train the skyrmionic SNN on different datasets under a supervised learning scheme and calculate the energy dissipated in updating the synaptic weights during training.
Keywords: Spike-timing-dependent plasticity
Citations: 2
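The training-energy figure reported in studies of this kind typically comes down to tallying a device-level cost for every synaptic update performed during training. The sketch below illustrates that bookkeeping with a purely hypothetical per-update energy value; the actual cost depends on the device physics extracted from the micromagnetic and circuit simulations.

def training_energy_fj(num_weight_updates, energy_per_update_fj=100.0):
    """Total training energy from a per-update cost (hypothetical value, in fJ)."""
    return num_weight_updates * energy_per_update_fj

# e.g. one million synaptic updates at 100 fJ each -> 1e8 fJ = 0.1 uJ
print(training_energy_fj(1_000_000), "fJ")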
Synapses play an important role in learning in a neural network; the learning rule that modifies synaptic strength based on the timing difference between pre- and post-synaptic spikes is termed Spike-Timing-Dependent Plasticity (STDP). This paper describes a compact implementation of a synapse using a single floating-gate (FG) transistor (plus two additional high-voltage transistors) that can store a weight in a non-volatile manner and demonstrates the triplet STDP (T-STDP) learning rule, which was developed to explain biologically observed plasticity. We describe a mathematical procedure for obtaining the control voltages of the FG device for T-STDP and present measurement results from an FG synapse fabricated in a TSMC 0.35 μm CMOS process to support the theory.
Keywords: Spike-timing-dependent plasticity, Learning rule, Synaptic weight
Citations: 4
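For context, the triplet rule extends pair-based STDP with slow traces, so potentiation at a post-spike also depends on earlier post-spikes (and depression at a pre-spike on earlier pre-spikes). The trace-based sketch below follows the standard triplet formulation with illustrative parameters; it does not reproduce the floating-gate control voltages derived in the paper.

import math

class TripletSTDP:
    """Minimal trace-based sketch of triplet STDP (illustrative parameters)."""

    def __init__(self, a2_plus=5e-3, a3_plus=6e-3, a2_minus=7e-3, a3_minus=2e-4,
                 tau_plus=16.8, tau_x=101.0, tau_minus=33.7, tau_y=125.0):
        self.a = (a2_plus, a3_plus, a2_minus, a3_minus)
        self.tau = (tau_plus, tau_x, tau_minus, tau_y)
        self.r1 = self.r2 = self.o1 = self.o2 = 0.0   # fast/slow pre and post traces

    def decay(self, dt_ms):
        """Exponentially decay all four traces over a time step."""
        tp, tx, tm, ty = self.tau
        self.r1 *= math.exp(-dt_ms / tp); self.r2 *= math.exp(-dt_ms / tx)
        self.o1 *= math.exp(-dt_ms / tm); self.o2 *= math.exp(-dt_ms / ty)

    def on_pre_spike(self):
        """Depression at a pre-spike, scaled by the slow presynaptic trace r2."""
        a2p, a3p, a2m, a3m = self.a
        dw = -self.o1 * (a2m + a3m * self.r2)   # uses trace values before this spike
        self.r1 += 1.0; self.r2 += 1.0
        return dw

    def on_post_spike(self):
        """Potentiation at a post-spike, scaled by the slow postsynaptic trace o2."""
        a2p, a3p, a2m, a3m = self.a
        dw = self.r1 * (a2p + a3p * self.o2)    # uses trace values before this spike
        self.o1 += 1.0; self.o2 += 1.0
        return dw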
Spiking neural networks (SNNs) are considered among the most promising candidates for neuromorphic hardware due to their low-power computing capability. Because SNNs imitate features of the human brain, the biologically plausible spike-timing-dependent plasticity (STDP) learning rule can be adapted to perform unsupervised learning in SNNs. In this paper, we present a spike-count-based early-termination technique for STDP learning in SNNs. To eliminate redundant timesteps and calculations, the spike counts of the output neurons are used to terminate the training process early, decreasing latency and energy. The proposed scheme reduces the number of timesteps by 50.7% and the total weight updates by 51.1% during training, with a 0.35% accuracy drop on MNIST.
Keywords: MNIST database, Spike-timing-dependent plasticity, Neuromorphic engineering, Learning rule
Citations: 3
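The early-termination idea can be sketched as a check inside the per-sample simulation loop: once any output neuron's spike count reaches a target, the remaining timesteps and their weight updates are skipped. The spike target and the step_fn interface below are illustrative assumptions, not the paper's exact configuration.

import numpy as np

def run_sample_with_early_stop(step_fn, num_timesteps, num_outputs, spike_target=10):
    """Simulate one input sample, stopping as soon as an output neuron has
    fired spike_target times, so the remaining timesteps (and their weight
    updates) are skipped.

    step_fn(t) is assumed to advance the SNN by one timestep and return a
    boolean spike vector of length num_outputs."""
    counts = np.zeros(num_outputs, dtype=int)
    used = 0
    for t in range(num_timesteps):
        counts += step_fn(t)
        used += 1
        if counts.max() >= spike_target:
            break                       # early termination
    return int(counts.argmax()), used   # predicted class, timesteps actually used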
Spiking Neural Networks (SNNs) are computational models inspired by the intricate information processing found in the brain. A key learning principle, spike-timing-dependent plasticity (STDP), controls how the temporal relationship between pre- and post-synaptic spikes affects synaptic weight changes; STDP is a Hebbian learning rule used in training algorithms for SNNs, which encode information in the precise timing of spikes. Modified variants of STDP have recently been developed to improve the learning and adaptation of SNNs, incorporating additional elements such as neuromodulators and dendritic processing. This review covers the underlying principles, experimental results, and computational models to provide an in-depth overview of developments in modulated STDP-based learning for SNNs. It also addresses the difficulties of modified STDP, such as computational complexity, parameter optimisation, scalability, and the quest for biological plausibility, and is intended for researchers and practitioners interested in creating practical and biologically plausible learning algorithms based on modulated STDP.
Keywords: Spike-timing-dependent plasticity
Citations: 0
In this study, we solved a noisy spatiotemporal spike-pattern detection task on an analog neuromorphic chip using an unsupervised learning rule. Spike-timing-dependent plasticity (STDP) is the most widespread unsupervised learning rule implemented in Spiking Neural Networks (SNNs) and neuromorphic chips, and it performs well in conventional benchmark tasks such as spike-pattern classification and image classification in SNN simulations. However, a significant performance gap exists between the ideal model simulation and its neuromorphic implementation. The learning rate of STDP depends on the resolution of the synaptic efficacy: high-resolution efficacy leads to a small learning rate and stable performance. In computer simulation, synaptic efficacy is typically configured with 64-bit floating-point precision, whereas in low-power neuromorphic chips the resolution is generally restricted to under 5-bit fixed-point precision due to silicon area and power constraints, which degrades performance. To address this problem, we previously proposed a bio-inspired learning rule named adaptive STDP and demonstrated via numerical simulation that its performance with 4-bit fixed-point synapses is similar to that of STDP with 64-bit floating-point precision on a noisy spatiotemporal spike-pattern detection task. Here we present the corresponding experimental results, which are similar to those obtained in our simulation-based study. To the best of our knowledge, this is the first demonstration of an unsupervised, noisy spatiotemporal spike-pattern detection task performing well on a mixed-signal CMOS neuromorphic chip with low-resolution synaptic efficacy.
Keywords: Neuromorphic engineering, Spike-timing-dependent plasticity, Benchmark, Learning rule
Citations: 2
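The resolution issue described above comes down to the smallest representable weight step: with n-bit fixed-point efficacy the update granularity is a fixed fraction of the weight range, which acts as a floor on the effective learning rate. A minimal sketch of that quantisation effect follows; the bit width and weight range are illustrative, and the adaptive STDP rule itself is not reproduced here.

def quantize(w, bits=4, w_min=0.0, w_max=1.0):
    """Round a weight to the nearest n-bit fixed-point level in [w_min, w_max]."""
    levels = 2 ** bits - 1
    step = (w_max - w_min) / levels
    return w_min + round((w - w_min) / step) * step

# With 4-bit efficacy the grid spacing is 1/15 of the weight range, so an
# STDP update smaller than half a step is simply rounded away.
w = quantize(0.5)          # nearest 4-bit level, 8/15 ~= 0.533
dw = 0.01                  # a typical small STDP update
print(quantize(w + dw))    # unchanged: the update is lost at 4-bit resolution
print(w + dw)              # retained at floating-point precision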
Neuromorphic engineering is a promising computing paradigm for next-generation information and communication technology. In particular, spiking neural networks (SNNs) are expected to reduce power consumption drastically owing to their event-driven operation. The spike-timing-dependent plasticity (STDP) rule, which learns from local spike-timing differences between spiking neurons, is a biologically plausible learning rule for SNNs. In this study, we designed and simulated an analog circuit that reproduces the multiplicative STDP rule, which is more flexible and adaptive to external signals, and we derived analytical expressions for the behaviour of the proposed circuit. These results provide important insights for designing energy-efficient neuromorphic devices for applications including edge computing.
Keywords: Neuromorphic engineering, Spike-timing-dependent plasticity, Learning rule
Citations: 4
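One common multiplicative (weight-dependent) formulation scales each update with the distance of the weight from its bound, which softens saturation at the extremes. The sketch below follows that generic form with illustrative rates, bounds, and time constants; the exact dependence implemented by the analog circuit may differ.

import math

def multiplicative_stdp_dw(w, delta_t_ms, w_min=0.0, w_max=1.0,
                           eta_plus=0.01, eta_minus=0.01,
                           tau_plus=20.0, tau_minus=20.0):
    """Weight-dependent (multiplicative) STDP: potentiation shrinks as w
    approaches w_max, depression shrinks as w approaches w_min."""
    if delta_t_ms >= 0.0:   # pre before post: LTP
        return eta_plus * (w_max - w) * math.exp(-delta_t_ms / tau_plus)
    return -eta_minus * (w - w_min) * math.exp(delta_t_ms / tau_minus)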