    Novel Spiking Neuron-Astrocyte Networks based on nonlinear transistor-like models of tripartite synapses
Citations: 18 | References: 15 | Related Papers: 10
    Abstract:
In this paper a novel and efficient computational implementation of a Spiking Neuron-Astrocyte Network (SNAN) is reported. Neurons are modeled according to the Izhikevich formulation, and the neuron-astrocyte interactions are treated as tripartite synapses, modeled with the previously proposed nonlinear transistor-like model. Concerning the learning rules, the original spike-timing-dependent plasticity is used for the neural part of the SNAN, whereas an ad hoc rule is proposed for the astrocyte part. SNAN performance is compared with that of a standard spiking neural network (SNN) and evaluated using the polychronization concept, i.e., the number of co-existing groups that spontaneously generate patterns of polychronous activity. The astrocyte-to-neuron ratio is set to the biologically inspired value of 1.5. The proposed SNAN shows a higher number of polychronous groups than the SNN, and remarkably maintains this advantage for the whole duration of the simulation (24 hours).
    Keywords:
    Spike-timing-dependent plasticity
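The abstract above leans on two standard components, the Izhikevich neuron and pair-based STDP. As a rough orientation, here is a minimal Python sketch of both, using the standard regular-spiking parameter values; the paper's actual network, its conduction delays, and the transistor-like astrocyte model are not reproduced here.

```python
import numpy as np

# Regular-spiking Izhikevich parameters (standard published values).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
DT = 0.5  # Euler step, ms

def izhikevich_spike_times(I, t_max=1000.0):
    """Spike times (ms) of a single Izhikevich neuron under constant drive I."""
    v, u = c, b * c
    spikes, t = [], 0.0
    while t < t_max:
        if v >= 30.0:              # spike cutoff (mV)
            spikes.append(t)
            v, u = c, u + d        # after-spike reset
        v += DT * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += DT * a * (b * v - u)
        t += DT
    return spikes

def stdp_dw(dt_post_pre, A_plus=0.1, A_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when pre precedes post, depress otherwise."""
    if dt_post_pre >= 0:
        return A_plus * np.exp(-dt_post_pre / tau)
    return -A_minus * np.exp(dt_post_pre / tau)

print(izhikevich_spike_times(10.0)[:3])   # first spike times of a driven neuron
print(stdp_dw(5.0), stdp_dw(-5.0))        # potentiation vs. depression
```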
In spiking neural networks (SNNs), information processing is carried out by spike trains, in a manner similar to generic biological neurons. This paper presents a method for the synthesis of neural oscillators by SNNs. We propose a learning method for SNNs such that they generate the desired periodic spike trains with specified spike emission times.
    Spike train
    Citations (4)
Spiking neural networks (SNNs) are expected to be energy efficient when implemented on dedicated hardware. However, fully exploiting SNN characteristics such as event-driven communication poses challenges for circuit designers and manufacturers. In this paper, inspired by the recent success of an artificial neural network (ANN) based system known as charge-domain computing (CDC), we propose a novel framework for SNNs called "RC-Spike." Like CDC, RC-Spike uses a two-phase system: input spikes are received in the accumulation phase, and a neuron produces a spike in the spike generation phase. In RC-Spike, synaptic currents are accumulated through resistively coupled synapses, whose circuit implementation can be simpler than that of CDC circuits. Because of this resistive coupling effect, a neuron in RC-Spike does not compute an exact dot product. Nevertheless, RC-Spike can be successfully trained in the framework of SNNs, and we show that its learning performance is as high as that of ANNs on the MNIST and Fashion-MNIST datasets.
    MNIST database
    Neuromorphic engineering
    Spike-timing-dependent plasticity
    Neural coding
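The abstract states that resistively coupled synapses keep a neuron from computing an exact dot product, but does not give the circuit equations here. A toy way to see how resistive coupling can deviate from an ideal weighted sum is a node whose steady-state voltage is a conductance-weighted average rather than a sum; the sketch below is purely illustrative and is not the RC-Spike circuit model.

```python
import numpy as np

def exact_dot(w, x):
    """Ideal accumulation: the dot product an ANN (or CDC) neuron computes."""
    return w @ x

def resistive_node(w, x, g_leak=1.0):
    """Toy steady-state voltage of a node driven through conductances w by
    input voltages x: a conductance-weighted average, not a sum. Illustrative
    only; this is not the RC-Spike circuit model."""
    return (w @ x) / (w.sum() + g_leak)

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, 8)     # synaptic conductances (positive in this toy)
x = rng.integers(0, 2, 8)        # binary input spikes from the accumulation phase
print(exact_dot(w, x), resistive_node(w, x))   # the two quantities differ
```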
Recently, spiking neural networks have gained attention owing to their energy efficiency. All-to-all spike-timing-dependent plasticity (STDP) is a popular learning algorithm for spiking neural networks because it suits non-differentiable, spike-event-based learning and requires fewer computations than back-propagation-based algorithms. However, hardware implementations of all-to-all STDP are limited by the large storage area required for spike history and the large energy consumption caused by frequent memory access. We propose a time-step-scaled STDP that reduces the storage area required for spike history, shrinking the STDP learning circuit area by 60%, and a post-neuron spike-referred STDP that reduces energy consumption by 99.1% through efficient memory access during learning. The accuracy of Modified National Institute of Standards and Technology (MNIST) image classification degraded by less than 2% when both techniques were applied. The proposed hardware-friendly STDP algorithms thus make all-to-all STDP implementable in more compact areas, with lower energy consumption and insignificant accuracy degradation.
    Spike-timing-dependent plasticity
    Citations (10)
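For orientation, "all-to-all" STDP pairs every stored pre-spike with every post-spike, which is why full spike histories must be kept; the sketch below shows the plain rule whose storage cost the paper's time-step scaling and post-neuron spike referencing are designed to cut. Parameter values are illustrative.

```python
import numpy as np

def all_to_all_stdp(pre_spikes, post_spikes, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Weight change from every pre/post spike pair (all-to-all STDP).
    Both complete spike histories must be stored, which is exactly the
    memory cost the paper targets."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt >= 0:
                dw += A_plus * np.exp(-dt / tau)    # pre-before-post: potentiate
            else:
                dw -= A_minus * np.exp(dt / tau)    # post-before-pre: depress
    return dw

print(all_to_all_stdp([10.0, 30.0], [12.0, 25.0]))
```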
Spiking neural networks (SNNs) are considered one of the most promising candidates for neuromorphic hardware due to their low-power computing capability. Since SNNs imitate features of the human brain, the bio-plausible spike-timing-dependent plasticity (STDP) learning rule can be adapted to perform unsupervised learning in SNNs. In this paper, we present a spike-count-based early termination technique for STDP learning in SNNs. To reduce redundant timesteps and calculations, the spike counts of output neurons are used to terminate the training process early, decreasing latency and energy consumption. The proposed scheme reduces timesteps by 50.7% and total weight updates by 51.1% during training, with a 0.35% accuracy drop on MNIST.
    MNIST database
    Spike-timing-dependent plasticity
    Neuromorphic engineering
    Learning rule
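The early-termination idea lends itself to a short sketch: accumulate output spike counts during a sample presentation and skip the remaining timesteps (and their weight updates) once a count crosses a threshold. The threshold and spike statistics below are made up for illustration; the paper's network and STDP details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def present_sample(n_timesteps=100, n_outputs=10, spike_threshold=5):
    """Hypothetical inner loop for one training sample: stop the remaining
    timesteps (and their weight updates) once an output neuron has spiked
    enough times."""
    counts = np.zeros(n_outputs, dtype=int)
    for t in range(n_timesteps):
        out_spikes = rng.random(n_outputs) < 0.04   # stand-in for network output
        counts += out_spikes
        # ... STDP weight updates for timestep t would happen here ...
        if counts.max() >= spike_threshold:         # confident enough: terminate
            return t + 1                            # timesteps actually used
    return n_timesteps

used = [present_sample() for _ in range(100)]
print(f"mean timesteps used: {np.mean(used):.1f} / 100")
```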
Spiking neural networks (SNNs), particularly the single-spike variant in which neurons spike at most once, are considerably more energy efficient than standard artificial neural networks (ANNs). However, single-spike SNNs are difficult to train due to their dynamic and non-differentiable nature, and current solutions are either slow or suffer from training instabilities. These networks have also been critiqued for limited computational applicability, such as being unsuitable for time-series datasets. We propose a new model for training single-spike SNNs that mitigates these training issues and obtains competitive results across various image and neuromorphic datasets, with up to a $13.98\times$ training speedup and up to an $81\%$ reduction in spikes compared to the multi-spike SNN. Notably, our model performs on par with multi-spike SNNs in challenging tasks involving neuromorphic time-series datasets, demonstrating a broader computational role for single-spike SNNs than previously believed.
    Neuromorphic engineering
    Speedup
    Citations (2)
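The abstract does not describe the training model itself, but the single-spike constraint is commonly realized with time-to-first-spike coding, where stronger inputs fire earlier and each neuron emits at most one spike. The sketch below shows that generic encoding only, not the paper's method.

```python
import numpy as np

def ttfs_encode(x, t_max=100.0):
    """Time-to-first-spike encoding: intensity 1 spikes at t=0, weaker inputs
    spike later, and a zero input never spikes (at most one spike per input)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    times = (1.0 - x) * t_max
    times[x == 0.0] = np.inf       # no spike at all for zero intensity
    return times

print(ttfs_encode([1.0, 0.5, 0.0]))    # [ 0.  50.  inf]
```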
Spiking neural networks (SNNs) are computational models patterned after the intricate information processing found in the brain. A key learning principle, spike-timing-dependent plasticity (STDP), governs how the temporal relationship between pre- and postsynaptic spikes determines synaptic weight changes. STDP is a Hebbian learning rule used in training algorithms for SNNs, which encode information in the precise timing of spikes. Modified variants of STDP have recently been developed to improve the learning and adaptation of SNNs; these versions incorporate additional elements such as neuromodulators and dendritic processing. This review draws on underlying principles, experimental results, and computational models to provide an in-depth overview of developments in modulated STDP-based learning for SNNs. It also addresses the difficulties of modified STDP, such as computational complexity, parameter optimisation, scalability, and the quest for biological plausibility. The review is intended for researchers and practitioners interested in creating practical and biologically plausible learning algorithms based on modulated STDP.
    Spike-timing-dependent plasticity
    Citations (0)
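One concrete instance of the "neuromodulator" element this review discusses is reward-modulated STDP, a three-factor rule in which pre/post coincidences are tagged in an eligibility trace and converted into weight changes only when a modulatory reward signal arrives. A minimal sketch, with illustrative constants not taken from any specific paper:

```python
import numpy as np

def r_stdp_step(state, pre, post, reward, tau_pre=20.0, tau_e=200.0,
                dt=1.0, A=0.01, lr=0.5):
    """One timestep of reward-modulated STDP: a presynaptic trace marks recent
    pre spikes, pre-before-post pairings accumulate in a decaying eligibility
    trace, and the weight changes only when a reward arrives."""
    w, x_pre, elig = state
    x_pre = x_pre * np.exp(-dt / tau_pre) + pre            # presynaptic trace
    elig = elig * np.exp(-dt / tau_e) + A * x_pre * post   # tag pre->post pairings
    w += lr * reward * elig                                # reward gates the update
    return (w, x_pre, elig)

state = (0.5, 0.0, 0.0)                     # (weight, pre trace, eligibility)
for t in range(100):
    pre = 1.0 if t % 10 == 0 else 0.0       # pre fires at t = 0, 10, ...
    post = 1.0 if t % 10 == 2 else 0.0      # post fires 2 ms later
    reward = 1.0 if t == 60 else 0.0        # one delayed reward
    state = r_stdp_step(state, pre, post, reward)
print(f"final weight: {state[0]:.4f}")      # increased by the rewarded pairings
```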
In biological systems there are numerous examples of autonomously generated periodic activities, and several different periodic patterns are generated simultaneously in a living body. It is known that biological systems contain specific neurons which generate such periodic patterns. In spiking neural networks, information processing is carried out by spike trains, in a manner similar to generic biological neurons. This paper presents a method for the synthesis of neural oscillators by spiking neural networks. We propose a learning method for synthesizing spiking neural networks that generate desired periodic spike trains with specified spike emission times. A method for stability analysis of the generated periodic spike trains is also discussed.
    Spike train
    Biological neural network
    Physical neural network
    Neural system
    Citations (5)
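The paper's synthesis method is not spelled out in this abstract; as a toy stand-in for "learning specified spike emission times", the sketch below nudges the constant drive of a leaky integrate-and-fire neuron until its first spike lands at a target time. This delta-rule-on-spike-time scheme is illustrative only.

```python
import numpy as np

def lif_first_spike(I, tau=10.0, v_th=1.0, dt=0.1, t_max=100.0):
    """First spike time of a leaky integrate-and-fire neuron under constant drive I."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v / tau + I)
        t += dt
        if v >= v_th:
            return t
    return np.inf

# Nudge the drive so the first spike lands at the 20 ms target.
target, I, eta = 20.0, 0.5, 0.001
for _ in range(200):
    t_spike = lif_first_spike(I)
    if np.isfinite(t_spike):
        I += eta * (t_spike - target)   # spike too early -> weaken the drive
print(lif_first_spike(I))               # close to the 20 ms target
```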
In this article, we propose a new paradigm for training spiking neural networks (SNNs): spike accumulation forwarding (SAF). SNNs are known to be energy-efficient but difficult to train, and many methods have been proposed to address this; among them, online training through time (OTTT) allows inference at each time step while suppressing memory cost. However, to compute efficiently on GPUs, OTTT requires operations on spike trains and weighted summations of spike trains during forwarding. In addition, OTTT has shown a relationship with the Spike Representation, an alternative training method, though theoretical agreement between the two has yet to be proven. Our proposed method solves these problems: SAF halves the number of operations during the forward process, and it can be theoretically proven that SAF is consistent with both the Spike Representation and OTTT. We confirm these claims experimentally and show that memory and training time can be reduced while maintaining accuracy.
    Representation
    Spike train
    Citations (0)
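The claimed halving of forward operations rests on the fact that weighted summation commutes with accumulation over time: summing W s[t] over timesteps equals applying W once to the accumulated spikes. The sketch below verifies that identity numerically; SAF itself (and its consistency proofs with OTTT and the Spike Representation) is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 16, 64, 32
W = rng.normal(size=(n_out, n_in))
spikes = (rng.random((T, n_in)) < 0.1).astype(float)   # binary spike train

# Per-timestep forwarding: T weighted summations of spike vectors (OTTT-style).
per_step = sum(W @ spikes[t] for t in range(T))

# Accumulate-then-forward: one weighted summation of the accumulated spikes.
accumulated = W @ spikes.sum(axis=0)

print(np.allclose(per_step, accumulated))   # True: the identity behind SAF
```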
One objective of spiking neural networks is highly efficient computation in terms of energy consumption. To achieve this, a small spike rate is very beneficial, given the event-driven nature of such computation. However, as the network becomes deeper, the spike rate tends to increase without any improvement in the final results. On the other hand, introducing a penalty on excess spikes can often drive the network to a configuration where many neurons are silent, resulting in a drop in computational efficacy.
    Citations (2)
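The trade-off described above can be made concrete with a toy regularizer: penalizing only spikes in excess of a target rate, rather than all spikes, is one way to discourage runaway firing without pushing every neuron toward silence. The values below are illustrative, not from the paper.

```python
import numpy as np

def spike_rate_penalty(spike_counts, target_rate, T, lam=1e-3):
    """Toy regularizer penalizing only firing in excess of a target rate.
    A too-large lam is what can silence many neurons, as noted above."""
    rates = spike_counts / T                     # per-neuron firing rate
    excess = np.maximum(rates - target_rate, 0)  # hinge: ignore sub-target rates
    return lam * np.sum(excess ** 2)

counts = np.array([3, 40, 0, 12])                # spikes emitted over T timesteps
print(spike_rate_penalty(counts, target_rate=0.1, T=100))
```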