ARLIF: A Flexible and Efficient Recurrent Neuronal Model for Sequential Tasks

2021 
Spiking neural networks (SNNs), which stem from neuroscience, are promising for energy-efficient information processing owing to their "event-driven" characteristic, yet they remain inferior to artificial neural networks (ANNs) on complicated real-world tasks. ANNs, in turn, usually suffer from expensive processing costs and large numbers of parameters, while SNNs are constrained by convergence speed, stability, complicated training mechanisms, and preprocessing requirements, which hinders practitioners from expanding their application scope. Inspired by the operating mechanism of neurons in the human brain, we propose a brain-inspired Adaptive firing threshold Recurrent Leaky Integrate-and-Fire (ARLIF) model. ARLIF and its variant ConvARLIF2D, which fuse the calculation logic of ANNs with the bio-dynamic behaviors of SNNs, have low power dissipation since their number of weights is far smaller than that of SimpleRNN, GRU, or LSTM. In this work, we present a Keras-based implementation of the ARLIF and ConvARLIF2D layers that fits seamlessly into contemporary deep learning frameworks without complex boilerplate code. Experimental results indicate that ARLIF performs favorably against state-of-the-art architectures.
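The abstract describes a recurrent leaky integrate-and-fire neuron whose firing threshold adapts over time. Since the paper's exact update equations are not given here, the following is only a minimal sketch of one time step of a generic adaptive-threshold recurrent LIF cell; all parameter names (`tau_v`, `tau_theta`, `beta`, the soft-reset rule, etc.) are illustrative assumptions, not the ARLIF equations.

```python
import numpy as np

def arlif_step(x, v, theta, w_in, w_rec, s_prev,
               tau_v=0.9, tau_theta=0.9, theta_base=1.0, beta=0.5):
    """One time step of an adaptive-threshold recurrent LIF cell (sketch).

    x: input vector, v: membrane potential, theta: adaptive threshold,
    s_prev: previous binary spike output. The specific update rules below
    are illustrative, not the paper's equations.
    """
    # Leaky integration of feed-forward and recurrent input
    v = tau_v * v + x @ w_in + s_prev @ w_rec
    # Fire when the potential reaches the (per-neuron) adaptive threshold
    s = (v >= theta).astype(v.dtype)
    # Soft reset: subtract the threshold from neurons that fired
    v = v - s * theta
    # Threshold rises after a spike, then decays back toward its baseline
    theta = theta_base + tau_theta * (theta - theta_base) + beta * s
    return s, v, theta
```

Wrapped in a `tf.keras.layers.Layer` with a `call` loop over time (or a custom RNN cell passed to `tf.keras.layers.RNN`), a step function of this shape is how such a cell would plug into Keras as described in the abstract.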