Information Processing and Coding in Spatiotemporal Dynamical Systems: Spiking Networks
Citations: 0 · References: 0 · Related Papers: 10
Keywords: Neural coding, Predictive coding, Information Theory
Sparse coding has long been proposed as a model of the biological visual system, yet previous approaches have not employed it to model the activity of individual neurons in response to arbitrary images. Here, we present a novel model of primary cortical neurons based on a biologically plausible sparse coding model, the locally competitive algorithm (LCA). Our hybrid LCA-CNN model, LCANet, is trained with a self-supervised objective on a standard image dataset, and regression models are then trained to predict neural activity from a modern neurophysiological dataset containing the responses of hundreds of neurons to natural image stimuli. Our novel sparse coding model better represents the computations performed by biological neurons and is significantly more interpretable than previous models.
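The LCA dynamics underlying this model can be sketched in a few lines: each unit integrates a feedforward drive toward its dictionary match while thresholded neighbors inhibit it in proportion to dictionary overlap. This is an illustrative sketch under common assumptions (hard-thresholding nonlinearity, unit-norm dictionary columns), not the LCANet architecture itself; the name `lca_encode` and all parameter values are hypothetical.

```python
import numpy as np

def lca_encode(stimulus, dictionary, threshold=0.1, tau=10.0, n_steps=200):
    """Sketch of the Locally Competitive Algorithm (LCA).

    Membrane potentials u leak toward the feedforward drive Phi^T s
    while thresholded activations laterally inhibit one another in
    proportion to the overlap between dictionary elements.
    """
    Phi = dictionary                         # (d, k), unit-norm columns assumed
    b = Phi.T @ stimulus                     # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral-inhibition weights
    u = np.zeros_like(b)                     # membrane potentials
    for _ in range(n_steps):
        a = np.where(np.abs(u) > threshold, u, 0.0)  # hard-threshold activations
        u += (b - u - G @ a) / tau                   # leaky integration
    return np.where(np.abs(u) > threshold, u, 0.0)
```

The thresholding is what produces sparsity: units whose potential never clears the threshold contribute nothing to the code and exert no inhibition.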
The goal of predictive sparse coding is to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding has demonstrated impressive performance on a variety of supervised tasks, but its generalization properties have not been studied. We establish the first generalization error bounds for predictive sparse coding, in the overcomplete setting, where the number of features k exceeds the original dimensionality d. The learning bound decays as O(√dk/m) with respect to d, k, and the size m of the training sample. It depends intimately on stability properties of the learned sparse encoder, as measured on the training sample. Consequently, we also present a fundamental stability result for the LASSO, a result that characterizes the stability of the sparse codes with respect to dictionary perturbations.
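The encoder whose stability this abstract analyzes is the LASSO: representing an example as a sparse linear combination of dictionary columns. A minimal sketch of such an encoder via ISTA (iterative soft-thresholding) follows; this illustrates the encoding step only, not the generalization bound or the paper's analysis, and `ista_sparse_code` with its parameters is an illustrative choice.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_sparse_code(x, D, lam=0.2, n_iter=200):
    """Encode x as a sparse combination of the columns of D via ISTA,
    minimizing the LASSO objective 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                  # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L) # gradient step + shrinkage
    return a
```

In the overcomplete setting of the abstract, D has k > d columns, and the bound's dependence on encoder stability concerns how this sparse code moves when D is perturbed.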
The proposal that cortical activity in the visual cortex is optimized for sparse neural activity is one of the most established ideas in computational neuroscience. However, direct experimental evidence for optimal sparse coding remains inconclusive, largely due to the lack of reference values against which to judge the measured sparseness. Here we analyze neural responses to natural movies in the primary visual cortex of ferrets at different stages of development, and of rats while awake and under different levels of anesthesia. In contrast with the predictions of a sparse coding model, our data show that population and lifetime sparseness decrease with visual experience and increase from the awake to the anesthetized state. These results suggest that the representation in the primary visual cortex is not actively optimized to maximize sparseness.
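The population and lifetime sparseness measures this study compares can be made concrete. A minimal sketch follows, assuming the common Treves–Rolls activity ratio with the Vinje–Gallant normalization (0 for a fully dense response, 1 for a maximally sparse one); the paper's exact estimator may differ, and responses are assumed non-negative with at least one nonzero entry.

```python
import numpy as np

def sparseness(r):
    """Treves-Rolls sparseness of a non-negative response vector,
    normalized so 0 = uniform responses and 1 = a single active entry.
    Assumes r is non-negative and not all zero."""
    r = np.asarray(r, dtype=float)
    n = r.size
    A = (r.mean() ** 2) / np.mean(r ** 2)   # activity ratio in (0, 1]
    return (1.0 - A) / (1.0 - 1.0 / n)

def lifetime_sparseness(responses):
    """One value per neuron: sparseness across stimuli (rows of a
    neurons-by-stimuli firing-rate matrix)."""
    return np.array([sparseness(row) for row in responses])

def population_sparseness(responses):
    """One value per stimulus: sparseness across neurons (columns)."""
    return np.array([sparseness(col) for col in responses.T])
```

A one-hot response vector scores 1, a uniform one scores 0, which supplies exactly the kind of reference values the abstract says raw sparseness measurements lack.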
A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. Several theories have been proposed to this end. “Efficient coding” posits that neural circuits maximise information encoded about their inputs. “Sparse coding” posits that individual neurons respond selectively to specific, rarely occurring features. Finally, “predictive coding” posits that neurons preferentially encode stimuli that are useful for making predictions. Except in special cases, it is unclear how these theories relate to each other, or what is expected if different coding objectives are combined. To address this question, we developed a unified framework that encompasses these previous theories and extends to new regimes, such as sparse predictive coding. We explore cases in which different coding objectives exert conflicting or synergistic effects on neural response properties. We show that predictive coding can lead neurons to either correlate or decorrelate their inputs, depending on the presented stimuli, whereas (at low noise) efficient coding always predicts decorrelation. We compare predictive versus sparse coding of natural movies, showing that the two theories predict qualitatively different neural responses to visual motion. Our approach promises a way to explain the observed diversity of sensory neural responses as arising from a multiplicity of functional goals performed by different cell types and/or circuits.
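The low-noise efficient-coding prediction of decorrelation can be illustrated with a whitening transform, which removes pairwise correlations and equalizes variances across output channels. This is a generic ZCA-whitening sketch, not code from the paper; the function name and regularizer are illustrative.

```python
import numpy as np

def whiten(X):
    """ZCA-whiten a (samples, channels) matrix so the output channels
    are decorrelated with unit variance -- the low-noise efficient-coding
    prediction in sketch form. A small epsilon guards tiny eigenvalues."""
    Xc = X - X.mean(axis=0)                  # center each channel
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance
    vals, vecs = np.linalg.eigh(cov)         # eigendecomposition (symmetric)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-8)) @ vecs.T
    return Xc @ W                            # decorrelated, unit-variance output
```

Predictive coding, by the abstract's argument, need not produce this identity-covariance output; its correlation structure depends on the stimuli.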
The activations of an analog neural network (ANN) are usually treated as representing an analog firing rate. When mapping the ANN onto an equivalent spiking neural network (SNN), this rate-based conversion can lead to undesired increases in computation cost and memory access if firing rates are high. This work presents an efficient temporal encoding scheme, where the analog activation of a neuron in the ANN is treated as the instantaneous firing rate given by the time-to-first-spike (TTFS) in the converted SNN. By making use of the temporal information carried by a single spike, we show a new spiking network model that uses 7-10× fewer operations than the original rate-based analog model on the MNIST handwritten digit dataset, with an accuracy loss of less than 1%.
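The core of a TTFS conversion is that larger activations produce earlier first spikes, so a single spike time carries the analog value. A minimal sketch, assuming a simple linear mapping over an encoding window `t_max` (the paper's exact encoding scheme may differ); zero activations are taken to never spike, and at least one activation is assumed positive.

```python
import numpy as np

def ttfs_encode(activations, t_max=100.0):
    """Map non-negative ANN activations to first-spike times: the largest
    activation spikes at t = 0 and smaller ones spike proportionally
    later, t = t_max * (1 - a / a_max). Zero activations never spike
    (time = inf), so each neuron emits at most one spike per input."""
    a = np.asarray(activations, dtype=float)
    a_max = a.max()                     # assumes at least one positive value
    t = np.full_like(a, np.inf)         # silent neurons: no spike
    nz = a > 0
    t[nz] = t_max * (1.0 - a[nz] / a_max)
    return t
```

Because each active neuron fires once instead of at a rate proportional to its activation, downstream operations scale with the number of spikes rather than with firing rates, which is the source of the operation savings claimed above.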
In a spiking neural network (SNN), individual neurons operate autonomously and only communicate with other neurons sparingly and asynchronously via spike signals. These characteristics render a massively parallel hardware implementation of SNN a potentially powerful computer, albeit a non von Neumann one. But can one guarantee that a SNN computer solves some important problems reliably? In this paper, we formulate a mathematical model of one SNN that can be configured for a sparse coding problem for feature extraction. With a moderate but well-defined assumption, we prove that the SNN indeed solves sparse coding. To the best of our knowledge, this is the first rigorous result of this kind.
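The sparse, asynchronous spike communication described above can be illustrated with leaky integrate-and-fire neurons coupled by lateral inhibitory spikes, so that competition drives only strongly-driven neurons to fire. This is a toy sketch of the general idea, not the paper's mathematical model or its convergence proof; every name and constant here is hypothetical.

```python
import numpy as np

def lif_layer(drive, w_inhib, threshold=1.0, leak=0.05, n_steps=500):
    """Toy leaky integrate-and-fire layer with lateral inhibition.

    Each neuron integrates its constant feedforward drive; when one
    neuron spikes, it resets and its spike inhibits the others via
    w_inhib (zero diagonal assumed). Average spike rates then form a
    competitive, sparse code over the population."""
    k = len(drive)
    v = np.zeros(k)                  # membrane potentials
    counts = np.zeros(k)             # spike counts per neuron
    for _ in range(n_steps):
        v += drive - leak * v        # leaky integration of the drive
        spikes = v >= threshold      # which neurons fire this step
        counts += spikes
        v[spikes] = 0.0              # reset spiking neurons
        v -= w_inhib @ spikes        # lateral inhibition from spikes
    return counts / n_steps          # empirical firing rates
```

A strongly driven neuron out-competes a weakly driven one: the weak neuron's potential never clears threshold, so it stays silent, which is the qualitative behavior the paper makes rigorous.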