We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
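To make this mechanism concrete, here is a minimal numerical sketch (not the authors' published code; the rate constants, sharpness parameter k, and toy stimulus are illustrative choices). A bank of leaky integrators encodes the real Laplace transform of the stimulus, and Post's inversion formula, a linear operation across the population, decodes a fuzzy estimate of how long ago the stimulus occurred.

```python
import math
import numpy as np

k = 4                                    # sharpness of the inverse operator
s = np.geomspace(0.1, 10.0, 200)         # decay rates of the leaky integrators
dt = 0.01

# Toy stimulus: a brief pulse, about 2 s in the past at readout time.
times = np.arange(0.0, 3.0, dt)
stimulus = ((times > 0.9) & (times < 1.0)).astype(float)

# Each integrator obeys dF/dt = -s F + f(t); together the population
# holds the real Laplace transform of the stimulus history.
F = np.zeros_like(s)
for f_t in stimulus:
    F += dt * (-s * F + f_t)

# Post's inversion formula approximates the inverse transform with the
# k-th derivative across s, a linear operator on the population:
#   f_tilde(tau*) ~ (-1)^k / k! * s^(k+1) * d^k F / ds^k,   tau* = k/s.
dkF = F.copy()
for _ in range(k):
    dkF = np.gradient(dkF, s)
f_tilde = ((-1.0) ** k / math.factorial(k)) * s ** (k + 1) * dkF
tau_star = k / s

# Across the population, f_tilde peaks at units tuned near the pulse's
# age of ~2 s (up to a k/(k+1) bias that shrinks as k grows), and its
# spread grows in proportion to tau*: recent events are represented
# more sharply, with a decrement in accuracy that is scale invariant.
```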
Recent advances in neuroscience and psychology show that the brain has access to timelines of both the past and the future. Spiking across populations of neurons in many regions of the mammalian brain maintains a robust temporal memory, a neural timeline of the recent past. Behavioral results demonstrate that people can estimate an extended temporal model of the future, suggesting that the neural timeline of the past could extend through the present into the future. This paper presents a mathematical framework for learning and expressing relationships between events in continuous time. We assume that the brain has access to a temporal memory in the form of the real Laplace transform of the recent past. Hebbian associations between the past and the present, formed with a diversity of synaptic time scales, record the temporal relationships between events. Knowing the temporal relationships between the past and the present allows one to predict relationships between the present and the future, thus constructing an extended temporal prediction for the future. Both memory for the past and the predicted future are represented as the real Laplace transform, expressed as the firing rate over populations of neurons indexed by different rate constants $s$. The diversity of synaptic time scales allows for a temporal record over the much larger time scale of trial history. In this framework, temporal credit assignment can be assessed via a Laplace temporal difference, which compares the future that actually follows a stimulus to the future predicted just before the stimulus was observed. This computational framework makes a number of specific neurophysiological predictions and could provide the basis for a future iteration of reinforcement learning (RL) that incorporates temporal memory as a fundamental building block.
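The core loop of this framework can be sketched numerically. The sketch below is illustrative rather than the paper's implementation: a toy world with two stimuli A and B separated by a fixed 2 s lag, a per-trial memory reset, and arbitrary values for the learning rate and rate constants.

```python
import numpy as np

s = np.geomspace(0.1, 10.0, 60)   # rate constants indexing the population
dt, lr, n_trials = 0.05, 0.1, 100

F = np.zeros((2, len(s)))         # Laplace memory of the past: A (0), B (1)
M = np.zeros((2, 2, len(s)))      # Hebbian associations, one weight per s

for _ in range(n_trials):
    F[:] = 0.0                    # simplification: wipe memory between trials
    for t in np.arange(0.0, 4.0, dt):
        # A occurs at t = 1 s and B at t = 3 s on every trial.
        f = np.array([float(abs(t - 1.0) < dt / 2),
                      float(abs(t - 3.0) < dt / 2)])
        F = F * np.exp(-s * dt)[None, :] + f[:, None]   # leaky integration
        # Hebb: bind whatever fires now (f) to the memory of the past (F).
        M += lr * f[:, None, None] * F[None, :, :]

# The learned weight M[B, A, s] converges on exp(-2s), the real Laplace
# transform of "B occurs 2 s from now": an extended prediction of the
# future in the same code as the memory of the past. The Laplace temporal
# difference compares the observed future with this prediction, and it
# vanishes here because the A-to-B relationship is deterministic.
predicted = M[1, 0, :] / (n_trials * lr)
laplace_td = np.exp(-2.0 * s) - predicted   # ~0 after learning
```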
It is well known that in free recall participants tend to successively recall words that were presented close together in time, reflecting a form of temporal binding in memory. This contiguity effect is robust, having been observed across many different experimental manipulations. To probe a potential boundary on the contiguity effect, participants performed a free recall task in which items were presented at rates ranging from 2 Hz to 8 Hz. Participants were still able to recall items even at the fastest presentation rate, though accuracy decreased. Importantly, the contiguity effect flattened as presentation rates increased. These findings illuminate possible constraints on the temporal encoding of episodic memories.
Cognitive computation ought to be fast, efficient, and flexible, reusing the same neural mechanisms to operate on many different forms of information. In order to develop neural models for cognitive computation, we need neurally plausible implementations of fundamental operations. If the operations are to apply across sensory modalities, this requires a common form of neural coding. Weber-Fechner scaling is a general representational motif that the brain exploits not only in vision and audition, but also for efficient representations of time, space, and numerosity. That is, for these variables, the brain appears to represent functions $f(x)$ by placing receptors at locations $x_i$ such that $x_i - x_{i-1} \propto x_i$. The existence of a common form of neural representation suggests the possibility of a common form of cognitive computation across information domains. Efficient Weber-Fechner representations of time, space, and number can be constructed using the Laplace transform, which can be inverted with a neurally plausible matrix operation. Access to the Laplace domain allows a range of efficient computations to be performed on Weber-Fechner scaled representations. For instance, translation of a function $f(x)$ by an amount $\delta$ to give $f(x+\delta)$ can be readily accomplished in the Laplace domain. We have worked out a neurally plausible mapping hypothesis between translation and theta oscillations. Other operations, such as convolution and cross-correlation, are extremely efficient in the Laplace domain, enabling the addition and subtraction of variables encoded in neural representations. Implementation of neural circuits for these elemental computations would allow hybrid neural-symbolic architectures that exhibit properties such as compositionality and productivity.
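The shift property behind this claim can be verified in a few lines. The sketch below uses illustrative choices (receptor spacing, a Gaussian bump stimulus) and works in the delaying direction $f(x) \to f(x-\delta)$; the advance $f(x+\delta)$ works the same way with gain $e^{+s\delta}$, provided $f$ vanishes on $[0, \delta)$. Each unit with rate constant $s$ implements the translation as a single multiplicative gain.

```python
import numpy as np

s = np.geomspace(0.5, 50.0, 400)        # Weber-Fechner-like spacing over s
dx = 0.001
x = np.arange(0.0, 6.0, dx)

def bump(center, width=0.1):
    # A narrow Gaussian standing in for an arbitrary function f(x).
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def laplace(f):
    # F(s) = integral of f(x) exp(-s x) dx, one coefficient per receptor.
    return (f[None, :] * np.exp(-s[:, None] * x[None, :])).sum(axis=1) * dx

delta = 2.0
F = laplace(bump(1.0))                  # encode a bump centered at x = 1
F_translated = F * np.exp(-s * delta)   # one multiply per coefficient

# The translated coefficients match a direct encoding of the bump moved
# to x = 3: translation never touches f(x) itself, only its transform.
assert np.allclose(F_translated, laplace(bump(3.0)), atol=1e-9)
```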
Scale-invariant timing has been observed in a wide range of behavioral experiments. The firing properties of recently described time cells provide a possible neural substrate for scale-invariant behavior. Earlier neural circuit models do not produce scale-invariant neural sequences. In this paper we present a biologically detailed network model based on an earlier mathematical algorithm. The simulations incorporate exponentially decaying persistent firing maintained by the calcium-activated nonspecific (CAN) cationic current and a network structure given by the inverse Laplace transform to generate time cells with scale-invariant firing rates. This model provides the first biologically detailed neural circuit for generating scale-invariant time cells. The circuit that implements the inverse Laplace transform merely consists of off-center/on-surround receptive fields. Critically, rescaling temporal sequences can be accomplished simply via cortical gain control (changing the slope of the f-I curve).
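To see why gain control suffices, note that after the inverse transform each time cell's idealized impulse response is proportional to $(st)^k e^{-st}$, which peaks at $t = k/s$. The toy sketch below (illustrative cell count, rate constants, and gain; a uniform gain on $s$ stands in for a shallower f-I slope) shows that every peak time stretches by the same factor, so the sequence rescales rather than translating.

```python
import numpy as np

k = 4
s = np.geomspace(0.2, 20.0, 8)           # one rate constant per time cell
t = np.arange(0.01, 50.0, 0.01)

def firing_rate(s_eff):
    # Idealized post-inversion time-cell profile: (s t)^k exp(-s t),
    # a scale-invariant bump peaking at t = k / s.
    return (s_eff * t[:, None]) ** k * np.exp(-s_eff * t[:, None])

baseline = firing_rate(s)                # peaks at k / s
slowed = firing_rate(0.5 * s)            # gain 0.5 halves every rate constant

peaks_base = t[np.argmax(baseline, axis=0)]
peaks_slow = t[np.argmax(slowed, axis=0)]
# peaks_slow ~= 2 * peaks_base for every cell: the whole temporal
# sequence is rescaled by the single gain factor, not shifted.
```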