

Title: Nonequilibrium Statistical Mechanics of Continuous Attractors
Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space- and time-dependent external sensory or motor signals are not understood. Here, we find fundamental limits on how rapidly internal representations encoded along continuous attractors can be updated by an external signal. We apply these results to place cell networks to derive a velocity-dependent nonequilibrium memory capacity in neural networks.
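As a concrete illustration of the setting, a bump of activity on a ring attractor can be driven around the ring by an external signal, and drives that are too strong degrade tracking. The sketch below is a hedged toy, not the paper's model; the connectivity decomposition and every parameter value are assumptions made for illustration.

```python
# Toy ring attractor (illustrative sketch, not the paper's model).
# A cosine-tuned recurrent network sustains a bump of activity; an
# antisymmetric connectivity component, scaled by an external drive v,
# pushes the bump around the ring. All parameters are assumptions.
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
dtheta = theta[:, None] - theta[None, :]

J0, J1 = -1.0, 3.0                        # uniform inhibition, tuned excitation
W_sym = (J0 + J1 * np.cos(dtheta)) / N    # symmetric part: holds the bump
W_asym = np.sin(dtheta) / N               # antisymmetric part: moves the bump
I0 = 0.5                                  # uniform background drive

def bump_angle(v, steps=2000, dt=0.1, tau=1.0):
    """Integrate rate dynamics under drive v; return final bump position."""
    r = np.exp(np.cos(theta))             # initial bump centered at angle 0
    for _ in range(steps):
        inp = (W_sym + v * W_asym) @ r + I0
        r += dt / tau * (-r + np.maximum(inp, 0.0))
    return theta[np.argmax(r)]

for v in (0.0, 0.5, 2.0):
    print(f"drive v={v:.1f} -> bump at {bump_angle(v):.2f} rad")
```

In toys of this kind, the bump follows weak drives faithfully but lags or deforms under strong ones; the paper makes the corresponding speed limit precise.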
Award ID(s): 1734030
PAR ID: 10169879
Author(s) / Creator(s):
Date Published:
Journal Name: Neural Computation
Volume: 32
Issue: 6
ISSN: 0899-7667
Page Range / eLocation ID: 1033 to 1068
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Brain dynamics can exhibit narrow-band nonlinear oscillations and multistability. For a subset of disorders of consciousness and motor control, we hypothesized that some symptoms originate from the inability to spontaneously transition from one attractor to another. Using external perturbations, such as electrical pulses delivered by deep brain stimulation devices, it may be possible to induce such transitions out of the pathological attractors. However, inducing a transition may be non-trivial, rendering current open-loop stimulation strategies insufficient. To develop next-generation neural stimulators that can intelligently learn to induce attractor transitions, we require a platform to test the efficacy of such systems. To this end, we designed an analog circuit as a model of multistable brain dynamics. The circuit, an instantiation of a 3-dimensional continuous-time gated recurrent neural network, spontaneously and stably oscillates on either of two limit cycles. To discourage simple perturbation strategies, such as constant or random stimulation patterns, from easily inducing transitions between the stable limit cycles, we designed a state-dependent nonlinear circuit interface for external perturbation. We demonstrate the existence of nontrivial solutions to the transition problem in our circuit implementation.
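A software analogue of the dynamical form named above (a 3-dimensional continuous-time gated recurrent neural network) is sketched below. The random weights are placeholders, not the published circuit's parameters; the point is only the form of the dynamics and the hook for an external perturbation u.

```python
# Continuous-time gated recurrent network sketch (form only; the random
# weights below are placeholders, not the published circuit parameters).
import numpy as np

rng = np.random.default_rng(0)
N = 3                                       # 3-dimensional state, as in the circuit
W_h = rng.normal(scale=1.5, size=(N, N))    # recurrent weights (placeholder)
W_z = rng.normal(scale=1.0, size=(N, N))    # update-gate weights (placeholder)
b_h = rng.normal(size=N)
b_z = rng.normal(size=N)
tau = 1.0

def step(h, u, dt=0.01):
    """One Euler step; u is an external perturbation (e.g., a pulse)."""
    z = 1.0 / (1.0 + np.exp(-(W_z @ h + b_z)))   # update gate in (0, 1)
    h_target = np.tanh(W_h @ h + b_h + u)        # gated target state
    return h + dt / tau * z * (h_target - h)

h = rng.normal(size=N)
for t in range(5000):
    u = np.zeros(N)                # replace with pulses to attempt a transition
    h = step(h, u)
print("state after transient:", h)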
  2. Lay Summary

    Parts of the brain can work together by synchronizing the activity of their neurons. We recorded the electrical activity of the brain in adolescents with autism spectrum disorder and compared the recordings to those of their peers without the diagnosis. We found that in participants with autism, there were many very short periods of non-synchronized activity between the frontal and parietal parts of the brain. Mathematical models show that a brain system with this kind of activity is very sensitive to external events.

     
  3. Serre, Thomas (Ed.)
    Experience shapes our expectations and helps us learn the structure of the environment. Inference models render such learning as a gradual refinement of the observer’s estimate of the environmental prior. For instance, when retaining an estimate of an object’s features in working memory, learned priors may bias the estimate in the direction of common feature values. Humans display such biases when retaining color estimates over short time intervals. We propose that these systematic biases emerge from modulation of synaptic connectivity in a neural circuit based on the experienced stimulus history, shaping the persistent and collective neural activity that encodes the stimulus estimate. The resulting neural activity attractors are aligned to common stimulus values. Using recently published human response data from a delayed-estimation task in which stimuli (colors) were drawn from a heterogeneous distribution that did not necessarily correspond with reported population biases, we confirm that most subjects’ response distributions are better described by experience-dependent learning models than by models with fixed biases. This work suggests that systematic limitations in working memory reflect efficient representations of inferred environmental structure, providing new insights into how humans integrate environmental knowledge into their cognitive strategies.
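A hypothetical toy version of the proposed mechanism: remembered estimates drift toward attractor locations during the delay, and the attractor landscape itself tracks the stimulus history. The update rules and parameters below are assumptions for illustration, not the published model or its fits.

```python
# Toy experience-dependent attractor model for delayed estimation
# (hypothetical sketch; parameters and update rule are assumptions,
# not the published model). Colors live on a circle, estimates drift
# toward learned attractors, and attractors track the stimulus history.
import numpy as np

rng = np.random.default_rng(1)

def circ_diff(a, b):
    """Signed circular difference a - b, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (a - b)))

attractors = np.array([0.0, np.pi])    # initial attractor locations (assumed)
k, lr = 0.05, 0.1                      # drift rate and learning rate (assumed)

for trial in range(500):
    stim = rng.vonmises(0.0, 2.0)            # stimuli clustered near 0
    est = stim + rng.normal(scale=0.1)       # noisy encoding
    for _ in range(20):                      # delay-period drift
        i = np.argmin(np.abs(circ_diff(attractors, est)))
        est += k * circ_diff(attractors[i], est)
    j = np.argmin(np.abs(circ_diff(attractors, stim)))
    attractors[j] += lr * circ_diff(stim, attractors[j])

print("learned attractor locations:", attractors)
```

With a heterogeneous stimulus distribution, the learned attractors migrate toward the experienced values, producing history-dependent rather than fixed response biases.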
  4. Neural activity underlying working memory is not a local phenomenon but is distributed across multiple brain regions. To elucidate the circuit mechanism of such distributed activity, we developed an anatomically constrained computational model of the large-scale macaque cortex. We found that mnemonic internal states may emerge from inter-areal reverberation, even in a regime where none of the isolated areas is capable of generating self-sustained activity. The mnemonic activity pattern along the cortical hierarchy indicates a transition in space, separating areas engaged in working memory from those that are not. A host of spatially distinct attractor states is found, potentially subserving various internal processes. The model yields testable predictions, including the idea of counterstream inhibitory bias, the role of prefrontal areas in controlling distributed attractors, and the resilience of distributed activity to lesions or inactivation. This work provides a theoretical framework for identifying large-scale brain mechanisms and computational principles of distributed cognitive processes.
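The central claim, persistent activity sustained by inter-areal loops even when no area is bistable in isolation, can be caricatured with two rate units. The coupling strengths below are assumptions; this is a reduced sketch, not the anatomically constrained model.

```python
# Two-area caricature of distributed working memory (illustrative;
# coupling strengths are assumptions, not the anatomically constrained
# macaque model). Local recurrence alone cannot sustain activity
# (w_local < 1), but the inter-areal loop can (w_local + w_long > 1).
import numpy as np

def f(x):
    return np.maximum(np.tanh(x), 0.0)       # rectified gain function

w_local, w_long = 0.6, 0.8                   # assumed coupling strengths
tau, dt = 1.0, 0.05
r = np.zeros(2)                              # firing rates of two areas

for t in range(4000):
    stim = 1.0 if t < 200 else 0.0           # transient input to area 0
    inp = w_local * r + w_long * r[::-1] + np.array([stim, 0.0])
    r += dt / tau * (-r + f(inp))

print("rates long after stimulus offset:", r)   # nonzero => distributed memory
```

Setting w_long = 0 in this toy makes the activity decay after stimulus offset, reproducing the point that the isolated areas cannot self-sustain.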
  5. We present a formal, mathematical foundation for modeling and reasoning about the behavior of synchronous, stochastic Spiking Neural Networks (SNNs), which have been widely used in studies of neural computation. Our approach follows paradigms established in the field of concurrency theory. Our SNN model is based on directed graphs of neurons, classified as input, output, and internal neurons. We focus here on basic SNNs, in which a neuron’s only state is a Boolean value indicating whether or not the neuron is currently firing. We also define the external behavior of an SNN in terms of probability distributions on its external firing patterns. We define two operators on SNNs: a composition operator, which supports modeling of SNNs as combinations of smaller SNNs, and a hiding operator, which reclassifies some output behavior of an SNN as internal. We prove results showing how the external behavior of a network built using these operators is related to the external behavior of its component networks. Finally, we define the notion of a problem to be solved by an SNN, and show how the composition and hiding operators affect the problems that are solved by the networks. We illustrate our definitions with three examples: a Boolean circuit constructed from gates, an Attention network constructed from a Winner-Take-All network and a Filter network, and a toy example involving combining two networks in a cyclic fashion.
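An executable reading of the basic SNN model, under our own assumptions rather than the paper's formal definitions: Boolean neuron states, synchronous rounds, and stochastic firing with probability given by a sigmoid of the weighted input.

```python
# Executable sketch of a basic synchronous stochastic SNN (our illustrative
# reading of the model above, not the paper's formal definitions). Each
# neuron's state is a single Boolean; every round, it fires with probability
# sigmoid(weighted sum of currently firing inputs + bias).
import numpy as np

rng = np.random.default_rng(2)

class BasicSNN:
    def __init__(self, W, b, outputs):
        self.W = np.asarray(W, float)    # W[i, j]: weight from neuron j to i
        self.b = np.asarray(b, float)    # firing biases
        self.outputs = outputs           # indices of output neurons
        self.state = np.zeros(len(self.b), dtype=bool)

    def step(self, inputs=None):
        """One synchronous round; `inputs` clamps designated input neurons."""
        pot = self.W @ self.state + self.b
        p = 1.0 / (1.0 + np.exp(-pot))
        self.state = rng.random(p.size) < p
        if inputs is not None:
            for i, v in inputs.items():  # clamp input neurons to given values
                self.state[i] = v
        return self.state[self.outputs]  # the external (output) firing pattern

# Example: a 3-neuron chain; neuron 0 is an input, neuron 2 an output.
net = BasicSNN(W=[[0, 0, 0], [4, 0, 0], [0, 4, 0]], b=[-2, -2, -2], outputs=[2])
for _ in range(5):
    print(net.step(inputs={0: True}).astype(int))
```

Composition in this reading would wire one network's output indices to another's clamped inputs; hiding would simply drop indices from `outputs`.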