Abstract: Success in many real-world tasks depends on our ability to dynamically track hidden states of the world. We hypothesized that neural populations estimate these states by processing sensory history through recurrent interactions that reflect the internal model of the world. To test this, we recorded brain activity in the posterior parietal cortex (PPC) of monkeys navigating by optic flow to a hidden target location within a virtual environment, without explicit position cues. In addition to sequential neural dynamics and strong interneuronal interactions, we found that the hidden state, the monkey's displacement from the goal, was encoded in single neurons and could be dynamically decoded from population activity. The decoded estimates predicted navigation performance on individual trials. Task manipulations that perturbed the world model induced substantial changes in neural interactions and modified the neural representation of the hidden state, while representations of sensory and motor variables remained stable. The findings were recapitulated by a task-optimized recurrent neural network model, suggesting that task demands shape the neural interactions in PPC, leading them to embody a world model that consolidates information and tracks task-relevant hidden states.
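The population-decoding idea can be illustrated with a toy sketch: a linear (ridge) decoder recovering a hidden scalar state from noisy population rates. Everything here is a synthetic stand-in, with illustrative names and parameters, not the paper's actual recordings or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: population rates that linearly encode a hidden
# "distance to goal" variable plus noise (illustrative only, not real recordings).
n_trials, n_neurons = 200, 50
hidden = rng.uniform(0.0, 4.0, size=n_trials)           # hidden state per trial
enc = rng.normal(size=n_neurons)                        # per-neuron encoding weights
rates = np.outer(hidden, enc) + 0.5 * rng.normal(size=(n_trials, n_neurons))

# Ridge-regression decoder, closed form: w = (X'X + lam*I)^-1 X'y
X = np.column_stack([rates, np.ones(n_trials)])         # append a bias column
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ hidden)
decoded = X @ w

# Variance explained by the decoded estimate of the hidden state
r2 = 1.0 - np.sum((hidden - decoded) ** 2) / np.sum((hidden - hidden.mean()) ** 2)
```

In this linear-encoding toy the decoder recovers the hidden variable almost perfectly; in the paper's setting the analogous decode is noisier, which is what makes its trial-by-trial prediction of behavior informative.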
Learning efficient task-dependent representations with synaptic plasticity
Neural populations encode the sensory world imperfectly: their capacity is limited by the number of neurons, the availability of metabolic and other biophysical resources, and intrinsic noise. The brain is presumably shaped by these limitations, improving efficiency by discarding some aspects of incoming sensory streams while preferentially preserving commonly occurring, behaviorally relevant information. Here we construct a stochastic recurrent neural circuit model that can learn efficient, task-specific sensory codes using a novel form of reward-modulated Hebbian synaptic plasticity. We illustrate the flexibility of the model by training an initially unstructured neural network to solve two different tasks: stimulus estimation and stimulus discrimination. The network achieves high performance in both tasks by appropriately allocating resources and using its recurrent circuitry to best compensate for different levels of noise. We also show how the interaction between stimulus priors and task structure dictates the emergent network representations.
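The flavor of reward-modulated Hebbian learning can be sketched with a classic three-factor node-perturbation rule (the paper's actual plasticity rule differs, and all names and parameters below are illustrative): exploratory noise on the neural drive serves as the postsynaptic eligibility term, and the Hebbian update is gated by reward relative to a running baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 20, 10
W = 0.1 * rng.normal(size=(n_out, n_in))    # initial feedforward weights
eta = 0.05                                   # learning rate
alpha = 0.1                                  # baseline update rate
r_bar = 0.0                                  # running reward baseline

x = rng.normal(size=n_in)                    # fixed input pattern
target = np.tanh(rng.normal(size=n_out))     # fixed target response

rewards = []
for _ in range(1000):
    xi = 0.1 * rng.normal(size=n_out)        # exploratory noise on the drive
    y = np.tanh(W @ x + xi)                  # noisy population response
    r = -np.mean((y - target) ** 2)          # reward = negative squared error
    # Three-factor update: (reward - baseline) x noise (post) x input (pre)
    W += eta * (r - r_bar) * np.outer(xi, x)
    r_bar = (1 - alpha) * r_bar + alpha * r
    rewards.append(r)
```

Averaged over the noise, this update follows the reward gradient, so reward climbs over trials without any explicit backpropagation of error.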
- Award ID(s): 1922658
- PAR ID: 10217554
- Journal Name: Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
- Sponsoring Org: National Science Foundation
More Like this
Despite many successful examples in which probabilistic inference can account for perception, we have little understanding of how the brain represents and uses structured priors that capture the complexity of natural input statistics. Here we construct a recurrent circuit model that can implicitly represent priors over latent variables, and combine them with sensory and contextual sources of information to encode task-specific posteriors. Inspired by the recent success of diffusion models as means of learning and using priors over images, our model uses dendritic nonlinearities optimized for denoising, and stochastic somatic integration with the degree of noise modulated by an oscillating global signal. Combining these elements into a recurrent network yields a stochastic dynamical system that samples from the prior at a rate prescribed by the period of the global oscillator. Additional inputs reflecting sensory or top-down contextual information alter these dynamics to generate samples from the corresponding posterior, with different input gating patterns selecting different inference tasks. We demonstrate that this architecture can sample from low dimensional nonlinear manifolds and multimodal posteriors. Overall, the model provides a new framework for circuit-level representation of probabilistic information, in a format that facilitates flexible inference.
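The core dynamical idea, sampling from a prior with noise whose amplitude is set by a global oscillation, can be caricatured with a Langevin sampler on a 1-D bimodal prior; the drift term plays the role of the denoising nonlinearity. This is a schematic stand-in for the circuit model, not its actual equations, and all parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_log_prior(x, mu=2.0, s=0.6):
    """Score of an equal mixture of N(-mu, s^2) and N(+mu, s^2)."""
    z = np.clip(2.0 * mu * x / s**2, -50.0, 50.0)
    resp = 1.0 / (1.0 + np.exp(-z))            # responsibility of the +mu mode
    return (resp * (mu - x) + (1.0 - resp) * (-mu - x)) / s**2

eps = 0.01                                      # integration step
period = 200                                    # global oscillation period (steps)
x = rng.normal(scale=3.0, size=500)             # many parallel chains ("units")
samples = []
for t in range(4000):
    # Noise amplitude rises and falls with the global oscillation.
    sigma = 0.3 + 0.35 * (1.0 + np.cos(2.0 * np.pi * t / period))
    x = x + eps * grad_log_prior(x) + np.sqrt(2.0 * eps) * sigma * rng.normal(size=x.size)
    if sigma < 0.35 and t > period:             # read out near the low-noise trough
        samples.append(x.copy())
samples = np.concatenate(samples)
```

High-noise phases let chains explore and hop between modes; at each low-noise trough the state collapses onto one of the two prior modes at ±mu, so readouts once per oscillation yield samples from the (here, bimodal) prior. Adding a sensory-likelihood gradient to the drift would shift the samples toward the corresponding posterior.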
Abstract: During spatial exploration, neural circuits in the hippocampus store memories of sequences of sensory events encountered in the environment. When sensory information is absent during ‘offline’ resting periods, brief neuronal population bursts can ‘replay’ sequences of activity that resemble bouts of sensory experience. These sequences can occur in either forward or reverse order, and can even include spatial trajectories that have not been experienced, but are consistent with the topology of the environment. The neural circuit mechanisms underlying this variable and flexible sequence generation are unknown. Here we demonstrate in a recurrent spiking network model of hippocampal area CA3 that experimental constraints on network dynamics such as population sparsity, stimulus selectivity, rhythmicity and spike rate adaptation, as well as associative synaptic connectivity, enable additional emergent properties, including variable offline memory replay. In an online stimulus-driven state, we observed the emergence of neuronal sequences that swept from representations of past to future stimuli on the timescale of the theta rhythm. In an offline state driven only by noise, the network generated both forward and reverse neuronal sequences, and recapitulated the experimental observation that offline memory replay events tend to include salient locations like the site of a reward. These results demonstrate that biological constraints on the dynamics of recurrent neural circuits are sufficient to enable memories of sensory events stored in the strengths of synaptic connections to be flexibly read out during rest and sleep, which is thought to be important for memory consolidation and planning of future behaviour.

Key points:
- A recurrent spiking network model of hippocampal area CA3 was optimized to recapitulate experimentally observed network dynamics during simulated spatial exploration.
- During simulated offline rest, the network exhibited the emergent property of generating flexible forward, reverse and mixed-direction memory replay events.
- Network perturbations and analysis of model diversity and degeneracy identified associative synaptic connectivity and key features of network dynamics as important for offline sequence generation.
- Network simulations demonstrate that population over-representation of salient positions like the site of reward results in biased memory replay.
Changes in behavioral state, such as arousal and movements, strongly affect neural activity in sensory areas, and can be modeled as long-range projections regulating the mean and variance of baseline input currents. What are the computational benefits of these baseline modulations? We investigate this question within a brain-inspired framework for reservoir computing, where we vary the quenched baseline inputs to a recurrent neural network with random couplings. We found that baseline modulations control the dynamical phase of the reservoir network, unlocking a vast repertoire of network phases. We uncovered a number of bistable phases exhibiting the simultaneous coexistence of fixed points and chaos, of two fixed points, and of weak and strong chaos. We identified several phenomena, including noise-driven enhancement of chaos, ergodicity breaking, and neural hysteresis, whereby transitions across a phase boundary retain the memory of the preceding phase. In each bistable phase, the reservoir performs a different binary decision-making task. Fast switching between different tasks can be controlled by adjusting the baseline input mean and variance. Moreover, we found that the reservoir network achieves optimal memory performance at any first-order phase boundary. In summary, baseline control enables multitasking without any optimization of the network couplings, opening directions for brain-inspired artificial intelligence and providing an interpretation for the ubiquitously observed behavioral modulations of cortical activity.
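The baseline-control idea can be sketched with a minimal rate reservoir: the same fixed random couplings produce chaotic fluctuations at zero baseline and settle onto a quiet saturated fixed point when the baseline mean is large. The model and parameters below are a generic illustration, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 300
g = 1.5                                              # coupling gain (chaotic regime for g > 1)
J = rng.normal(scale=g / np.sqrt(N), size=(N, N))    # fixed random couplings

def run(mean_in, var_in, steps=2000, dt=0.1):
    """Run the reservoir under a quenched baseline input of given mean/variance."""
    b = mean_in + np.sqrt(var_in) * rng.normal(size=N)   # quenched baseline current
    x = 0.1 * rng.normal(size=N)
    traj = []
    for t in range(steps):
        x = x + dt * (-x + J @ np.tanh(x) + b)
        if t >= steps // 2:                          # discard the transient
            traj.append(np.tanh(x).copy())
    return np.array(traj)

chaotic = run(mean_in=0.0, var_in=0.0)               # zero baseline: chaotic phase
quiet = run(mean_in=3.0, var_in=0.0)                 # strong baseline: saturated fixed point
# Temporal variability of single-unit rates separates the two phases.
chaos_amp = chaotic.std(axis=0).mean()
quiet_amp = quiet.std(axis=0).mean()
```

Sweeping the baseline mean and variance in this kind of model traces out the phase diagram; the bistable regions and hysteresis reported in the abstract appear at particular combinations of the two.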
Graham, Lyle J (Ed.)
Sensory neurons continually adapt their response characteristics according to recent stimulus history. However, it is unclear how such a reactive process can benefit the organism. Here, we test the hypothesis that adaptation actually acts proactively in the sense that it optimally adjusts sensory encoding for future stimuli. We first quantified human subjects’ ability to discriminate visual orientation under different adaptation conditions. Using an information theoretic analysis, we found that adaptation leads to a reallocation of coding resources such that encoding accuracy peaks at the mean orientation of the adaptor while total coding capacity remains constant. We then asked whether this characteristic change in encoding accuracy is predicted by the temporal statistics of natural visual input. Analyzing the retinal input of freely behaving human subjects showed that the distribution of local visual orientations in the retinal input stream indeed peaks at the mean orientation of the preceding input history (i.e., the adaptor). We further tested our hypothesis by analyzing the internal sensory representations of a recurrent neural network trained to predict the next frame of natural scene videos (PredNet). Simulating our human adaptation experiment with PredNet, we found that the network exhibited the same change in encoding accuracy as observed in human subjects. Taken together, our results suggest that adaptation-induced changes in encoding accuracy prepare the visual system for future stimuli.

