Title: Error-correcting dynamics in visual working memory
Abstract

Working memory is critical to cognition, decoupling behavior from the immediate world. Yet, it is imperfect; internal noise introduces errors into memory representations. Such errors have been shown to accumulate over time and increase with the number of items simultaneously held in working memory. Here, we show that discrete attractor dynamics mitigate the impact of noise on working memory. These dynamics pull memories towards a few stable representations in mnemonic space, inducing a bias in memory representations but reducing the effect of random diffusion. Model-based and model-free analyses of human and monkey behavior show that discrete attractor dynamics account for the distribution, bias, and precision of working memory reports. Furthermore, attractor dynamics are adaptive. They increase in strength as noise increases with memory load, and experiments in humans show that these dynamics adapt to the statistics of the environment, such that memories drift towards contextually predicted values. Together, our results suggest attractor dynamics mitigate errors in working memory by counteracting noise and integrating contextual information into memories.
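As a rough illustration of the drift-plus-diffusion picture sketched in this abstract (not the authors' fitted model), the snippet below evolves a remembered angle that is pulled toward the nearest of a few discrete attractors while also diffusing randomly. The number of attractors, the drift gain, the noise level, and the step count are invented for illustration; the final comparison shows the qualitative trade-off of a bias toward an attractor against a reduced spread of reports.

```python
import numpy as np

def simulate_reports(true_val, n_attractors=4, drift_gain=0.15,
                     noise_sd=0.05, n_steps=50, n_trials=2000, seed=0):
    """Evolve a remembered angle (radians) under drift toward the nearest
    of a few discrete attractors plus random diffusion; return the final
    reports across trials. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    attractors = np.linspace(0.0, 2.0 * np.pi, n_attractors, endpoint=False)
    theta = np.full(n_trials, true_val, dtype=float)
    for _ in range(n_steps):
        # signed circular distance to each attractor; pick the nearest one
        d = np.angle(np.exp(1j * (attractors[None, :] - theta[:, None])))
        nearest = d[np.arange(n_trials), np.abs(d).argmin(axis=1)]
        # deterministic pull toward the nearest attractor + diffusion noise
        theta = theta + drift_gain * nearest + rng.normal(0.0, noise_sd, n_trials)
        theta = np.mod(theta, 2.0 * np.pi)
    return theta

true_val = np.pi / 4 + 0.3                 # a stimulus away from any attractor
for label, gain in (("with attractors", 0.15), ("pure diffusion", 0.0)):
    reports = simulate_reports(true_val, drift_gain=gain)
    err = np.angle(np.exp(1j * (reports - true_val)))   # signed report error
    print(f"{label:16s} bias {err.mean():+.3f} rad   spread {err.std():.3f} rad")
```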

 
NSF-PAR ID: 10153937
Author(s) / Creator(s): ; ; ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Nature Communications
Volume: 10
Issue: 1
ISSN: 2041-1723
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The behavioral and neural effects of the endogenous release of acetylcholine following stimulation of the Nucleus Basalis of Meynert (NB) were recently examined in two male monkeys (Qi et al. 2021). Counterintuitively, NB stimulation enhanced behavioral performance while broadening neural tuning in the prefrontal cortex (PFC). The mechanism by which a weaker mnemonic neural code could lead to better performance remains unclear. Here, we show that increased neural excitability in a simple continuous bump attractor model can induce broader neural tuning and decrease bump diffusion, provided neural rates are saturated. In the model, the gain in memory precision outweighs the loss in memory accuracy, improving overall task performance. Moreover, we show that bump attractor dynamics can account for the nonuniform impact of neuromodulation on distractibility, depending on distractor distance from the target. Finally, we delve into the conditions under which bump attractor tuning and diffusion balance in biologically plausible heterogeneous network models. In these discrete bump attractor networks, we show that reducing spatial correlations or enhancing excitatory transmission can improve memory precision. Altogether, we provide a mechanistic understanding of how cholinergic neuromodulation controls spatial working memory through perturbed attractor dynamics in PFC.

    Significance statement: Acetylcholine has been thought to improve cognitive performance by sharpening neuronal tuning in the prefrontal cortex. Recent work has shown that electrical stimulation of the cholinergic forebrain in awake, behaving monkeys reduces prefrontal neural tuning under stimulation conditions that improve performance. To reconcile these divergent observations, we provide network simulations showing that both follow consistently from specific conditions of prefrontal attractor dynamics: when firing rates saturate, cholinergic activation, modeled as an increase in neural excitability, a reduction in neural correlations, and an increase in excitatory transmission, leads to increased storage precision together with reduced neural tuning. Our study integrates previously reported data into a consistent mechanistic view of how acetylcholine controls spatial working memory via attractor network dynamics in the prefrontal cortex.
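A minimal feedforward sketch of one ingredient of the mechanism described in item 1: when firing rates saturate, an added untuned excitatory drive (a stand-in for cholinergic depolarization) broadens and flattens tuning curves. The accompanying reduction in bump diffusion is a property of the recurrent attractor network studied in the paper and is not reproduced by this toy; the rate ceiling, input tuning, and drive values are illustrative assumptions.

```python
import numpy as np

r_max = 50.0                                        # firing-rate ceiling (sp/s), illustrative
theta = np.linspace(-np.pi, np.pi, 721)             # stimulus axis relative to preferred direction
drive = 40.0 * np.exp(2.0 * (np.cos(theta) - 1.0))  # stimulus-tuned input, peaks at 0

def tuning_curve(extra_drive):
    """Output tuning after a saturating transfer function, with an added
    untuned excitatory drive."""
    return r_max * np.tanh((drive + extra_drive) / r_max)

for extra in (0.0, 25.0):
    r = tuning_curve(extra)
    peak, base = r.max(), r.min()
    half = (peak + base) / 2.0                      # half height above baseline
    width = np.ptp(theta[r >= half])                # tuning width (rad)
    print(f"extra drive {extra:4.1f}: peak {peak:5.1f}, baseline {base:5.1f}, "
          f"peak/baseline {peak / base:5.1f}, width {width:.2f} rad")
# With rates near saturation, the untuned drive raises the baseline much more
# than the already-saturated peak, so tuning becomes broader and shallower,
# i.e. the "weaker code" side of the observation in item 1.
```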

     
  2. Cai, Ming Bo (Ed.)
    Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience. We optimize thousands of recurrent rate-based neural networks on a working memory task and then perform dynamical systems analysis on the ensuing optimized networks, wherein we find that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this seeming functional disadvantage, they are more efficient in terms of how they leverage their attractor landscape and, paradoxically, are considerably more robust to noise. Our results provide new hypotheses regarding how working memory function may be encoded within the dynamics of neural circuits.
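A deliberately tiny linear sketch, not one of the optimized networks from this study, of what memory along a slow stable manifold means: a two-unit recurrent network whose eigenvalues (0.98 and 0.5 here, chosen arbitrarily) give one slowly decaying direction. The stimulus-driven component along the slow mode outlives the fast component but still decays, so the network gradually forgets rather than holding a fixed point.

```python
import numpy as np

# Two-unit linear recurrent network x[t+1] = A @ x[t] with one slow
# eigenmode (eigenvalue 0.98) and one fast eigenmode (eigenvalue 0.5).
# A brief stimulus sets the initial state; the fast component vanishes
# within a few steps, while the slow component persists and only slowly
# decays -- memory held along a slow direction rather than at a fixed point.
slow_eig, fast_eig = 0.98, 0.5
V = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)          # orthonormal eigenvector basis
A = V @ np.diag([slow_eig, fast_eig]) @ V.T

x = np.array([1.0, 0.3])                            # state set by the stimulus
read_slow = V[:, 0]                                 # readout along the slow mode
for t in range(101):
    if t % 25 == 0:
        print(f"t={t:3d}   |activity|={np.linalg.norm(x):.3f}   "
              f"slow-mode amplitude={read_slow @ x:.3f}")
    x = A @ x
```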
  3. Abstract

    During spatial exploration, neural circuits in the hippocampus store memories of sequences of sensory events encountered in the environment. When sensory information is absent during ‘offline’ resting periods, brief neuronal population bursts can ‘replay’ sequences of activity that resemble bouts of sensory experience. These sequences can occur in either forward or reverse order, and can even include spatial trajectories that have not been experienced, but are consistent with the topology of the environment. The neural circuit mechanisms underlying this variable and flexible sequence generation are unknown. Here we demonstrate in a recurrent spiking network model of hippocampal area CA3 that experimental constraints on network dynamics such as population sparsity, stimulus selectivity, rhythmicity and spike rate adaptation, as well as associative synaptic connectivity, enable additional emergent properties, including variable offline memory replay. In an online stimulus‐driven state, we observed the emergence of neuronal sequences that swept from representations of past to future stimuli on the timescale of the theta rhythm. In an offline state driven only by noise, the network generated both forward and reverse neuronal sequences, and recapitulated the experimental observation that offline memory replay events tend to include salient locations like the site of a reward. These results demonstrate that biological constraints on the dynamics of recurrent neural circuits are sufficient to enable memories of sensory events stored in the strengths of synaptic connections to be flexibly read out during rest and sleep, which is thought to be important for memory consolidation and planning of future behaviour.

    Key points

    A recurrent spiking network model of hippocampal area CA3 was optimized to recapitulate experimentally observed network dynamics during simulated spatial exploration.

    During simulated offline rest, the network exhibited the emergent property of generating flexible forward, reverse and mixed direction memory replay events.

    Network perturbations and analysis of model diversity and degeneracy identified associative synaptic connectivity and key features of network dynamics as important for offline sequence generation.

    Network simulations demonstrate that population over‐representation of salient positions like the site of reward results in biased memory replay.
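A toy Hebbian construction, not the optimized spiking model from this study, that illustrates two of the key points above: associative learning during exploration yields symmetric connectivity with no built-in replay direction (so forward versus reverse must come from the dynamics, e.g. adaptation), and over-representing the reward site biases recurrent drive toward it. The place-field width, cell counts, and reward position are invented for illustration.

```python
import numpy as np

# Place-field centres tiling a linear track; the reward site (position 0.9)
# is over-represented by 20 extra cells, as in the last key point above.
track_len = 1.0
centres = np.concatenate([np.linspace(0.0, track_len, 80),
                          np.full(20, 0.9 * track_len)])
sigma = 0.05                                        # place-field width

def rates(pos):
    """Population firing rates for an animal at position `pos`."""
    return np.exp(-0.5 * ((pos - centres) / sigma) ** 2)

# Hebbian / associative learning during simulated exploration:
# co-active cells strengthen their mutual (symmetric) connections.
W = np.zeros((len(centres), len(centres)))
for pos in np.linspace(0.0, track_len, 200):
    r = rates(pos)
    W += np.outer(r, r)
np.fill_diagonal(W, 0.0)

# 1) The learned connectivity is symmetric, so it encodes no preferred
#    replay direction; direction must come from the network dynamics.
print("symmetric connectivity:", np.allclose(W, W.T))

# 2) Cells at the over-represented reward site receive more total recurrent
#    drive, a simple reason for replay to be biased toward that location.
total_drive = W.sum(axis=1)
near_reward = np.abs(centres - 0.9 * track_len) < 2 * sigma
print(f"mean recurrent drive near reward: {total_drive[near_reward].mean():.1f}")
print(f"mean recurrent drive elsewhere:   {total_drive[~near_reward].mean():.1f}")
```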

     
  4. Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks to approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons, and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
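A simplified sketch of the TPAM idea as described in this abstract: sparse phasor patterns (binary magnitude, continuous phase) stored by complex Hebbian outer products and retrieved by a thresholded, phase-aligning update. The threshold, sparsity, and pattern count are illustrative assumptions, and this is not the authors' exact formulation, energy function, or either of the spiking approximations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse phasor patterns: each component is 0 or a unit phasor exp(i*phase)
# with a continuous random phase.
n, n_patterns, sparsity = 200, 10, 0.2
mask = rng.random((n_patterns, n)) < sparsity
patterns = mask * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n_patterns, n)))

# Hebbian (outer-product) storage in a complex weight matrix.
W = sum(np.outer(p, np.conj(p)) for p in patterns)
np.fill_diagonal(W, 0.0)

def retrieve(z, threshold, n_iter=30):
    """Threshold-phasor update: each unit aligns its phase with its complex
    input and stays active only if that input is strong enough."""
    for _ in range(n_iter):
        u = W @ z
        z = (np.abs(u) > threshold) * np.exp(1j * np.angle(u))
    return z

# Cue with a corrupted version of pattern 0 (phases jittered, 20% of active
# components dropped) and check how well the stored phases are recovered.
target = patterns[0]
cue = target * np.exp(1j * rng.normal(0.0, 0.5, n)) * (rng.random(n) < 0.8)
z = retrieve(cue, threshold=0.5 * sparsity * n)
active = np.abs(target) > 0
overlap = np.abs(np.vdot(target, z)) / np.count_nonzero(active)
print(f"phase overlap with stored pattern: {overlap:.2f} (1.0 = perfect)")
```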
  5. Visual scenes are often remembered as if they were observed from a different viewpoint. Some scenes are remembered as farther than they appeared, and others as closer. These memory distortions, also known as boundary extension and contraction, are strikingly consistent for a given scene, but their cause remains unknown. We tested whether these distortions can be explained by an inferential process that adjusts scene memories toward high-probability views, using viewing depth as a test case. We first carried out a large-scale analysis of depth maps of natural indoor scenes to quantify the statistical probability of views in depth. We then assessed human observers’ memory for these scenes at various depths and found that viewpoint judgments were consistently biased toward the modal depth, even when just a few seconds elapsed between viewing and reporting. Thus, scenes closer than the modal depth showed a boundary-extension bias (remembered as farther away), and scenes farther than the modal depth showed a boundary-contraction bias (remembered as closer). By contrast, scenes at the modal depth did not elicit a consistent bias in either direction. This same pattern of results was observed in a follow-up experiment using tightly controlled stimuli from virtual environments. Together, these findings show that scene memories are biased toward statistically probable views, which may serve to increase the accuracy of noisy or incomplete scene representations.
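A minimal Gaussian cue-combination sketch of the inferential account described above, in which remembered depth is a reliability-weighted mix of a noisy observation and a prior centered on the modal depth. The modal depth, observation noise, and prior width are invented, illustrative values.

```python
import numpy as np

def remembered_depth(true_depth, modal_depth=3.0, obs_sd=0.4, prior_sd=1.0,
                     n_trials=10000, seed=0):
    """Remembered depth as the posterior mean of a noisy depth observation
    combined with a Gaussian prior centered on the modal viewing depth
    (all values in metres, chosen for illustration)."""
    rng = np.random.default_rng(seed)
    obs = true_depth + rng.normal(0.0, obs_sd, n_trials)    # noisy encoding
    w = prior_sd**2 / (prior_sd**2 + obs_sd**2)             # reliability weight
    return w * obs + (1.0 - w) * modal_depth

for d in (2.0, 3.0, 4.0):
    mem = remembered_depth(d)
    bias = mem.mean() - d
    label = ("boundary extension (remembered farther)" if bias > 0.02 else
             "boundary contraction (remembered closer)" if bias < -0.02 else
             "no consistent bias")
    print(f"true depth {d:.1f} m -> mean remembered {mem.mean():.2f} m ({label})")
```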