Abstract: Sequence memory is an essential attribute of natural and artificial intelligence that enables agents to encode, store, and retrieve complex sequences of stimuli and actions. Computational models of sequence memory have been proposed in which recurrent Hopfield-like neural networks are trained with temporally asymmetric Hebbian rules. However, these networks suffer from limited sequence capacity (the maximal length of the stored sequence) due to interference between the memories. Inspired by recent work on Dense Associative Memories, we expand the sequence capacity of these models by introducing a nonlinear interaction term that enhances separation between the patterns. We derive novel scaling laws for sequence capacity with respect to network size, significantly outperforming existing scaling laws for models based on traditional Hopfield networks, and verify these theoretical results with numerical simulations. Moreover, we introduce a generalized pseudoinverse rule to recall sequences of highly correlated patterns. Finally, we extend this model to store sequences with variable timing between state transitions and describe a biologically plausible implementation, with connections to motor neuroscience.
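As a rough illustration of the mechanism described above (a sketch, not the paper's exact formulation), the snippet below stores a sequence of random binary patterns with a temporally asymmetric rule and replays it, passing the pattern overlaps through a polynomial separation function in the spirit of Dense Associative Memories. The network size `N`, sequence length `L`, and exponent `n` are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, n = 200, 30, 3   # neurons, sequence length, separation exponent

# Stored sequence of random +/-1 patterns: xi[0] -> xi[1] -> ... -> xi[L-1].
xi = rng.choice([-1.0, 1.0], size=(L, N))

def step(x):
    """One recall step: overlaps with each stored pattern are sharpened by
    the nonlinearity m -> m**n before driving the transition toward the
    successor pattern (a temporally asymmetric, dense-memory-style update)."""
    m = xi[:-1] @ x / N                # overlaps with patterns 0..L-2
    return np.sign((m ** n) @ xi[1:])  # weighted sum of successor patterns

# Seed with a corrupted first pattern; recall should walk down the sequence.
x = xi[0] * np.where(rng.random(N) < 0.1, -1.0, 1.0)   # 10% of bits flipped
for t in range(1, L):
    x = step(x)
    print(t, (x == xi[t]).mean())      # fraction of correct bits at step t
```

Raising the exponent sharpens the separation between patterns, which is what lets the same network hold much longer sequences before interference sets in.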
Recall tempo of Hebbian sequences depends on the interplay of Hebbian kernel with tutor signal timing
Understanding how neural circuits generate sequential activity is a longstanding challenge. While foundational theoretical models have shown how sequences can be stored as memories in neural networks with Hebbian plasticity rules, these models considered only a narrow range of Hebbian rules. Here, we introduce a model for arbitrary Hebbian plasticity rules, capturing the diversity of spike-timing-dependent synaptic plasticity seen in experiments, and show how the choice of these rules and of neural activity patterns influences sequence memory formation and retrieval. In particular, we derive a general theory that predicts the tempo of sequence replay. This theory lays a foundation for explaining how cortical tutor signals might give rise to motor actions that eventually become “automatic.” Our theory also captures the impact of changing the tempo of the tutor signal. Beyond shedding light on biological circuits, this theory has relevance in artificial intelligence by laying a foundation for frameworks whereby slow and computationally expensive deliberation can be stored as memories and eventually replaced by inexpensive recall.
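A hedged sketch of the ingredients this theory works with: a tutor signal drives a traveling wave of activity, and each weight accumulates products of pre- and postsynaptic activity at different time lags, weighted by an arbitrary Hebbian kernel K(τ). The Gaussian tutor pulses, the exponential STDP-like kernel, the ad hoc normalization, and the tanh rate dynamics below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 400                     # neurons, tutor duration (time steps)

# Tutor signal: a traveling wave of activity, neuron i peaking at time i*T/N.
times = np.arange(T, dtype=float)
peaks = np.linspace(0.0, T, N, endpoint=False)
r = np.exp(-0.5 * ((times[None, :] - peaks[:, None]) / 5.0) ** 2)  # (N, T)

# Arbitrary temporally asymmetric Hebbian kernel K(tau): potentiation for
# pre-before-post pairings (tau > 0), depression for the reverse.
taus = np.arange(-20, 21)
K = np.sign(taus) * np.exp(-np.abs(taus) / 10.0)

# W_ij = sum over t and tau of K(tau) * r_i(t + tau) * r_j(t).
W = np.zeros((N, N))
for k, tau in enumerate(taus):
    if tau >= 0:
        W += K[k] * r[:, tau:] @ r[:, :T - tau].T
    else:
        W += K[k] * r[:, :T + tau] @ r[:, -tau:].T
W /= np.abs(W).max()                # ad hoc normalization for this sketch

# Autonomous replay: seed with the start of the tutor pattern and iterate
# simple rate dynamics; the activity bump should drift along the chain at
# a tempo set by the interplay of the kernel with the tutor timing.
x = r[:, 0].copy()
for step in range(90):
    x = np.tanh(W @ x)
    if step % 30 == 29:
        print(step + 1, int(np.argmax(x)))   # index of most active neuron
```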
- PAR ID: 10540409
- Publisher / Repository: National Academy of Sciences
- Date Published:
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 121
- Issue: 32
- ISSN: 0027-8424
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Artificial neural networks are known to suffer from catastrophic forgetting: when learning multiple tasks sequentially, they perform well on the most recent task at the expense of previously learned tasks. In the brain, sleep is known to play an important role in incremental learning by replaying recent and old conflicting memory traces. Here we tested the hypothesis that implementing a sleep-like phase in artificial neural networks can protect old memories during new training and alleviate catastrophic forgetting. Sleep was implemented as offline training with local unsupervised Hebbian plasticity rules and noisy input. In an incremental learning framework, sleep was able to recover old tasks that were otherwise forgotten. Previously learned memories were replayed spontaneously during sleep, forming unique representations for each class of inputs. Representational sparseness and neuronal activity corresponding to the old tasks increased, while new-task-related activity decreased. The study suggests that spontaneous replay simulating sleep-like dynamics can alleviate catastrophic forgetting in artificial neural networks.
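In spirit, the procedure resembles the sketch below: after ordinary supervised training has produced some weights, the network runs offline on noisy input while a purely local, unsupervised Hebbian rule adjusts those weights. The thresholding, learning rate, and decay term here are illustrative stand-ins for the paper's specific sleep algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def sleep_phase(W, n_steps=1000, noise=0.5, lr=1e-3, thr=1.0):
    """Sketch of an offline 'sleep' phase: noisy inputs drive the layer and
    weights move by a local Hebbian rule; no labels or task data are used."""
    n_out, n_in = W.shape
    for _ in range(n_steps):
        x = (rng.random(n_in) < noise).astype(float)   # noisy binary input
        h = (W @ x > thr).astype(float)                # sparse binary response
        # Local Hebbian update: strengthen co-active pairs, with a mild decay
        # on the weights of units that fired, to keep weights bounded.
        W += lr * (np.outer(h, x) - 0.01 * W * h[:, None])
    return W

W = rng.normal(0.0, 0.1, size=(50, 100))   # stand-in for trained weights
W = sleep_phase(W)
print(np.abs(W).mean())                    # weights remain bounded
```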
Brain-computer interface (BCI) actively translates brain signals into executable actions by establishing direct communication between the human brain and external devices. Recording brain activity through electroencephalography (EEG) is generally contaminated with both physiological and nonphysiological artifacts, which significantly hinders BCI performance. Artifact subspace reconstruction (ASR) is a well-known statistical technique that automatically removes artifact components by determining the rejection threshold based on an initial reference EEG segment in multichannel EEG recordings. In real-world applications, the fixed threshold may limit the efficacy of the artifact correction, especially when the quality of the reference data is poor. This study proposes an adaptive online ASR technique by integrating Hebbian/anti-Hebbian neural networks into the ASR algorithm, namely, principal subspace projection ASR (PSP-ASR) and principal subspace whitening ASR (PSW-ASR), which segmentwise self-organize the artifact subspace by updating the synaptic weights according to Hebbian and anti-Hebbian learning rules. The effectiveness of the proposed algorithm is compared to conventional ASR approaches on a benchmark EEG dataset and three BCI frameworks, including steady-state visual evoked potential (SSVEP), rapid serial visual presentation (RSVP), and motor imagery (MI), by evaluating the root-mean-square error (RMSE), the signal-to-noise ratio (SNR), the Pearson correlation, and classification accuracy. The results demonstrated that the PSW-ASR algorithm effectively removed EEG artifacts and retained the activity-specific brain signals compared to the PSP-ASR, standard ASR (Init-ASR), and moving-window ASR (MW-ASR) methods, thereby enhancing the SSVEP, RSVP, and MI BCI performances. Finally, our empirical results from the PSW-ASR algorithm suggested the choice of an aggressive cutoff range of c = 1-10 for activity-specific BCI applications and a moderate range for the benchmark dataset and general BCI applications.
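For orientation, the sketch below shows the baseline ASR idea that PSP-ASR and PSW-ASR build on: a per-component rejection threshold is estimated from a clean reference segment, and test data are reconstructed with supra-threshold (artifact) components removed. The real algorithm's calibration statistics, sliding windows, and the Hebbian/anti-Hebbian subspace updates of the proposed methods are omitted, and all numbers are illustrative.

```python
import numpy as np

def asr_correct(X, X_ref, cutoff=10.0):
    """Baseline ASR sketch: threshold principal components of the test data
    using variances estimated from a clean reference segment, then
    reconstruct the signal with the flagged components zeroed out."""
    C = X_ref @ X_ref.T / X_ref.shape[1]      # reference covariance
    evals, V = np.linalg.eigh(C)              # principal axes of clean data
    thresh = cutoff * np.sqrt(evals)          # rejection threshold per axis

    Y = V.T @ X                               # project test data
    rms = np.sqrt((Y ** 2).mean(axis=1))      # component amplitudes
    keep = rms <= thresh                      # flag artifact components
    return V @ (Y * keep[:, None])            # reconstruct without them

rng = np.random.default_rng(3)
X_ref = rng.normal(size=(8, 1000))            # clean calibration EEG (toy)
X = rng.normal(size=(8, 500))                 # test segment
X[2] += 20 * rng.normal(size=500)             # one channel hit by an artifact
X_clean = asr_correct(X, X_ref)
print(X.std(axis=1).round(1), X_clean.std(axis=1).round(1))
```

The adaptive methods in the study replace these fixed reference statistics with synaptic-weight updates that track the artifact subspace segment by segment.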
How do sensory systems optimize detection of behaviorally relevant stimuli when the sensory environment is constantly changing? We addressed the role of spike timing-dependent plasticity (STDP) in driving changes in synaptic strength in a sensory pathway and whether those changes in synaptic strength could alter sensory tuning. It is challenging to precisely control temporal patterns of synaptic activity in vivo and replicate those patterns in vitro in behaviorally relevant ways. This makes it difficult to make connections between STDP-induced changes in synaptic physiology and plasticity in sensory systems. Using the mormyrid species Brevimyrus niger and Brienomyrus brachyistius, which produce electric organ discharges for electrolocation and communication, we can precisely control the timing of synaptic input in vivo and replicate these same temporal patterns of synaptic input in vitro. In central electrosensory neurons in the electric communication pathway, using whole cell intracellular recordings in vitro, we paired presynaptic input with postsynaptic spiking at different delays. Using whole cell intracellular recordings in awake, behaving fish, we paired sensory stimulation with postsynaptic spiking using the same delays. We found that Hebbian STDP predictably alters sensory tuning in vitro and is mediated by NMDA receptors. However, the change in synaptic responses induced by sensory stimulation in vivo did not adhere to the direction predicted by the STDP observed in vitro. Further analysis suggests that this difference is influenced by polysynaptic activity, including inhibitory interneurons. Our findings suggest that STDP rules operating at identified synapses may not drive predictable changes in sensory responses at the circuit level. NEW & NOTEWORTHY We replicated behaviorally relevant temporal patterns of synaptic activity in vitro and used the same patterns during sensory stimulation in vivo. There was a Hebbian spike timing-dependent plasticity (STDP) pattern in vitro, but sensory responses in vivo did not shift according to STDP predictions. Analysis suggests that this disparity is influenced by differences in polysynaptic activity, including inhibitory interneurons. These results suggest that STDP rules at synapses in vitro do not necessarily apply to circuits in vivo.
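As a reference point for "the direction predicted by STDP", the canonical Hebbian STDP window can be written as exponentials of the post-minus-pre spike-time difference. The amplitudes and time constant in this sketch are generic textbook-style values, not measurements from these experiments.

```python
import numpy as np

def hebbian_stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Hebbian STDP window: pre-before-post pairs (dt > 0) potentiate the
    synapse, post-before-pre pairs (dt < 0) depress it, with the effect
    decaying exponentially as the pairing delay grows."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau),
                    -a_minus * np.exp(dt_ms / tau))

delays = np.array([-40.0, -10.0, 10.0, 40.0])  # post minus pre, in ms
print(hebbian_stdp(delays))   # depression for dt<0, potentiation for dt>0
```

The study's point is that pairing at such delays in vivo did not move synaptic responses in the direction this window predicts, implicating polysynaptic and inhibitory contributions.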
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory-inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance and, in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
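One concrete ingredient of this setup is the balanced-network scaling the theory builds on: synaptic weights scale as 1/√N, so excitatory and inhibitory input streams each grow like √N while, with matched excitation and inhibition, their difference (the net input) stays O(1). A toy numerical check, with arbitrary connection probability and rates:

```python
import numpy as np

rng = np.random.default_rng(4)

def input_streams(N, p=0.1):
    """With weights ~ 1/sqrt(N), E and I inputs each grow like sqrt(N),
    but their difference stays O(1) when E and I are matched; this is
    the hallmark of the balanced regime."""
    j = 1.0 / np.sqrt(N)
    rates = np.full(N, 0.5)                  # O(1) firing rates
    e = (j * (rng.random(N) < p)) @ rates    # total excitatory drive
    i = (j * (rng.random(N) < p)) @ rates    # total inhibitory drive
    return e, i, e - i

for N in (1_000, 10_000, 100_000):
    e, i, net = input_streams(N)
    print(f"N={N:>6}: E={e:7.2f}  I={i:7.2f}  net={net:+.2f}")
```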