-
Artificial neural networks (ANNs) struggle with continual learning, sacrificing performance on previously learned tasks to acquire new task knowledge. Here we propose a new approach that mitigates catastrophic forgetting during continual task learning. Typically, a network is trained on each new task until it reaches maximal performance, causing complete catastrophic forgetting of the previous tasks. In our new approach, termed Optimal Stopping (OS), network training on each new task continues only while the mean validation accuracy across all the tasks (current and previous) increases. The stopping criterion creates an explicit balance: lower performance on new tasks is accepted in exchange for preserving knowledge of previous tasks, resulting in higher overall network performance. The overall performance is further improved when OS is combined with Sleep Replay Consolidation (SRC), wherein the network is converted to a spiking neural network (SNN) and undergoes unsupervised learning modulated by Hebbian plasticity. During SRC, the network spontaneously replays activation patterns from previous tasks, helping to maintain and restore prior task performance. This combined approach offers a promising avenue for enhancing the robustness and longevity of learned representations in continual learning models, achieving over twice the mean accuracy of baseline continual learning while maintaining stable performance across tasks.
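A minimal sketch of the Optimal Stopping criterion described above, assuming a generic training loop: `train_one_epoch`, `evaluate`, and the validation sets are placeholders for task-specific code, and the epoch-level granularity is an illustrative choice rather than the published setting. The key design point is that the stopping signal is the mean accuracy over all tasks, so training halts as soon as new-task gains no longer offset forgetting.

import copy
import numpy as np

def train_with_optimal_stopping(model, new_task_loader, val_sets,
                                train_one_epoch, evaluate, max_epochs=100):
    """Optimal Stopping (OS): keep training on the new task only while the mean
    validation accuracy over ALL tasks (previous and current) still increases.
    `train_one_epoch(model, loader)` and `evaluate(model, val_set) -> accuracy`
    are placeholders for the user's own training step and metric."""
    best_mean = np.mean([evaluate(model, v) for v in val_sets])
    best_model = copy.deepcopy(model)
    for _ in range(max_epochs):
        train_one_epoch(model, new_task_loader)            # one pass over the new task
        mean_acc = np.mean([evaluate(model, v) for v in val_sets])
        if mean_acc > best_mean:                           # overall accuracy still rising
            best_mean, best_model = mean_acc, copy.deepcopy(model)
        else:                                              # new-task gains no longer offset
            break                                          # forgetting of previous tasks
    return best_model, best_mean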
-
Distinguishing between nectar and non-nectar odors is challenging for animals because of shared compounds and varying ratios in complex mixtures. Changes in nectar production throughout the day and over the animal's lifetime add to the complexity. The honeybee olfactory system, containing fewer than 1000 principal neurons in the early olfactory relay, the antennal lobe (AL), must learn to associate diverse volatile blends with rewards. Previous studies identified plasticity in the AL circuits, but its role in odor learning remains poorly understood. Using a biophysical computational model, tuned with in vivo electrophysiological data, and live imaging of the honeybee's AL, we explored the neural mechanisms of plasticity in the AL. Our findings revealed that when trained with a set of rewarded and unrewarded odors, the AL inhibitory network suppresses responses to shared chemical compounds while enhancing responses to distinct compounds. This results in improved pattern separation and a more concise neural code. Our calcium imaging data support these predictions. Analysis of a graph convolutional neural network performing an odor categorization task revealed a similar mechanism for contrast enhancement. Our study provides insights into how inhibitory plasticity in the early olfactory network reshapes the coding for efficient learning of complex odors.
-
Understanding olfactory processing in insects requires characterizing the complex dynamics and connectivity of the first olfactory relay, the antennal lobe (AL). We leverage in vivo electrophysiology to train a recurrent neural network (RNN) model of the locust AL, inferring the underlying connectivity and temporal dynamics. The RNN comprises 830 projection neurons (PNs) and 300 local neurons (LNs), replicating the locust AL anatomy. The trained network reveals sparse connectivity, with different connection densities between LNs and PNs and no PN-PN connections, consistent with in vivo data. The learned time constants predict slower LN dynamics and diverse PN response patterns, with low and high time constants correlating with early and late odor-evoked activity, as reported in vivo. Our approach demonstrates the utility of biologically constrained RNNs in inferring circuit properties from empirical data, providing insights into mechanisms of odor coding in the AL.
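A minimal PyTorch sketch of a biologically constrained rate RNN of this kind. The PN/LN counts and the absence of PN-PN connections follow the description above; the rate dynamics, initialization, input dimensionality, and log-parameterized time constants are illustrative assumptions, and fitting to recorded firing rates would be added on top. Masking the recurrent weight matrix, rather than pruning it, keeps the anatomical constraint intact throughout gradient-based training.

import torch
import torch.nn as nn

class ALRateRNN(nn.Module):
    """Rate RNN with 830 projection neurons (PNs) and 300 local neurons (LNs),
    a connectivity mask that forbids PN->PN recurrence, and per-neuron
    trainable time constants."""

    def __init__(self, n_pn=830, n_ln=300, n_odor=100, dt=0.5):
        super().__init__()
        n = n_pn + n_ln
        self.n_pn, self.dt = n_pn, dt
        self.W_in = nn.Parameter(0.01 * torch.randn(n, n_odor))
        self.W_rec = nn.Parameter(0.01 * torch.randn(n, n))
        self.log_tau = nn.Parameter(torch.zeros(n))   # log keeps tau positive
        mask = torch.ones(n, n)
        mask[:n_pn, :n_pn] = 0.0                      # no PN->PN connections
        self.register_buffer("mask", mask)

    def forward(self, odor, n_steps=200):
        # odor: (batch, n_odor) constant input; returns PN rates over time
        x = odor.new_zeros(odor.shape[0], self.W_rec.shape[0])
        tau = torch.exp(self.log_tau)                 # per-neuron time constants (a.u.)
        rates = []
        for _ in range(n_steps):
            r = torch.relu(x)                         # firing rates
            rec = r @ (self.W_rec * self.mask).T      # masked recurrent drive
            x = x + (self.dt / tau) * (-x + rec + odor @ self.W_in.T)
            rates.append(r[:, :self.n_pn])
        return torch.stack(rates, dim=1)              # (batch, time, n_pn)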
-
Understanding the brain requires studying its multiscale interactions from molecules to networks. The increasing availability of large-scale datasets detailing brain circuit composition, connectivity, and activity is transforming neuroscience. However, integrating and interpreting these data remains challenging. Concurrently, advances in supercomputing and sophisticated modeling tools now enable the development of highly detailed, large-scale biophysical circuit models. These mechanistic multiscale models offer a method to systematically integrate experimental data, facilitating investigations into brain structure, function, and disease. This review, based on a Society for Neuroscience 2024 MiniSymposium, aims to disseminate recent advances in large-scale mechanistic modeling to the broader community. It highlights (1) examples of current models for various brain regions developed through experimental data integration; (2) their predictive capabilities regarding cellular and circuit mechanisms underlying experimental recordings (e.g., membrane voltage, spikes, local field potential, electroencephalography/magnetoencephalography) and brain function; and (3) their use in simulating biomarkers for brain diseases such as epilepsy, depression, schizophrenia, and Parkinson's disease, aiding in understanding their biophysical underpinnings and developing novel treatments. The review showcases state-of-the-art models covering hippocampal, somatosensory, visual, motor, auditory cortical, and thalamic circuits across species. These models predict neural activity at multiple scales and provide insights into the biophysical mechanisms underlying sensation, motor behavior, brain signals, neural coding, disease, pharmacological interventions, and neural stimulation. Collaboration with experimental neuroscientists and clinicians is essential for the development and validation of these models, particularly as datasets grow. Hence, this review aims to foster interest in detailed brain circuit models, leading to cross-disciplinary collaborations that accelerate brain research.
-
Convolutional neural networks (CNNs) are a foundational model architecture used to perform a wide variety of visual tasks. On image classification tasks, CNNs achieve high performance; however, model accuracy degrades quickly when inputs are perturbed by distortions such as additive noise or blurring. This drop in performance partly arises from incorrect detection of local features by convolutional layers. In this work, we develop a neuroscience-inspired unsupervised Sleep Replay Consolidation (SRC) algorithm for improving the robustness of convolutional filters to perturbations. We demonstrate that sleep-based optimization improves the quality of convolutional layers by the selective modification of spatial gradients across filters. We further show that, compared to other approaches such as fine-tuning, a single sleep phase improves robustness across different types of distortions in a data-efficient manner.
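An illustrative, single-layer sketch of what a noise-driven, sleep-like Hebbian pass over a convolutional filter bank could look like, in PyTorch. The thresholds, learning rates, noise model, and depression term are assumptions for illustration, not the published SRC procedure, and `images` stands for any (N, C, H, W) batch of training inputs.

import torch
import torch.nn.functional as F

def src_sleep_phase(conv_weight, images, n_steps=100, thr=0.5,
                    lr_pos=1e-3, lr_neg=5e-4):
    """Sleep-like Hebbian pass over one convolutional filter bank.
    conv_weight: (out_ch, in_ch, k, k); images: (N, in_ch, H, W) training inputs."""
    W = conv_weight.clone()
    for _ in range(n_steps):
        idx = torch.randint(0, images.shape[0], (1,))
        x = images[idx] + 0.3 * torch.randn_like(images[idx])   # noisy input frame
        x_spk = (x > thr).float()                                # binary, spike-like input
        y_spk = (F.conv2d(x_spk, W) > 0).float()                 # spike-like feature maps
        # Hebbian term: correlation between output spikes and input patches
        pos = F.conv2d(x_spk.transpose(0, 1),
                       y_spk.transpose(0, 1)).transpose(0, 1)    # (out_ch, in_ch, k, k)
        # simple depression term scaled by each filter's total output activity
        act = y_spk.sum(dim=(0, 2, 3)).view(-1, 1, 1, 1)
        W = W + lr_pos * pos - lr_neg * act * (W > 0).float()
    return W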
-
Brain rhythms of sleep reflect neuronal activity underlying sleep-associated memory consolidation. The modulation of brain rhythms, such as the sleep slow oscillation (SO), is used both to investigate neurophysiological mechanisms and to measure the impact of sleep on presumed functional correlates. Previously, closed-loop acoustic stimulation in humans targeted to the SO Up-state successfully enhanced the slow oscillation rhythm and phase-dependent spindle activity, although effects on memory retention have varied. Here, we aim to disclose relations between stimulation-induced hippocampo-thalamo-cortical activity and retention performance on a hippocampus-dependent object-place recognition task in mice by applying acoustic stimulation at four estimated SO phases, compared with a sham condition. Across the 3-h retention interval at the beginning of the light phase, closed-loop stimulation failed to improve retention significantly over sham. However, retention during SO Up-state stimulation was significantly higher than for another SO phase. At all SO phases, acoustic stimulation was accompanied by a sharp increase in ripple activity, followed by an approximately second-long suppression of hippocampal sharp-wave ripples and a longer maintained suppression of thalamocortical spindle activity. Importantly, the dynamics of SO-coupled hippocampal ripple activity distinguished SO Up-state stimulation. Non-rapid eye movement (NREM) sleep was not impacted by stimulation, yet pre-REM sleep duration was affected. The results reveal the complex effect of stimulation on brain dynamics and support the use of closed-loop acoustic stimulation in mice to investigate the inter-regional mechanisms underlying memory consolidation.
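An offline, analysis-style sketch of how SO-phase-targeted trigger times could be estimated from a recorded signal, using SciPy. The band edges, phase tolerance, refractory period, and phase convention are assumptions; an actual closed-loop system must predict the phase causally in real time rather than filter the whole trace with filtfilt.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def so_phase_trigger_times(lfp, fs, target_phase=0.0, tol=0.2,
                           band=(0.5, 1.5), refractory=2.0):
    """Band-pass the signal in the slow-oscillation range, estimate instantaneous
    phase with the Hilbert transform, and mark samples whose phase falls within
    `tol` rad of the target phase, spaced by at least `refractory` seconds."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    so = filtfilt(b, a, lfp)                       # SO-band component
    phase = np.angle(hilbert(so))                  # instantaneous phase in [-pi, pi]
    dphi = np.angle(np.exp(1j * (phase - target_phase)))   # wrapped phase difference
    hits = np.where(np.abs(dphi) < tol)[0]
    triggers, last = [], -np.inf
    for i in hits:                                 # enforce the refractory period
        if (i - last) / fs >= refractory:
            triggers.append(i / fs)                # trigger time in seconds
            last = i
    return np.array(triggers)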
-
Artificial neural networks are known to suffer from catastrophic forgetting: when learning multiple tasks sequentially, they perform well on the most recent task at the expense of previously learned tasks. In the brain, sleep is known to play an important role in incremental learning by replaying recent and old conflicting memory traces. Here we tested the hypothesis that implementing a sleep-like phase in artificial neural networks can protect old memories during new training and alleviate catastrophic forgetting. Sleep was implemented as offline training with local unsupervised Hebbian plasticity rules and noisy input. In an incremental learning framework, sleep was able to recover old tasks that were otherwise forgotten. Previously learned memories were replayed spontaneously during sleep, forming unique representations for each class of inputs. Representational sparseness and neuronal activity corresponding to the old tasks increased, while activity related to the new task decreased. The study suggests that spontaneous replay simulating sleep-like dynamics can alleviate catastrophic forgetting in artificial neural networks.
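A minimal NumPy sketch of such a sleep-like phase for a small feed-forward network, assuming the weights are given as a list of (n_out, n_in) matrices and that the input layer is driven by noise matched to training-data statistics. The binary activation rule, sparsity level, and learning rates are illustrative assumptions rather than the exact published procedure.

import numpy as np

def sleep_phase(weights, input_stats, n_steps=1000, scale=1.2,
                lr_up=1e-4, lr_down=5e-5, seed=None):
    """Noise-driven sleep phase for a feed-forward network given as a list of
    (n_out, n_in) weight matrices. Units are treated as binary (spike-like);
    weights are changed with a local Hebbian rule only, with no labels used."""
    rng = np.random.default_rng(seed)
    W = [w.copy() for w in weights]
    mean, std = input_stats                    # per-input mean/std of the training data
    for _ in range(n_steps):
        pre = (rng.normal(mean, std) > mean).astype(float)            # noisy binary input
        for li, w in enumerate(W):
            drive = scale * (w @ pre)
            post = (drive > np.percentile(drive, 90)).astype(float)   # sparse activity
            # potentiate co-active pairs, depress active-post / silent-pre pairs
            W[li] = w + lr_up * np.outer(post, pre) - lr_down * np.outer(post, 1.0 - pre)
            pre = post
    return W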
-
Differential thalamocortical interactions in slow and fast spindle generation: A computational model
Cymbalyuk, Gennady S. (Ed.)
Cortical slow oscillations (SOs) and thalamocortical sleep spindles are two prominent EEG rhythms of slow wave sleep. These EEG rhythms play an essential role in memory consolidation. In humans, sleep spindles are categorized into slow spindles (8–12 Hz) and fast spindles (12–16 Hz), with different properties. Slow spindles, which couple with the up-to-down phase of the SO, require more experimental and computational investigation to disclose their origin, functional relevance and, most importantly, their relation with SOs regarding memory consolidation. To examine slow spindles, we propose a biophysical thalamocortical model with two independent thalamic networks (one for slow and the other for fast spindles). Our modeling results show that fast spindles lead to faster cortical cell firing and subsequently increase the amplitude of the cortical local field potential (LFP) during the SO down-to-up phase. Slow spindles also facilitate cortical cell firing, but the response is slower, thereby increasing the cortical LFP amplitude later, at the up-to-down phase of the SO cycle. Neither the SO rhythm nor the duration of the SO down state is affected by slow spindle activity. Furthermore, at a more hyperpolarized membrane potential level of fast thalamic subnetwork cells, the activity of fast spindles decreases, while slow spindle activity increases. Together, our model results suggest that slow spindles may facilitate the initiation of the following SO cycle, without, however, affecting the expression of the SO Up and Down states.
-
Bush, Daniel (Ed.)
Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously, and typically learns best when new training is interleaved with periods of sleep for memory consolidation. Here we used a spiking network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. The network could be trained to learn a complex foraging task but exhibited catastrophic forgetting when trained sequentially on different tasks. In synaptic weight space, new task training moved the synaptic weight configuration away from the manifold representing the old task, leading to forgetting. Interleaving new task training with periods of offline reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network's synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing the old and new tasks. The study reveals a possible strategy of synaptic weight dynamics that the brain may apply during sleep to prevent forgetting and optimize learning.
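A schematic sketch of the interleaved schedule described above, with `train_block` and `sleep_phase` standing in for the user's own task-training step and unsupervised, noise-driven replay phase; the number of blocks per task is an assumption. Interleaving sleep after each short block, rather than only after a task has fully converged, is what keeps the weight trajectory close to the intersection of the old- and new-task manifolds.

def train_with_interleaved_sleep(net, tasks, train_block, sleep_phase,
                                 blocks_per_task=10):
    """Alternate short blocks of new-task training with offline sleep phases,
    so the synaptic weights stay near the manifold of previously learned tasks
    while still moving toward the new task. `train_block(net, task)` performs a
    brief training episode; `sleep_phase(net)` runs unsupervised Hebbian replay
    driven by noisy input."""
    for task in tasks:
        for _ in range(blocks_per_task):
            train_block(net, task)   # brief training on the current (new) task
            sleep_phase(net)         # replay of old and new activity patterns
    return net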