Search for: All records

Creators/Authors contains: "Krishnan, Giri P."


  1. Abstract: Artificial neural networks are known to suffer from catastrophic forgetting: when learning multiple tasks sequentially, they perform well on the most recent task at the expense of previously learned tasks. In the brain, sleep is known to play an important role in incremental learning by replaying recent and old conflicting memory traces. Here we tested the hypothesis that implementing a sleep-like phase in artificial neural networks can protect old memories during new training and alleviate catastrophic forgetting. Sleep was implemented as offline training with local unsupervised Hebbian plasticity rules and noisy input. In an incremental learning framework, sleep was able to recover old tasks that were otherwise forgotten. Previously learned memories were replayed spontaneously during sleep, forming unique representations for each class of inputs. Representational sparseness and neuronal activity corresponding to the old tasks increased, while new task-related activity decreased. The study suggests that spontaneous replay simulating sleep-like dynamics can alleviate catastrophic forgetting in artificial neural networks. (A minimal sketch of such a sleep-like phase is given after this listing.)
  2. Abstract: Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated into deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks. (A generic replay-buffer sketch is given after this listing.)
  3. Central and autonomic nervous system activities are coupled during sleep. Cortical slow oscillations (SOs; <1 Hz) coincide with brief bursts in heart rate (HR), but the functional consequence of this coupling for cognition remains elusive. We measured SO–HR temporal coupling (i.e., the peak-to-peak interval between the downstate of an SO event and the HR burst) during a daytime nap and asked whether this SO–HR timing measure was associated with temporal processing speed and learning on a texture discrimination task, testing participants before and after the nap. The coherence of SO–HR events during sleep strongly correlated with an individual's temporal processing speed in the morning and evening test sessions, but not with the change in performance after the nap (i.e., consolidation). We confirmed this result in two additional experimental visits and also discovered that the association was visit-specific, indicating a state (not trait) marker. Thus, we introduce a novel physiological index that may be a useful marker of an individual's state-dependent processing speed. (A sketch of one way to compute such SO-to-HR latencies is given after this listing.)
  4. Battaglia, Francesco P. (Ed.)
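
Record 1 describes sleep as an offline phase driven by noisy input and local unsupervised Hebbian plasticity. The snippet below is a minimal sketch of that idea for a single weight matrix; the network size, threshold, learning rate, and the exact form of the plasticity rule are illustrative assumptions, not the architecture or settings used in the paper.

```python
# Minimal sketch of a sleep-like offline phase for one layer of weights.
# All shapes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 784, 256
W = rng.normal(0.0, 0.1, size=(n_hid, n_in))  # weights after "awake" training


def sleep_phase(W, n_steps=1000, lr=1e-3, noise_level=0.5):
    """Offline phase: drive the layer with noise and apply a local Hebbian rule."""
    W = W.copy()
    for _ in range(n_steps):
        x = (rng.random(n_in) < noise_level).astype(float)  # noisy binary input
        h = np.heaviside(W @ x - 1.0, 0.0)                   # thresholded hidden activity
        # Local update: strengthen co-active pairs, weakly depress the rest
        W += lr * (np.outer(h, x) - 0.01 * np.outer(h, 1.0 - x))
        W = np.clip(W, -1.0, 1.0)
    return W


W_after_sleep = sleep_phase(W)
```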
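Record 2 surveys replay mechanisms used in deep learning to avoid catastrophic forgetting. The sketch below shows one common generic variant, interleaving stored samples from earlier tasks with new-task batches through a small replay buffer; the buffer size, mixing ratio, and the `train_step` stub are assumptions for illustration and are not an algorithm from the letter itself.

```python
# Generic experience-replay sketch: mix stored old-task samples into each new-task batch.
import random
from collections import deque

replay_buffer = deque(maxlen=10_000)  # holds (input, label) pairs from earlier tasks


def train_step(batch):
    """Placeholder for one gradient update on the model (not implemented here)."""
    pass


def train_on_new_task(new_task_data, replay_ratio=0.5, batch_size=32):
    """new_task_data: a list of (input, label) pairs for the current task."""
    for i in range(0, len(new_task_data), batch_size):
        batch = list(new_task_data[i:i + batch_size])
        # Interleave replayed samples from previous tasks, if any are stored
        n_replay = int(len(batch) * replay_ratio)
        if replay_buffer and n_replay > 0:
            batch += random.sample(list(replay_buffer), min(n_replay, len(replay_buffer)))
        random.shuffle(batch)
        train_step(batch)
    # After finishing the task, keep a subset of its samples for future replay
    replay_buffer.extend(random.sample(new_task_data, min(1000, len(new_task_data))))
```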
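Record 3 defines SO–HR coupling as the peak-to-peak interval between each slow-oscillation downstate and the accompanying heart-rate burst. The sketch below shows one plausible way to compute such latencies from two lists of event timestamps; the 5 s pairing window and the mean as a summary statistic are assumptions, not the authors' published analysis pipeline, and the example timestamps are made up for illustration.

```python
# Sketch: latency from each slow-oscillation (SO) downstate to the nearest following
# heart-rate (HR) burst, computed from event timestamps in seconds.
import numpy as np


def so_hr_coupling(so_downstate_times, hr_burst_times, max_lag_s=5.0):
    """Return per-event SO-to-HR latencies (seconds) and their mean."""
    so = np.asarray(so_downstate_times, dtype=float)
    hr = np.sort(np.asarray(hr_burst_times, dtype=float))
    lags = []
    for t in so:
        idx = np.searchsorted(hr, t)  # first HR burst at or after this downstate
        if idx < len(hr) and hr[idx] - t <= max_lag_s:
            lags.append(hr[idx] - t)
    lags = np.asarray(lags)
    return lags, (lags.mean() if lags.size else np.nan)


# Usage with made-up example timestamps (seconds from nap onset)
lags, mean_lag = so_hr_coupling([10.0, 42.5, 97.3], [11.2, 44.0, 120.0])
```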