

Title: Neural learning rules for generating flexible predictions and computing the successor representation
Memories are an important part of how we think, understand the world around us, and plan out future actions. In the brain, memories are thought to be stored in a region called the hippocampus. When memories are formed, neurons store events that occur around the same time together. This might explain why, in the brains of animals, the activity associated with retrieving memories is often not just a snapshot of what happened at a specific moment: it can also include information about what the animal might experience next. This has a clear utility if animals use memories to predict what they might experience next and to plan future actions. Mathematically, this notion of predictiveness can be summarized by an algorithm known as the successor representation. This algorithm describes what the activity of neurons in the hippocampus looks like when retrieving memories and making predictions based on them. However, even though the successor representation can computationally reproduce the activity seen in the hippocampus when it is making predictions, it is unclear what biological mechanisms underpin this computation in the brain.

Fang et al. approached this problem by trying to build a model that could generate the same activity patterns computed by the successor representation using only biological mechanisms known to exist in the hippocampus. First, they used computational methods to design a network of neurons that had the biological properties of neural networks in the hippocampus. They then used the network to simulate neural activity. The results show that the activity of the network they designed was able to exactly match the successor representation. Additionally, the data resulting from the simulated activity in the network fitted experimental observations of hippocampal activity in Tufted Titmice.

One advantage of the network designed by Fang et al. is that it can generate predictions in flexible ways. That is, it can make both short- and long-term predictions from what an individual is experiencing at the moment. This flexibility means that the network can be used to simulate how the hippocampus learns in a variety of cognitive tasks. Additionally, the network is robust to different conditions. Given that the brain has to be able to store memories in many different situations, this is a promising indication that this network may be a reasonable model of how the brain learns.

The results of Fang et al. lay the groundwork for connecting biological mechanisms in the hippocampus at the cellular level to cognitive effects, an essential step to understanding the hippocampus, as well as its role in health and disease. For instance, their network may provide a concrete approach to studying how disruptions to the ways neurons make and break connections can impair memory formation. More generally, better models of the biological mechanisms involved in making computations in the hippocampus can help scientists better understand and test out theories about how memories are formed and stored in the brain.
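The successor representation itself is compact enough to write down. Below is a minimal sketch in Python; the five-state loop, the transition matrix `T`, and the discount factor `gamma` are illustrative choices, not quantities from the paper. It shows two faces of the same object: the closed-form matrix M = (I − γT)⁻¹, and an incremental temporal-difference estimate built from sampled transitions, which is closer in spirit to something a neural circuit could learn.

```python
import numpy as np

# Toy environment: a loop of 5 states visited one after another (illustrative only).
n_states = 5
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s + 1) % n_states] = 1.0     # deterministic step to the next state

gamma = 0.9  # discount: small values give short-range predictions, large values long-range ones

# Closed-form successor representation: M = sum_k gamma^k T^k = (I - gamma T)^(-1)
M_closed = np.linalg.inv(np.eye(n_states) - gamma * T)

# Incremental (temporal-difference) estimate of the same matrix from sampled transitions.
M_td = np.zeros((n_states, n_states))
alpha = 0.1                            # learning rate
s = 0
for _ in range(20000):
    s_next = np.random.choice(n_states, p=T[s])
    onehot = np.eye(n_states)[s]
    td_error = onehot + gamma * M_td[s_next] - M_td[s]   # prediction error for every successor
    M_td[s] += alpha * td_error
    s = s_next

print(np.allclose(M_closed, M_td, atol=0.05))            # the two estimates should roughly agree
```

Raising or lowering `gamma` trades off short-range against long-range prediction, which is the kind of flexibility the summary above refers to; the contribution of Fang et al. is a network that arrives at this quantity through biologically plausible learning rules rather than an explicit matrix inversion.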
Award ID(s): 1707398
NSF-PAR ID: 10432381
Journal Name: eLife
Volume: 12
ISSN: 2050-084X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    During spatial exploration, neural circuits in the hippocampus store memories of sequences of sensory events encountered in the environment. When sensory information is absent during ‘offline’ resting periods, brief neuronal population bursts can ‘replay’ sequences of activity that resemble bouts of sensory experience. These sequences can occur in either forward or reverse order, and can even include spatial trajectories that have not been experienced, but are consistent with the topology of the environment. The neural circuit mechanisms underlying this variable and flexible sequence generation are unknown. Here we demonstrate in a recurrent spiking network model of hippocampal area CA3 that experimental constraints on network dynamics such as population sparsity, stimulus selectivity, rhythmicity and spike rate adaptation, as well as associative synaptic connectivity, enable additional emergent properties, including variable offline memory replay. In an online stimulus‐driven state, we observed the emergence of neuronal sequences that swept from representations of past to future stimuli on the timescale of the theta rhythm. In an offline state driven only by noise, the network generated both forward and reverse neuronal sequences, and recapitulated the experimental observation that offline memory replay events tend to include salient locations like the site of a reward. These results demonstrate that biological constraints on the dynamics of recurrent neural circuits are sufficient to enable memories of sensory events stored in the strengths of synaptic connections to be flexibly read out during rest and sleep, which is thought to be important for memory consolidation and planning of future behaviour.

    Key points

    A recurrent spiking network model of hippocampal area CA3 was optimized to recapitulate experimentally observed network dynamics during simulated spatial exploration.

    During simulated offline rest, the network exhibited the emergent property of generating flexible forward, reverse and mixed direction memory replay events.

    Network perturbations and analysis of model diversity and degeneracy identified associative synaptic connectivity and key features of network dynamics as important for offline sequence generation.

    Network simulations demonstrate that population over‐representation of salient positions like the site of reward results in biased memory replay.
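
As a rough intuition for how associative weights plus adaptation can read out a stored sequence, here is a deliberately minimal winner-take-all caricature in Python, not the authors' optimized spiking model: units with neighbouring place fields share symmetric associative weights, an adaptation variable fatigues whichever unit just fired, and the most strongly driven unit fires next, so offline activity sweeps along the track and can turn around, echoing forward and reverse replay. All sizes and constants are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                    # units with place fields ordered along a track
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0       # symmetric "associative" weights between neighbours

adapt = np.zeros(N)                       # spike-rate-adaptation variable, one per unit
active = 10                               # seed an offline event in the middle of the track
trajectory = [active]

for step in range(15):
    adapt[active] += 1.0                                          # the firing unit fatigues
    drive = W[active] - adapt + 0.01 * rng.standard_normal(N)     # recurrent drive minus adaptation
    active = int(np.argmax(drive))                                # winner-take-all: next unit to fire
    adapt *= 0.8                                                  # adaptation slowly recovers
    trajectory.append(active)

print(trajectory)  # e.g. 10, 11, 12, ... sweeping outward, then doubling back near the track end
```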

     
  2. Sleep has many roles, from strengthening new memories to regulating mood and appetite. While we might instinctively think of sleep as a uniform state of reduced brain activity, the reality is more complex. First, over the course of the night, we cycle between a number of different sleep stages, which reflect different levels of sleep depth. Second, the amount of sleep depth is not necessarily even across the brain but can vary between regions. These sleep stages consist of either rapid eye movement (REM) sleep or non-REM (NREM) sleep. REM sleep is when most dreaming occurs, whereas NREM sleep is particularly important for learning and memory and can vary in duration and depth. During NREM sleep, large groups of neurons synchronize their firing to create rhythmic waves of activity known as slow waves. The more synchronous the activity, the deeper the sleep. Vaidyanathan et al. now show that brain cells called astrocytes help regulate NREM sleep. Astrocytes are not neurons but belong to a group of specialized cells called glia. They are the largest glial cell type in the brain and display an array of proteins on their surfaces called G-protein-coupled receptors (GPCRs). These enable them to sense sleep-wake signals from other parts of the brain and to generate their own signals. In fact, each astrocyte can communicate with thousands of neurons at once. They are therefore well-poised to coordinate brain activity during NREM sleep. Using innovative tools, Vaidyanathan et al. visualized astrocyte activity in mice as the animals woke up or fell asleep. The results showed that astrocytes change their activity just before each sleep–wake transition. They also revealed that astrocytes control both the depth and duration of NREM sleep via two different types of GPCR signals. Increasing one of these signals (Gi-GPCR) made the mice sleep more deeply but did not change sleep duration. Decreasing the other (Gq-GPCR) made the mice sleep for longer but did not affect sleep depth. Sleep problems affect many people at some point in their lives, and often co-exist with other conditions such as mental health disorders. Understanding how the brain regulates different features of sleep could help us develop better – and perhaps more specific – treatments for sleep disorders. The current study suggests that manipulating GPCRs on astrocytes might increase sleep depth, for example. But before work to test this idea can begin, we must first determine whether findings from sleeping mice also apply to people.
  3. A major goal in neuroscience is to understand the relationship between an animal’s behavior and how this is encoded in the brain. Therefore, a typical experiment involves training an animal to perform a task and recording the activity of its neurons – brain cells – while the animal carries out the task. To complement these experimental results, researchers “train” artificial neural networks – simplified mathematical models of the brain that consist of simple neuron-like units – to simulate the same tasks on a computer. Unlike real brains, artificial neural networks provide complete access to the “neural circuits” responsible for a behavior, offering a way to study and manipulate the behavior in the circuit. One open issue about this approach has been the way in which the artificial networks are trained. In a process known as reinforcement learning, animals learn from rewards (such as juice) that they receive when they choose actions that lead to the successful completion of a task. By contrast, the artificial networks are explicitly told the correct action. In addition to differing from how animals learn, this limits the types of behavior that can be studied using artificial neural networks. Recent advances in the field of machine learning that combine reinforcement learning with artificial neural networks have now allowed Song et al. to train artificial networks to perform tasks in a way that mimics the way that animals learn. The networks consisted of two parts: a “decision network” that uses sensory information to select actions that lead to the greatest reward, and a “value network” that predicts how rewarding an action will be. Song et al. found that the resulting artificial “brain activity” closely resembled the activity found in the brains of animals, confirming that this method of training artificial neural networks may be a useful tool for neuroscientists who study the relationship between brains and behavior. The training method explored by Song et al. represents only one step forward in developing artificial neural networks that resemble the real brain. In particular, neural networks modify connections between units in a way that is vastly different from how biological brains alter the connections between neurons. Future work will be needed to bridge this gap.
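
To make the two-network idea concrete, here is a minimal Python sketch of reinforcement learning with a value baseline on a hypothetical two-armed bandit task. It illustrates the general approach (REINFORCE with a learned baseline), not the recurrent architecture used in the study; the reward probabilities, learning rates and task are all invented for the example. The "decision network" is reduced to a pair of softmax logits and the "value network" to a single learned scalar.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 2
true_reward_prob = np.array([0.2, 0.8])   # hypothetical task: action 1 is usually rewarded

policy_logits = np.zeros(n_actions)       # "decision network" parameters (here just two logits)
value = 0.0                               # "value network" output (here a single baseline)
lr_policy, lr_value = 0.1, 0.1

for trial in range(2000):
    probs = np.exp(policy_logits - policy_logits.max())
    probs /= probs.sum()                              # softmax policy over actions
    action = rng.choice(n_actions, p=probs)
    reward = float(rng.random() < true_reward_prob[action])

    advantage = reward - value                        # did we do better than predicted?
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0                        # d log pi(action) / d logits for a softmax
    policy_logits += lr_policy * advantage * grad_log_pi
    value += lr_value * (reward - value)              # value estimate tracks expected reward

print(probs)  # the policy should come to favour the more rewarding action
```

The "value network" plays the role of the baseline here: its prediction error (the advantage) tells the decision part whether an action worked out better or worse than expected, which is what allows learning from rewards alone rather than from explicit correct answers.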
  4. A new housing development in a familiar neighborhood, a wrong turn that ends up lengthening a Sunday stroll: our internal representation of the world requires constant updating, and we need to be able to associate events separated by long intervals of time to fine-tune future outcomes. This often requires neural connections to be altered. A brain region known as the hippocampus is involved in building and maintaining a map of our environment. However, signals from other brain areas can activate silent neurons in the hippocampus when the body is in a specific location by triggering cellular events called dendritic calcium spikes. Milstein et al. explored whether dendritic calcium spikes in the hippocampus could also help the brain to update its map of the world by enabling neurons to stop being active at one location and to start responding at a new position. Experiments in mice showed that calcium spikes could change which features of the environment individual neurons respond to by strengthening or weakening connections between specific cells. Crucially, this mechanism allowed neurons to associate event sequences that unfold over longer timescales, closer to those encountered in day-to-day life. A computational model was then put together, demonstrating that dendritic calcium spikes in the hippocampus could enable the brain to make better spatial decisions in the future. Indeed, these spikes are driven by inputs from brain regions involved in complex cognitive processes, potentially enabling the delayed outcomes of navigational choices to guide changes in the activity and wiring of neurons. Overall, the work by Milstein et al. advances the understanding of learning and memory in the brain and may inform the design of better systems for artificial learning.
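
One way to picture a plasticity rule that bridges seconds rather than milliseconds is with slowly decaying eligibility traces gated by an instructive signal. The sketch below is a toy model in that spirit, not the authors' fitted rule: every presynaptic input leaves a seconds-long trace, the dendritic calcium spike ("plateau") leaves one too, and weights change only where a trace and the instructive event overlap. All time constants and amplitudes are made-up values for illustration.

```python
import numpy as np

dt = 0.01                        # 10 ms time step
steps = int(6.0 / dt)            # simulate 6 seconds of running along a track
n_inputs = 50

# Each input fires a brief burst at a different time, tiling the 6 s (like place cells).
input_times = np.linspace(0.5, 5.5, n_inputs)
plateau_time = 3.0               # one instructive dendritic calcium spike at t = 3 s

w = np.zeros(n_inputs)           # synaptic weights onto the modelled neuron
pre_trace = np.zeros(n_inputs)   # seconds-long eligibility trace of presynaptic activity
plateau_trace = 0.0              # seconds-long trace of the plateau itself
tau = 1.5                        # trace time constant in seconds (made-up value)
lr = 1.0

for step in range(steps):
    t = step * dt
    pre = (np.abs(t - input_times) < 0.05).astype(float)    # ~100 ms burst per input
    plateau = 1.0 if abs(t - plateau_time) < 0.05 else 0.0
    pre_trace += dt * (pre - pre_trace / tau)                # leaky trace of recent inputs
    plateau_trace += dt * (plateau - plateau_trace / tau)    # leaky trace of the plateau
    # Weights grow when a plateau meets the trace of earlier inputs,
    # or when inputs arrive while the trace of an earlier plateau is still decaying.
    w += lr * dt * (plateau * pre_trace + pre * plateau_trace)

# Inputs active a second or two before or after the plateau gain weight, even though
# they never fired at exactly the same moment as the instructive signal.
print(np.round(w, 4))
```

Because the traces last on the order of a second, inputs that fired well before or after the plateau still end up strengthened, which is what lets event sequences unfolding over behavioural timescales become associated.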
  5. Working memory, the brain’s ability to temporarily store and recall information, is a critical part of decision making – but it has its limits. The brain can only store so much information, for so long. Since decisions are not often acted on immediately, information held in working memory ‘degrades’ over time. However, it is unknown whether or not this degradation of information over time affects the accuracy of later decisions. The tactics that people use, knowingly or otherwise, to store information in working memory also remain unclear. Do people store pieces of information such as numbers, objects and particular details? Or do they tend to compute that information, make some preliminary judgement and recall their verdict later? Does the strategy chosen impact people’s decision-making? To investigate, Schapiro et al. devised a series of experiments to test whether the limitations of working memory, and how people store information, affect the accuracy of decisions they make. First, participants were shown an array of colored discs on a screen. Then, either immediately after seeing the discs or a few seconds later, the participants were asked to recall the position of one of the discs they had seen, or the average position of all the discs. This measured how much information degraded for a decision based on multiple items, and how much for a decision based on a single item. From this, the method of information storage used to make a decision could be inferred. Schapiro et al. found that the accuracy of people’s responses worsened over time, whether they remembered the position of each individual disc, or computed their average location before responding. The greater the delay between seeing the discs and reporting their location, the less accurate people’s responses tended to be. Similarly, the more discs a participant saw, the less accurate their response became. This suggests that, however people store information, decision-making suffers once working memory reaches capacity, and that stored information decays over time. Schapiro et al. also noticed that participants remembered location information in different ways depending on the task and how many discs they were shown at once. This suggests people adopt different strategies to retain information momentarily. In summary, these findings help to explain how people process and store information to make decisions and how the limitations of working memory impact their decision-making ability. A better understanding of how people use working memory to make decisions may also shed light on situations or brain conditions where decision-making is impaired.
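
As a back-of-the-envelope illustration of the reasoning above, the toy simulation below assumes a simple degradation law in which memory noise grows with both the delay and the number of discs held in mind, and compares reporting one probed disc with reporting the average position. The law, the numbers and the function name are assumptions made for the example, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def report_error(n_discs, delay_s, n_trials=20000, average=False):
    """Mean absolute report error under an assumed noisy-memory model."""
    true_pos = rng.uniform(0, 100, size=(n_trials, n_discs))     # disc positions (arbitrary units)
    noise_sd = 5.0 * np.sqrt(delay_s) * np.sqrt(n_discs)          # assumed degradation law
    remembered = true_pos + rng.normal(0, noise_sd, size=true_pos.shape)
    if average:
        # "compute then store": report the average of the remembered positions
        return np.mean(np.abs(remembered.mean(axis=1) - true_pos.mean(axis=1)))
    # "store then recall": report the position of one probed disc
    return np.mean(np.abs(remembered[:, 0] - true_pos[:, 0]))

for n in (2, 6):
    for delay in (0.5, 4.0):
        single = report_error(n, delay)
        mean_report = report_error(n, delay, average=True)
        print(n, delay, round(single, 1), round(mean_report, 1))
# Under this assumed law, error grows with delay and with the number of discs for both report types.
```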