A new housing development in a familiar neighborhood, a wrong turn that ends up lengthening a Sunday stroll: our internal representation of the world requires constant updating, and we need to be able to associate events separated by long intervals of time to fine-tune future outcomes. This often requires neural connections to be altered. A brain region known as the hippocampus is involved in building and maintaining a map of our environment. Within this region, signals from other brain areas can activate silent neurons when the body is in a specific location by triggering cellular events called dendritic calcium spikes. Milstein et al. explored whether dendritic calcium spikes in the hippocampus could also help the brain to update its map of the world by enabling neurons to stop being active at one location and to start responding at a new position. Experiments in mice showed that calcium spikes could change which features of the environment individual neurons respond to by strengthening or weakening connections between specific cells. Crucially, this mechanism allowed neurons to associate sequences of events that unfold over longer timescales, closer to those encountered in day-to-day life. Milstein et al. then built a computational model demonstrating that dendritic calcium spikes in the hippocampus could enable the brain to make better spatial decisions in the future. Indeed, these spikes are driven by inputs from brain regions involved in complex cognitive processes, potentially enabling the delayed outcomes of navigational choices to guide changes in the activity and wiring of neurons. Overall, the work by Milstein et al. advances the understanding of learning and memory in the brain and may inform the design of better systems for artificial learning.
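For readers who want a concrete picture, the sketch below illustrates the general idea of a plasticity rule gated by a dendritic calcium spike: presynaptic activity leaves a slowly decaying "eligibility trace", and a later calcium spike converts whatever is still eligible into stronger connections, linking events separated by seconds. This is a minimal toy model written for this summary, not the model fitted by Milstein et al.; the time constants, input statistics, and variable names are all illustrative assumptions (and the real rule is more symmetric, also strengthening inputs arriving shortly after the spike).

```python
# Toy sketch (not the fitted model from the paper): a dendritic calcium spike
# ("plateau") converts seconds-long eligibility traces left by presynaptic
# inputs into weight changes, associating events separated by seconds.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                  # simulation step (s), illustrative
tau_elig = 1.5             # hypothetical eligibility-trace time constant (s)
eta = 0.5                  # learning rate, illustrative

n_inputs, n_steps = 50, 1000
spikes = rng.random((n_steps, n_inputs)) < 0.02   # toy presynaptic activity
plateau = np.zeros(n_steps)
plateau[600] = 1.0         # one dendritic calcium spike at t = 6 s

w = np.zeros(n_inputs)     # synaptic weights
elig = np.zeros(n_inputs)  # per-synapse eligibility traces
for t in range(n_steps):
    # presynaptic spikes leave a slowly decaying eligibility trace
    elig += spikes[t] - dt * elig / tau_elig
    # the plateau converts whatever is currently eligible into potentiation,
    # so inputs active seconds *before* the spike are still strengthened
    w += eta * plateau[t] * elig

print("strengthened inputs:", np.flatnonzero(w > w.mean() + w.std()))
```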
Neural learning rules for generating flexible predictions and computing the successor representation
Memories are an important part of how we think, understand the world around us, and plan out future actions. In the brain, memories are thought to be stored in a region called the hippocampus. When memories are formed, neurons store events that occur around the same time together. This might explain why, in the brains of animals, the activity associated with retrieving memories is often not just a snapshot of what happened at a specific moment; it can also include information about what the animal might experience next. This has a clear utility if animals use memories to predict what they might experience next and to plan out future actions. Mathematically, this notion of predictiveness can be summarized by an algorithm known as the successor representation. This algorithm describes what the activity of neurons in the hippocampus looks like when retrieving memories and making predictions based on them. However, even though the successor representation can computationally reproduce the activity seen in the hippocampus when it is making predictions, it is unclear what biological mechanisms underpin this computation in the brain. Fang et al. approached this problem by trying to build a model that could generate the same activity patterns computed by the successor representation using only biological mechanisms known to exist in the hippocampus. First, they used computational methods to design a network of neurons that had the biological properties of neural networks in the hippocampus. They then used the network to simulate neural activity. The results show that the activity of the network they designed exactly matched the successor representation. Additionally, the simulated activity of the network fitted experimental observations of hippocampal activity in tufted titmice. One advantage of the network designed by Fang et al. is that it can generate predictions in flexible ways. That is, it can make both short- and long-term predictions from what an individual is experiencing at the moment. This flexibility means that the network can be used to simulate how the hippocampus learns in a variety of cognitive tasks. Moreover, the network is robust to different conditions. Given that the brain has to be able to store memories in many different situations, this is a promising indication that this network may be a reasonable model of how the brain learns. The results of Fang et al. lay the groundwork for connecting biological mechanisms in the hippocampus at the cellular level to cognitive effects, an essential step to understanding the hippocampus as well as its role in health and disease. For instance, their network may provide a concrete approach to studying how disruptions to the ways neurons make and break connections can impair memory formation. More generally, better models of the biological mechanisms involved in making computations in the hippocampus can help scientists better understand and test out theories about how memories are formed and stored in the brain.
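As a point of reference, the successor representation itself has a standard algorithmic form: a matrix whose rows estimate the discounted number of future visits to each state, learnable online with a temporal-difference update. The toy sketch below implements only this textbook algorithm on a ring of states; it does not attempt to reproduce the biologically plausible network of Fang et al., and the state space, discount factor, and learning rate are illustrative assumptions.

```python
# Minimal tabular successor representation (SR): M[s, s'] estimates the
# expected discounted count of future visits to s' when starting from s,
# learned online with a temporal-difference (TD) update.
import numpy as np

n_states, gamma, alpha = 8, 0.9, 0.1   # illustrative choices
M = np.eye(n_states)                   # SR initialised to the identity

rng = np.random.default_rng(1)
s = 0
for _ in range(20000):
    # experience: a random walk on a ring of 8 states
    s_next = (s + rng.choice([-1, 1])) % n_states
    # TD update: move M[s] toward one-hot(s) + gamma * M[s_next]
    target = np.eye(n_states)[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])
    s = s_next

# each row now predicts discounted future occupancy; nearby states score high
print(np.round(M[0], 2))
```

Changing `gamma` trades off short- against long-range prediction, which is one way to think about the "flexible" short- and long-term predictions described above.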
- Award ID(s): 1707398
- PAR ID: 10432381
- Date Published:
- Journal Name: eLife
- Volume: 12
- ISSN: 2050-084X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Sleep has many roles, from strengthening new memories to regulating mood and appetite. While we might instinctively think of sleep as a uniform state of reduced brain activity, the reality is more complex. First, over the course of the night, we cycle between a number of different sleep stages, which reflect different levels of sleep depth. Second, sleep depth is not necessarily uniform across the brain but can vary between regions. These sleep stages consist of either rapid eye movement (REM) sleep or non-REM (NREM) sleep. REM sleep is when most dreaming occurs, whereas NREM sleep is particularly important for learning and memory and can vary in duration and depth. During NREM sleep, large groups of neurons synchronize their firing to create rhythmic waves of activity known as slow waves. The more synchronous the activity, the deeper the sleep. Vaidyanathan et al. now show that brain cells called astrocytes help regulate NREM sleep. Astrocytes are not neurons but belong to a group of specialized cells called glia. They are the largest glial cell type in the brain and display an array of proteins on their surfaces called G-protein-coupled receptors (GPCRs). These enable them to sense sleep-wake signals from other parts of the brain and to generate their own signals. In fact, each astrocyte can communicate with thousands of neurons at once. They are therefore well poised to coordinate brain activity during NREM sleep. Using innovative tools, Vaidyanathan et al. visualized astrocyte activity in mice as the animals woke up or fell asleep. The results showed that astrocytes change their activity just before each sleep–wake transition. They also revealed that astrocytes control both the depth and duration of NREM sleep via two different types of GPCR signals. Increasing one of these signals (Gi-GPCR) made the mice sleep more deeply but did not change sleep duration. Decreasing the other (Gq-GPCR) made the mice sleep for longer but did not affect sleep depth. Sleep problems affect many people at some point in their lives, and often co-exist with other conditions such as mental health disorders. Understanding how the brain regulates different features of sleep could help us develop better – and perhaps more specific – treatments for sleep disorders. The current study suggests that manipulating GPCRs on astrocytes might increase sleep depth, for example. But before work to test this idea can begin, we must first determine whether findings from sleeping mice also apply to people.
The brain processes memories as we sleep, generating rhythms of electrical activity called ‘sleep spindles’. Sleep spindles were long thought to be a state where the entire brain was fully synchronized by this rhythm. This was based on EEG recordings, short for electroencephalogram, a technique that uses electrodes on the scalp to measure electrical activity in the outermost layer of the brain, the cortex. But more recent intracranial recordings of people undergoing brain surgery have challenged this idea, suggesting that sleep spindles may not be a state of global brain synchronization but may instead be localised to specific areas. Mofrad et al. sought to clarify the extent to which spindles co-occur at multiple sites in the brain, which could shed light on how networks of neurons coordinate memory storage during sleep. To analyse highly variable brain wave recordings, Mofrad et al. adapted deep learning algorithms initially developed for detecting earthquakes and gravitational waves. The resulting algorithm, designed to detect spindles more sensitively amongst other brain activity, was then applied to a range of sleep recordings from humans and macaque monkeys. The analyses revealed that widespread and complex patterns of spindle rhythms, spanning multiple areas in the cortex of the brain, actually appear much more frequently than previously thought. This finding was consistent across all the recordings analysed, even recordings under the skull, which provide the clearest window into brain circuits. Further analyses found that these multi-area spindles occurred more often in sleep after people had completed tasks that required holding many visual scenes in memory, as opposed to control conditions with fewer visual scenes. In summary, Mofrad et al. show that neuroscientists had not previously appreciated the complex and dynamic patterns in this sleep rhythm. These patterns in sleep spindles may be able to adapt based on the demands needed for memory storage, and this will be the subject of future work. Moreover, the findings support the idea that sleep spindles help coordinate the consolidation of memories in brain circuits that stretch across the cortex. Understanding this mechanism may provide insights into how memory falters in aging and sleep-related diseases, such as Alzheimer’s disease. Lastly, the algorithm developed by Mofrad et al. stands to be a useful tool for analysing other rhythmic waveforms in noisy recordings.
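To make the co-occurrence question concrete, the toy sketch below assumes spindles have already been detected on each recording channel (binary detections per time bin, here simulated at random) and simply counts how often detections overlap across several channels at once. It is not the deep learning detector used by Mofrad et al.; the channel count, time resolution, and "multi-area" threshold are illustrative assumptions.

```python
# Toy illustration of the co-occurrence question (not the paper's detector):
# given binary spindle detections per channel, measure how often a spindle in
# one area overlaps spindles detected simultaneously in other areas.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_bins = 16, 5000                    # hypothetical recording grid
det = rng.random((n_channels, n_bins)) < 0.03    # stand-in detections

overlap = det.sum(axis=0)        # number of channels with a spindle per bin
multi_area = overlap >= 3        # call a bin "multi-area" at 3+ channels
frac = multi_area[overlap > 0].mean()
print(f"fraction of spindle time that is multi-area: {frac:.2f}")
```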
A major goal in neuroscience is to understand the relationship between an animal’s behavior and how this is encoded in the brain. Therefore, a typical experiment involves training an animal to perform a task and recording the activity of its neurons – brain cells – while the animal carries out the task. To complement these experimental results, researchers “train” artificial neural networks – simplified mathematical models of the brain that consist of simple neuron-like units – to simulate the same tasks on a computer. Unlike real brains, artificial neural networks provide complete access to the “neural circuits” responsible for a behavior, offering a way to study and manipulate the behavior in the circuit. One open issue about this approach has been the way in which the artificial networks are trained. In a process known as reinforcement learning, animals learn from rewards (such as juice) that they receive when they choose actions that lead to the successful completion of a task. By contrast, the artificial networks are explicitly told the correct action. In addition to differing from how animals learn, this limits the types of behavior that can be studied using artificial neural networks. Recent advances in the field of machine learning that combine reinforcement learning with artificial neural networks have now allowed Song et al. to train artificial networks to perform tasks in a way that mimics the way that animals learn. The networks consisted of two parts: a “decision network” that uses sensory information to select actions that lead to the greatest reward, and a “value network” that predicts how rewarding an action will be. Song et al. found that the resulting artificial “brain activity” closely resembled the activity found in the brains of animals, confirming that this method of training artificial neural networks may be a useful tool for neuroscientists who study the relationship between brains and behavior. The training method explored by Song et al. represents only one step forward in developing artificial neural networks that resemble the real brain. In particular, neural networks modify connections between units in a vastly different way from the methods used by biological brains to alter the connections between neurons. Future work will be needed to bridge this gap.
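The division of labor described above, a decision network choosing actions and a value network predicting reward, is the core of what machine learning calls an actor-critic architecture. The sketch below strips that idea to its simplest form, a two-armed bandit learned from reward alone; it is not the recurrent networks trained by Song et al., and the task, learning rates, and reward probabilities are illustrative assumptions.

```python
# Minimal actor-critic sketch: a "decision" policy learns from scalar reward
# only, using a learned "value" baseline, on a toy two-armed bandit.
import numpy as np

rng = np.random.default_rng(3)
logits = np.zeros(2)              # decision side: preferences over 2 actions
value = 0.0                       # value side: predicted reward (baseline)
lr_pi, lr_v = 0.1, 0.1            # illustrative learning rates
p_reward = np.array([0.2, 0.8])   # hypothetical payoffs; arm 1 is better

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(2, p=probs)                  # sample an action
    r = float(rng.random() < p_reward[a])       # reward from the environment
    advantage = r - value                       # critic's prediction error
    # actor: raise log-probability of actions that beat the baseline
    # (gradient of log softmax is one-hot(a) - probs)
    grad = -probs
    grad[a] += 1.0
    logits += lr_pi * advantage * grad
    # critic: move the value estimate toward the observed reward
    value += lr_v * (r - value)

print("learned action probabilities:",
      np.round(np.exp(logits) / np.exp(logits).sum(), 2))
```

After training, the policy concentrates on the higher-paying arm even though it was never told the correct action, only how rewarding its own choices turned out to be.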
Abstract: Daily experience suggests that we perceive distances near us linearly. However, the actual geometry of spatial representation in the brain is unknown. Here we report that neurons in the CA1 region of rat hippocampus that mediate spatial perception represent space according to a non-linear hyperbolic geometry. This geometry uses an exponential scale and yields greater positional information than a linear scale. We found that the size of the representation matches the optimal predictions for the number of CA1 neurons. The representations also dynamically expanded proportional to the logarithm of time that the animal spent exploring the environment, in correspondence with the maximal mutual information that can be received. The dynamic changes tracked even small variations due to changes in the running speed of the animal. These results demonstrate how neural circuits achieve efficient representations using dynamic hyperbolic geometry.
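To give a feel for what a hyperbolic, exponential scale means, the sketch below computes distances in the Poincaré disk, a standard model of hyperbolic geometry: equal increments of hyperbolic distance correspond to exponentially shrinking Euclidean steps toward the boundary, so resolution is concentrated near the origin. This is a generic geometric illustration, not the analysis performed in the paper.

```python
# Generic illustration of hyperbolic geometry (not the paper's analysis):
# distances in the Poincare disk grow without bound as points approach the
# unit circle, giving an exponential scale with fine resolution nearby.
import numpy as np

def poincare_dist(x, y):
    """Hyperbolic distance between two points inside the unit disk."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    num = 2 * np.sum((x - y) ** 2)
    den = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + num / den)

origin = np.zeros(2)
for r in [0.1, 0.5, 0.9, 0.99]:
    d = poincare_dist(origin, np.array([r, 0.0]))
    print(f"Euclidean radius {r:4}: hyperbolic distance {d:.2f}")
```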