

This content will become publicly available on August 27, 2024

Title: Recurrent neural circuits overcome partial inactivation by compensation and re-learning

Technical advances in artificial manipulation of neural activity have precipitated a surge in studying the causal contribution of brain circuits to cognition and behavior. However, complexities of neural circuits challenge interpretation of experimental results, necessitating new theoretical frameworks for reasoning about causal effects. Here, we take a step in this direction, through the lens of recurrent neural networks trained to perform perceptual decisions. We show that understanding the dynamical system structure that underlies network solutions provides a precise account for the magnitude of behavioral effects due to perturbations. Our framework explains past empirical observations by clarifying the most sensitive features of behavior, and how complex circuits compensate and adapt to perturbations. In the process, we also identify strategies that can improve the interpretability of inactivation experiments.

Significance Statement: Neuroscientists rely heavily on artificial perturbation of neural activity to understand the function of brain circuits. Current interpretations of experimental results often follow a simple logic: the magnitude of a behavioral effect following a perturbation indicates the degree of involvement of the perturbed circuit in the behavior. We model a variety of neural networks with controlled levels of complexity, robustness, and plasticity, showing that perturbation experiments can yield counterintuitive results when networks are complex enough to allow unperturbed pathways to compensate for the perturbed neurons, or plastic enough to allow continued learning from feedback during perturbations. To rein in these complexities, we develop a Functional Integrity Index that captures alterations in network computations and predicts the behavioral disruptions caused by a perturbation.
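The perturbation logic in the abstract above can be illustrated with a minimal, self-contained sketch. Everything here is an illustrative assumption rather than the paper's actual model: a random recurrent network stands in for a trained decision circuit, "inactivation" simply clamps a subset of units to zero, and the behavioral effect is read out as the change in a linear readout.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # hidden units
W = rng.normal(0, 1 / np.sqrt(N), (N, N))    # recurrent weights (stand-in for a trained net)
w_out = rng.normal(0, 1 / np.sqrt(N), N)     # linear readout

def run(mask, T=50):
    """Simulate the RNN with some units silenced (mask entries set to 0)."""
    x = np.zeros(N)
    for _ in range(T):
        x = np.tanh(W @ x + 0.1) * mask      # constant drive; silenced units stay at 0
    return w_out @ x

baseline = run(np.ones(N))
for frac in (0.1, 0.3, 0.5):
    mask = np.ones(N)
    mask[rng.choice(N, int(frac * N), replace=False)] = 0.0
    print(f"inactivating {frac:.0%} of units: |delta output| = {abs(run(mask) - baseline):.3f}")
```

The interesting point the paper makes is precisely that this naive readout (larger inactivated fraction implies larger behavioral effect) can fail once compensation or re-learning enters the picture.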

 
NSF-PAR ID:
10492787
Author(s) / Creator(s):
; ;
Publisher / Repository:
DOI PREFIX: 10.1523
Date Published:
Journal Name:
The Journal of Neuroscience
ISSN:
0270-6474
Format(s):
Medium: X Size: Article No. e1635232024
Sponsoring Org:
National Science Foundation
More Like this
  1. INTRODUCTION A brainwide, synaptic-resolution connectivity map—a connectome—is essential for understanding how the brain generates behavior. However, because of technological constraints, imaging entire brains with electron microscopy (EM) and reconstructing circuits from such datasets have been challenging. To date, complete connectomes have been mapped for only three organisms, each with several hundred brain neurons: the nematode C. elegans, the larva of the sea squirt Ciona intestinalis, and the larva of the marine annelid Platynereis dumerilii. Synapse-resolution circuit diagrams of larger brains, such as those of insects, fish, and mammals, have been approached by considering select subregions in isolation. However, neural computations span spatially dispersed but interconnected brain regions, and understanding any one computation requires the complete brain connectome with all its inputs and outputs. RATIONALE We therefore generated a connectome of an entire brain of a small insect, the larva of the fruit fly, Drosophila melanogaster. This animal displays a rich behavioral repertoire, including learning, value computation, and action selection, and shares homologous brain structures with adult Drosophila and larger insects. Powerful genetic tools are available for selective manipulation or recording of individual neuron types. In this tractable model system, hypotheses about the functional roles of specific neurons and circuit motifs revealed by the connectome can therefore be readily tested. RESULTS The complete synaptic-resolution connectome of the Drosophila larval brain comprises 3016 neurons and 548,000 synapses. We performed a detailed analysis of the brain circuit architecture, including connection and neuron types, network hubs, and circuit motifs. Most of the brain's in-out hubs (73%) were postsynaptic to the learning center or presynaptic to the dopaminergic neurons that drive learning. 
We used graph spectral embedding to hierarchically cluster neurons based on synaptic connectivity into 93 neuron types, which were internally consistent based on other features, such as morphology and function. We developed an algorithm to track brainwide signal propagation across polysynaptic pathways and analyzed feedforward (from sensory to output) and feedback pathways, multisensory integration, and cross-hemisphere interactions. We found extensive multisensory integration throughout the brain and multiple interconnected pathways of varying depths from sensory neurons to output neurons forming a distributed processing network. The brain had a highly recurrent architecture, with 41% of neurons receiving long-range recurrent input. However, recurrence was not evenly distributed and was especially high in areas implicated in learning and action selection. Dopaminergic neurons that drive learning are amongst the most recurrent neurons in the brain. Many contralateral neurons, which projected across brain hemispheres, were in-out hubs and synapsed onto each other, facilitating extensive interhemispheric communication. We also analyzed interactions between the brain and nerve cord. We found that descending neurons targeted a small fraction of premotor elements that could play important roles in switching between locomotor states. A subset of descending neurons targeted low-order post-sensory interneurons likely modulating sensory processing. CONCLUSION The complete brain connectome of the Drosophila larva will be a lasting reference study, providing a basis for a multitude of theoretical and experimental studies of brain function. The approach and computational tools generated in this study will facilitate the analysis of future connectomes. Although the details of brain organization differ across the animal kingdom, many circuit architectures are conserved. 
As more brain connectomes of other organisms are mapped in the future, comparisons between them will reveal both common, and therefore potentially optimal, circuit architectures and the idiosyncratic ones that underlie behavioral differences between organisms. Some of the architectural features observed in the Drosophila larval brain, including multilayer shortcuts and prominent nested recurrent loops, are found in state-of-the-art artificial neural networks, where they can compensate for a lack of network depth and support arbitrary, task-dependent computations. Such features could therefore increase the brain's computational capacity, overcoming physiological constraints on the number of neurons. Future analysis of similarities and differences between brains and artificial neural networks may help in understanding brain computational principles and perhaps inspire new machine learning architectures. The connectome of the Drosophila larval brain. The morphologies of all brain neurons, reconstructed from a synapse-resolution EM volume, and the synaptic connectivity matrix of an entire brain. This connectivity information was used to hierarchically cluster all brain neurons into 93 cell types, which were internally consistent based on morphology and known function. 
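The clustering step described above (graph spectral embedding followed by hierarchical clustering of the synaptic connectivity) can be sketched in a toy form. This is a hedged illustration, not the study's actual pipeline: the connectivity matrix is random, and the embedding dimension and cluster count are arbitrary stand-ins for the paper's 93 types.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n = 60
# Toy directed synapse-count matrix standing in for a real connectome
C = rng.poisson(0.2, (n, n)).astype(float)
A = C + C.T                                   # symmetrize for the embedding
d = A.sum(axis=1) + 1e-9                      # degrees (epsilon avoids divide-by-zero)
L_sym = np.eye(n) - A / np.sqrt(np.outer(d, d))  # symmetric normalized graph Laplacian
vals, vecs = np.linalg.eigh(L_sym)            # eigenvalues in ascending order
emb = vecs[:, 1:9]                            # low-dimensional spectral embedding

Z = linkage(emb, method="ward")               # hierarchical clustering of the embedding
types = fcluster(Z, t=8, criterion="maxclust")
print("neurons per putative type:", np.bincount(types)[1:])
```

The appeal of this approach for connectomics is that neurons are grouped purely by whom they connect to, so the resulting types can then be checked against independent features such as morphology, as the study does.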
  2. A major goal in neuroscience is to understand the relationship between an animal’s behavior and how that behavior is encoded in the brain. Therefore, a typical experiment involves training an animal to perform a task and recording the activity of its neurons – brain cells – while the animal carries out the task. To complement these experimental results, researchers “train” artificial neural networks – simplified mathematical models of the brain that consist of simple neuron-like units – to simulate the same tasks on a computer. Unlike real brains, artificial neural networks provide complete access to the “neural circuits” responsible for a behavior, offering a way to study and manipulate the behavior in the circuit. One open issue with this approach has been the way in which the artificial networks are trained. In a process known as reinforcement learning, animals learn from rewards (such as juice) that they receive when they choose actions that lead to the successful completion of a task. By contrast, the artificial networks are explicitly told the correct action. In addition to differing from how animals learn, this limits the types of behavior that can be studied using artificial neural networks. Recent advances in the field of machine learning that combine reinforcement learning with artificial neural networks have now allowed Song et al. to train artificial networks to perform tasks in a way that mimics the way that animals learn. The networks consisted of two parts: a “decision network” that uses sensory information to select actions that lead to the greatest reward, and a “value network” that predicts how rewarding an action will be. Song et al. found that the resulting artificial “brain activity” closely resembled the activity found in the brains of animals, confirming that this method of training artificial neural networks may be a useful tool for neuroscientists who study the relationship between brains and behavior. 
The training method explored by Song et al. represents only one step forward in developing artificial neural networks that resemble the real brain. In particular, artificial neural networks modify the connections between their units in a vastly different way from how biological brains alter the connections between neurons. Future work will be needed to bridge this gap. 
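The decision-network/value-network pairing described above is, in reinforcement-learning terms, an actor-critic architecture. A minimal sketch on a two-armed bandit makes the idea concrete; note that the task, learning rates, and reduction of each "network" to a simple parameter are illustrative assumptions here, not the model of Song et al.

```python
import numpy as np

rng = np.random.default_rng(0)
p_reward = [0.2, 0.8]          # hidden reward probabilities of the two actions
theta = np.zeros(2)            # "decision network" (actor): action preferences
v = 0.0                        # "value network" (critic): expected reward
alpha_pi, alpha_v = 0.1, 0.1   # learning rates

for _ in range(2000):
    pi = np.exp(theta) / np.exp(theta).sum()   # softmax policy over actions
    a = rng.choice(2, p=pi)                    # sample an action
    r = float(rng.random() < p_reward[a])      # stochastic reward
    delta = r - v                              # reward prediction error
    v += alpha_v * delta                       # critic update
    grad = -pi
    grad[a] += 1.0                             # gradient of log pi(a) w.r.t. theta
    theta += alpha_pi * delta * grad           # actor update

print(f"final policy: P(arm 0) = {pi[0]:.2f}, P(arm 1) = {pi[1]:.2f}")
```

The key property, mirrored in the lay summary, is that learning is driven only by reward feedback: nothing ever tells the actor which action was "correct."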
  3. Environmental stressors induce rapid physiological and behavioral shifts in vertebrate animals. However, the neurobiological mechanisms responsible for stress-induced changes in behavior are complex and not well understood. Similar to mammalian vertebrates, zebrafish adults display a preference for dark environments that is associated with predator avoidance, enhanced by stressors, and broadly used in assays for anxiety-like behavior. Although the larvae of zebrafish are a prominent model organism for understanding neural circuits, fewer studies have examined the effects of stressors on their behavior. This study examines the effects of noxious chemical and electric shock stressors on locomotion and light preference in zebrafish larvae. We found that both stressors elicited similar changes in behavior. Acute exposure induced increased swimming activity, while prolonged exposure depressed activity. Neither stressor produced a consistent shift in light/dark preference, but prolonged exposure to these stressors resulted in a pronounced decrease in exploration of different visual environments. We also examined the effects of exposure to a noxious chemical cue using whole-brain calcium imaging, and identified neural correlates in the area postrema, an area of the hindbrain containing noradrenergic and dopaminergic neurons. Pharmacological blockade experiments showed that α-adrenergic receptors contribute to the behavioral response to an acute stressor but are not necessary for the response to a prolonged stressor. These results indicate that zebrafish larvae have complex behavioral responses to stressors comparable to those of adult animals, and also suggest that these responses are mediated by similar neural pathways. 
  4. Lay Summary

    Parts of the brain can work together by synchronizing the activity of their neurons. We recorded the electrical activity of the brain in adolescents with autism spectrum disorder and then compared the recordings to those of their peers without the diagnosis. We found that in participants with autism, there were many very short periods of non-synchronized activity between the frontal and parietal parts of the brain. Mathematical models show that a brain system with this kind of activity is very sensitive to external events.

     
  5. Gutkin, Boris S. (Ed.)
    Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements, including the need to exhibit temporal scaling, generalize to novel contexts, or remain robust to noise. It is not known how neural circuits can encode time and satisfy distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained recurrent neural networks (RNNs) on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework, we quantified whether RNNs encoded two intervals using one of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise—and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons. 
We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time. 
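The distinction above between "scaling" and "absolute" timing strategies can be operationalized by comparing a unit's activity across a short and a long interval after rescaling time to a common axis. The following is a hedged sketch with synthetic activity profiles; the index and the Gaussian profiles are illustrative assumptions, not the paper's actual quantification framework.

```python
import numpy as np

t_short = np.linspace(0, 1, 100)        # time axis for the short interval
t_long = np.linspace(0, 2, 200)         # time axis for the long interval (2x duration)

def scaling_profile(t, T):
    """Activity peak that stretches with the interval duration T."""
    return np.exp(-((t / T - 0.5) ** 2) / 0.02)

def absolute_profile(t, T):
    """Activity peak at a fixed absolute time, regardless of T."""
    return np.exp(-((t - 0.5) ** 2) / 0.02)

def scaling_index(profile):
    """Correlate the two intervals' activity after rescaling time to [0, 1]."""
    short = profile(t_short, 1.0)
    long_on_short_grid = profile(t_long, 2.0)[::2]   # subsample long trial onto short grid
    return np.corrcoef(short, long_on_short_grid)[0, 1]

print(f"scaling neuron:  index = {scaling_index(scaling_profile):.2f}")
print(f"absolute neuron: index = {scaling_index(absolute_profile):.2f}")
```

A unit whose dynamics temporally scale yields an index near 1, because its rescaled profiles superimpose, while a unit firing at a fixed absolute latency does not.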