Title: Neural reuse in multifunctional neural networks for control tasks
Living organisms perform multiple tasks, often using the same or shared neural networks. Such multifunctional neural networks are composed of neurons that contribute to different degrees to the different behaviors. In this work, we take a computational modeling approach to evaluate the extent to which neural resources are specialized or shared across different behaviors. To this end, we develop multifunctional feed-forward neural networks that are capable of performing three control tasks: inverted pendulum, cart-pole balancing, and single-legged walker. We then perform information lesions on individual neurons to determine their contribution to each task. Following that, we investigate the ability of two commonly used methods to estimate a neuron's contribution from its activity: neural variability and mutual information. Our study reveals the following: first, the same feed-forward neural network is capable of reusing its hidden-layer neurons to perform multiple behaviors; second, information lesions reveal that the same behaviors are performed with different levels of reuse in different neural networks; and finally, mutual information is a better estimator of a neuron's contribution to a task than neural variability.
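To make the lesioning step concrete, here is a minimal sketch, not the authors' exact protocol: an informational lesion is approximated by clamping one hidden unit's output to its task-average activity and recording the resulting drop in task fitness. The network shape and the task_fitness evaluator (a function that rolls out the controller on one task and returns a score) are hypothetical placeholders.

```python
import numpy as np

def forward(x, W1, b1, W2, b2, clamp=None):
    """One pass through a small feed-forward net; optionally clamp
    (lesion) a single hidden unit to a fixed value."""
    h = np.tanh(W1 @ x + b1)
    if clamp is not None:
        unit, value = clamp
        h[unit] = value              # informational lesion: freeze the unit
    return np.tanh(W2 @ h + b2)

def lesion_contributions(task_fitness, W1, b1, W2, b2, mean_h):
    """Score each hidden unit's contribution to one task as the drop in
    task fitness when that unit is clamped to its task-average activity."""
    intact = task_fitness(lambda x: forward(x, W1, b1, W2, b2))
    drops = []
    for i in range(len(b1)):
        lesioned = task_fitness(
            lambda x, c=(i, mean_h[i]): forward(x, W1, b1, W2, b2, clamp=c))
        drops.append(intact - lesioned)
    return np.array(drops)
```

Running this once per task gives a per-unit, per-task contribution profile, from which the degree of reuse across tasks can be read off.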
Award ID(s): 1845322
PAR ID: 10174172
Author(s) / Creator(s):
Date Published:
Journal Name: ALIFE 2020: The 2020 Conference on Artificial Life
Issue: 32
Page Range / eLocation ID: 210-218
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain's visual system. However, biological visual systems have two ubiquitous architectural features not shared with typical CNNs: local recurrence within cortical areas, and long-range feedback from downstream areas to upstream areas. Here we explored the role of recurrence in improving classification performance. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet task. In contrast, novel cells that incorporated two structural features, bypassing and gating, were able to boost task accuracy substantially. We extended these design principles in an automated search over thousands of model architectures, which identified novel local recurrent cells and long-range feedback connections useful for object recognition. Moreover, these task-optimized ConvRNNs matched the dynamics of neural activity in the primate visual system better than feedforward networks, suggesting a role for the brain's recurrent connections in performing difficult visual behaviors. 
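As a schematic of the two structural features this abstract highlights, the toy cell below combines a residual-style bypass path with a learned gate on the recurrent state. It is a non-convolutional reduction with made-up weight names (Wx, Wh, Wg, bg), not the actual ConvRNN cells found by the architecture search; x and h are assumed to have the same dimensionality so the bypass addition is well defined.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_bypass_cell(x, h, Wx, Wh, Wg, bg):
    """Schematic recurrent cell with the two features named in the text:
    - gating: a learned gate decides how much recurrent state to mix in;
    - bypass: the feed-forward input x skips straight to the output, so
      unrolling in time does not block the feed-forward signal."""
    g = sigmoid(Wg @ np.concatenate([x, h]) + bg)   # gate from input + state
    h_new = np.tanh(Wx @ x + g * (Wh @ h))          # gated recurrence
    out = x + h_new                                 # bypass (residual) path
    return out, h_new
```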
  2. A major goal in neuroscience is to understand the relationship between an animal’s behavior and how this is encoded in the brain. Therefore, a typical experiment involves training an animal to perform a task and recording the activity of its neurons – brain cells – while the animal carries out the task. To complement these experimental results, researchers “train” artificial neural networks – simplified mathematical models of the brain that consist of simple neuron-like units – to simulate the same tasks on a computer. Unlike real brains, artificial neural networks provide complete access to the “neural circuits” responsible for a behavior, offering a way to study and manipulate the behavior in the circuit. One open issue with this approach has been the way in which the artificial networks are trained. In a process known as reinforcement learning, animals learn from rewards (such as juice) that they receive when they choose actions that lead to the successful completion of a task. By contrast, the artificial networks are explicitly told the correct action. In addition to differing from how animals learn, this limits the types of behavior that can be studied using artificial neural networks. Recent advances in the field of machine learning that combine reinforcement learning with artificial neural networks have now allowed Song et al. to train artificial networks to perform tasks in a way that mimics the way that animals learn. The networks consisted of two parts: a “decision network” that uses sensory information to select actions that lead to the greatest reward, and a “value network” that predicts how rewarding an action will be. Song et al. found that the resulting artificial “brain activity” closely resembled the activity found in the brains of animals, confirming that this method of training artificial neural networks may be a useful tool for neuroscientists who study the relationship between brains and behavior. The training method explored by Song et al. represents only one step forward in developing artificial neural networks that resemble the real brain. In particular, neural networks modify connections between units in a vastly different way from how biological brains alter the connections between neurons. Future work will be needed to bridge this gap.
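The decision-network/value-network pairing described here corresponds to the familiar actor-critic scheme from reinforcement learning. A minimal single-step sketch (linear networks instead of the RNNs used in the work, with made-up sizes and learning rate) shows how the reward-prediction error from the value network drives updates to both parts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_act, alpha = 8, 4, 0.01
W_pi = rng.normal(0.0, 0.1, (n_act, n_obs))  # decision network (actor)
w_v = rng.normal(0.0, 0.1, n_obs)            # value network (critic)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def learn_step(obs, reward_fn):
    """One actor-critic update for a single-step task."""
    global W_pi, w_v
    p = softmax(W_pi @ obs)
    action = rng.choice(n_act, p=p)
    reward = reward_fn(obs, action)
    delta = reward - w_v @ obs                 # reward-prediction error
    w_v += alpha * delta * obs                 # critic: track expected reward
    grad_logp = (np.eye(n_act)[action] - p)[:, None] * obs[None, :]
    W_pi += alpha * delta * grad_logp          # actor: reinforced by delta
    return reward
```

Only the scalar reward is needed, which is what lets this style of training mimic learning from juice rewards rather than from explicitly supplied correct actions.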
  3. While the neural commonalities across subjects performing similar task-related behaviors have been examined previously, it is very difficult to ascertain the neural commonalities for spontaneous, task-unrelated behaviors such as grooming. As our ability to record high-dimensional naturalistic behavioral and corresponding neural data increases, we can now try to understand the relationship between different subjects performing spontaneous behaviors that occur only rarely. Here, we first apply novel machine learning techniques to behavioral video data from four head-fixed mice as they perform a self-initiated decision-making task while their neural activity is recorded using widefield calcium imaging. Across mice, we automatically identify spontaneous behaviors such as grooming and task-related behaviors such as lever pulls. Next, we explore the commonalities between the neural activity of different mice as they perform these tasks by transforming the neural activity into a common subspace, using Multidimensional Canonical Correlation Analysis (MCCA). Finally, we compare the commonalities across different trials in the same subject to those across subjects for different types of behaviors, and find that many recorded brain regions display high levels of correlation for spontaneous behaviors such as grooming. The combined behavioral and neural analysis methods in this paper provide an understanding of how similarly different animals perform innate behaviors.
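One common way to build such a shared subspace is a MAXVAR-style multiset CCA: whiten each subject's activity and take the principal directions of the concatenated whitened data. The sketch below is a stand-in for the MCCA used in the paper, not necessarily their exact algorithm, and assumes all subjects are recorded over the same time points.

```python
import numpy as np

def mcca_common_timecourses(datasets, n_components=3):
    """datasets: list of (time x neurons) arrays, one per subject.
    Whitens each subject's data, then takes the top principal directions
    of the concatenation as shared time courses (MAXVAR-flavored MCCA)."""
    whitened = []
    for X in datasets:
        Xc = X - X.mean(axis=0)
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        whitened.append(U)                 # unit-variance time courses
    Z = np.hstack(whitened)                # columns are already zero-mean
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_components]             # common subspace over time
```

Projecting each subject's activity onto these shared time courses is what makes cross-subject correlations for a given behavior directly comparable.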
  4. Gutkin, Boris S. (Ed.)
    Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements, including the need to exhibit temporal scaling, generalize to novel contexts, or be robust to noise. It is not known how neural circuits can encode time and satisfy distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework, we quantified whether RNNs encoded two intervals using one of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for either generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise, and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons. We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time.
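The scaling-versus-absolute distinction can be quantified along the lines sketched below, a simplified stand-in for the paper's framework with hypothetical trajectory inputs: a population times by scaling if its trajectories for different intervals overlap after normalizing time to [0, 1], and absolutely if they overlap in raw time; large mismatch under both alignments would point to stimulus-specific dynamics.

```python
import numpy as np

def resample(traj, n):
    """Linearly resample a (time x units) trajectory to n time points."""
    t_old = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n)
    return np.stack([np.interp(t_new, t_old, traj[:, j])
                     for j in range(traj.shape[1])], axis=1)

def timing_strategy_errors(short, long):
    """Compare population trajectories (time x units, same sampling rate)
    for a short and a long interval. Returns (scaling mismatch, absolute
    mismatch); the smaller error indicates the better-fitting strategy."""
    n = len(short)
    scaling_err = np.linalg.norm(short - resample(long, n))
    absolute_err = np.linalg.norm(short - long[:n])
    return scaling_err, absolute_err
```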
  5. A defining feature of the cortex is its laminar organization, which is likely critical for cortical information processing. For example, visual stimuli of different size evoke distinct patterns of laminar activity. Visual information processing is also influenced by the response variability of individual neurons and the degree to which this variability is correlated among neurons. To elucidate laminar processing, we studied how neural response variability across the layers of macaque primary visual cortex is modulated by visual stimulus size. Our laminar recordings revealed that single neuron response variability and the shared variability among neurons are tuned for stimulus size, and this size-tuning is layer-dependent. In all layers, stimulation of the receptive field (RF) reduced single neuron variability, and the shared variability among neurons, relative to their pre-stimulus values. As the stimulus was enlarged beyond the RF, both single neuron and shared variability increased in supragranular layers, but either did not change or decreased in other layers. Surprisingly, we also found that small visual stimuli could increase variability relative to baseline values. Our results suggest multiple circuits and mechanisms as the source of variability in different layers and call for the development of new models of neural response variability. 
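For orientation, the two variability measures discussed here are commonly operationalized as the Fano factor (single-neuron variability) and the mean pairwise spike-count correlation (shared variability). The sketch below is one plausible implementation over a trials-by-neurons count matrix, not the authors' pipeline, which may use more sophisticated estimators; it would be applied separately per stimulus size and per cortical layer.

```python
import numpy as np

def variability_metrics(counts):
    """counts: (trials x neurons) spike counts for one stimulus condition.
    Returns per-neuron Fano factors and the mean pairwise spike-count
    correlation across neuron pairs."""
    mean = counts.mean(axis=0)
    var = counts.var(axis=0, ddof=1)
    fano = var / np.maximum(mean, 1e-12)        # variance-to-mean ratio
    z = (counts - mean) / np.maximum(counts.std(axis=0, ddof=1), 1e-12)
    corr = (z.T @ z) / (counts.shape[0] - 1)    # Pearson correlation matrix
    off_diag = corr[~np.eye(counts.shape[1], dtype=bool)]
    return fano, off_diag.mean()
```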