
Title: Decoding locomotion from population neural activity in moving C. elegans
We investigated the neural representation of locomotion in the nematode C. elegans by recording population calcium activity during movement. We report that population activity decodes locomotion more accurately than any single neuron. Relevant signals are distributed across neurons with diverse tunings to locomotion. Two largely distinct subpopulations are informative for decoding velocity and curvature, and different neurons' activities contribute features relevant for different aspects of a behavior or different instances of a behavioral motif. To validate our measurements, we labeled neurons AVAL and AVAR and found that their activity exhibited the expected transients during backward locomotion. Finally, we compared population activity during movement and immobilization. Immobilization alters the correlation structure of neural activity and its dynamics. Some neurons that are positively correlated with AVA during movement become negatively correlated during immobilization, and vice versa. This work provides needed experimental measurements that inform and constrain ongoing efforts to understand the population dynamics underlying locomotion in C. elegans.
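The paper's central comparison, population versus single-neuron decoding, can be illustrated with a toy linear decoder. This is a minimal sketch on synthetic data, not the paper's analysis pipeline: the neuron count, noise level, tuning weights, and ridge penalty below are all invented for illustration.

```python
# Toy comparison of population vs. single-neuron decoding of a behavioral
# signal. Purely illustrative: synthetic data, arbitrary constants.
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 20                                   # time points, neurons
velocity = np.sin(np.linspace(0, 20 * np.pi, T))  # toy behavioral signal

# Each neuron carries a weak, differently weighted copy of the signal plus
# noise, mimicking locomotion signals distributed across a population.
weights = rng.normal(0.0, 1.0, N)
activity = np.outer(velocity, weights) + rng.normal(0.0, 2.0, (T, N))

def r2(pred, y):
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

lam = 1.0  # ridge penalty

# Closed-form ridge decoder from the full population.
w = np.linalg.solve(activity.T @ activity + lam * np.eye(N),
                    activity.T @ velocity)
pop_r2 = r2(activity @ w, velocity)

# Best decoder that uses any single neuron alone.
single_r2 = max(
    r2(activity[:, [i]] @ np.linalg.solve(
           activity[:, [i]].T @ activity[:, [i]] + lam,
           activity[:, [i]].T @ velocity),
       velocity)
    for i in range(N)
)

print(f"population R^2 = {pop_r2:.2f}, best single neuron R^2 = {single_r2:.2f}")
```

Because each synthetic neuron carries only a weak, noisy copy of the signal, combining all of them recovers the behavioral variable far better than the best single neuron, mirroring the paper's qualitative result.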
Sponsoring Org:
National Science Foundation
More Like this
  1. Ahamed, Tosif (Ed.)
    Motile organisms actively detect environmental signals and migrate toward a preferable environment. In particular, small animals convert subtle spatial differences in sensory input into orienting behavioral output to steer directly toward a destination, but the neural mechanisms underlying steering behavior remain elusive. Here, we analyze a C. elegans thermotactic behavior in which a small number of neurons have been shown to mediate steering toward a destination temperature. We construct a neuroanatomical model and use an evolutionary algorithm to find configurations of the model that reproduce empirical thermotactic behavior. We find that, in all the evolved models, steering curvature is modulated by temporally persistent thermal signals sensed beyond the time scale of the sinusoidal locomotion of C. elegans. A persistent rise in temperature decreases steering curvature, resulting in straight movement of model worms, whereas a persistent fall in temperature increases curvature, resulting in crooked movement. This relation between temperature change and steering curvature reproduces the empirical thermotactic migration up thermal gradients and the steering bias toward higher temperature. Further, spectral decomposition of neural activity in model worms shows that thermal signals are transmitted from a sensory neuron to motor neurons on a longer time scale than the sinusoidal locomotion of C. elegans. Our results suggest that employing temporally persistent sensory signals enables small animals to steer toward a destination in natural environments with variable, noisy, and subtle cues.
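The evolved models' core rule, a slow thermal signal modulating steering curvature, can be sketched in a few lines. This is a hedged toy, not the authors' neuroanatomical model: curvature is implemented here simply as the gain on random heading changes, and the filter time constant, gain, and gradient are invented for illustration.

```python
# Toy klinokinesis sketch: a low-pass filtered temperature derivative (slower
# than the undulation) sets steering curvature. All constants are invented.
import math, random

random.seed(1)

def simulate(steps=5000, dt=0.01, tau=2.0):
    """One model worm on a linear thermal gradient (temperature = x)."""
    x = y = 0.0
    heading = random.uniform(-math.pi, math.pi)
    slow_dT = 0.0                  # persistent (low-pass filtered) dT/dt
    prev_T = x
    for t in range(steps):
        T = x                      # temperature rises along +x
        slow_dT += (dt / tau) * ((T - prev_T) / dt - slow_dT)
        prev_T = T
        # Persistent warming -> low steering curvature (straight runs);
        # persistent cooling -> high curvature (crooked, reorienting movement).
        curvature = max(0.0, 1.0 - 5.0 * slow_dT)
        heading += curvature * random.gauss(0.0, 1.0) * math.sqrt(dt)
        wiggle = 0.5 * math.sin(2 * math.pi * (t * dt) / 0.5)  # fast undulation
        x += math.cos(heading + wiggle) * dt
        y += math.sin(heading + wiggle) * dt
    return x

mean_x = sum(simulate() for _ in range(20)) / 20
print(f"mean displacement up the gradient: {mean_x:.2f}")
```

Because persistent warming suppresses turning and persistent cooling amplifies it, model worms ratchet up the gradient even though the rule never computes the gradient direction explicitly.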
  2.

    Rhythmic neural network activity has been broadly linked to behavior. However, it is unclear how membrane potentials of individual neurons track behavioral rhythms, even though many neurons exhibit pace-making properties in isolated brain circuits. To examine whether single-cell voltage rhythmicity is coupled to behavioral rhythms, we focused on delta-frequencies (1–4 Hz) that are known to occur at both the neural network and behavioral levels. We performed membrane voltage imaging of individual striatal neurons simultaneously with network-level local field potential recordings in mice during voluntary movement. We report sustained delta oscillations in the membrane potentials of many striatal neurons, particularly cholinergic interneurons, which organize spikes and network oscillations at beta-frequencies (20–40 Hz) associated with locomotion. Furthermore, the delta-frequency patterned cellular dynamics are coupled to animals’ stepping cycles. Thus, delta-rhythmic cellular dynamics in cholinergic interneurons, known for their autonomous pace-making capabilities, play an important role in regulating network rhythmicity and movement patterning.

  3. Cortical computations emerge from the dynamics of neurons embedded in complex cortical circuits. Within these circuits, neuronal ensembles, which represent subnetworks with shared functional connectivity, emerge in an experience-dependent manner. Here we induced ensembles in ex vivo cortical circuits from mice of either sex by differentially activating subpopulations through chronic optogenetic stimulation. We observed a decrease in voltage correlation, and importantly a synaptic decoupling between the stimulated and nonstimulated populations. We also observed a decrease in firing rate during Up-states in the stimulated population. These ensemble-specific changes were accompanied by decreases in intrinsic excitability in the stimulated population, and a decrease in connectivity between stimulated and nonstimulated pyramidal neurons. By incorporating the empirically observed changes in intrinsic excitability and connectivity into a spiking neural network model, we were able to demonstrate that changes in both intrinsic excitability and connectivity accounted for the decreased firing rate, but only changes in connectivity accounted for the observed decorrelation. Our findings help ascertain the mechanisms underlying the ability of chronic patterned stimulation to create ensembles within cortical circuits and, importantly, show that while Up-states are a global network-wide phenomenon, functionally distinct ensembles can preserve their identity during Up-states through differential firing rates and correlations.

    SIGNIFICANCE STATEMENT: The connectivity and activity patterns of local cortical circuits are shaped by experience. This experience-dependent reorganization of cortical circuits is driven by complex interactions between different local learning rules, external input, and reciprocal feedback between many distinct brain areas. Here we used an ex vivo approach to demonstrate how simple forms of chronic external stimulation can shape local cortical circuits in terms of their correlated activity and functional connectivity. The absence of feedback between different brain areas and full control of external input allowed for a tractable system to study the underlying mechanisms and development of a computational model. Results show that differential stimulation of subpopulations of neurons significantly reshapes cortical circuits and forms subnetworks referred to as neuronal ensembles.

  4. Understanding the intricacies of the brain often requires spotting and tracking specific neurons over time and across different individuals. For instance, scientists may need to precisely monitor the activity of one neuron even as the brain moves and deforms; or they may want to find universal patterns by comparing signals from the same neuron across different individuals. Both tasks require matching which neuron is which in different images and amongst a constellation of cells. This is theoretically possible in certain ‘model’ animals where every single neuron is known and carefully mapped out. Still, it remains challenging: neurons move relative to one another as the animal changes posture, and the position of a cell also differs slightly between individuals. Sophisticated computer algorithms are increasingly used to tackle this problem, but they are far too slow to track neural signals as real-time experiments unfold. To address this issue, Yu et al. designed a new algorithm based on the Transformer, an artificial neural network originally used to spot relationships between words in sentences. To learn relationships between neurons, the algorithm was fed hundreds of thousands of ‘semi-synthetic’ examples of constellations of neurons. Rather than being painstakingly collated from actual experiments, these datasets were created by a simulator based on a few simple measurements. Testing the new algorithm on the tiny worm Caenorhabditis elegans revealed that it was faster and more accurate, finding corresponding neurons in about 10 ms. The work by Yu et al. demonstrates the power of using simulations rather than experimental data to train artificial networks. The resulting algorithm can be used immediately to help study how the brain of C. elegans makes decisions or controls movements. Ultimately, this research could allow brain-machine interfaces to be developed.
  5.
    Growth-transform (GT) neurons and their population models allow for independent control over the spiking statistics and the transient population dynamics while optimizing a physically plausible distributed energy functional involving continuous-valued neural variables. In this paper we describe a backpropagation-less learning approach to train a network of spiking GT neurons by enforcing sparsity constraints on the overall network spiking activity. The key features of the model and the proposed learning framework are: (a) spike responses are generated as a result of constraint violation and hence can be viewed as Lagrangian parameters; (b) the optimal parameters for a given task can be learned using neurally relevant local learning rules and in an online manner; (c) the network optimizes itself to encode the solution with as few spikes as possible (sparsity); (d) the network optimizes itself to operate at a solution with the maximum dynamic range and away from saturation; and (e) the framework is flexible enough to incorporate additional structural and connectivity constraints on the network. As a result, the proposed formulation is attractive for designing neuromorphic tinyML systems that are constrained in energy, resources, and network structure. In this paper, we show how the approach could be used for unsupervised and supervised learning such that minimizing a training error is equivalent to minimizing the overall spiking activity across the network. We then build on this framework to implement three different multi-layer spiking network architectures with progressively increasing flexibility in training and consequently, sparsity. We demonstrate the applicability of the proposed algorithm for resource-efficient learning using a publicly available machine olfaction dataset with unique challenges like sensor drift and a wide range of stimulus concentrations. 
In all of these case studies, we show that a GT network trained using the proposed learning approach is able to minimize network-level spiking activity while producing classification accuracies comparable to standard approaches on the same dataset.
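The idea that spikes mark constraint violations, so that reducing training error also reduces spiking, can be conveyed with a deliberately simple toy. This is not the growth-transform model itself: it is a one-unit sketch with an invented bound `theta`, learning rate `eta`, and scalar target, chosen only to make the error/sparsity link visible.

```python
# Toy illustration (not the paper's GT dynamics): a single unit whose spikes
# signal violation of a bound on an integrated residual. Each spike triggers
# a local weight update that shrinks future residuals, so minimizing training
# error and minimizing spiking go hand in hand.
def train(target=0.8, theta=1.0, eta=0.05, epochs=5, steps=200):
    w = 0.0
    spikes_per_epoch = []
    for _ in range(epochs):
        v, n = 0.0, 0                    # membrane variable, spike count
        for _ in range(steps):
            v += 0.1 * (target - w)      # residual drives the membrane
            if abs(v) > theta:           # constraint violated -> spike
                n += 1
                w += eta * (1.0 if v > 0 else -1.0)  # local, spike-triggered update
                v = 0.0
            # no spike: the constraint is satisfied, nothing to transmit
        spikes_per_epoch.append(n)
    return spikes_per_epoch

counts = train()
print("spikes per epoch:", counts)
```

As the local, spike-triggered updates pull the weight toward the target, the residual shrinks and the bound is violated less often, so the spike count falls epoch by epoch; error minimization and sparsity coincide.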