Title: Multiscale low-dimensional motor cortical state dynamics predict naturalistic reach-and-grasp behavior
Abstract

Motor function depends on neural dynamics spanning multiple spatiotemporal scales of population activity, from the spiking of neurons to larger-scale local field potentials (LFP). How multiple scales of low-dimensional population dynamics relate to one another in the control of movement remains unknown. Multiscale neural dynamics are especially important to study in naturalistic reach-and-grasp movements, which are relatively under-explored. We learn novel multiscale dynamical models for spike-LFP network activity in monkeys performing naturalistic reach-and-grasp movements. We show that the low-dimensional dynamics of both spiking and LFP activity exhibit several principal modes, each with a unique decay-frequency characteristic. One principal mode dominantly predicted movements. Despite the distinct principal modes at the two scales, this predictive mode was multiscale and shared between scales; it was also shared across sessions and monkeys, yet did not simply replicate behavioral modes. Further, this multiscale mode’s decay-frequency characteristic explained behavior. We propose that multiscale, low-dimensional motor cortical state dynamics reflect the neural control of naturalistic reach-and-grasp behaviors.
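The decay-frequency characteristic of each principal mode follows directly from the eigenvalues of a fitted discrete-time state-transition matrix: the eigenvalue magnitude sets how quickly the mode decays, and its angle sets the oscillation frequency. The sketch below illustrates only this relationship; the matrix A and the 10 ms time step are hypothetical, not values from the paper.

```python
import numpy as np

def mode_decay_frequency(A, dt):
    """For each eigenvalue of the state-transition matrix A (time step dt, in
    seconds), return the mode's decay time constant (s) and frequency (Hz)."""
    eigvals = np.linalg.eigvals(A)
    decay_tau = -dt / np.log(np.abs(eigvals))             # time to decay to 1/e
    freq_hz = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
    return eigvals, decay_tau, freq_hz

# Hypothetical 2 x 2 dynamics matrix with one damped oscillatory mode pair,
# sampled at an assumed 10 ms time step.
A = np.array([[0.95, -0.20],
              [0.20,  0.95]])
eigvals, tau, freq = mode_decay_frequency(A, dt=0.01)
for lam, t_decay, f in zip(eigvals, tau, freq):
    print(f"|lambda| = {abs(lam):.3f}: decay tau = {t_decay * 1e3:.0f} ms, "
          f"frequency = {f:.1f} Hz")
```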

 
NSF-PAR ID: 10211454
Author(s) / Creator(s):
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Nature Communications
Volume: 12
Issue: 1
ISSN: 2041-1723
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain–machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during naturalistic reach-and-grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson–Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and achieving better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs to track non-stationary dynamics or for reducing offline training time in neuroscience investigations.

     
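To make the Poisson-Gaussian observation structure in the abstract above concrete, the sketch below simulates a shared low-dimensional latent state driving both Poisson spike counts and Gaussian field features. All dimensions and parameters are assumed for illustration; this is a generative toy model, not the multiscale SID learning algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 2 latent states, 20 spiking units, 8 LFP features.
nx, n_spk, n_lfp, T = 2, 20, 8, 500
dt = 0.01  # 10 ms bins (illustrative)

# Latent linear dynamics x_{t+1} = A x_t + w_t with one damped oscillatory mode.
A = np.array([[0.97, -0.15],
              [0.15,  0.97]])
Q = 0.01 * np.eye(nx)

# Modality-specific readouts: log-rates for spikes, linear-Gaussian for LFP.
C_spk = 0.5 * rng.standard_normal((n_spk, nx))
b_spk = np.log(5 * dt) * np.ones(n_spk)          # ~5 spikes/s baseline rate
C_lfp = rng.standard_normal((n_lfp, nx))
R_lfp = 0.1 * np.eye(n_lfp)

x = np.zeros((T, nx))
spikes = np.zeros((T, n_spk), dtype=int)
lfp = np.zeros((T, n_lfp))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(nx), Q)
    spikes[t] = rng.poisson(np.exp(C_spk @ x[t] + b_spk))                 # discrete counts
    lfp[t] = C_lfp @ x[t] + rng.multivariate_normal(np.zeros(n_lfp), R_lfp)  # continuous features

print("total spike counts (first 5 units):", spikes.sum(axis=0)[:5])
print("LFP feature std (first 3):", lfp.std(axis=0)[:3].round(2))
```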
  2. Most sensorimotor studies investigating the covariation of populations of neurons in primary motor cortex (M1) have considered only a few trained movements made under highly constrained conditions. However, motor behaviors in daily living happen in far more complex and varied contexts. It is unclear whether M1 neurons would have different population responses in a more naturalistic, unconstrained setting, including requirements to accommodate multiple limbs and body posture, and more extensive proprioceptive inputs. Here, we recorded M1 spiking signals while a monkey performed hand grasp movements in two different contexts: one in the typical constrained lab setting, and the other while moving freely in a large plastic cage. We compared the covariance patterns of the neural activity during movements across the two contexts. We found that the neural covariation patterns accompanying two different hand grasps in the unconstrained context were largely preserved, while they differed across contexts, even for the same type of grasp. We also found that the M1 population activity was confined to context-dependent neural manifolds, but these manifolds were not completely independent, as some dimensions appeared to be shared across the contexts. These results suggest that the coordinated activity of M1 neurons is strongly dependent on behavioral context, in ways that were not entirely anticipated.
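A common way to test whether manifold dimensions are shared across contexts, though not necessarily the analysis used in the study above, is to compute principal angles between the PCA subspaces estimated in each context. The sketch below does this on hypothetical firing-rate matrices.

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

def manifold_basis(rates, n_dims=10):
    """Return an orthonormal basis (neurons x n_dims) for the neural manifold."""
    pca = PCA(n_components=n_dims).fit(rates)      # rates: time x neurons
    return pca.components_.T

rng = np.random.default_rng(1)
rates_constrained = rng.standard_normal((1000, 60))    # hypothetical data
rates_unconstrained = rng.standard_normal((1000, 60))  # hypothetical data

U1 = manifold_basis(rates_constrained)
U2 = manifold_basis(rates_unconstrained)

# Small principal angles would indicate dimensions shared across the contexts.
angles_deg = np.degrees(subspace_angles(U1, U2))
print("principal angles (deg):", np.round(angles_deg, 1))
```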
  3. Learned movements can be skillfully performed at different paces. What neural strategies produce this flexibility? Can they be predicted and understood by network modeling? We trained monkeys to perform a cycling task at different speeds, and trained artificial recurrent networks to generate the empirical muscle-activity patterns. Network solutions reflected the principle that smooth, well-behaved dynamics require low trajectory tangling. Network solutions had a consistent form, which yielded quantitative and qualitative predictions. To evaluate predictions, we analyzed motor cortex activity recorded during the same task. Responses supported the hypothesis that the dominant neural signals reflect not muscle activity, but network-level strategies for generating muscle activity. Single-neuron responses were better accounted for by network activity than by muscle activity. Similarly, neural population trajectories shared their organization not with muscle trajectories, but with network solutions. Thus, cortical activity could be understood based on the need to generate muscle activity via dynamics that allow smooth, robust control over movement speed.
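Trajectory tangling has a standard definition: at each time t, Q(t) is the maximum over all other times t' of the squared difference in trajectory derivatives divided by the squared difference in trajectory states plus a small softening constant. The sketch below computes this quantity for two toy 2-D trajectories; the softening constant and example paths are illustrative assumptions, not data from the study.

```python
import numpy as np

def tangling(X, dt, eps=None):
    """Q(t) = max_{t'} ||dX_t - dX_t'||^2 / (||X_t - X_t'||^2 + eps)
    for a trajectory X of shape (time, dims), sampled every dt seconds."""
    dX = np.gradient(X, dt, axis=0)                      # temporal derivative
    if eps is None:
        eps = 0.1 * X.var(axis=0).sum()                  # assumed softening constant
    d_state = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_deriv = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (d_deriv / (d_state + eps)).max(axis=1)

# Hypothetical 2-D trajectories: a circle (never revisits a state with a different
# velocity, so low tangling) vs. a figure-eight that crosses itself (higher tangling).
t = np.linspace(0, 2 * np.pi, 400)
circle = np.c_[np.cos(t), np.sin(t)]
figure8 = np.c_[np.sin(t), np.sin(2 * t)]
print("circle max Q:  ", round(float(tangling(circle, dt=0.01).max()), 1))
print("figure-8 max Q:", round(float(tangling(figure8, dt=0.01).max()), 1))
```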
  4. The motor cortex controls skilled arm movement by recruiting a variety of targets in the nervous system, and it is important to understand the activity that emerges in these regions as a motor skill is refined. One fundamental projection of the motor cortex (M1) is to the cerebellum. However, the activity that emerges in the motor cortex and the cerebellum as a dexterous motor skill is consolidated is incompletely understood. Here, we report on low-frequency oscillatory (LFO) activity that emerges in cortico-cerebellar networks with learning of the reach-to-grasp motor skill. We chronically recorded the motor and cerebellar cortices in rats, which revealed the emergence of coordinated movement-related activity in the local field potentials as the reaching skill consolidated. Interestingly, we found this emergent activity only in the rats that gained expertise in the task. We found that local and cross-area spiking activity was coordinated with LFOs in proficient rats. Finally, we also found that these neural dynamics were more prominently expressed in M1 during accurate behavior. This work furthers our understanding of the emergent dynamics in the cortico-cerebellar loop that underlie learning and execution of precise skilled movement.
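Coordination between spiking and low-frequency oscillations of the kind reported above is often quantified with spike-phase locking: band-pass the LFP in the LFO band, extract its Hilbert phase, and measure how concentrated the phases at spike times are. The sketch below applies that generic analysis to synthetic data; it is not the authors' specific pipeline, and the 1-5 Hz band and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                       # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic LFP with a 4 Hz low-frequency oscillation plus noise, and spikes
# biased toward the oscillation's trough.
lfp = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
spike_prob = 0.02 * (1 - np.sin(2 * np.pi * 4 * t)) / 2
spike_times = t[rng.random(t.size) < spike_prob]

# Band-pass 1-5 Hz (assumed LFO band), extract instantaneous phase, sample at spikes.
b, a = butter(3, [1, 5], btype="bandpass", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, lfp)))
spike_phases = phase[np.searchsorted(t, spike_times)]

# Phase-locking value: length of the mean resultant vector (0 = none, 1 = perfect).
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"spike-LFO phase-locking value: {plv:.2f}")
```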
  5. Continuing advances in neural interfaces have enabled simultaneous monitoring of spiking activity from hundreds to thousands of neurons. To interpret these large-scale data, several methods have been proposed to infer latent dynamic structure from high-dimensional datasets. One recent line of work uses recurrent neural networks in a sequential autoencoder (SAE) framework to uncover dynamics. SAEs are an appealing option for modeling nonlinear dynamical systems, and enable a precise link between neural activity and behavior on a single-trial basis. However, the very large parameter count and complexity of SAEs relative to other models have caused concern that SAEs may only perform well on very large training sets. We hypothesized that with a method to systematically optimize hyperparameters (HPs), SAEs might perform well even in cases of limited training data. Such a breakthrough would greatly extend their applicability. However, we find that SAEs applied to spiking neural data are prone to a particular form of overfitting that cannot be detected using standard validation metrics, which prevents standard HP searches. We develop and test two potential solutions: an alternate validation method (“sample validation”) and a novel regularization method (“coordinated dropout”). These innovations prevent overfitting quite effectively, and allow us to test whether SAEs can achieve good performance on limited data through large-scale HP optimization. When applied to data from motor cortex recorded while monkeys made reaches in various directions, large-scale HP optimization allowed SAEs to better maintain performance for small dataset sizes. Our results should greatly extend the applicability of SAEs in extracting latent dynamics from sparse, multidimensional data, such as neural population spiking activity.
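Coordinated dropout, mentioned in the item above, keeps an autoencoder from simply copying its inputs to its outputs: on each training step a random subset of input entries is zeroed at the input, and the reconstruction loss is evaluated only on those held-out entries. The sketch below shows just this masking step in framework-agnostic NumPy; the drop rate and data shapes are assumptions, and wiring it into a specific SAE is left out.

```python
import numpy as np

def coordinated_dropout(batch, drop_rate=0.3, rng=None):
    """Split a batch into network input and loss targets.

    Returns (masked_input, loss_mask): entries dropped from the input are the
    ONLY entries where the reconstruction loss should be evaluated, so the
    network must infer them from the remaining, shared structure rather than
    passing them through directly.
    """
    rng = rng or np.random.default_rng()
    drop_mask = rng.random(batch.shape) < drop_rate   # True = hidden from input
    masked_input = np.where(drop_mask, 0.0, batch)
    return masked_input, drop_mask

# Hypothetical binned spike counts: (trials, time, neurons).
rng = np.random.default_rng(3)
spikes = rng.poisson(2.0, size=(16, 100, 50)).astype(float)
x_in, loss_mask = coordinated_dropout(spikes, drop_rate=0.3, rng=rng)

# Example loss restricted to the dropped entries (mean squared error here;
# an SAE for spike counts would typically use a Poisson likelihood instead).
recon = x_in.copy()                                   # stand-in for a model output
mse_on_dropped = ((recon - spikes)[loss_mask] ** 2).mean()
print(f"loss evaluated on {loss_mask.mean():.0%} of entries: {mse_on_dropped:.2f}")
```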