Recent shifts in the understanding of how the mind and brain retain information in working memory (WM) call for a revision of traditional theories. Evidence of dynamic, “activity-silent,” short-term retention processes diverges from conventional models, which posit that information is always retained in WM by sustained neural activity in buffers. Such evidence comes from machine-learning methods that decode patterns of brain activity, combined with simultaneous transcranial magnetic stimulation (TMS) that causally manipulates brain activity in specific areas at specific time points. TMS can “ping” brain areas both to reactivate latent representations retained in WM and to affect memory performance. On the basis of these findings, I argue that sustained-activity retention mechanisms must be supplemented. Brain-decoding methods also reveal that multiple levels of representational code are retained in WM and that these vary with task context, from perceptual (sensory) codes in posterior areas to abstract, recoded representations distributed across frontoparietal regions. A dynamic-processing model of WM is advanced to account for the overall pattern of results.
Prioritized learning of cross-population neural dynamics
Abstract Objective. Improvements in recording technology for multi-region simultaneous recordings enable the study of interactions among distinct brain regions. A major computational challenge in studying cross-regional dynamics, or cross-population dynamics more generally, is that cross-population dynamics can be confounded or masked by within-population dynamics. Approach. Here, we propose cross-population prioritized linear dynamical modeling (CroP-LDM) to tackle this challenge. CroP-LDM learns the cross-population dynamics in terms of a set of latent states using a prioritized learning approach, such that they are not confounded by within-population dynamics. Further, CroP-LDM can infer the latent states either causally in time, using only past neural activity, or non-causally in time, unlike some prior dynamic methods whose inference is only non-causal. Results. First, through comparisons with various LDM methods, we show that the prioritized learning objective in CroP-LDM is key to accurate learning of cross-population dynamics. Second, using multi-regional bilateral motor and premotor cortical recordings during a naturalistic movement task, we demonstrate that CroP-LDM learns cross-population dynamics better than recent static and dynamic methods, even when using a low dimensionality. Finally, we demonstrate how CroP-LDM can quantify dominant interaction pathways across brain regions in an interpretable manner. Significance. Overall, these results show that our approach can be a useful framework for addressing challenges associated with modeling dynamics across brain regions.
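The causal-in-time inference the abstract describes can be illustrated with a standard Kalman filter over a linear dynamical model: each latent-state estimate uses only observations up to the current time step. This is a minimal generic sketch, not CroP-LDM itself (the prioritized cross-population objective is not reproduced), and all parameter values below are illustrative.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Causal latent-state inference for x_{t+1} = A x_t + w_t,
    y_t = C x_t + v_t: each estimate uses only observations up to t."""
    T = y.shape[0]
    n = A.shape[0]
    xs = np.zeros((T, n))
    x, P = x0, P0
    for t in range(T):
        # Predict one step ahead from the latent dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # Correct with the current observation
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y[t] - C @ x)
        P = (np.eye(n) - K @ C) @ P
        xs[t] = x
    return xs

# Illustrative 1-D latent state observed through two noisy channels
rng = np.random.default_rng(0)
A = np.array([[0.9]])
C = np.array([[1.0], [0.5]])
Q = np.array([[0.09]])       # process-noise variance (0.3**2)
R = 0.04 * np.eye(2)         # observation-noise variance (0.2**2)
x_true = np.zeros((100, 1))
for t in range(1, 100):
    x_true[t] = A @ x_true[t - 1] + rng.normal(0.0, 0.3, 1)
y = x_true @ C.T + rng.normal(0.0, 0.2, (100, 2))
x_hat = kalman_filter(y, A, C, Q, R, np.zeros(1), np.eye(1))
```

Non-causal inference would additionally run a backward smoothing pass over the same model, using future observations as well.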
- Award ID(s): 2113271
- PAR ID: 10603373
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: Journal of Neural Engineering
- ISSN: 1741-2560
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Whereas brain activity has been studied at the micro- to meso-scale to reveal connections between dynamical patterns and behaviors, investigations of neural population dynamics have mainly been limited to single-scale analysis. Our goal is to develop a cross-scale dynamical model for the collective activity of neuronal populations. Here we introduce a bio-inspired deep learning approach, termed the NeuroBondGraph Network (NBGNet), to capture cross-scale dynamics and to infer and map neural data across multiple scales. Our model not only exhibits more than an 11-fold improvement in reconstruction accuracy, but also predicts synchronous neural activity and preserves correlated low-dimensional latent dynamics. We also show that the NBGNet robustly predicts held-out data across a long time scale (two weeks) without retraining. We further validate the effective connectivity defined from our model by demonstrating that neural connectivity during motor behaviour agrees with the established neuroanatomical hierarchy of motor control in the literature. The NBGNet approach opens the door to a comprehensive understanding of brain computation, in which network mechanisms of multi-scale activity are critical.
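As a point of reference for the reconstruction-accuracy comparisons mentioned above, a common way to score how well a model reconstructs a held-out neural signal is the fraction of variance explained (R²). The abstract does not specify its exact metric, so this is an assumed stand-in.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Fraction of variance in y_true explained by y_pred
    (1 = perfect reconstruction; <= 0 = no better than the mean)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy example: a near-perfect reconstruction of a sinusoidal signal
y_true = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
y_pred = y_true + 0.01  # a constant offset stands in for model error
r2 = r2_score(y_true, y_pred)
```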
-
Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.F.; Lin, H. (Eds.)
High-dimensional neural recordings across multiple brain regions can be used to establish functional connectivity with good spatial and temporal resolution. We designed and implemented a novel method, Latent Dynamic Factor Analysis of High-dimensional time series (LDFA-H), which combines (a) a new approach to estimating the covariance structure among high-dimensional time series (for the observed variables) and (b) a new extension of probabilistic CCA to dynamic time series (for the latent variables). Our interest is in the cross-correlations among the latent variables, which, in neural recordings, may capture the flow of information from one brain region to another. Simulations show that LDFA-H outperforms existing methods in that it captures target factors even when within-region correlation due to noise dominates cross-region correlation. We applied our method to local field potential (LFP) recordings from 192 electrodes in prefrontal cortex (PFC) and visual area V4 during a memory-guided saccade task. The results capture time-varying lead-lag dependencies between PFC and V4 and display the associated spatial distribution of the signals.
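The lead-lag dependencies described above can be illustrated, in a much-simplified form, by locating the lag that maximizes the cross-correlation between two latent time series. The variable names (`pfc`, `v4`) and the synthetic data are purely illustrative; LDFA-H estimates such dependencies within a full probabilistic model rather than by raw cross-correlation.

```python
import numpy as np

def lead_lag(x, y, max_lag):
    """Find the integer lag maximizing the cross-correlation between
    two 1-D time series. A positive returned lag means x leads y
    by that many samples."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for lag in lags:
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        corrs.append(np.corrcoef(a, b)[0, 1])
    best = int(np.argmax(corrs))
    return lags[best], corrs[best]

# Synthetic example: "V4" trails "PFC" by 5 samples, plus noise
rng = np.random.default_rng(1)
pfc = rng.normal(size=500)
v4 = np.roll(pfc, 5) + 0.1 * rng.normal(size=500)
lag, corr = lead_lag(pfc, v4, max_lag=20)
```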
-
Diffusion probabilistic models (DPMs) have become the state of the art in high-quality image generation. However, DPMs have an arbitrary noisy latent space with no interpretable or controllable semantics. Although there has been significant research effort to improve image sample quality, there is little work on representation-controlled generation using diffusion models. In particular, causal modeling and controllable counterfactual generation using DPMs is an underexplored area. In this work, we propose CausalDiffAE, a diffusion-based causal representation learning framework that enables counterfactual generation according to a specified causal model. Our key idea is to use an encoder to extract high-level, semantically meaningful causal variables from high-dimensional data and to model stochastic variation using reverse diffusion. We propose a causal encoding mechanism that maps high-dimensional data to causally related latent factors, and we parameterize the causal mechanisms among latent factors using neural networks. To enforce the disentanglement of causal variables, we formulate a variational objective and leverage auxiliary label information in a prior to regularize the latent space. We propose a DDIM-based counterfactual generation procedure subject to do-interventions. Finally, to address the limited-label-supervision scenario, we also study the application of CausalDiffAE when part of the training data is unlabeled, which additionally enables granular control over the strength of interventions when generating counterfactuals at inference time. We empirically show that CausalDiffAE learns a disentangled latent space and is capable of generating high-quality counterfactual images.
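The idea of a do-intervention on causally related latent factors can be sketched with a toy two-variable structural causal model: intervening on a parent factor replaces its mechanism with a constant, while downstream mechanisms stay intact. This is a didactic stand-in, not the CausalDiffAE/DDIM procedure; the `tanh` mechanism and noise scale are assumptions.

```python
import numpy as np

def ancestral_sample(z1=None, rng=None):
    """Toy two-variable causal model over latent factors, z1 -> z2.
    Passing z1 implements do(z1 = value): z1's own mechanism is
    replaced by a constant while the downstream mechanism for z2
    is left intact. The tanh mechanism and noise scale are
    illustrative, not learned mechanisms from the paper."""
    if rng is None:
        rng = np.random.default_rng()
    if z1 is None:
        z1 = rng.normal()  # exogenous (observational) sampling of z1
    z2 = np.tanh(2.0 * z1) + 0.1 * rng.normal()  # mechanism z2 = f(z1) + noise
    return z1, z2

# Interventional query: fix z1 = 1.0 and sample the downstream factor
rng = np.random.default_rng(0)
samples = [ancestral_sample(z1=1.0, rng=rng)[1] for _ in range(200)]
```

Under do(z1 = 1.0), the downstream factor concentrates around tanh(2.0); in CausalDiffAE the analogous intervened latents would then condition the DDIM decoder to render the counterfactual image.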
-
Decoding auditory stimuli from neural activity can enable neuroprosthetics and direct communication with the brain. Some recent studies have shown successful speech decoding from intracranial recordings using deep learning models. However, the scarcity of training data leads to low-quality speech reconstruction, which prevents a complete brain-computer interface (BCI) application. In this work, we propose a transfer learning approach with a pre-trained GAN that disentangles the representation and generation layers for decoding. We first pre-train a generator to produce spectrograms from a representation space using a large corpus of natural speech data. With a small amount of paired data containing the stimulus speech and corresponding ECoG signals, we then transfer the generator to a larger network with an encoder attached in front, which maps the neural signal to the representation space. To further improve the network's generalization ability, we introduce a Gaussian prior distribution regularizer on the latent representation during the transfer phase. With at most 150 training samples per tested subject, we achieve state-of-the-art decoding performance. By visualizing the attention mask embedded in the encoder, we observe brain dynamics consistent with findings from previous studies of the superior temporal gyrus (STG), pre-central gyrus (motor), and inferior frontal gyrus (IFG). Our findings demonstrate high reconstruction accuracy using deep learning networks, together with the potential to elucidate interactions across brain regions during a cognitive task.
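A Gaussian prior regularizer on a latent representation is typically implemented as a KL-divergence penalty pulling the encoder's output distribution toward N(0, I). The closed-form expression below for a diagonal Gaussian is a standard sketch; whether the paper uses exactly this form is an assumption.

```python
import numpy as np

def gaussian_prior_kl(mu, log_var):
    """Closed-form KL divergence between a diagonal Gaussian
    N(mu, diag(exp(log_var))) and the standard normal prior N(0, I),
    summed over latent dimensions. Added to the training loss, it
    penalizes latent codes that drift away from the prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

# A latent code whose distribution matches the prior incurs zero penalty
kl_zero = gaussian_prior_kl(np.zeros(8), np.zeros(8))
```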
