Changes in behavioral state, such as arousal and movement, strongly affect neural activity in sensory areas and can be modeled as long-range projections regulating the mean and variance of baseline input currents. What are the computational benefits of these baseline modulations? We investigate this question within a brain-inspired framework for reservoir computing, in which we vary the quenched baseline inputs to a recurrent neural network with random couplings. We find that baseline modulations control the dynamical phase of the reservoir network, unlocking a vast repertoire of network phases. We uncover a number of bistable phases exhibiting the simultaneous coexistence of a fixed point and chaos, of two fixed points, or of weak and strong chaos, and we identify several phenomena, including noise-driven enhancement of chaos, ergodicity breaking, and neural hysteresis, whereby transitions across a phase boundary retain the memory of the preceding phase. In each bistable phase, the reservoir performs a different binary decision-making task, and fast switching between tasks can be controlled by adjusting the baseline input mean and variance. Moreover, we find that the reservoir network achieves optimal memory performance at any first-order phase boundary. In summary, baseline control enables multitasking without any optimization of the network couplings, opening directions for brain-inspired artificial intelligence and providing an interpretation for the ubiquitously observed behavioral modulations of cortical activity.
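The setup this abstract describes can be sketched as a random rate network driven by a quenched baseline input whose mean and variance are the control knobs. The following is a minimal illustration; the tanh transfer function, Euler integration, and all parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

def simulate_reservoir(N=200, g=1.5, mu=0.0, sigma=0.5, T=200.0, dt=0.1, seed=0):
    """Euler-integrate the rate network x' = -x + J tanh(x) + b.

    J is a random coupling matrix (entries of variance g^2/N) and b is a
    quenched baseline input with mean mu and standard deviation sigma,
    the two knobs described in the abstract.
    """
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    b = rng.normal(mu, sigma, size=N)     # quenched baseline input current
    x = rng.normal(0.0, 1.0, size=N)      # random initial condition
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ np.tanh(x) + b)
    return x

# Different (mu, sigma) settings place the same fixed network in different
# dynamical regimes; here we simply obtain one long-time state sample.
x = simulate_reservoir(mu=0.5, sigma=0.3)
print(x.shape)
```

Because the couplings J are never retrained, changing only (mu, sigma) is what allows a single network to be steered between phases.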
Control of Linear-Threshold Brain Networks via Reservoir Computing
Learning is a key function of the brain, enabling it to achieve the activity patterns required to perform various tasks. While specific behaviors are determined by activity in localized regions, interconnections throughout the entire brain play a key role in shaping that activity. To mimic this setup, this paper examines the use of reservoir computing to drive a linear-threshold network brain model along a desired trajectory. We first formally design open- and closed-loop controllers that achieve reference tracking under suitable conditions on the synaptic connectivity. Given the impracticality of evaluating closed-form control signals, particularly as network complexity grows, we provide a framework in which a reservoir of larger size than the network is trained to drive the activity to the desired pattern. We illustrate the versatility of this setup in two applications: selective recruitment and inhibition of neuronal populations for goal-driven selective attention, and network intervention for the prevention of epileptic seizures.
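The linear-threshold dynamics and the reference-tracking idea can be illustrated with a minimal open-loop sketch. This assumes the simplest tractable case (a strictly positive target equilibrium and stable effective dynamics); the construction of u below is a textbook fixed-point calculation, not the paper's controller:

```python
import numpy as np

def ltr_step(x, W, u, dt=0.01, tau=1.0):
    """One Euler step of linear-threshold rate dynamics
    tau * x' = -x + max(0, W x + u)."""
    return x + (dt / tau) * (-x + np.maximum(0.0, W @ x + u))

# If the target activity x_ref is strictly positive elementwise and the
# effective dynamics are stable, u = x_ref - W x_ref makes x_ref an
# equilibrium: x_ref = max(0, W x_ref + u) reduces to x_ref = x_ref.
rng = np.random.default_rng(1)
N = 5
W = 0.1 * rng.standard_normal((N, N))   # weak random synaptic connectivity
x_ref = rng.uniform(0.5, 1.0, size=N)   # desired activity pattern
u = x_ref - W @ x_ref                   # constant open-loop control input

x = np.zeros(N)
for _ in range(5000):                   # integrate to t = 50
    x = ltr_step(x, W, u)
print(np.max(np.abs(x - x_ref)))
```

The paper's point is precisely that such closed-form signals become impractical for large networks, which motivates training a reservoir to produce them instead.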
- Award ID(s): 2308640
- PAR ID: 10545393
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Open Journal of Control Systems
- Volume: 3
- ISSN: 2694-085X
- Page Range / eLocation ID: 325 to 341
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Selective recruitment in hierarchical complex dynamical networks with linear-threshold rate dynamics
Understanding how the complex network dynamics of the brain support cognition constitutes one of the most challenging and impactful problems ahead of systems and control theory. In this paper, we study the problem of selective recruitment, namely, the simultaneous selective inhibition of activity in one subnetwork and top-down recruitment of another by a cognitively higher-level subnetwork, using the class of linear-threshold rate (LTR) models. We first use singular perturbation theory to provide a theoretical framework for selective recruitment in a bilayer hierarchical LTR network using both feedback and feedforward control. We then generalize this framework to an arbitrary number of layers and provide conditions on the joint structure of subnetworks that guarantee simultaneous selective inhibition and top-down recruitment at all layers. We finally illustrate an application of this framework in a realistic scenario where simultaneous stabilization and control of a lower-level excitatory subnetwork is achieved through proper oscillatory activity in a higher-level inhibitory subnetwork.
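A toy numerical illustration of the inhibition/recruitment idea (not the paper's singular-perturbation analysis): a constant top-down input recruits one LTR subnetwork while pushing the other below its activation threshold. All numbers are illustrative, and the paper's conditions on the joint subnetwork structure are not checked here:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Two 5-unit LTR subnetworks, A and B, inside one 10-unit network. The
# top-down input u drives A with positive current and holds B far below
# its activation threshold, so B is selectively inhibited.
rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((10, 10))
u = np.concatenate([np.full(5, 1.0),    # recruit subnetwork A
                    np.full(5, -5.0)])  # inhibit subnetwork B
x = np.zeros(10)
for _ in range(2000):                   # Euler-integrate tau x' = -x + relu(Wx + u)
    x += 0.01 * (-x + relu(W @ x + u))
print(x.round(2))                       # units of subnetwork B remain silenced
```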
We present a neural network approach for closed-loop deep brain stimulation (DBS). We cast the problem of finding an optimal neurostimulation strategy as a control problem. In this setting, control policies aim to optimize therapeutic outcomes by tailoring the parameters of a DBS system, typically via electrical stimulation, in real time based on the patient’s ongoing neuronal activity. We approximate the value function offline using a neural network to enable generating controls (stimuli) in real time via the feedback form. The neuronal activity is characterized by a nonlinear, stiff system of differential equations as dictated by the Hodgkin-Huxley model. Our training process leverages the relationship between Pontryagin’s maximum principle and Hamilton-Jacobi-Bellman equations to update the value function estimates simultaneously. Our numerical experiments illustrate the accuracy of our approach for out-of-distribution samples and the robustness to moderate shocks and disturbances in the system.
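The plant in this setting, the Hodgkin-Huxley model, can be simulated with a simple explicit Euler scheme. This sketch uses the standard squid-axon parameters and shows only the stiff neuronal dynamics under constant stimulation, not the paper's value-function controller:

```python
import numpy as np

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """One explicit-Euler step of the Hodgkin-Huxley equations with the
    standard squid-axon parameters (C_m = 1 uF/cm^2); i_ext is the
    stimulation current in uA/cm^2, dt is in ms."""
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    i_na = 120.0 * m**3 * h * (v - 50.0)    # sodium current
    i_k = 36.0 * n**4 * (v + 77.0)          # potassium current
    i_l = 0.3 * (v + 54.387)                # leak current
    v = v + dt * (i_ext - i_na - i_k - i_l)
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return v, m, h, n

# Constant suprathreshold stimulation produces tonic spiking; a closed-loop
# controller would instead set i_ext each step from the observed state.
v, m, h, n = -65.0, 0.05, 0.6, 0.32         # near-resting initial state
for _ in range(50000):                      # 500 ms at dt = 0.01 ms
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
```

The small stable step size needed here is what makes the system stiff, and why evaluating controls cheaply in real time (via the offline-trained value function) matters.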
Transcranial electrical stimulation (tES) technology and neuroimaging are increasingly coupled in basic and applied science. This synergy has enabled individualized tES therapy and facilitated causal inferences in functional neuroimaging. However, traditional tES paradigms have been stymied by relatively small changes in neural activity and high inter-subject variability in cognitive effects. In this perspective, we propose a tES framework to treat these issues which is grounded in dynamical systems and control theory. The proposed paradigm involves a tight coupling of tES and neuroimaging in which M/EEG is used to parameterize generative brain models as well as control tES delivery in a hybrid closed-loop fashion. We also present a novel quantitative framework for cognitive enhancement driven by a new computational objective: shaping how the brain reacts to potential “inputs” (e.g., task contexts) rather than enforcing a fixed pattern of brain activity.
Kay, Kendrick (Ed.)
A central goal of neuroscience is to understand how function-relevant brain activations are generated. Here we test the hypothesis that function-relevant brain activations are generated primarily by distributed network flows. We focused on visual processing in human cortex, given the long-standing literature supporting the functional relevance of brain activations in visual cortex regions exhibiting visual category selectivity. We began by using fMRI data from N = 352 human participants to identify category-specific responses in visual cortex for images of faces, places, body parts, and tools. We then systematically tested the hypothesis that distributed network flows can generate these localized visual category selective responses. This was accomplished using a recently developed approach for simulating, in a highly empirically constrained manner, the generation of task-evoked brain activations by modeling activity flowing over intrinsic brain connections. We next tested refinements to our hypothesis, focusing on how stimulus-driven network interactions initialized in V1 generate downstream visual category selectivity. We found evidence that network flows directly from V1 were sufficient for generating visual category selectivity, but that additional, globally distributed (whole-cortex) network flows increased category selectivity further. Using null network architectures we also found that each region’s unique intrinsic “connectivity fingerprint” was key to the generation of category selectivity. These results generalized across regions associated with all four visual categories tested (bodies, faces, places, and tools), and provide evidence that the human brain’s intrinsic network organization plays a prominent role in the generation of functionally relevant, localized responses.
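The activity flow mapping step used in this study has a simple core computation: each held-out region's activation is predicted as the connectivity-weighted sum of all other regions' activations. A minimal sketch with random stand-in data (array names, sizes, and the data itself are illustrative, not the study's pipeline):

```python
import numpy as np

def activity_flow_predict(activity, fc):
    """Predict each region's task activation as the connectivity-weighted
    sum of all *other* regions' activations (the core activity-flow step).

    activity : (n_regions,) observed task activations
    fc       : (n_regions, n_regions) intrinsic connectivity estimate
    """
    n = len(activity)
    pred = np.empty(n)
    for j in range(n):
        others = np.delete(np.arange(n), j)   # hold out the target region
        pred[j] = activity[others] @ fc[others, j]
    return pred

rng = np.random.default_rng(0)
act = rng.standard_normal(50)                 # e.g. 50 cortical regions
fc = 0.1 * rng.standard_normal((50, 50))      # stand-in for intrinsic FC
pred = activity_flow_predict(act, fc)
print(pred.shape)
```

Holding out the target region is what makes the prediction a genuine test of whether distributed flows, rather than the region's own activity, generate the localized response.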