

Title: Birhythmic Analog Circuit Maze: A Nonlinear Neurostimulation Testbed
Brain dynamics can exhibit narrow-band nonlinear oscillations and multistability. For a subset of disorders of consciousness and motor control, we hypothesized that some symptoms originate from the inability to spontaneously transition from one attractor to another. Using external perturbations, such as electrical pulses delivered by deep brain stimulation devices, it may be possible to induce such transitions out of the pathological attractors. However, inducing a transition may be non-trivial, rendering current open-loop stimulation strategies insufficient. To develop next-generation neural stimulators that can intelligently learn to induce attractor transitions, we require a platform on which to test the efficacy of such systems. To this end, we designed an analog circuit as a model of multistable brain dynamics. The circuit spontaneously and stably oscillates at two distinct periods, as an instantiation of a 3-dimensional continuous-time gated recurrent neural network. To discourage simple perturbation strategies, such as constant or random stimulation patterns, from easily inducing transitions between the stable limit cycles, we designed a state-dependent nonlinear circuit interface for the external perturbation. We demonstrate the existence of nontrivial solutions to the transition problem in our circuit implementation.
Award ID(s): 1845836
PAR ID: 10172511
Author(s) / Creator(s):
Date Published:
Journal Name: Entropy
Volume: 22
Issue: 5
ISSN: 1099-4300
Page Range / eLocation ID: 537
Format(s): Medium: X
Sponsoring Org: National Science Foundation
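The abstract names the model class but not its parameters. The sketch below is a minimal illustration, assuming one standard continuous-time reduction of a gated recurrent unit in three dimensions; the random weights, the forward-Euler integration, and the simple additive perturbation hook are illustrative assumptions, not the published circuit's values or its state-dependent perturbation interface.

```python
# A minimal sketch, assuming a standard continuous-time GRU reduction; the
# weights below are random placeholders, NOT the published circuit's values,
# and the perturbation enters additively rather than through the paper's
# state-dependent nonlinear interface.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
Wz, bz = rng.normal(0.0, 1.5, (3, 3)), np.zeros(3)   # update-gate weights (placeholder)
Wr, br = rng.normal(0.0, 1.5, (3, 3)), np.zeros(3)   # reset-gate weights (placeholder)
Wc, bc = rng.normal(0.0, 1.5, (3, 3)), np.zeros(3)   # candidate weights (placeholder)

def ct_gru_field(h, u):
    """Continuous-time GRU vector field dh/dt with an additive perturbation u."""
    z = sigmoid(Wz @ h + bz)            # update gate
    r = sigmoid(Wr @ h + br)            # reset gate
    c = np.tanh(Wc @ (r * h) + bc)      # candidate state
    return (1.0 - z) * (c - h) + u      # relaxation toward the candidate, plus input

def simulate(h0, T=200.0, dt=1e-2, stim=lambda t, h: 0.0):
    """Forward-Euler rollout; `stim` maps (time, state) -> perturbation."""
    n_steps = int(T / dt)
    traj = np.empty((n_steps, 3))
    h = np.array(h0, dtype=float)
    for k in range(n_steps):
        h = h + dt * ct_gru_field(h, stim(k * dt, h))
        traj[k] = h
    return traj

# Two unperturbed rollouts from different initial conditions; with weights
# tuned as in the paper, each would settle onto a limit cycle with its own period.
print(simulate([0.5, -0.5, 0.1])[-1], simulate([-0.8, 0.9, -0.2])[-1])
```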
More Like this
  1. The behavioral and neural effects of the endogenous release of acetylcholine following stimulation of the nucleus basalis (NB) of Meynert were recently examined in two male monkeys (Qi et al., 2021). Counterintuitively, NB stimulation enhanced behavioral performance while broadening neural tuning in the prefrontal cortex (PFC). The mechanism by which a weaker mnemonic neural code could lead to better performance remains unclear. Here, we show that increased neural excitability in a simple continuous bump attractor model can broaden neural tuning while decreasing bump diffusion, provided neural rates are saturated. In the model, the gain in memory precision outweighs the loss in memory accuracy, improving overall task performance. Moreover, we show that bump attractor dynamics can account for the nonuniform impact of neuromodulation on distractibility, which depends on distractor distance from the target. Finally, we examine the conditions under which bump-attractor tuning and diffusion balance each other in biologically plausible heterogeneous network models. In these discrete bump attractor networks, we show that reducing spatial correlations or enhancing excitatory transmission can improve memory precision. Altogether, we provide a mechanistic understanding of how cholinergic neuromodulation controls spatial working memory through perturbed attractor dynamics in the PFC.
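As a rough illustration of the ingredients named above (a ring-shaped bump attractor, saturating rates, and a global excitability term), the sketch below simulates a noisy rate-based ring network and reads out the bump's position and a width proxy for a few excitability levels. The connectivity, nonlinearity, and all parameters are assumptions for illustration, not the cited study's model.

```python
# A rough sketch, assuming a rate-based ring (bump) attractor with saturating
# rates and a global excitability term; connectivity and parameters are
# illustrative, not those of the cited bump-attractor model.
import numpy as np

N = 180
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1 = -0.5, 2.2                                   # uniform inhibition + tuned excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def f(x):
    """Saturating rate function: rates pinned to [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def run(excitability, T=60.0, dt=0.02, noise=0.03, seed=1):
    rng = np.random.default_rng(seed)
    r = f(0.3 + 0.4 * np.cos(theta))                 # seed a bump at 0 rad
    for _ in range(int(T / dt)):
        drift = -r + f(W @ r + excitability)
        r = f(r + dt * drift + noise * np.sqrt(dt) * rng.standard_normal(N))
    z = np.sum(r * np.exp(1j * theta))               # population vector
    width = np.mean(r > 0.5 * r.max())               # fraction of strongly active cells
    return np.angle(z), width

for ex in (0.2, 0.4, 0.6):                           # increasing global excitability
    pos, width = run(ex)
    print(f"excitability={ex:.1f}  decoded position={pos:+.3f} rad  width proxy={width:.2f}")
```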

     
  2. Neurostimulation - the practice of applying exogenous excitation, e.g., via electrical current, to the brain - has been used for decades in clinical applications such as the treatment of motor disorders and neuropsychiatric illnesses. Over the past several years, more emphasis has been placed on understanding and designing neurostimulation from a systems-theoretic perspective, so as to better optimize its use. Particular questions of interest have included designing stimulation waveforms that best induce certain patterns of brain activity while minimizing expenditure of stimulus power. The pursuit of these designs faces a fundamental conundrum, insofar as they presume that the desired pattern (e.g., desynchronization of a neural population) is known a priori. In this paper, we present an alternative paradigm wherein the goal of the stimulation is not to induce a prescribed pattern, but rather to simply improve the functionality of the stimulated circuit/system. Here, the notion of functionality is defined in terms of an information-theoretic objective. Specifically, we seek closed-loop control designs that maximize the ability of a controlled circuit to encode an afferent 'hidden input,' without prescription of dynamics or output. In this way, the control attempts only to make the system 'effective' without knowing beforehand the dynamics that need to be induced. We devote most of our effort to defining this framework mathematically, providing algorithmic procedures that demonstrate its solution and interpreting the results of this procedure for simple, prototypical dynamical systems. Simulation results are provided for more complex models, including an example involving control of a canonical neural mass model.
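A toy version of this objective can be written in a few lines: choose a feedback gain for a one-dimensional controlled circuit so as to maximize a Gaussian mutual-information proxy between a hidden binary afferent input and the circuit's state. The dynamics, the grid search, and the MI proxy below are illustrative assumptions, not the algorithmic procedure developed in the paper.

```python
# A toy sketch, assuming a scalar circuit and a Gaussian mutual-information
# proxy; this is NOT the paper's algorithmic procedure, only an illustration of
# choosing a closed-loop feedback law by an information-theoretic score.
import numpy as np

rng = np.random.default_rng(0)

def mi_proxy(s, x):
    """Gaussian MI estimate (nats) from the correlation between s and x."""
    rho = np.corrcoef(s, x)[0, 1]
    return -0.5 * np.log(1.0 - rho**2)

def run_circuit(k, a=0.9, b=0.3, noise=0.2, T=20000):
    """x_{t+1} = tanh(a*x_t + b*s_t + k*x_t) + noise, with hidden input s_t = +/-1."""
    s = rng.choice([-1.0, 1.0], size=T)
    x = np.zeros(T + 1)
    for t in range(T):
        u = k * x[t]                                  # closed-loop 'stimulation'
        x[t + 1] = np.tanh(a * x[t] + b * s[t] + u) + noise * rng.standard_normal()
    return s, x[1:]

gains = np.linspace(-1.2, 0.4, 9)
scores = [mi_proxy(*run_circuit(k)) for k in gains]
for k, sc in zip(gains, scores):
    print(f"gain {k:+.2f}: MI proxy {sc:.3f} nats")
print(f"selected feedback gain: {gains[int(np.argmax(scores))]:+.2f}")
```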
  3. Equivariant representation is necessary for the brain and for artificial perceptual systems to faithfully represent a stimulus under (Lie) group transformations. However, it remains unknown how recurrent neural circuits in the brain represent stimuli equivariantly, or how abstract group operators are represented neurally. The present study uses the one-dimensional (1D) translation group as an example to explore the general recurrent neural circuit mechanism of equivariant stimulus representation. We found that a continuous attractor network (CAN), a canonical neural circuit model, self-consistently generates a continuous family of stationary population responses (attractors) that represents the stimulus equivariantly. Inspired by the Drosophila compass circuit, we found that 1D translation operators can be represented by speed neurons in addition to the CAN: the speed neurons' responses represent the moving speed (the 1D translation group parameter), and their feedback connections to the CAN represent the translation generator (the Lie algebra). We demonstrated that the network responses are consistent with experimental data. Our model demonstrates, for the first time, how recurrent neural circuitry in the brain achieves equivariant stimulus representation.
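The mechanism described (a continuous attractor network whose bump is translated by a speed signal acting through offset, derivative-like connections) can be sketched as follows. The ring connectivity, the antisymmetric 'generator' pathway, and all parameters are illustrative assumptions rather than the study's equations.

```python
# A sketch of the standard mechanism described above, assuming a rate-based
# ring attractor plus an offset (derivative-like) 'generator' pathway gated by
# a speed signal; parameters are illustrative, not the study's equations.
import numpy as np

N = 180
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1 = -0.5, 2.2
W_sym = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N    # attractor connectivity
W_gen = (J1 * np.sin(theta[:, None] - theta[None, :])) / N         # translation generator

def f(x):
    return np.clip(x, 0.0, 1.0)                                    # saturating rates

def decode(r):
    return np.angle(np.sum(r * np.exp(1j * theta)))                # population-vector angle

dt, speed = 0.02, 0.5                          # 'speed neuron' activity = group parameter
r = f(0.3 + 0.4 * np.cos(theta))               # bump at 0 rad
positions = []
for _ in range(2000):
    inp = W_sym @ r + speed * (W_gen @ r) + 0.3
    r = r + dt * (-r + f(inp))
    positions.append(decode(r))

shift = np.unwrap(positions)[-1] - positions[0]
print(f"speed input {speed}: bump translated by {shift:+.2f} rad over {2000 * dt:.0f} time units")
```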
  4. Neural activity underlying working memory is not a local phenomenon but is distributed across multiple brain regions. To elucidate the circuit mechanism of such distributed activity, we developed an anatomically constrained computational model of large-scale macaque cortex. We found that mnemonic internal states may emerge from inter-areal reverberation, even in a regime where none of the isolated areas is capable of generating self-sustained activity. The mnemonic activity pattern along the cortical hierarchy indicates a transition in space, separating areas engaged in working memory from those that are not. A host of spatially distinct attractor states is found, potentially subserving various internal processes. The model yields testable predictions, including the idea of counterstream inhibitory bias, the role of prefrontal areas in controlling distributed attractors, and the resilience of distributed activity to lesions or inactivation. This work provides a theoretical framework for identifying large-scale brain mechanisms and computational principles of distributed cognitive processes.
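A minimal caricature of inter-areal reverberation: a handful of reciprocally coupled rate units, none of which is bistable in isolation, jointly sustain activity after a transient cue. The gradient of local recurrence, the all-to-all coupling, and the parameters below are assumptions for illustration and bear no quantitative relation to the anatomically constrained macaque model.

```python
# A toy caricature, assuming four reciprocally coupled rate units; none is
# bistable in isolation, but the coupled network sustains activity after a cue.
# This bears no quantitative relation to the published macaque cortex model.
import numpy as np

def f(x):
    """Saturating rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))

w_local = np.array([0.35, 0.45, 0.55, 0.65])      # gradient of local recurrent strength
n_areas = len(w_local)
g = 0.5                                           # inter-areal coupling strength
A = g * (np.ones((n_areas, n_areas)) - np.eye(n_areas))   # reciprocal long-range projections

def simulate(coupled, T=50.0, dt=0.01, stim_until=5.0, stim_amp=1.0):
    r = np.full(n_areas, 0.01)
    C = A if coupled else np.zeros_like(A)
    for step in range(int(T / dt)):
        inp = w_local * r + C @ r
        if step * dt < stim_until:
            inp[0] += stim_amp                    # transient cue to the first area only
        r = r + dt * (-r + f(inp))
    return np.round(r, 3)

print("isolated areas after the cue: ", simulate(coupled=False))
print("coupled network after the cue:", simulate(coupled=True))
```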
  5. Abstract

    Objective. A major challenge in designing closed-loop brain-computer interfaces is finding optimal stimulation patterns as a function of ongoing neural activity for different subjects and different objectives. Traditional approaches, such as those currently used for deep brain stimulation, have largely followed a manual trial-and-error strategy to search for effective open-loop stimulation parameters, a strategy that is inefficient and does not generalize to closed-loop activity-dependent stimulation. Approach. To achieve goal-directed closed-loop neurostimulation, we propose the use of brain co-processors, devices which exploit artificial intelligence to shape neural activity and bridge injured neural circuits for targeted repair and restoration of function. Here we investigate a specific type of co-processor called a 'neural co-processor' which uses artificial neural networks and deep learning to learn optimal closed-loop stimulation policies. The co-processor adapts the stimulation policy as the biological circuit itself adapts to the stimulation, achieving a form of brain-device co-adaptation. Here we use simulations to lay the groundwork for future in vivo tests of neural co-processors. We leverage a previously published cortical model of grasping, to which we applied various forms of simulated lesions. We used our simulations to develop the critical learning algorithms and study adaptations to non-stationarity in preparation for future in vivo tests. Main results. Our simulations show the ability of a neural co-processor to learn a stimulation policy using a supervised learning approach, and to adapt that policy as the underlying brain and sensors change. Our co-processor successfully co-adapted with the simulated brain to accomplish the reach-and-grasp task after a variety of lesions were applied, achieving recovery towards healthy function in the range 75%–90%. Significance. Our results provide the first proof-of-concept demonstration, using computer simulations, of a neural co-processor for adaptive activity-dependent closed-loop neurostimulation for optimizing a rehabilitation goal after injury. While a significant gap remains between simulations and in vivo applications, our results provide insights on how such co-processors may eventually be developed for learning complex adaptive stimulation policies for a variety of neural rehabilitation and neuroprosthetic applications.
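The core supervised-learning step of a neural co-processor can be caricatured with a linear 'stimulator' trained to map recorded activity to stimulation that restores a lesioned toy circuit's healthy output. Everything below (the circuit, the lesion mask, the recording model, and the linear map standing in for the paper's deep network) is an illustrative assumption; brain-device co-adaptation is not modeled here.

```python
# A toy sketch, assuming a linear 'co-processor' trained by supervised learning
# to map recorded activity to stimulation that restores a lesioned circuit's
# healthy output. The circuit, lesion, recording model, and the linear map (in
# place of the paper's deep network) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_cmd, n_mid, n_out, n_rec = 2, 8, 2, 6
P = rng.normal(0, 1, (n_mid, n_cmd))              # command area -> intermediate population
D = rng.normal(0, 1, (n_out, n_mid)) / n_mid      # intermediate population -> motor output
R = rng.normal(0, 1, (n_rec, n_cmd))              # electrodes record from the command area
intact = np.array([1., 1., 1., 0., 0., 1., 1., 0.])  # lesion silences three units

def healthy_output(m):
    return D @ (P @ m)

def lesioned_output(m, stim):
    return D @ (intact * (P @ m) + stim)

# Supervised learning of the co-processor: gradient descent on the squared
# error between the lesioned-plus-stimulated output and the healthy output.
W = np.zeros((n_mid, n_rec))
lr = 0.05
for _ in range(3000):
    m = rng.normal(0, 1, n_cmd)                   # intended command (training sample)
    x = R @ m + 0.05 * rng.normal(0, 1, n_rec)    # noisy recordings seen by the co-processor
    err = lesioned_output(m, W @ x) - healthy_output(m)
    W -= lr * np.outer(D.T @ err, x)              # chain rule through the readout D

tests = rng.normal(0, 1, (200, n_cmd))
e0 = np.mean([np.linalg.norm(lesioned_output(m, 0.0) - healthy_output(m)) for m in tests])
e1 = np.mean([np.linalg.norm(lesioned_output(m, W @ (R @ m)) - healthy_output(m)) for m in tests])
print(f"output error without co-processor: {e0:.3f}")
print(f"output error with trained co-processor: {e1:.3f}")
```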

     