

Title: Sparse balance: Excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights
Cortical circuits generate excitatory currents that must be cancelled by strong inhibition to assure stability. The resulting excitatory-inhibitory (E-I) balance can generate spontaneous irregular activity but, in standard balanced E-I models, this requires that an extremely strong feedforward bias current be included along with the recurrent excitation and inhibition. The absence of experimental evidence for such large bias currents inspired us to examine an alternative regime that exhibits asynchronous activity without requiring unrealistically large feedforward input. In these networks, irregular spontaneous activity is supported by a continually changing sparse set of neurons. To support this activity, synaptic strengths must be drawn from high-variance distributions. Unlike standard balanced networks, these sparse balance networks exhibit robust nonlinear responses to uniform inputs and non-Gaussian input statistics. Interestingly, the speed, not the size, of synaptic fluctuations dictates the degree of sparsity in the model. In addition to simulations, we provide a mean-field analysis to illustrate the properties of these networks.
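The ingredients described in the abstract can be sketched in a toy rate network; the network size, the lognormal weight parameters, the saturating threshold-linear rate function, and the small bias current below are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (not the paper's model): N threshold-linear rate units with
# recurrent excitation and inhibition. Weight magnitudes are drawn from a
# high-variance lognormal distribution, echoing the abstract's point that
# sparse balance requires broadly distributed synaptic weights.
N = 200
J = rng.lognormal(mean=-1.0, sigma=1.5, size=(N, N)) / np.sqrt(N)
sign = np.where(rng.random(N) < 0.5, 1.0, -1.0)  # each column is E or I
W = J * sign[None, :]
b = 0.1 * np.ones(N)  # small bias current; no large feedforward drive

# Damped fixed-point iteration of r = f(W r + b), with a saturating
# threshold-linear f to keep the toy dynamics bounded.
r = np.zeros(N)
for _ in range(500):
    r = 0.9 * r + 0.1 * np.clip(W @ r + b, 0.0, 50.0)

# Fraction of units still appreciably active after settling.
active_fraction = float(np.mean(r > 1e-3))
print(active_fraction)
```

In this schematic, units receiving net inhibition are pushed to zero while others remain active, which is the flavor of the sparse activity the abstract describes; the actual spiking model and its mean-field analysis are in the paper.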
Award ID(s):
1707398
PAR ID:
10338003
Author(s) / Creator(s):
; ;
Editor(s):
Soltani, Alireza
Date Published:
Journal Name:
PLOS Computational Biology
Volume:
18
Issue:
2
ISSN:
1553-7358
Page Range / eLocation ID:
e1008836
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Gjorgjieva, Julijana (Ed.)
    The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
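The pair-based spike-timing-dependent plasticity at the heart of this framework can be sketched as a simple update rule; the amplitudes and time constant below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Minimal pair-based STDP rule (illustrative constants, not the paper's
# fitted values): a presynaptic spike shortly before a postsynaptic spike
# potentiates the synapse; the reverse order depresses it.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for post-minus-pre spike time difference dt (ms)."""
    if dt > 0:  # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)  # post before pre -> depression

print(stdp_dw(10.0), stdp_dw(-10.0))
```

The slight asymmetry between potentiation and depression amplitudes is a common stabilizing choice in STDP models; how such rules interact with balance is the subject of the paper.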
  2.
    Biological neural networks face a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, imprecisely specified synaptic connectivity, and nonnegligible delays in synaptic transmission. A common approach to combating such biological heterogeneity involves averaging over large redundant networks of N neurons, resulting in coding errors that decrease classically as the inverse square root of N. Recent work demonstrated a novel mechanism whereby recurrent spiking networks could efficiently encode dynamic stimuli, achieving a superclassical scaling in which coding errors decrease as 1/N. This specific mechanism involved two key ideas: predictive coding, and a tight balance, or cancellation, between strong feedforward inputs and strong recurrent feedback. However, the theoretical principles governing the efficacy of balanced predictive coding and its robustness to noise, synaptic weight heterogeneity, and communication delays remain poorly understood. To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated, unlike in previous balanced network models, and we develop a mean-field theory of coding accuracy. Overall, our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes, reveals the fundamental role that balance plays in achieving superclassical scaling, and unifies previously disparate models in theoretical neuroscience.
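The classical inverse-square-root averaging baseline mentioned above is easy to check numerically; the population sizes and trial count below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical scaling check: averaging N independent noisy neurons that each
# encode the same value gives an error shrinking like 1/sqrt(N), so going
# from N=100 to N=10000 should shrink the error roughly 10x. (The balanced
# predictive coding mechanism discussed above beats this, scaling as 1/N.)
signal = 1.0
rmse = {}
for n in (100, 10000):
    estimates = rng.normal(loc=signal, scale=1.0, size=(500, n)).mean(axis=1)
    rmse[n] = float(np.sqrt(np.mean((estimates - signal) ** 2)))

ratio = rmse[100] / rmse[10000]  # expected near sqrt(10000 / 100) = 10
print(round(ratio, 1))
```

Reproducing the superclassical 1/N scaling requires the recurrent predictive-coding network itself, which this sketch does not implement.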
  3. The notion that a neuron transmits the same set of neurotransmitters at all of its post-synaptic connections, typically known as Dale's law, is well supported throughout the majority of the brain and is assumed in almost all theoretical studies investigating the mechanisms for computation in neuronal networks. Dale's law has numerous functional implications in fundamental sensory processing and decision-making tasks, and it plays a key role in the current understanding of the structure-function relationship in the brain. However, since exceptions to Dale's law have been discovered for certain neurons, and because other biological systems with complex network structure incorporate individual units that send both positive and negative feedback signals, we investigate the functional implications of network model dynamics that violate Dale's law by allowing each neuron to send out both excitatory and inhibitory signals to its neighbors. We show how balanced network dynamics, in which large excitatory and inhibitory inputs are dynamically adjusted such that input fluctuations produce irregular firing events, are theoretically preserved for a single population of neurons violating Dale's law. We further leverage this single-population network model in the context of two competing pools of neurons to demonstrate that effective decision-making dynamics are also produced, agreeing with experimental observations of honeybee dynamics in selecting a food source and artificial neural networks trained in optimal selection. Through direct comparison with the classical two-population balanced neuronal network, we argue that the one-population network demonstrates more robust balanced activity for systems with fewer computational units, such as honeybee colonies, whereas the two-population network exhibits a more rapid response to temporal variations in network inputs, as required by the brain.
We expect this study to shed light on the role of the neurons violating Dale's law that have been observed experimentally, as well as on shared design principles across biological systems that perform complex computations.
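The structural distinction at issue can be made concrete with a small check on connectivity matrices; this is a minimal sketch, and the random matrices below are illustrations rather than the model's weights. Under Dale's law, every neuron's outgoing weights (one column of W) share a single sign, while the one-population model lets each neuron send both excitatory and inhibitory signals:

```python
import numpy as np

rng = np.random.default_rng(2)

def obeys_dale(W):
    """True if each column of W is entirely nonnegative or nonpositive."""
    return all((col >= 0).all() or (col <= 0).all() for col in W.T)

N = 50
signs = np.where(rng.random(N) < 0.5, 1.0, -1.0)
W_dale = np.abs(rng.normal(size=(N, N))) * signs[None, :]  # signed columns
W_mixed = rng.normal(size=(N, N))  # each neuron sends both signs

print(obeys_dale(W_dale), obeys_dale(W_mixed))
```

The abstract's question is then which dynamical properties of balance survive when W is of the second kind.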
    Neural activity coordinated across different scales, from neuronal circuits to large-scale brain networks, gives rise to complex cognitive functions. Bridging the gap between micro- and macro-scale processes, we present a novel framework based on the maximum entropy model to infer a hybrid resting-state structural connectome, representing functional interactions constrained by structural connectivity. We demonstrate that the structurally informed network outperforms the unconstrained model in simulating brain dynamics: constraining the inference model with the network structure improves the estimation of pairwise BOLD signal interactions. Further, we simulate brain network dynamics using Monte Carlo simulations with the new hybrid connectome to probe connectome-level differences in excitation-inhibition balance between apolipoprotein E (APOE)-ε4 carriers and noncarriers. Our results reveal sex differences among APOE-ε4 carriers in functional dynamics at criticality; specifically, female carriers appear to exhibit a lower tolerance to network disruptions resulting from increased excitatory interactions. In sum, the new multimodal network explored here enables analysis of brain dynamics through the integration of structure and function, providing insight into the complex interactions underlying neural activity such as the balance of excitation and inhibition.
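The core idea of the hybrid connectome, a pairwise maximum-entropy (Ising-like) model whose couplings are restricted to structurally connected pairs, can be sketched as follows; the network size, coupling values, and sampler settings are illustrative assumptions rather than the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy pairwise maximum-entropy (Ising-like) model with couplings J_ij
# allowed only where a hypothetical structural connectome has an edge.
n = 8
structure = rng.random((n, n)) < 0.3          # hypothetical structural edges
structure = np.triu(structure, k=1)           # undirected, no self-edges
J = 0.5 * rng.normal(size=(n, n)) * structure # structurally masked couplings
h = 0.1 * rng.normal(size=n)                  # per-node biases

def energy(s):
    return -(s @ J @ s + h @ s)

# Metropolis sampling of binary states s in {-1, +1}^n
s = rng.choice([-1, 1], size=n).astype(float)
samples = []
for _ in range(2000):
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] *= -1
    if rng.random() < np.exp(energy(s) - energy(s_new)):
        s = s_new
    samples.append(s.copy())

mean_activity = np.mean(samples, axis=0)
print(mean_activity)
```

In the study itself, the couplings would be fit to BOLD statistics under the structural constraint; here they are random, and only the masking idea is shown.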
  5. The goal of odor source separation and identification from real-world data presents a challenging problem. Both individual odors of potential interest and multisource odor scenes constitute linear combinations of analytes present at different concentrations. The mixing of these analytes can exert nonlinear and even nonmonotonic effects on cross-responsive chemosensors, effectively occluding diagnostic activity patterns across the array. Neuromorphic algorithms, inspired by specific computational strategies of the mammalian olfactory system, have been trained to rapidly learn and reconstruct arbitrary odor source signatures in the presence of background interference. However, such networks perform best when tuned to the statistics of well-behaved inputs, normalized and predictable in their activity distributions. Deployment of chemosensor arrays in the wild exposes these networks to disruptive effects that exceed these tolerances. To address the problems inherent to chemosensory signal conditioning and representation learning, the olfactory bulb deploys an array of strategies: (1) shunting inhibition in the glomerular layer implements divisive normalization, contributing to concentration-invariant representations; (2) feedforward gain diversification (synaptic weight heterogeneity) regularizes spiking activity in the external plexiform layer (mitral and granule cells), enabling the network to handle unregulated inputs; (3) gamma-band oscillations segment activity into packets, enabling a spike phase code and iterative denoising; (4) excitatory and inhibitory spike-timing-dependent learning rules induce hierarchical attraction basins, enabling the network to map its highly complex inputs to regions of a lower-dimensional manifold; (5) neurogenesis in the granule cell layer enables lifelong learning and prevents order effects (regularizing the learned synaptic weight distribution over the span of training).
Here, we integrate these motifs into a single neuromorphic model, bringing together prior OB-inspired model architectures. In a series of simulation experiments including real-world data from a chemosensor array, we demonstrate the network’s ability to learn and detect complex odorants in variable environments despite unpredictable noise distributions. 
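Motif (1), shunting inhibition acting as divisive normalization, can be sketched in a few lines; the sensor responses and the small stabilizing constant are illustrative:

```python
import numpy as np

# Divisive normalization sketch: dividing each sensor response by the
# summed population activity yields a response pattern that is (nearly)
# invariant to overall odor concentration.
def divisive_normalize(x, sigma=1e-6):
    return x / (sigma + x.sum())

odor = np.array([0.1, 0.5, 0.2, 0.8])      # hypothetical sensor responses
low = divisive_normalize(1.0 * odor)        # low concentration
high = divisive_normalize(10.0 * odor)      # 10x concentration

print(np.allclose(low, high))
```

The normalized patterns at the two concentrations match, which is the concentration-invariance property the glomerular layer is credited with above; the full model combines this with the other four motifs.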