This content will become publicly available on July 31, 2026

Title: Parametric models for predicting nonstationary spike-spike correlations with local field potentials
Correlations between the spiking of pairs of neurons are often used to study the brain’s representation of sensory or motor variables and neural circuit function and dysfunction. Previous statistical techniques have shown how time-averaged spike-spike correlations can be predicted by the time-averaged relationships between the individual neurons and the local field potential (LFP). However, spiking and LFP are both nonstationary, and spike-spike correlations have nonstationary structure that cannot be accounted for by time-averaged approaches. Here we develop parametric models that predict spike-spike correlations using a small number of LFP-based predictors, and we then apply these models to the problem of tracking changes in spike-spike correlations over time. Parametric models allow for flexibility in the choice of which LFP recording channels and frequency bands to use for prediction, and their coefficients directly indicate which LFP features drive correlated spiking. Here we demonstrate our methods in simulation and test the models on experimental data from large-scale multi-electrode recordings in the mouse hippocampus and visual cortex. In single time windows, we find that our parametric models can be as accurate as previous nonparametric approaches, while also being flexible and interpretable. We then demonstrate how parametric models can be applied to describe nonstationary spike-spike correlations measured in sequential time windows. We find that although the patterns of both cortical and hippocampal spike-spike correlations vary over time, these changes are, at least partially, predicted by models that assume a fixed spike-field relationship. This approach may thus help to better understand how the dynamics of spike-spike correlations are related to functional brain states. Since spike-spike correlations are increasingly used as features for decoding external variables from neural activity, these models may also have the potential to improve the accuracy of adaptive decoders and brain-machine interfaces.
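A minimal sketch of the kind of model the abstract describes, assuming binned spike counts and a single band-limited LFP power predictor fit by ordinary least squares; the function names, feature choice, and linear regression form are illustrative assumptions, not the authors' exact model.

```python
# Sketch: predict the windowed spike-spike correlation of one neuron pair from a
# small number of LFP-based predictors. All names and choices here are assumptions.
import numpy as np
from numpy.linalg import lstsq

def windowed_corr(counts_i, counts_j, bins_per_win):
    """Pearson correlation of binned spike counts within consecutive time windows."""
    n_win = len(counts_i) // bins_per_win
    corrs = np.empty(n_win)
    for w in range(n_win):
        a = counts_i[w * bins_per_win:(w + 1) * bins_per_win]
        b = counts_j[w * bins_per_win:(w + 1) * bins_per_win]
        corrs[w] = np.corrcoef(a, b)[0, 1]
    return corrs

def lfp_band_power(lfp, samples_per_win, fs, band=(4.0, 12.0)):
    """Mean LFP power in a frequency band, per window (an illustrative predictor).
    Assumes the LFP windows are aligned with the spike-count windows."""
    n_win = len(lfp) // samples_per_win
    freqs = np.fft.rfftfreq(samples_per_win, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    power = np.empty(n_win)
    for w in range(n_win):
        seg = lfp[w * samples_per_win:(w + 1) * samples_per_win]
        power[w] = (np.abs(np.fft.rfft(seg - seg.mean())) ** 2)[mask].mean()
    return power

def fit_parametric_model(corrs, *features):
    """Least-squares fit corr_w ~ beta0 + sum_k beta_k * feature_k[w]; the fitted
    coefficients indicate which LFP features predict correlated spiking."""
    X = np.column_stack((np.ones(len(corrs)),) + features)
    beta, *_ = lstsq(X, corrs, rcond=None)
    return beta
```

Under the abstract's "fixed spike-field relationship" assumption, tracking nonstationary correlations would then amount to applying coefficients fitted in earlier windows to LFP features measured in later ones.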
Award ID(s): 1931249
PAR ID: 10637532
Author(s) / Creator(s):
Publisher / Repository: bioRxiv
Date Published:
Format(s): Medium: X
Institution: bioRxiv
Sponsoring Org: National Science Foundation
More Like this
  1. It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) describing their spiking sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network’s spike sequence by a time delay per neuron, forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multi-neuron recordings, we demonstrate the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning.
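One way to picture the decomposition this abstract describes, as a hedged sketch only: Fourier-transform each spike train, factor the resulting complex neuron-by-frequency matrix, and read a per-neuron delay from each component's phases. The SVD factorization and the crude phase-to-delay conversion below are illustrative assumptions, not the published algorithm.

```python
# Rough sketch of the general idea (not the published method).
import numpy as np

def spike_spectra(spike_trains, fs, freqs):
    """Complex Fourier coefficients of binned spike trains at the chosen frequencies."""
    t = np.arange(spike_trains.shape[1]) / fs
    basis = np.exp(-2j * np.pi * np.outer(freqs, t))   # (n_freq, n_time)
    return spike_trains @ basis.T                      # (n_neurons, n_freq)

def extract_timing_networks(spike_trains, fs, freqs, n_networks=2):
    """Factor the complex spectra; convert each component's phases to per-neuron delays."""
    S = spike_spectra(spike_trains, fs, freqs)
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    networks = []
    for k in range(n_networks):
        weights = np.abs(U[:, k])                      # participation of each neuron
        ref = int(np.argmax(weights))                  # reference neuron for relative timing
        phases = np.angle(U[:, k] * np.conj(U[ref, k]))
        delays = phases / (2 * np.pi * np.mean(freqs)) # crude phase-to-delay conversion
        networks.append({"weights": weights, "delays": delays, "strength": float(s[k])})
    return networks
```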
  2. Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks that approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons, and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
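A minimal sketch of a TPAM-style network as described here: sparse complex phasor patterns stored with a Hebbian outer-product rule and retrieved by a phase-preserving, top-k threshold update. The network sizes, retrieval rule details, and overlap measure are illustrative assumptions.

```python
# Sketch of a threshold phasor associative memory (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
N, P, K = 200, 10, 40          # neurons, stored patterns, active units per pattern

def random_phasor_patterns(n, p, k):
    X = np.zeros((p, n), dtype=complex)
    for mu in range(p):
        idx = rng.choice(n, size=k, replace=False)
        X[mu, idx] = np.exp(1j * rng.uniform(0, 2 * np.pi, size=k))
    return X

patterns = random_phasor_patterns(N, P, K)
W = patterns.conj().T @ patterns / K        # Hebbian (outer-product) storage
np.fill_diagonal(W, 0)

def retrieve(z, steps=50, k_active=K):
    """Iterate: keep the phase of the summed input, keep only the k strongest units active."""
    for _ in range(steps):
        u = W @ z
        mag = np.abs(u)
        thresh = np.sort(mag)[-k_active]
        z = np.where(mag >= thresh, np.exp(1j * np.angle(u)), 0)
    return z

# Example: retrieve pattern 0 from a corrupted cue (half of its active units removed).
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: K // 2]] = 0
overlap = np.abs(np.vdot(patterns[0], retrieve(cue))) / K
```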
  3. The representation of external stimuli in the form of action potentials or spikes constitutes the basis of the energy-efficient neural computation that emerging spiking neural networks (SNNs) aspire to imitate. With recent evidence suggesting that information in the brain is more often represented by the explicit firing times of neurons rather than by mean firing rates, it is imperative to develop novel hardware that can accelerate sparse, spike-timing-based encoding. Here, a medium-scale integrated circuit for spike-timing-based encoding of visual information is introduced, composed of two cascaded three-stage inverters and one XOR logic gate fabricated using a total of 21 memtransistors based on photosensitive 2D monolayer MoS2. It is shown that different illumination intensities can be encoded into sparse spiking, with the time to first spike representing the illumination information: higher intensities invoke earlier spikes and vice versa. In addition, non-volatile and analog programmability in the photoencoder is exploited for adaptive photoencoding, which allows expedited spiking under scotopic (low-light) conditions and deferred spiking under photopic (bright-light) conditions. Finally, the low energy expenditure of less than 1 µJ by the 2D-memtransistor-based photoencoder highlights the benefits of in-sensor and bioinspired design, which can be transformative for the acceleration of SNNs.
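The time-to-first-spike idea can be sketched in software: a stronger (brighter) drive reaches threshold sooner and therefore spikes earlier, and a programmable gain shifts latencies for scotopic versus photopic conditions. This is a conceptual analogy, not a model of the memtransistor circuit; all constants are made-up assumptions.

```python
# Conceptual time-to-first-spike encoder (software analogy, illustrative constants).
import numpy as np

def time_to_first_spike(intensity, gain=1.0, threshold=1.0, t_max=100e-3):
    """Latency of the first spike for a constant input integrated to a threshold."""
    drive = gain * intensity
    if drive <= 0:
        return np.inf                      # no spike within the encoding window
    latency = threshold / drive            # stronger drive -> earlier spike
    return latency if latency <= t_max else np.inf

# Adaptive encoding: boost the gain in dim conditions, reduce it in bright conditions.
scotopic_gain, photopic_gain = 5.0, 0.2
dim_latency = time_to_first_spike(0.5, gain=scotopic_gain)     # expedited spiking
bright_latency = time_to_first_spike(50.0, gain=photopic_gain) # deferred spiking
```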
  4. Artificial Neural Networks (ANNs) are currently being used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, hence emulating the communication mechanism of biological neurons. Due to their low energy consumption, SNNs are considered to be important candidates as co-processors to be implemented in mobile devices. In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons. A policy gradient-based algorithm is derived considering a Generalized Linear Model (GLM) for spiking neurons. Experimental results demonstrate the capability of online trained SNNs as stochastic policies to gracefully trade off energy consumption, as measured by the number of spikes, against control performance. Significant gains are shown as compared to the standard approach of converting an offline trained ANN into an SNN.
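A hedged sketch of the first-to-spike action rule with GLM-style output neurons: each action's neuron fires stochastically at a rate set by a linear readout of the state, and the earliest spike selects the action. The exponential link, array shapes, and fallback rule are assumptions, and the policy-gradient update itself is omitted.

```python
# Sketch: first-to-spike action selection with GLM (Poisson-like) output neurons.
import numpy as np

rng = np.random.default_rng(1)

def first_to_spike_action(state, weights, dt=1e-3, t_max=0.1):
    """weights: (n_actions, n_features). Returns (chosen action, spike time)."""
    rates = np.exp(weights @ state)            # GLM: exponential link on linear drive
    t = 0.0
    while t < t_max:
        p_spike = 1.0 - np.exp(-rates * dt)    # per-step spike probability
        spiked = rng.random(len(rates)) < p_spike
        if spiked.any():
            return int(np.flatnonzero(spiked)[0]), t   # ties broken by index
        t += dt
    return int(np.argmax(rates)), t_max        # fallback if no neuron spiked in time

action, latency = first_to_spike_action(state=np.array([0.2, -0.1, 0.5]),
                                         weights=rng.normal(scale=0.5, size=(4, 3)))
```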
  5. Gjorgjieva, Julijana (Ed.)
    The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
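As a rough illustration of the plasticity ingredient discussed (not the paper's derivation), below is a trace-based pairwise spike-timing-dependent plasticity update of the kind that could act on excitatory weights in a balanced excitatory-inhibitory network; the time constants and amplitudes are arbitrary assumptions.

```python
# Sketch: one step of trace-based pairwise STDP (illustrative parameters).
import numpy as np

def stdp_step(W, pre_spikes, post_spikes, x_pre, x_post,
              dt=1e-3, tau_pre=20e-3, tau_post=20e-3, A_plus=5e-4, A_minus=5.25e-4):
    """W: (n_post, n_pre) weights; pre_spikes/post_spikes: boolean vectors for this step;
    x_pre/x_post: running synaptic traces (same shapes as the spike vectors)."""
    # Decay the synaptic traces, then add the new spikes.
    x_pre += dt * (-x_pre / tau_pre) + pre_spikes
    x_post += dt * (-x_post / tau_post) + post_spikes
    # Potentiate when a postsynaptic spike follows presynaptic activity,
    # depress when a presynaptic spike follows postsynaptic activity.
    W += A_plus * np.outer(post_spikes, x_pre) - A_minus * np.outer(x_post, pre_spikes)
    np.clip(W, 0.0, None, out=W)       # keep excitatory weights nonnegative
    return W, x_pre, x_post
```

In a balanced-network setting, an update like this would be applied each time step alongside the network simulation, which is how plasticity-induced weight changes and the network's correlated activity interact.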