Title: Capturing Spike Variability in Noisy Izhikevich Neurons Using Point Process Generalized Linear Models
To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
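The simulation setup the abstract describes can be sketched in a few lines. The sketch below uses the standard regular-spiking Izhikevich constants; the drive level `I_mean` and the noise scale `sigma` are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def izhikevich_spikes(T_ms=1000.0, dt=0.1, I_mean=10.0, sigma=2.0, seed=0):
    """Euler simulation of a regular-spiking Izhikevich neuron driven by a
    noisy input current; returns spike times in ms."""
    rng = np.random.default_rng(seed)
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking constants
    v, u = c, b * c                       # membrane and recovery variables
    spikes = []
    for k in range(int(T_ms / dt)):
        I = I_mean + sigma * rng.standard_normal()  # noisy drive (assumed scale)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                     # spike: reset v, bump u
            spikes.append(k * dt)
            v, u = c, u + d
    return np.array(spikes)
```

The resulting spike times could then be binned and used as the response variable of a GLM with spike-history covariates, as in the letter.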
Award ID(s):
1451384
PAR ID:
10652469
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
MIT Press
Date Published:
Journal Name:
Neural Computation
Volume:
30
Issue:
1
ISSN:
0899-7667
Page Range / eLocation ID:
125 to 148
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for more than one time step. Moreover, the generated spike trains can fail to capture important features of the data and even show diverging firing rates. To alleviate this, we propose to directly minimize the divergence between neural recorded and model generated spike trains using spike train kernels. We develop a method that stochastically optimizes the maximum mean discrepancy induced by the kernel. Experiments performed on both real and synthetic neural data validate the proposed approach, showing that it leads to well-behaving models. Using different combinations of spike train kernels, we show that we can control the trade-off between different features which is critical for dealing with model-mismatch. 
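A minimal sketch of the kernel-MMD idea above, assuming the memoryless cross-intensity (mCI) spike-train kernel as the kernel choice; the paper's actual kernel combinations and stochastic optimization are not reproduced here.

```python
import numpy as np

def mci_kernel(s, t, tau=10.0):
    """mCI spike-train kernel: sum over spike pairs of a Laplacian
    kernel on spike-time differences (tau is an assumed width)."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    diffs = np.subtract.outer(np.asarray(s), np.asarray(t))
    return np.exp(-np.abs(diffs) / tau).sum()

def mmd2(X, Y, tau=10.0):
    """Biased estimate of squared maximum mean discrepancy between
    two sets of spike trains under the mCI kernel."""
    kxx = np.mean([mci_kernel(a, b, tau) for a in X for b in X])
    kyy = np.mean([mci_kernel(a, b, tau) for a in Y for b in Y])
    kxy = np.mean([mci_kernel(a, b, tau) for a in X for b in Y])
    return kxx + kyy - 2.0 * kxy
```

Training would then adjust model parameters to drive this discrepancy between recorded and generated spike trains toward zero.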
  2. Spike train decoding is considered one of the grand challenges in reverse-engineering neural control systems as well as in the development of neuromorphic controllers. This paper presents a novel relative-time kernel design that accounts for not only individual spike train patterns, but also the relative spike timing between neuron pairs in the population. The new relative-time-kernel-based spike train decoding method proposed in this paper allows us to map the spike trains of a population of neurons onto a lower-dimensional manifold, in which continuous-time trajectories live. The effectiveness of our novel approach is demonstrated by comparing it with existing kernel-based and rate-based decoders, including the traditional reproducing kernel Hilbert space framework. In this paper, we use the data collected in hawk moth flower tracking experiments to test the importance of relative spike timing information for neural control, and focus on the problem of uncovering the mapping from the spike trains of ten primary flight muscles to the resulting forces and torques on the moth body. We show that our new relative-time-kernel-based decoder improves the prediction of the resulting forces and torques by up to 52.1%. Our proposed relative-time-kernel-based decoder may be used to reverse-engineer neural control systems more accurately by incorporating precise relative spike timing information in spike trains.
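One way to make "relative spike timing between neuron pairs" concrete is a per-pair feature built from cross-neuron spike-time differences. The function below is a hypothetical illustration, not the paper's kernel; `tau` is an assumed smoothing width.

```python
import numpy as np

def relative_timing_features(trains, tau=5.0):
    """Hypothetical sketch: one feature per ordered neuron pair,
    computed as the mean of a Laplacian kernel applied to all
    cross-neuron spike-time differences."""
    n = len(trains)
    feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            si, sj = np.asarray(trains[i]), np.asarray(trains[j])
            if len(si) == 0 or len(sj) == 0:
                feats.append(0.0)
                continue
            d = np.subtract.outer(si, sj)     # all pairwise time differences
            feats.append(np.exp(-np.abs(d) / tau).mean())
    return np.array(feats)
```

A decoder could regress forces and torques on features of this kind alongside the single-train kernel features.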
  3. Spiking neural networks (SNNs) are well suited for spatio-temporal learning and implementations on energy-efficient event-driven neuromorphic processors. However, existing SNN error backpropagation (BP) methods lack proper handling of spiking discontinuities and suffer from low performance compared with the BP methods for traditional artificial neural networks. In addition, a large number of time steps are typically required to achieve decent performance, leading to high latency and rendering spike based computation unscalable to deep architectures. We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs, which breaks down error backpropagation across two types of inter-neuron and intra-neuron dependencies and leads to improved temporal learning precision. It captures inter-neuron dependencies through presynaptic firing times by considering the all-or-none characteristics of firing activities, and captures intra-neuron dependencies by handling the internal evolution of each neuronal state in time. TSSL-BP efficiently trains deep SNNs within a much shortened temporal window of a few steps while improving the accuracy for various image classification datasets including CIFAR10. 
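The two dependency types can be illustrated with a plain leaky integrate-and-fire forward pass; this is a generic sketch, not TSSL-BP itself, and the time constant and threshold are assumed values.

```python
import numpy as np

def lif_forward(inputs, w, tau=20.0, v_th=1.0):
    """Forward pass of a leaky integrate-and-fire layer over time.
    Comments mark the two dependency types TSSL-BP distinguishes."""
    T, _ = inputs.shape
    n_out = w.shape[1]
    v = np.zeros(n_out)
    spikes = np.zeros((T, n_out))
    decay = np.exp(-1.0 / tau)
    for t in range(T):
        # intra-neuron dependency: each membrane evolves from its own past
        v = decay * v + inputs[t] @ w
        # inter-neuron dependency: all-or-none spikes reach downstream units
        fired = v >= v_th
        spikes[t] = fired.astype(float)
        v = np.where(fired, 0.0, v)   # reset after spiking
    return spikes
```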
  4. Graham, Lyle J. (Ed.)
    Neurons exhibit diverse intrinsic dynamics, which govern how they integrate synaptic inputs to produce spikes. Intrinsic dynamics are often plastic during development and learning, but the effects of these changes on stimulus encoding properties are not well known. To examine this relationship, we simulated auditory responses to zebra finch song using a linear-dynamical cascade model, which combines a linear spectrotemporal receptive field with a dynamical, conductance-based neuron model, then used generalized linear models to estimate encoding properties from the resulting spike trains. We focused on the effects of a low-threshold potassium current (K LT ) that is present in a subset of cells in the zebra finch caudal mesopallium and is affected by early auditory experience. We found that K LT affects both spike adaptation and the temporal filtering properties of the receptive field. The direction of the effects depended on the temporal modulation tuning of the linear (input) stage of the cascade model, indicating a strongly nonlinear relationship. These results suggest that small changes in intrinsic dynamics in tandem with differences in synaptic connectivity can have dramatic effects on the tuning of auditory neurons. 
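The GLM-estimation step described above can be sketched as a Poisson GLM fit by gradient ascent on the log-likelihood; the learning-rate and iteration settings below are illustrative assumptions, and the design matrix would hold stimulus and spike-history covariates.

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.5, n_iter=500):
    """Fit a Poisson GLM with log-link, log(lambda) = X @ beta, by
    gradient ascent; X is the covariate matrix, y the binned counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        lam = np.exp(X @ beta)
        beta += lr * X.T @ (y - lam) / len(y)   # score function of the Poisson GLM
    return beta
```

The fitted coefficients then summarize receptive-field and adaptation properties of the simulated responses.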
  5. Transmon qubits experience open-system effects that manifest as noise at a broad range of frequencies. We present a model of these effects using the Redfield master equation with a hybrid bath consisting of low- and high-frequency components. We use two-level fluctuators to simulate 1/f-like noise behavior, which is a dominant source of decoherence for superconducting qubits. By measuring quantum state fidelity under free evolution with and without dynamical decoupling (DD), we can fit the low- and high-frequency noise parameters in our model. We train and test our model using experiments on quantum devices available through IBM quantum experience. Our model accurately predicts the fidelity decay of random initial states, including the effect of DD pulse sequences. We compare our model with two simpler models and confirm the importance of including both high frequency and 1/f noise in order to accurately predict transmon behavior. 
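The 1/f-like noise construction from two-level fluctuators can be sketched by summing random telegraph signals with switching rates log-spaced over several decades; the rates, counts, and amplitudes below are assumed values for illustration, not fitted device parameters.

```python
import numpy as np

def tls_noise(n_fluct=30, T=5000, dt=1.0, g_min=1e-4, g_max=1e-1, seed=0):
    """Approximate 1/f noise as a sum of two-level fluctuators:
    each fluctuator is a +/-1 random telegraph signal whose state
    flips with probability g*dt per step."""
    rng = np.random.default_rng(seed)
    gammas = np.logspace(np.log10(g_min), np.log10(g_max), n_fluct)
    noise = np.zeros(T)
    for g in gammas:
        state0 = rng.choice([-1.0, 1.0])
        flips = np.cumsum(rng.random(T) < g * dt)   # Poisson switching events
        noise += state0 * np.where(flips % 2 == 0, 1.0, -1.0)
    return noise / np.sqrt(n_fluct)
```

Summing many such fluctuators with log-uniform rates yields a power spectrum close to 1/f over the covered frequency range.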