It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) describing their spike sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network’s spike sequence by a time delay per neuron, forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show that network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multi-neuron recordings, we demonstrate that the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning.
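As a rough illustration of the "time delay per neuron" idea described in this abstract (not the authors' frequency-domain decomposition), the sketch below recovers per-neuron delays from synthetic spike trains by cross-correlating each neuron against a reference neuron. The neuron count, bin size, and delay values are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 4 neurons fire in a fixed sequence, each offset by a per-neuron delay (ms).
true_delays_ms = np.array([0.0, 2.0, 5.0, 9.0])
n_events, bin_ms, dur_ms = 200, 1.0, 60_000
event_times = rng.uniform(0, dur_ms - 20, n_events)

n_bins = int(dur_ms / bin_ms)
spikes = np.zeros((len(true_delays_ms), n_bins))
for d_idx, d in enumerate(true_delays_ms):
    idx = ((event_times + d) / bin_ms).astype(int)
    spikes[d_idx, idx] = 1

# Estimate each neuron's delay relative to neuron 0 via cross-correlation of binned trains.
ref = spikes[0]
max_lag = 20  # bins
for i, train in enumerate(spikes):
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(np.roll(train, -lag), ref) for lag in lags]
    best = lags[int(np.argmax(xc))]
    print(f"neuron {i}: estimated delay {best * bin_ms:.1f} ms (true {true_delays_ms[i]:.1f})")
```

The estimated delays form the kind of spike sequence timeline the abstract refers to, albeit for a single, noise-free network.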
A neural network for online spike classification that improves decoding accuracy
Separating neural signals from noise can improve brain-computer interface performance and stability. However, most algorithms for separating neural action potentials from noise are not suitable for use in real time and have shown mixed effects on decoding performance. With the goal of removing noise that impedes online decoding, we sought to automate the intuition of human spike-sorters to operate in real time with an easily tunable parameter governing the stringency with which spike waveforms are classified. We trained an artificial neural network with one hidden layer on neural waveforms that were hand-labeled as either spikes or noise. The network output was a likelihood metric for each waveform it classified, and we tuned the network’s stringency by varying the minimum likelihood value for a waveform to be considered a spike. Using the network’s labels to exclude noise waveforms, we decoded remembered target location during a memory-guided saccade task from electrode arrays implanted in prefrontal cortex of rhesus macaque monkeys. The network classified waveforms in real time, and its classifications were qualitatively similar to those of a human spike-sorter. Compared with decoding with threshold crossings, in most sessions we improved decoding performance by removing waveforms with low spike likelihood values. Furthermore, decoding with our network’s classifications became more beneficial as time since array implantation increased. Our classifier serves as a feasible preprocessing step, with little risk of harm, that could be applied to both off-line neural data analyses and online decoding. NEW & NOTEWORTHY Although there are many spike-sorting methods that isolate well-defined single units, these methods typically involve human intervention and have inconsistent effects on decoding. We used human classified neural waveforms as training data to create an artificial neural network that could be tuned to separate spikes from noise that impaired decoding. We found that this network operated in real time and was suitable for both off-line data processing and online decoding.
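A minimal sketch of the kind of classifier the abstract describes: a one-hidden-layer network trained on labeled waveforms whose output probability serves as the spike likelihood, with a tunable stringency threshold. The simulated waveforms and the hidden-layer size are assumptions for illustration, not the authors' data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Stand-in training set: 48-sample waveform snippets hand-labeled as spike (1) or noise (0).
# Real data would be threshold-crossing waveforms labeled by a human spike-sorter.
n_wave, n_samp = 2000, 48
labels = rng.integers(0, 2, n_wave)
t = np.linspace(0, 1, n_samp)
template = -np.sin(2 * np.pi * t) * np.exp(-4 * t)   # crude spike-like shape
waveforms = rng.normal(0, 0.5, (n_wave, n_samp))
waveforms[labels == 1] += template                    # "spikes" = template + noise

# One-hidden-layer network; the hidden size (32) is an assumption, not the paper's value.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(waveforms, labels)

# Likelihood metric per waveform; stringency is the minimum value required to call it a spike.
spike_likelihood = net.predict_proba(waveforms)[:, 1]
for stringency in (0.3, 0.5, 0.8):
    kept = (spike_likelihood >= stringency).mean()
    print(f"stringency {stringency:.1f}: {kept:.0%} of waveforms kept as spikes")
```

Raising the stringency discards more borderline waveforms before decoding, which mirrors the single tunable parameter described in the abstract.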
- PAR ID: 10189142
- Date Published:
- Journal Name: Journal of Neurophysiology
- Volume: 123
- Issue: 4
- ISSN: 0022-3077
- Page Range / eLocation ID: 1472 to 1485
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
SUMMARY Seismograms contain multiple sources of seismic waves, from distinct transient signals such as earthquakes to continuous ambient seismic vibrations such as microseism. Ambient vibrations contaminate the earthquake signals, while the earthquake signals pollute the ambient noise’s statistical properties necessary for ambient-noise seismology analysis. Separating ambient noise from earthquake signals would thus benefit multiple seismological analyses. This work develops a multitask encoder–decoder network named WaveDecompNet to separate transient signals from ambient signals directly in the time domain for 3-component seismograms. We choose the volcanically active Big Island of Hawai’i as a natural laboratory given its richness in transients (tectonic and volcanic earthquakes) and diffuse ambient noise (strong microseism). The approach takes a noisy 3-component seismogram as input and independently predicts the 3-component earthquake and noise waveforms. The model is trained on earthquake and noise waveforms from the STanford EArthquake Dataset (STEAD) and on the local noise of seismic station IU.POHA. We estimate the network’s performance using the explained variance metric on both earthquake and noise waveforms. We explore different neural network designs for WaveDecompNet and find that the model with long short-term memory (LSTM) performs best among the structures tested. Overall, we find that WaveDecompNet provides satisfactory performance down to a signal-to-noise ratio (SNR) of 0.1. The potential of the method is (1) to improve the broad-band SNR of transient (earthquake) waveforms and (2) to recover cleaner local ambient noise for monitoring the Earth’s structure with ambient-noise methods. To test this, we apply a short-term-average over long-term-average (STA/LTA) filter and increase the number of detected events. We also measure single-station cross-correlation functions of the recovered ambient noise and establish their improved coherence through time and over different frequency bands. We conclude that WaveDecompNet is a promising tool for a broad range of seismological research.
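A toy sketch of the two-branch encoder–decoder idea from this abstract: a shared convolutional encoder with an LSTM bottleneck feeding separate earthquake and noise decoders. The layer sizes, kernel widths, and random inputs are assumptions; this is not the published WaveDecompNet architecture.

```python
import torch
import torch.nn as nn

class TinyWaveDecomp(nn.Module):
    """Minimal two-output encoder-decoder for 3-component waveforms (details assumed)."""
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Two decoders: one predicts the earthquake waveform, one the ambient noise.
        self.dec_eq = nn.Conv1d(2 * hidden, channels, kernel_size=7, padding=3)
        self.dec_noise = nn.Conv1d(2 * hidden, channels, kernel_size=7, padding=3)

    def forward(self, x):                                   # x: (batch, 3, npts)
        h = self.encoder(x)                                 # (batch, hidden, npts)
        h, _ = self.lstm(h.transpose(1, 2).contiguous())    # (batch, npts, 2*hidden)
        h = h.transpose(1, 2)
        return self.dec_eq(h), self.dec_noise(h)

# Train against known earthquake and noise components of synthetic mixtures.
model = TinyWaveDecomp()
mix = torch.randn(8, 3, 1024)                               # stand-in 3-component seismograms
eq_true, nz_true = torch.randn(8, 3, 1024), torch.randn(8, 3, 1024)
eq_pred, nz_pred = model(mix)
loss = nn.functional.mse_loss(eq_pred, eq_true) + nn.functional.mse_loss(nz_pred, nz_true)
loss.backward()
print(f"multitask loss: {loss.item():.3f}")
```

The two loss terms mirror the multitask setup: one output per source, both supervised from the same noisy input.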
-
The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for more than one time step. Moreover, the generated spike trains can fail to capture important features of the data and even show diverging firing rates. To alleviate this, we propose to directly minimize the divergence between recorded and model-generated spike trains using spike train kernels. We develop a method that stochastically optimizes the maximum mean discrepancy induced by the kernel. Experiments performed on both real and synthetic neural data validate the proposed approach, showing that it leads to well-behaved models. Using different combinations of spike train kernels, we show that we can control the trade-off between different features, which is critical for dealing with model mismatch.
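The sketch below illustrates only the objective this abstract describes: a squared maximum mean discrepancy (MMD) between two sets of spike trains under a simple smoothed-spike-train kernel. The kernel choice, bandwidths, and stand-in data are assumptions; the stochastic optimization of the autoregressive model itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth(train, sigma=5.0):
    """Convolve a binned spike train with a Gaussian to get a smooth feature vector."""
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    kern = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(train, kern, mode="same")

def gauss_kernel(x, y, bandwidth=10.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * bandwidth**2))

def mmd2(set_a, set_b):
    """Biased estimate of squared maximum mean discrepancy between two sets of spike trains."""
    k_aa = np.mean([gauss_kernel(a, b) for a in set_a for b in set_a])
    k_bb = np.mean([gauss_kernel(a, b) for a in set_b for b in set_b])
    k_ab = np.mean([gauss_kernel(a, b) for a in set_a for b in set_b])
    return k_aa + k_bb - 2 * k_ab

# Stand-in data: "recorded" trains at ~20 Hz, "generated" trains at ~40 Hz (1 ms bins, 1 s long).
recorded  = [smooth((rng.random(1000) < 0.02).astype(float)) for _ in range(20)]
generated = [smooth((rng.random(1000) < 0.04).astype(float)) for _ in range(20)]
print(f"MMD^2 between recorded and generated spike trains: {mmd2(recorded, generated):.4f}")
```

Training would adjust the generator's parameters to drive this quantity toward zero rather than maximizing one-step likelihood.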
-
In this paper, we analyze the applicability of single- and two-hidden-layer feed-forward artificial neural networks (SLFNs and TLFNs, respectively) to decoding linear block codes. Based on the provable capability of SLFNs and TLFNs to approximate discrete functions, we discuss the network sizes required to perform maximum likelihood decoding. Furthermore, we propose a decoding scheme that uses artificial neural networks (ANNs) to lower the error floors of low-density parity-check (LDPC) codes. By learning a small number of error patterns that are uncorrectable with typical LDPC decoders, an ANN can lower the error floor by an order of magnitude, with only a marginal increase in average complexity.
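As a hedged illustration of a single-hidden-layer network standing in for a maximum likelihood decoder, the sketch below trains a small classifier to decode a (7,4) Hamming code from noisy words. The code, hidden-layer size, and noise level are assumptions and far simpler than the LDPC setting discussed above.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

# (7,4) Hamming code: a small linear block code used only to illustrate the idea.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
messages = np.array(list(product([0, 1], repeat=4)))
codewords = messages @ G % 2

# Training data: codewords hit by random bit flips, labeled by the transmitted message index.
rng = np.random.default_rng(3)
X, y = [], []
for _ in range(4000):
    m = rng.integers(0, 16)
    noisy = codewords[m] ^ (rng.random(7) < 0.05).astype(int)
    X.append(noisy)
    y.append(m)

# Single-hidden-layer network (size is an assumption) standing in for an SLFN decoder.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(np.array(X), np.array(y))

# Decode a fresh word with one flipped bit; a well-trained network should recover index 9.
test = codewords[9] ^ np.array([0, 0, 1, 0, 0, 0, 0])
print("decoded message index:", net.predict(test.reshape(1, -1))[0], "(transmitted index: 9)")
```

In the paper's setting the network would instead be trained only on the rare residual error patterns that escape a conventional LDPC decoder.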
-
Spiking neural networks (SNNs) have attracted increasing research attention due to their event-based operation, which makes them more power efficient than conventional artificial neural networks. To convert information into spikes, an SNN requires an encoding process. With temporal encoding schemes, an SNN can extract temporal patterns from the original information. A more advanced scheme is multiplexing temporal encoding, which combines several encoding schemes with different timescales to achieve higher information density and dynamic range. The spike-timing-dependent plasticity (STDP) learning algorithm is then used to train the SNN, since SNNs cannot be trained with standard algorithms such as backpropagation. In this work, a spiking-domain feature extraction neural network with temporal multiplexing encoding is designed in EAGLE and fabricated on a PCB; the testbench consumes 400 mW. The test results show that the network on the PCB can convert input information into multiplexed temporally encoded spikes and then use those spikes to adjust the synaptic weight voltage.
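A minimal sketch of one possible way to multiplex two temporal encodings with different timescales (a coarse phase code plus a fine latency code). The specific scheme, timescales, and hardware described in the abstract are not reproduced here; the functions and parameters below are illustrative assumptions.

```python
import numpy as np

def latency_encode(value, window_ms=10.0):
    """Fine timescale, time-to-first-spike: larger values fire earlier within a short window."""
    return (1.0 - value) * window_ms            # single spike latency in ms

def phase_encode(value, period_ms=100.0):
    """Coarse timescale: value mapped to a spike phase within a longer cycle."""
    return value * period_ms

def multiplex(values, period_ms=100.0, window_ms=10.0):
    """Combine the two timescales: each input yields a coarse anchor spike plus a fine-latency spike."""
    spikes = []
    for i, v in enumerate(values):
        anchor = i * period_ms + phase_encode(v, period_ms)
        spikes.append(("coarse", anchor))
        spikes.append(("fine", anchor + latency_encode(v, window_ms)))
    return sorted(spikes, key=lambda s: s[1])

# Encode three normalized inputs into a multiplexed spike timeline.
for kind, t in multiplex(np.array([0.2, 0.9, 0.5])):
    print(f"{kind:6s} spike at {t:7.2f} ms")
```

The resulting spike times could then drive an STDP-trained layer, with the coarse and fine spikes carrying complementary information about each input.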