The Liquid State Machine (LSM) is a promising model of recurrent spiking neural networks. It consists of a fixed recurrent network, called the reservoir, which projects to a readout layer through plastic readout synapses. Classification performance depends strongly on the training of the readout synapses, which tend to be very dense and contribute significantly to the overall network complexity. We present a unifying, biologically inspired, calcium-modulated supervised spike-timing-dependent plasticity (STDP) approach to training and sparsifying readout synapses, in which supervised temporal learning is modulated by the postsynaptic firing level as characterized by the postsynaptic calcium concentration. The proposed approach prevents synaptic weight saturation, boosts learning performance, and sparsifies the connectivity between the reservoir and the readout layer. Using the recognition rate of spoken English letters from the TI46 speech corpus as the performance measure, we demonstrate that the proposed approach outperforms a baseline supervised STDP mechanism by up to 25% and a competitive non-STDP spike-dependent training algorithm by up to 2.7%. Furthermore, it can prune out up to 30% of readout synapses without significant performance degradation.
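The abstract above does not spell out the update equations, but the general idea of calcium-gated supervised STDP can be illustrated with a short sketch. Below, a supervised error (teacher spike minus actual readout spike) drives a trace-based STDP update, and learning is gated by a postsynaptic calcium variable so that weights stop changing once recent firing leaves a target window. All parameter values, variable names, and the specific gating window are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative constants (assumptions, not taken from the paper).
A_PLUS, A_MINUS = 0.010, 0.012   # potentiation / depression amplitudes
TAU_PRE = 20.0                   # presynaptic trace time constant (ms)
TAU_CA = 50.0                    # calcium decay time constant (ms)
CA_LOW, CA_HIGH = 0.2, 1.0       # calcium window in which learning is enabled
DT = 1.0                         # simulation step (ms)

def readout_step(w, pre_spikes, post_spike, teacher_spike, pre_trace, ca):
    """One step of a calcium-gated supervised STDP update for one readout neuron.

    w             -- weights from reservoir neurons to this readout neuron
    pre_spikes    -- 0/1 vector of reservoir spikes at this step
    post_spike    -- 1 if the readout neuron actually fired at this step
    teacher_spike -- 1 if the teacher signal says it should fire now
    pre_trace     -- low-pass-filtered presynaptic activity (eligibility trace)
    ca            -- postsynaptic calcium concentration (proxy for firing level)
    """
    # Decay and update the presynaptic traces and the calcium variable.
    pre_trace = pre_trace * (1.0 - DT / TAU_PRE) + pre_spikes
    ca = ca * (1.0 - DT / TAU_CA) + post_spike

    # Calcium gate: learn only while postsynaptic activity sits in a target
    # window, which limits runaway potentiation and weight saturation.
    gate = 1.0 if CA_LOW <= ca <= CA_HIGH else 0.0

    # Supervised error: potentiate toward the teacher, depress away from it.
    error = teacher_spike - post_spike
    amp = A_PLUS if error > 0 else A_MINUS
    w = w + gate * amp * error * pre_trace
    return w, pre_trace, ca

# Sparsification could then be as simple as pruning weights that stay small,
# e.g. w[np.abs(w) < 1e-3] = 0.0 after training (threshold is an assumption).
```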
Synaptic balancing: A biologically plausible local learning rule that provably increases neural network noise robustness without sacrificing task performance
We introduce a novel, biologically plausible local learning rule that provably increases the robustness of neural dynamics to noise in nonlinear recurrent neural networks with homogeneous nonlinearities. Our learning rule achieves higher noise robustness without sacrificing performance on the task and without requiring any knowledge of the particular task. The plasticity dynamics—an integrable dynamical system operating on the weights of the network—maintains a multiplicity of conserved quantities, most notably the network’s entire temporal map of input to output trajectories. The outcome of our learning rule is a synaptic balancing between the incoming and outgoing synapses of every neuron. This synaptic balancing rule is consistent with many known aspects of experimentally observed heterosynaptic plasticity, and moreover makes new experimentally testable predictions relating plasticity at the incoming and outgoing synapses of individual neurons. Overall, this work provides a novel, practical local learning rule that exactly preserves overall network function and, in doing so, provides new conceptual bridges between the disparate worlds of the neurobiology of heterosynaptic plasticity, the engineering of regularized noise-robust networks, and the mathematics of integrable Lax dynamical systems.
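The paper's exact cost function and plasticity dynamics are not reproduced here, but the core idea is easy to sketch: for a recurrent network with a positively homogeneous nonlinearity, rescaling neuron i's incoming weights by exp(a_i) and its outgoing weights by exp(-a_i) leaves the network's input-output behavior unchanged, so a neuron can trade strength between its incoming and outgoing synapses until they are balanced. The minimal sketch below applies one such rescaling step to the recurrent weight matrix only; in a full network the input and readout weights attached to each neuron would also count toward its incoming and outgoing totals. The learning rate, the squared-weight cost, and the discrete-time form are our assumptions.

```python
import numpy as np

def balancing_step(W, eta=1e-3):
    """One discrete step of an illustrative synaptic-balancing rule.

    W[i, j] is the weight from neuron j to neuron i. Each neuron i compares
    the total squared weight of its outgoing synapses with that of its
    incoming synapses (both locally available quantities) and rescales its
    incoming weights by exp(a_i) and outgoing weights by exp(-a_i), a
    function-preserving transformation for homogeneous nonlinearities.
    """
    incoming = np.sum(W**2, axis=1)   # incoming cost of each neuron (its row)
    outgoing = np.sum(W**2, axis=0)   # outgoing cost of each neuron (its column)
    a = eta * (outgoing - incoming)   # nudge each neuron toward balance
    scale = np.exp(a)
    # D W D^{-1} with D = diag(exp(a)): rows scale up, columns scale down.
    return W * scale[:, None] / scale[None, :]

# Note: with only the recurrent weights transformed, the hidden state is merely
# rescaled per neuron; exact preservation of the full input-output map would
# also require rescaling the input and readout weights, as noted above.
```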
- Award ID(s): 1845166
- PAR ID: 10513477
- Editor(s): Richards, Blake A
- Publisher / Repository: PLOS
- Date Published:
- Journal Name: PLOS Computational Biology
- Volume: 18
- Issue: 9
- ISSN: 1553-7358
- Page Range / eLocation ID: e1010418
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Synaptic plasticity refers to activity-dependent synaptic strengthening or weakening between neurons. It is usually associated with homosynaptic plasticity, which refers to a synaptic junction controlled by interactions between specific neurons. Heterosynaptic plasticity, on the other hand, lacks this specificity. It involves much larger populations of synapses and neurons and can be associated with changes in synaptic strength due to nonlocal alterations in the ambient electrochemical environment. This paper presents specific examples demonstrating how variations in the ambient electrochemical environment of lipid membranes can impact the nonlinear dynamical behaviors of memristive and memcapacitive systems in droplet interface bilayers (DIBs). Examples include the use of pH as a modulatory factor that alters the voltage-dependent memristive behavior of alamethicin ion channels in DIB lipid bilayers, and the discovery of long-term potentiation (LTP) in a lipid bilayer-only system after application of electrical stimulation protocols.
- Network features found in the brain may help implement more efficient and robust neural networks. Spiking neural networks (SNNs) process spikes in the spatiotemporal domain and can offer better energy efficiency than deep neural networks. However, most SNN implementations rely on simple point neurons that neglect the rich neuronal and dendritic dynamics. Herein, a bio-inspired columnar learning network (CLN) structure that employs feedforward, lateral, and feedback connections to make robust classification with sparse data is proposed. CLN is inspired by the mammalian neocortex, comprising cortical columns that each contain multiple minicolumns formed by interacting pyramidal neurons. A column continuously processes spatiotemporal signals from its sensor, while learning spatial and temporal correlations between features in different regions of an object along with the sensor's movement through sensorimotor interaction. CLN can be implemented using memristor crossbars with a local learning rule, spike timing-dependent plasticity (STDP), which can be natively obtained in second-order memristors. CLN allows inputs from multiple sensors to be simultaneously processed by different columns, resulting in higher classification accuracy and better noise tolerance. Analysis of networks implemented on memristor crossbars shows that the system can operate at very low power and high throughput, with high accuracy and robustness to noise.
- An approach combining signal detection theory and precise 3D reconstructions from serial section electron microscopy (3DEM) was used to investigate synaptic plasticity and information storage capacity at medial perforant path synapses in adult hippocampal dentate gyrus in vivo. Induction of long-term potentiation (LTP) markedly increased the frequencies of both small and large spines measured 30 minutes later. This bidirectional expansion resulted in heterosynaptic counterbalancing of total synaptic area per unit length of granule cell dendrite. Control hemispheres exhibited 6.5 distinct spine sizes for 2.7 bits of storage capacity, while LTP resulted in 12.9 distinct spine sizes (3.7 bits). In contrast, control hippocampal CA1 synapses exhibited 4.7 bits, with much greater synaptic precision than either control or potentiated dentate gyrus synapses. Thus, synaptic plasticity altered total capacity, yet hippocampal subregions differed dramatically in their synaptic information storage capacity, reflecting their diverse functions and activation histories. (The relation between distinguishable spine sizes and bits is spelled out after this list.)
- In recent years, many researchers have proposed new models for synaptic plasticity in the brain based on principles of machine learning. The central motivation has been the development of learning algorithms that are able to learn difficult tasks while qualifying as "biologically plausible". However, the concept of a biologically plausible learning algorithm is only heuristically defined as an algorithm that is potentially implementable by biological neural networks. Further, claims that neural circuits could implement any given algorithm typically rest on an amorphous concept of "locality" (both in space and time). As a result, it is unclear what many proposed local learning algorithms actually predict biologically, and which of these are consequently good candidates for experimental investigation. Here, we address this lack of clarity by proposing formal and operational definitions of locality. Specifically, we define different classes of locality, each of which makes clear what quantities cannot be included in a learning rule if an algorithm is to qualify as local with respect to a given (biological) constraint. We subsequently use this framework to distill testable predictions from various classes of biologically plausible synaptic plasticity models that are robust to arbitrary choices about neural network architecture. Therefore, our framework can be used to guide claims of biological plausibility and to identify potential means of experimentally falsifying a proposed learning algorithm for the brain. (A toy contrast between local and nonlocal update signatures follows this list.)
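The bit values quoted in the 3DEM abstract above follow directly from treating each distinguishable spine size as one discriminable synaptic state, so that capacity per synapse is the base-2 logarithm of the number of states:

```latex
\text{bits per synapse} = \log_2(\text{number of distinguishable spine sizes}),\qquad
\log_2 6.5 \approx 2.7,\quad \log_2 12.9 \approx 3.7,\quad 2^{4.7} \approx 26.
```

By the same arithmetic, the 4.7-bit figure quoted for CA1 corresponds to roughly 26 distinguishable spine sizes.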
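To make the locality discussion concrete, the toy signatures below (our illustration, not the paper's formalism) contrast a rule that uses only quantities available at the synapse with one that needs downstream error signals and weights, which is exactly the kind of dependence a locality class would rule out:

```python
import numpy as np

def local_update(w_ij, pre_j, post_i, lr=1e-3):
    """Local in a strict sense: depends only on the synapse's own weight and on
    the activity of its pre- and postsynaptic neurons (a Hebbian-style rule)."""
    return w_ij + lr * pre_j * post_i

def nonlocal_update(w_ij, pre_j, delta_downstream, w_downstream_i, lr=1e-3):
    """Not local under the same constraint: the update requires error signals
    and synaptic weights belonging to downstream neurons, as in exact backprop."""
    post_error_i = float(np.dot(w_downstream_i, delta_downstream))
    return w_ij + lr * pre_j * post_error_i
```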