Spiking neural networks (SNNs) are well suited to spatio-temporal learning and to implementation on energy-efficient, event-driven neuromorphic processors. However, existing SNN error backpropagation (BP) methods lack proper handling of spiking discontinuities and underperform BP methods for traditional artificial neural networks. In addition, a large number of time steps are typically required to achieve decent accuracy, leading to high latency and rendering spike-based computation unscalable to deep architectures. We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs, which decomposes error backpropagation into two types of dependencies, inter-neuron and intra-neuron, and leads to improved temporal learning precision. It captures inter-neuron dependencies through presynaptic firing times by considering the all-or-none character of firing activities, and captures intra-neuron dependencies by handling the internal evolution of each neuronal state in time. TSSL-BP efficiently trains deep SNNs within a much shortened temporal window of a few steps while improving accuracy on various image classification datasets, including CIFAR10.
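The two kinds of dependency that TSSL-BP separates are visible even in a minimal leaky integrate-and-fire (LIF) simulation: the membrane potential evolves continuously inside the neuron, while output spikes are all-or-none events. The sketch below is a generic discrete-time LIF model in Python, not the paper's implementation; `tau`, `v_th`, and the example input are illustrative.

```python
import numpy as np

def lif_forward(inputs, tau=5.0, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron (illustrative).

    inputs: 1-D array of presynaptic current, one entry per time step.
    Returns the binary spike train (all-or-none firing events).
    """
    decay = np.exp(-1.0 / tau)    # membrane leak per time step
    v = 0.0                       # membrane potential (intra-neuron state)
    spikes = np.zeros_like(inputs)
    for t, i_t in enumerate(inputs):
        v = decay * v + i_t       # continuous internal state evolution
        if v >= v_th:             # all-or-none spiking discontinuity
            spikes[t] = 1.0
            v = v_reset           # hard reset after firing
    return spikes

spikes = lif_forward(np.array([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))
```

Backpropagating through the `if v >= v_th` discontinuity is exactly where spike-timing-based methods such as TSSL-BP depart from ordinary BP.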
Low-Voltage Electrochemical LixWO3 Synapses with Temporal Dynamics for Spiking Neural Networks
Neuromorphic computing has great potential to enable faster and more energy-efficient computing by overcoming the von Neumann bottleneck. However, most emerging nonvolatile memory (NVM)-based artificial synapses suffer from insufficient precision, nonlinear synaptic weight update, high write voltage, and high switching latency. Moreover, spatiotemporal dynamics, an important temporal component of cognitive computing in spiking neural networks (SNNs), are hard to generate with existing complementary metal-oxide-semiconductor (CMOS) devices or emerging NVM. Herein, a three-terminal, LixWO3-based electrochemical synapse (LiWES) is developed with low programming voltage (0.2 V), fast programming speed (500 ns), and high precision (1024 states), making it well suited to artificial neural network applications. Time-dependent synaptic functions such as paired-pulse facilitation (PPF) and temporal filtering that are critical for SNNs are also demonstrated. In addition, by leveraging the spike-encoded timing information extracted from the short-term plasticity (STP) behavior of the LiWES, an SNN model is built to benchmark the pattern classification performance of the LiWES; the result indicates a large boost in classification performance (up to 128×) over synapses without STP.
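Paired-pulse facilitation is commonly summarized by the ratio of the second response to the first as a function of the inter-pulse interval. Below is a minimal single-exponential sketch in Python; the constants `c` and `tau_ms` are illustrative placeholders, not values fitted to the LiWES device.

```python
import math

def ppf_ratio(dt_ms, c=0.6, tau_ms=50.0):
    """Paired-pulse facilitation ratio A2/A1 for two pulses dt_ms apart.

    Single-exponential model: facilitation left by the first pulse decays
    before the second arrives. c and tau_ms are illustrative constants.
    """
    return 1.0 + c * math.exp(-dt_ms / tau_ms)

# Facilitation is strongest at short intervals and fades at long ones,
# which is the temporal-filtering behavior exploited for SNN encoding.
short, long_ = ppf_ratio(10.0), ppf_ratio(200.0)
```

Because the ratio depends on the inter-pulse interval, a device with STP effectively encodes spike timing in its conductance, which is what the SNN benchmark above exploits.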
- PAR ID: 10252073
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Advanced Intelligent Systems
- Volume: 3
- Issue: 9
- ISSN: 2640-4567
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Deep neural networks (DNNs) consist of layers of neurons interconnected by synaptic weights. High bit-precision in the weights is generally required to guarantee high accuracy in many applications, and minimizing error accumulation between layers is essential when building large-scale networks. Recent demonstrations of photonic neural networks are limited in bit-precision due to crosstalk and the high sensitivity of optical components (e.g., resonators). Here, we experimentally demonstrate a record-high precision of 9 bits with a dithering control scheme for photonic synapses. We then numerically simulate the impact of increased synaptic precision on a wireless signal classification application. This work could help realize the potential of photonic neural networks for many practical, real-world tasks.
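To illustrate why synaptic bit-precision matters, a uniform quantizer gives a quick way to compare worst-case weight error at different precisions. This is a generic sketch assuming a symmetric weight range; it does not model the dithering control scheme itself.

```python
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniformly quantize weights in [-w_max, w_max] to 2**bits levels,
    mimicking the finite programming precision of a hardware synapse."""
    step = 2.0 * w_max / (2 ** bits - 1)
    return np.round((w + w_max) / step) * step - w_max

rng = np.random.default_rng(42)
w = rng.uniform(-1.0, 1.0, 1000)
err9 = np.abs(quantize(w, 9) - w).max()   # 9-bit synapse, as demonstrated
err4 = np.abs(quantize(w, 4) - w).max()   # coarse synapse for comparison
```

Each extra bit halves the worst-case weight error (bounded by half a quantization step), which is why moving from the few bits typical of earlier photonic demonstrations to 9 bits matters for classification accuracy.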
Abstract: Memristors for neuromorphic computing have gained prominence over the years for implementing synapses and neurons due to their nano-scale footprint and reduced complexity. Several demonstrations show two-dimensional (2D) materials as a promising platform for the realization of transparent, flexible, ultra-thin memristive synapses. However, unsupervised learning in a spiking neural network (SNN), facilitated by linearity and symmetry in synaptic weight update, has not been explored thoroughly on the 2D materials platform. Here, we demonstrate that graphene/MoS2/SiOx/Ni synapses exhibit ideal linearity and symmetry when subjected to identical input pulses, which is essential for their role in online training of neural networks. The linearity in weight update holds across a range of pulse widths, amplitudes, and numbers of applied pulses. Our work illustrates that the switching mechanism in MoS2-based synapses is through conductive filaments governed by Poole-Frenkel emission. We demonstrate that the graphene/MoS2/SiOx/Ni synapses, when integrated with a MoS2-based leaky integrate-and-fire neuron, can control the spiking of the neuron efficiently. This work establishes 2D MoS2 as a viable platform for all-memristive SNNs.
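The contrast between the ideal linear update reported here and the saturating update common in filamentary memristors can be sketched with a simple pulse-by-pulse conductance model; the exponential damping term and all constants are illustrative assumptions, not the device's measured equations.

```python
import math

def potentiate(g, n_pulses, g_max=1.0, dg=0.02, nl=0.0):
    """Apply identical potentiation pulses to a synaptic conductance g.

    nl = 0 gives an ideal linear, symmetric update; nl > 0 models the
    saturating, nonlinear update typical of many filamentary memristors.
    All constants are illustrative, not device-fitted.
    """
    for _ in range(n_pulses):
        g = min(g + dg * math.exp(-nl * g / g_max), g_max)
    return g

linear = potentiate(0.0, 10)              # grows by dg on every pulse
saturating = potentiate(0.0, 10, nl=3.0)  # step shrinks as g grows
```

With a nonlinear update, the conductance change per pulse depends on the current state, so a training algorithm cannot map a weight increment to a fixed pulse count; linearity removes that obstacle for online learning.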
Abstract: CMOS-based computing systems that employ the von Neumann architecture are relatively limited when it comes to parallel data storage and processing. In contrast, the human brain is a living computational signal processing unit that operates with extreme parallelism and energy efficiency. Although numerous neuromorphic electronic devices have emerged in the last decade, most of them are rigid or contain materials that are toxic to biological systems. In this work, we report on biocompatible bilayer graphene-based artificial synaptic transistors (BLAST) capable of mimicking synaptic behavior. The BLAST devices leverage a dry ion-selective membrane, enabling long-term potentiation with ~50 aJ/µm2 switching energy efficiency, at least an order of magnitude lower than previous reports on two-dimensional material-based artificial synapses. The devices show unique metaplasticity, a useful feature for generalizable deep neural networks, and we demonstrate that metaplastic BLASTs outperform ideal linear synapses in classic image classification tasks. With switching energy well below the 1 fJ estimated per biological synapse, the proposed devices are powerful candidates for bio-interfaced online learning, bridging the gap between artificial and biological neural networks.
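Metaplasticity, the modulation of plasticity by prior synaptic history, can be caricatured with a hidden consolidation variable whose sign-opposing updates are damped. The model below is purely illustrative, not the paper's device physics or training rule; the function name, `lr`, and `meta` are assumptions.

```python
import math

def metaplastic_step(m, grad, lr=0.1, meta=2.0):
    """One gradient step on a metaplastic hidden variable m.

    Updates that reinforce the current sign of m are applied in full;
    updates that oppose it are damped in proportion to how consolidated
    m already is. Purely illustrative sketch of metaplasticity.
    """
    if grad * m < 0:                     # -grad points along m: reinforce
        scale = 1.0
    else:                                # update opposes m: damp it
        scale = math.exp(-meta * abs(m))
    return m - lr * scale * grad

reinforced = metaplastic_step(1.0, -1.0)  # full step away from zero
opposed = metaplastic_step(1.0, 1.0)      # damped step toward zero
```

A synapse of this kind resists unlearning consolidated weights, which is one intuition for why metaplastic devices can outperform ideal linear synapses on sequential classification tasks.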
Abstract: Artificial neural networks have demonstrated superiority over traditional computing architectures in tasks such as pattern classification and learning. However, they do not measure uncertainty in their predictions, and hence can make wrong predictions with high confidence, which can be detrimental for many mission-critical applications. In contrast, Bayesian neural networks (BNNs) naturally include such uncertainty in their model, as the weights are represented by probability distributions (e.g., the Gaussian distribution). Here we introduce three-terminal memtransistors based on two-dimensional (2D) materials, which can emulate both probabilistic synapses and reconfigurable neurons. The cycle-to-cycle variation in the programming of the 2D memtransistor is exploited to achieve Gaussian random number generator-based synapses, whereas 2D memtransistor-based integrated circuits are used to obtain neurons with hyperbolic tangent and sigmoid activation functions. Finally, memtransistor-based synapses and neurons are combined in a crossbar array architecture to realize a BNN accelerator for a data classification task.
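A Bayesian layer of this kind can be sketched in software by sampling each weight from its Gaussian on every forward pass, which is the role the memtransistor's cycle-to-cycle programming variation plays in hardware. The shapes, seed, and tanh readout below are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_layer(x, mu, sigma):
    """One stochastic forward pass of a Bayesian linear layer.

    Each weight is drawn from its Gaussian (mean mu, std sigma), and the
    neuron applies a hyperbolic-tangent activation, matching the tanh
    neurons described above.
    """
    w = rng.normal(mu, sigma)            # sample synaptic weights
    return np.tanh(x @ w)

mu = np.zeros((3, 2))
sigma = 0.1 * np.ones((3, 2))
x = np.ones((1, 3))
# Repeated stochastic passes yield a predictive mean plus an uncertainty
# estimate, which a deterministic network cannot provide.
samples = np.stack([bayesian_layer(x, mu, sigma) for _ in range(100)])
```

The spread of `samples` across passes is the model's predictive uncertainty; a confident prediction shows a tight spread, an uncertain one a wide spread.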