

This content will become publicly available on May 21, 2024

Title: Enhancing SNN Training Performance: A Mixed-Signal Triplet Reconfigurable STDP Circuit with Multiplexing Encoding
In spike-timing-dependent plasticity (STDP), synaptic weights are modified according to the relative timing between pre- and post-synaptic spikes in a spiking neural network (SNN). The triplet STDP model was proposed because it better accounts for series of spikes and thus more closely mimics the activity of biological neural systems. Circuits that can switch between different STDP rules have also been introduced to broaden the range of STDP applications. To bring the advantages of triplet STDP to a variety of tasks, this paper proposes a mixed-signal reconfigurable triplet STDP circuit and its hardware prototype. The performance of the STDP training algorithm is analyzed with a hardware testbench as well as a PyTorch-based SNN. The triplet STDP design achieves 3.28% and 3.63% higher accuracy than the pair STDP learning rule on the MNIST and CIFAR-10 datasets, respectively. In a comparison with state-of-the-art designs, our circuit offers among the best reconfigurability while maintaining a relatively low energy per spike operation (SOP).
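The pair and triplet rules compared in the abstract can be sketched in software. Below is a minimal, trace-based sketch of a triplet STDP rule in the style of Pfister and Gerstner's model, not the paper's mixed-signal circuit; all amplitudes and time constants are illustrative assumptions.

```python
import math

# Hedged sketch of a trace-based triplet STDP rule; the constants below
# are illustrative assumptions, not the paper's circuit parameters.
A2_PLUS, A2_MINUS = 0.01, 0.012                  # pair amplitudes (assumed)
A3_PLUS = 0.005                                  # triplet amplitude (assumed)
TAU_PLUS, TAU_MINUS, TAU_Y = 16.8, 33.7, 114.0   # trace time constants, ms

def triplet_stdp(pre_times, post_times, w=0.5):
    """Apply trace-based triplet STDP to a single synaptic weight."""
    events = sorted([(t, 'pre') for t in pre_times] +
                    [(t, 'post') for t in post_times])
    r1 = o1 = o2 = 0.0                  # pre trace, fast/slow post traces
    t_last = events[0][0] if events else 0.0
    for t, kind in events:
        dt = t - t_last
        r1 *= math.exp(-dt / TAU_PLUS)  # decay all traces to time t
        o1 *= math.exp(-dt / TAU_MINUS)
        o2 *= math.exp(-dt / TAU_Y)
        if kind == 'pre':
            w -= o1 * A2_MINUS          # depression: pre after post
            r1 += 1.0
        else:
            # potentiation grows with the slow post trace o2 (triplet term);
            # with A3_PLUS = 0 this reduces to the pair rule
            w += r1 * (A2_PLUS + A3_PLUS * o2)
            o1 += 1.0
            o2 += 1.0
        t_last = t
    return max(0.0, min(1.0, w))        # clip weight to [0, 1]
```

With these settings, a causal pre-then-post pairing potentiates the weight while the reverse order depresses it; the slow post trace `o2` contributes the spike-history term that the pair rule lacks.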
Award ID(s):
1750450 1731928 1937487
NSF-PAR ID:
10439791
Author(s) / Creator(s):
;
Date Published:
Journal Name:
2023 IEEE International Symposium on Circuits and Systems (ISCAS)
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Asynchronous event-driven computation and communication using spikes allow spiking neural networks (SNNs) to be massively parallel, extremely energy efficient, and highly robust on specialized neuromorphic hardware. However, the lack of a unified robust learning algorithm limits SNNs to shallow networks with low accuracies. Artificial neural networks (ANNs), by contrast, have the backpropagation algorithm, which uses gradient descent to train networks that are locally robust universal function approximators. But the backpropagation algorithm is neither biologically plausible nor friendly to neuromorphic implementation because it requires: 1) separate backward and forward passes, 2) differentiable neurons, 3) high-precision propagated errors, 4) a coherent copy of the feedforward weight matrices in the backward pass, and 5) non-local weight updates. Thus, we propose an approximation of the backpropagation algorithm implemented entirely with spiking neurons and extend it to a local weight update rule that resembles spike-timing-dependent plasticity (STDP), a biologically plausible learning rule. This enables error propagation through spiking neurons, yielding a more biologically plausible and neuromorphic-implementation-friendly backpropagation algorithm for SNNs. We test the proposed algorithm on various traditional and non-traditional benchmarks with competitive results.
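The locality requirement described above can be illustrated schematically. This is a hedged sketch, not the paper's exact algorithm: it only shows that each synapse can be updated from quantities available locally — a presynaptic eligibility trace and a spiking error signal at its postsynaptic neuron. The function name and dynamics are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of a local, STDP-like weight update driven by a spiking
# error signal; names and dynamics are illustrative assumptions.
def local_update(w, pre_trace, err_spikes, lr=0.01):
    """Update a weight matrix using only locally available quantities.

    w          : (n_post, n_pre) weight matrix
    pre_trace  : (n_pre,) presynaptic eligibility traces
    err_spikes : (n_post,) signed error spikes at the postsynaptic neurons
    """
    # Outer product: each synapse w[j, i] sees only its own presynaptic
    # trace pre_trace[i] and its own postsynaptic error err_spikes[j],
    # so no global error vector needs to be transported.
    return w + lr * np.outer(err_spikes, pre_trace)
```

The key property is that no term in the update references weights or activations of other layers, which is what makes the rule plausible for neuromorphic hardware.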
  2. We present a new backpropagation-based training algorithm for discrete-time spiking neural networks (SNNs). Inspired by recent deep learning algorithms for binarized neural networks, binary activation with a straight-through gradient estimator is used to model the leaky integrate-and-fire spiking neuron, overcoming the difficulty of training SNNs with backpropagation. Two SNN training algorithms are proposed: (1) SNN with discontinuous integration, which is suitable for rate-coded input spikes, and (2) SNN with continuous integration, which is more general and can handle input spikes carrying temporal information. Neuromorphic hardware designed in 28 nm CMOS exploits the spike sparsity and demonstrates high classification accuracy (>98% on MNIST) and low energy (51.4–773 nJ/image).
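The straight-through gradient estimator mentioned above can be illustrated without a deep learning framework. In this hedged NumPy sketch, the forward pass applies a hard threshold (the spike), while the backward pass lets gradients through unchanged inside a clipping window; the threshold and window width are assumed values, not those of the cited design.

```python
import numpy as np

# Hedged sketch of binary spiking activation with a straight-through
# gradient estimator; threshold and window are illustrative assumptions.
def spike_forward(v_mem, threshold=0.0):
    # Forward pass: hard threshold on the membrane potential emits a
    # binary spike (non-differentiable at the threshold).
    return (v_mem > threshold).astype(np.float32)

def spike_backward(grad_out, v_mem, window=1.0):
    # Backward pass (straight-through estimator): pretend the activation
    # was the identity inside [-window, window] and flat outside, so the
    # incoming gradient passes through where |v_mem| <= window.
    return grad_out * (np.abs(v_mem) <= window).astype(np.float32)
```

In an autograd framework the same idea would be wrapped in a custom function whose backward method applies this window mask instead of the true (zero almost everywhere) derivative of the step.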
  3. Spiking neural networks (SNNs) have attracted growing research attention due to their event-based operation, which makes them more power efficient than conventional artificial neural networks. To convert information into spikes, SNNs require an encoding process. With temporal encoding schemes, an SNN can extract temporal patterns from the original information. A more advanced scheme is multiplexing temporal encoding, which combines several encodings with different timescales to achieve greater information density and dynamic range. The spike-timing-dependent plasticity (STDP) learning algorithm is then used to train the SNN, since SNNs cannot be trained with conventional algorithms such as backpropagation. In this work, a spiking-domain feature extraction neural network with temporal multiplexing encoding is designed in EAGLE and fabricated on a PCB. The testbench's power consumption is 400 mW. The test results show that the network on the PCB can convert the input information into multiplexed temporally encoded spikes and then use those spikes to adjust the synaptic weight voltages.
  4. Abstract Spiking neural networks (SNNs) in future neuromorphic architectures require hardware devices that are not only capable of emulating fundamental functionalities of the biological synapse, such as spike-timing-dependent plasticity (STDP) and spike-rate-dependent plasticity (SRDP), but also biodegradable, to address the current ecological challenge of electronic waste. Among different device technologies and materials, memristive synaptic devices based on natural organic materials have emerged as favourable candidates to meet these demands. The metal-insulator-metal structure is analogous to a biological synapse, with low power consumption, fast switching speed and emulation of synaptic plasticity, while natural organic materials are water soluble, renewable and environmentally friendly. In this study, the potential of a memristor based on a natural organic material, honey, for SNNs was demonstrated. The device exhibited forming-free bipolar resistive switching, a high switching speed with a 100 ns set time and a 500 ns reset time, STDP and SRDP learning behaviours, and dissolution in water. Intuitive conduction models for STDP and SRDP were proposed. These results testify that honey-based memristive synaptic devices are promising for SNN implementation in green electronics and biodegradable neuromorphic systems.
  5.
    Brain-inspired cognitive computing has so far followed two major approaches - one uses multi-layered artificial neural networks (ANNs) to perform pattern-recognition-related tasks, whereas the other uses spiking neural networks (SNNs) to emulate biological neurons in an attempt to be as efficient and fault-tolerant as the brain. While there has been considerable progress in the former area due to a combination of effective training algorithms and acceleration platforms, the latter is still in its infancy due to the lack of both. SNNs have a distinct advantage over their ANN counterparts in that they are capable of operating in an event-driven manner, thus consuming very low power. Several recent efforts have proposed various SNN hardware design alternatives; however, these designs still incur considerable energy overheads. In this context, this paper proposes a comprehensive design spanning the device, circuit, architecture and algorithm levels to build an ultra-low-power architecture for SNN and ANN inference. For this, we use spintronics-based magnetic tunnel junction (MTJ) devices, which have been shown to function as both neuro-synaptic crossbars and thresholding neurons and can operate at ultra-low voltage and current levels. Using this MTJ-based neuron model and synaptic connections, we design a low-power chip that has the flexibility to be deployed for inference of SNNs, ANNs, as well as SNN-ANN hybrid networks - a distinct advantage over prior works. We demonstrate the competitive performance and energy efficiency of the SNNs as well as the hybrid models on a suite of workloads. Our evaluations show that the proposed design, NEBULA, is up to 7.9× more energy efficient than a state-of-the-art design, ISAAC, in the ANN mode. In the SNN mode, our design is about 45× more energy-efficient than a contemporary SNN architecture, INXS.
Power comparison between NEBULA ANN and SNN modes indicates that the latter is at least 6.25× more power-efficient for the observed benchmarks. 