Title: When in-memory computing meets spiking neural networks—A perspective on device-circuit-system-and-algorithm co-design
This review explores the intersection of bio-plausible artificial intelligence in the form of spiking neural networks (SNNs) with the analog in-memory computing (IMC) domain, highlighting their collective potential for low-power edge computing environments. Through detailed investigation at the device, circuit, and system levels, we highlight the pivotal synergies between SNNs and IMC architectures. Additionally, we emphasize the critical need for comprehensive system-level analyses that consider the inter-dependencies among algorithm, device, circuit, and system parameters, which are crucial for optimal performance. An in-depth analysis identifies key system-level bottlenecks arising from device limitations, which can be addressed using SNN-specific algorithm-hardware co-design techniques. This review underscores the imperative for holistic device-to-system design-space co-exploration, highlighting the critical aspects of hardware and algorithm research efforts toward low-power neuromorphic solutions.
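To make the device-circuit interplay discussed above concrete, the following is a minimal illustrative sketch (not drawn from the review) of a leaky integrate-and-fire neuron layer whose input current comes from an analog crossbar matrix-vector multiply with additive device noise; all sizes, time constants, and noise magnitudes are assumptions chosen only for demonstration.

# Minimal illustrative sketch: a noisy analog crossbar MAC feeding LIF neurons.
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mac(spikes, weights, noise_sigma=0.05):
    """Analog IMC dot product: binary input spikes drive the word lines; the
    column currents approximate weights @ spikes, perturbed by device noise."""
    ideal = weights @ spikes
    noise = rng.normal(0.0, noise_sigma * np.abs(weights).sum(axis=1).clip(min=1e-9))
    return ideal + noise

def lif_step(v, i_in, tau_mem=20.0, v_th=1.0, dt=1.0):
    """One discrete LIF update: leak, integrate the crossbar current, fire,
    and hard-reset the membrane potential of the neurons that spiked."""
    v = v + (dt / tau_mem) * (-v + i_in)
    spikes_out = (v >= v_th).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)
    return v, spikes_out

# Toy run: 16 input neurons, 4 output neurons, 50 time steps of Poisson-like spikes.
n_in, n_out, T = 16, 4, 50
W = rng.normal(0.0, 0.3, size=(n_out, n_in))
v = np.zeros(n_out)
rate = 0.2                                   # input firing probability per step
total_spikes = np.zeros(n_out)
for _ in range(T):
    s_in = (rng.random(n_in) < rate).astype(float)
    v, s_out = lif_step(v, crossbar_mac(s_in, W))
    total_spikes += s_out
print("output spike counts over", T, "steps:", total_spikes)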
Award ID(s):
2238227
PAR ID:
10613293
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
Applied Physics Reviews
Date Published:
Journal Name:
Applied Physics Reviews
Volume:
11
Issue:
3
ISSN:
1931-9401
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Due to the separate memory and computation units in the traditional Von Neumann architecture, massive data transfer dominates the overall computing system's power and latency, known as the 'Memory Wall' issue. With the ever-increasing size and computational complexity of deep learning-based AI models, this has become the bottleneck for state-of-the-art AI computing systems. To address this challenge, In-Memory Computing (IMC) based neural network accelerators have been widely investigated to support AI computing within memory. However, most of those works focus only on inference; on-device training and continual learning have not yet been well explored. In this work, for the first time, we introduce on-device continual learning with an STT-assisted-SOT (SAS) Magnetic Random Access Memory (MRAM) based IMC system. On the hardware side, we have fabricated a SAS-MRAM device prototype with four magnetic tunnel junctions (MTJs, each 100 nm × 50 nm) sharing a common heavy-metal layer, achieving significantly improved memory writing and area efficiency compared to traditional SOT-MRAM. Next, we designed fully digital IMC circuits with our SAS-MRAM to support both neural network inference and on-device learning. To enable efficient on-device continual learning on new task data, we present an 8-bit integer (INT8) based continual learning algorithm that uses the bit-serial digital in-memory convolution operations supported by our SAS-MRAM IMC to train a small parallel reprogramming network (Rep-Net) while freezing the main backbone model. Extensive studies are presented based on our fabricated SAS-MRAM device prototype, cross-layer device-circuit benchmarking and simulation, and evaluation of the on-device continual learning system.
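As a rough illustration of the continual-learning setup described above (not the paper's Rep-Net implementation), the sketch below freezes a "backbone" layer and trains only a small head, with weights fake-quantized to 8-bit integers in the forward pass to mimic INT8 in-memory MAC operations; the model sizes, toy data, and hyperparameters are assumptions.

# Minimal illustrative sketch: frozen backbone, small trainable head, INT8 fake quantization.
import numpy as np

rng = np.random.default_rng(1)

def fake_quant_int8(x):
    """Symmetric per-tensor INT8 fake quantization: quantize then dequantize."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.clip(np.round(x / scale), -128, 127) * scale

# Frozen backbone (standing in for a model pretrained on earlier tasks) and a small head.
W_backbone = rng.normal(0, 0.5, size=(32, 16))   # frozen
W_head = rng.normal(0, 0.1, size=(2, 32))        # trained for the new task only

# Toy new-task data: two Gaussian blobs.
X = np.vstack([rng.normal(+1, 1, (64, 16)), rng.normal(-1, 1, (64, 16))])
y = np.array([0] * 64 + [1] * 64)

lr = 0.05
for step in range(200):
    h = np.maximum(fake_quant_int8(W_backbone) @ X.T, 0)    # frozen layer, INT8 MAC
    logits = fake_quant_int8(W_head) @ h                    # trainable head, INT8 MAC
    p = np.exp(logits - logits.max(0)); p /= p.sum(0)       # softmax over classes
    grad_logits = p.copy(); grad_logits[y, np.arange(len(y))] -= 1
    # Straight-through assumption: gradient taken w.r.t. the dequantized weights.
    grad_head = grad_logits @ h.T / len(y)                  # only the head learns
    W_head -= lr * grad_head
acc = (np.argmax(p, axis=0) == y).mean()
print(f"new-task accuracy with frozen backbone: {acc:.2f}")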
  2. This paper presents a design approach for the modeling and simulation of ultra-low-power (ULP) analog computing machine learning (ML) circuits for seizure detection using EEG signals in wearable health monitoring applications. We describe a new analog system modeling and simulation technique that associates the power consumption, noise, linearity, and other critical performance parameters of analog circuits with the classification accuracy of a given ML network, making it possible to realize a power- and performance-optimized analog ML hardware implementation based on diverse application-specific needs. We carried out circuit simulations to obtain the non-idealities, which are then mathematically modeled for accurate mapping. We modeled noise, non-linearity, resolution, and process variations such that the model can accurately predict the classification accuracy of the analog-computing-based seizure detection system. Noise is modeled as input-referred white noise added directly at the input. Device process and temperature variations are modeled as random fluctuations in circuit parameters such as gain and cut-off frequency. Nonlinearity is modeled mathematically as a power series. The combined system-level model is then simulated to assess classification accuracy. The design approach helps optimize power and area during the development of tailored analog circuits for ML networks, with the ability to trade off power and performance goals while still ensuring the required classification accuracy. The simulation technique also makes it possible to determine target specifications for each circuit block in the analog computing hardware. This is achieved by developing the ML hardware model and investigating the effect of circuit non-idealities on classification accuracy. Simulation of an analog computing EEG seizure detection block shows a classification accuracy of 91%. The proposed modeling approach will significantly reduce the design time and complexity of large analog computing systems. Two feature extraction approaches are also compared for an analog computing architecture.
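The non-ideality models described above lend themselves to a simple simulation. The sketch below (not the paper's model) applies input-referred white noise, a fixed per-channel gain variation, and a power-series nonlinearity to feature vectors before a deliberately simple classifier, then compares the resulting accuracy against the ideal case; the data, classifier, and non-ideality magnitudes are assumptions.

# Minimal illustrative sketch: analog non-idealities applied before a toy classifier.
import numpy as np

rng = np.random.default_rng(2)

def analog_stage(x, gain, noise_std=0.05, a2=0.02, a3=0.01):
    """One analog stage: input-referred white noise, fixed per-channel gain
    (process variation), and a power series y = g*u + a2*u**2 + a3*u**3, u = x + noise."""
    u = x + rng.normal(0.0, noise_std, size=x.shape)
    return gain * u + a2 * u**2 + a3 * u**3

def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """A deliberately simple stand-in for the ML network."""
    c0 = train_x[train_y == 0].mean(axis=0)
    c1 = train_x[train_y == 1].mean(axis=0)
    pred = (np.linalg.norm(test_x - c1, axis=1) <
            np.linalg.norm(test_x - c0, axis=1)).astype(int)
    return (pred == test_y).mean()

# Toy feature vectors standing in for extracted EEG features (2 classes).
X = np.vstack([rng.normal(0.0, 1.0, (200, 8)), rng.normal(0.8, 1.0, (200, 8))])
y = np.array([0] * 200 + [1] * 200)
idx = rng.permutation(400)
X, y = X[idx], y[idx]
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

gain = rng.normal(1.0, 0.03, size=8)   # per-channel process variation, fixed per "chip"
acc_ideal = nearest_centroid_accuracy(Xtr, ytr, Xte, yte)
acc_analog = nearest_centroid_accuracy(analog_stage(Xtr, gain), ytr,
                                       analog_stage(Xte, gain), yte)
print(f"ideal accuracy: {acc_ideal:.2f}, with analog non-idealities: {acc_analog:.2f}")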
  3. Brain-inspired cognitive computing has so far followed two major approaches: one uses multi-layered artificial neural networks (ANNs) to perform pattern-recognition-related tasks, whereas the other uses spiking neural networks (SNNs) to emulate biological neurons in an attempt to be as efficient and fault-tolerant as the brain. While there has been considerable progress in the former area due to a combination of effective training algorithms and acceleration platforms, the latter is still in its infancy for lack of both. SNNs have a distinct advantage over their ANN counterparts in that they can operate in an event-driven manner and thus consume very little power. Several recent efforts have proposed various SNN hardware design alternatives; however, these designs still incur considerable energy overheads. In this context, this paper proposes a comprehensive design spanning the device, circuit, architecture, and algorithm levels to build an ultra-low-power architecture for SNN and ANN inference. For this, we use spintronics-based magnetic tunnel junction (MTJ) devices that have been shown to function both as neuro-synaptic crossbars and as thresholding neurons, and that can operate at ultra-low voltage and current levels. Using this MTJ-based neuron model and synaptic connections, we design a low-power chip that has the flexibility to be deployed for inference of SNNs, ANNs, and SNN-ANN hybrid networks, a distinct advantage compared to prior works. We demonstrate the competitive performance and energy efficiency of the SNN and hybrid models on a suite of workloads. Our evaluations show that the proposed design, NEBULA, is up to 7.9× more energy efficient than a state-of-the-art design, ISAAC, in the ANN mode. In the SNN mode, our design is about 45× more energy-efficient than a contemporary SNN architecture, INXS. A power comparison between the NEBULA ANN and SNN modes indicates that the latter is at least 6.25× more power-efficient for the observed benchmarks.
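A minimal sketch of the dual-mode idea (not the NEBULA design itself): one shared weight matrix evaluated once in an ANN mode and, via rate-coded spikes and integrate-and-fire thresholding, in an SNN mode; the coding scheme, sizes, and threshold are assumptions.

# Minimal illustrative sketch: one "crossbar" weight matrix used in ANN and SNN modes.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(0.0, 0.4, size=(4, 10))      # shared crossbar weights
x = rng.random(10)                          # analog input activations in [0, 1]

# ANN mode: one multiply-accumulate pass followed by ReLU.
ann_out = np.maximum(W @ x, 0.0)

# SNN mode: rate-code the input into Bernoulli spike trains, accumulate the
# membrane potential each step, and fire whenever it crosses the threshold.
T, v_th = 200, 1.0
v = np.zeros(4)
spike_count = np.zeros(4)
for _ in range(T):
    s_in = (rng.random(10) < x).astype(float)   # spike probability = input intensity
    v += W @ s_in
    fired = v >= v_th
    spike_count += fired
    v[fired] -= v_th                            # soft reset
snn_rate_out = spike_count * v_th / T           # spike rate roughly tracks ReLU(W @ x)
print("ANN mode:", np.round(ann_out, 3))
print("SNN mode:", np.round(snn_rate_out, 3))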
  4. Artificial intelligence (AI) provides versatile capabilities, such as image classification and voice recognition, that are most useful in edge or mobile computing settings. Shrinking these sophisticated algorithms into small form factors with minimal computing resources and power budgets requires innovation at several layers of abstraction: software, algorithm, architecture, circuit, and device. However, improvements to system efficiency may impact robustness, and vice versa. Therefore, a co-design framework is often necessary to customize a system for its given application. For example, a system that prioritizes efficiency might use circuit-level innovations that introduce process variations or signal noise, which the system may then compensate for with software-level redundancy. In this tutorial, we will first examine various methods of improving efficiency and robustness in edge AI and their tradeoffs at each level of abstraction. Then, we will outline co-design techniques for designing efficient and robust edge AI systems, using federated learning as a specific example to illustrate the effectiveness of co-design.
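As a small illustration of the federated-learning example mentioned above (not taken from the tutorial), the sketch below runs federated averaging (FedAvg) over a toy linear model: each client performs a few local gradient steps and the server averages the resulting weights; the client data, model, and hyperparameters are assumptions.

# Minimal illustrative sketch: FedAvg over a toy linear regression model.
import numpy as np

rng = np.random.default_rng(4)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local update: a few epochs of least-squares gradient descent."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Toy setup: 4 clients whose local data come from the same underlying linear model.
w_true = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(0.0, 1.0, (50, 3))
    y = X @ w_true + rng.normal(0.0, 0.1, 50)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):                              # communication rounds
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)     # server-side FedAvg aggregation
print("recovered weights:", np.round(w_global, 2), "true weights:", w_true)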
  5. RRAM-based in-memory computing (IMC) effectively accelerates deep neural networks (DNNs) and other machine learning algorithms. However, in the presence of RRAM device variations and lower precision, the mapping of DNNs onto RRAM-based IMC suffers from severe accuracy loss. In this work, we propose a novel hybrid IMC architecture that integrates an RRAM-based IMC macro with a digital SRAM macro using a programmable shifter to compensate for the RRAM variations and recover the accuracy. The digital SRAM macro consists of a small SRAM memory array and an array of multiply-and-accumulate (MAC) units. The non-ideal output from the RRAM macro, due to device and circuit non-idealities, is compensated by adding the precise output from the SRAM macro. In addition, the programmable shifter allows for different scales of compensation by shifting the SRAM macro output relative to the RRAM macro output. On the algorithm side, we develop a framework for training DNNs to support the hybrid IMC architecture through ensemble learning. The proposed framework performs quantization (of weights and activations), pruning, and RRAM IMC-aware training, and employs ensemble learning across different compensation scales by utilizing the programmable shifter. Finally, we design a silicon prototype of the proposed hybrid IMC architecture in the 65nm SUNY process to demonstrate its efficacy. Experimental evaluation of the hybrid IMC architecture shows that the SRAM compensation enables a realistic IMC architecture with multi-level RRAM cells (MLCs) even though they suffer from high variations. The hybrid IMC architecture achieves up to 21.9%, 12.65%, and 6.52% improvements in post-mapping accuracy over state-of-the-art techniques, at minimal overhead, for ResNet-20 on CIFAR-10, VGG-16 on CIFAR-10, and ResNet-18 on ImageNet, respectively.
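A minimal sketch of the compensation idea (not the silicon prototype): a noisy "RRAM" MAC output is corrected by adding a precise "SRAM" MAC of residual weights, scaled by a programmable shift; the weight-splitting scheme, noise model, and shift value are assumptions made only for illustration.

# Minimal illustrative sketch: hybrid RRAM + SRAM MAC with a programmable shift.
import numpy as np

rng = np.random.default_rng(5)

def rram_mac(w_coarse, x, sigma=0.08):
    """Analog RRAM macro: computes w_coarse @ x with multiplicative device noise."""
    noisy_w = w_coarse * (1 + rng.normal(0, sigma, size=w_coarse.shape))
    return noisy_w @ x

def sram_mac(w_fine, x):
    """Digital SRAM macro: exact MAC on the (small) correction weights."""
    return w_fine @ x

def hybrid_mac(w_coarse, w_fine, x, shift=2):
    """Hybrid output: RRAM result plus SRAM correction scaled by 2**(-shift),
    mimicking a programmable shifter that aligns the two partial sums."""
    return rram_mac(w_coarse, x) + sram_mac(w_fine, x) * 2.0 ** (-shift)

# Toy weights: split the "true" weights into a coarse part stored in RRAM and a
# residual stored in SRAM (here the residual is simply the quantization error).
W = rng.normal(0, 0.5, size=(8, 16))
W_coarse = np.round(W * 4) / 4                 # low-precision RRAM weights
W_fine = (W - W_coarse) * 4                    # residual, pre-scaled by 2**shift
x = rng.random(16)

err_rram = np.abs(rram_mac(W_coarse, x) - W @ x).mean()
err_hybrid = np.abs(hybrid_mac(W_coarse, W_fine, x, shift=2) - W @ x).mean()
print(f"mean |error| RRAM only: {err_rram:.4f}, hybrid RRAM+SRAM: {err_hybrid:.4f}")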