Title: Measurement-driven neural-network training for integrated magnetic tunnel junction arrays
The increasing scale of neural networks needed to support more complex applications has led to an increasing requirement for area- and energy-efficient hardware. One route to meeting the budget for these applications is to circumvent the von Neumann bottleneck by performing computation in or near memory. However, an inevitability of transferring neural networks onto hardware is that nonidealities, such as device-to-device variations or poor device yield, impact performance. Methods such as hardware-aware training, where substrate nonidealities are incorporated during network training, are one way to recover performance at the cost of solution generality. In this work, we demonstrate inference on hardware-based neural networks consisting of 20 000 magnetic tunnel junction (MTJ) arrays integrated on CMOS chips in a form that closely resembles scalable and market-ready spin-transfer-torque magnetoresistive random access memory (STT-MRAM) technology. Using 36 dies, each containing an MTJ-CMOS crossbar array with its own nonidealities, we show that even a small number of defects in physically mapped networks significantly degrades the performance of networks trained without defects, and we show that, at the cost of generality, hardware-aware training accounting for the specific defects on each die can recover performance comparable to that of ideal networks. We then demonstrate a robust training method that extends hardware-aware training to statistics-aware training, producing network weights that perform well on most defective dies regardless of their specific defect locations. When evaluated on the 36 physical dies, statistics-aware trained solutions achieve a mean misclassification error on the MNIST dataset that differs from the software baseline by only 2%. This statistics-aware training method could be generalized to networks with many layers that are mapped to hardware suited for industry-ready applications.
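The statistics-aware training idea described above can be illustrated with a short sketch: instead of training against the defect map of one particular die, each training step samples a fresh defect mask from the measured defect statistics, so the learned weights remain useful across dies. The defect rate, the "stuck-at-zero" defect model, and the single fully connected layer below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: one fully connected layer, and defective
# devices modeled as weights pinned to zero conductance.
DEFECT_RATE = 0.01          # assumed per-device defect probability
N_IN, N_OUT = 784, 10       # MNIST-sized layer for illustration

def sample_defect_mask(shape, defect_rate, rng):
    """Sample a fresh defect mask, emulating a randomly drawn die."""
    return (rng.random(shape) > defect_rate).astype(np.float32)

def statistics_aware_step(x, y_onehot, w, lr, rng):
    """One SGD step where the defect mask is resampled every step,
    so no single die's defect pattern is baked into the solution."""
    mask = sample_defect_mask(w.shape, DEFECT_RATE, rng)
    logits = x @ (w * mask)                 # forward pass on a "defective die"
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = x.T @ (probs - y_onehot) / x.shape[0]
    # Gradient flows only through devices that worked on this sampled die.
    return w - lr * grad * mask

# Toy usage with random data standing in for MNIST.
w = rng.normal(0, 0.01, size=(N_IN, N_OUT))
x = rng.random((32, N_IN))
y = np.eye(N_OUT)[rng.integers(0, N_OUT, 32)]
w = statistics_aware_step(x, y, w, lr=0.1, rng=rng)
```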
Award ID(s):
2121957
PAR ID:
10545857
Author(s) / Creator(s):
Publisher / Repository:
American Physical Society
Date Published:
Journal Name:
Physical Review Applied
Volume:
21
Issue:
5
ISSN:
2331-7019
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Hardware accelerators based on emerging device technologies are gaining traction for inference workloads, but effective methods for their training remain an open area of research. We propose an efficient hardware-aware methodology for training neural networks with ternary weights that are mappable to emerging memory device arrays. We study device-network interactions across a variety of scenarios using simulated and experimentally measured datasets from ferroelectric field-effect transistor (FeFET) devices with varying characteristics. We quantify the impact of device non-idealities on network training by investigating device-level metrics, network-level metrics, loss landscapes, and parameter optimization trajectories. We validate our approach by mapping a hardware-aware solution to an emulated system with parameters calibrated to experimental measurements, highlighting several trade-offs. Hardware-aware training results on FeFET-based multi-layer perceptron networks, long short-term memory networks, and deep convolutional networks demonstrate competitive performance at lower overheads compared to existing schemes, indicating architectural and computational scalability. We find that devices with low variability, low non-linearity, and high dynamic range exhibit training characteristics closest to a software baseline. We provide evidence that device non-idealities inject noise during backpropagation, leading to sharper loss landscapes and higher-dimensional optimization trajectories, which make device networks more difficult to train than their software counterparts. We also identify optimal operating voltages for the investigated devices by utilizing our hardware-aware training and inference methodologies.
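A rough illustration of hardware-aware training with ternary weights: keep a full-precision shadow copy of the weights, quantize it to {-1, 0, +1} on each forward pass, and perturb the quantized values with device-calibrated noise before use. The quantizer threshold, the multiplicative noise model, and the straight-through-estimator-style update below are assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ternarize(w, threshold=0.05):
    """Map full-precision weights to {-1, 0, +1} (assumed quantizer)."""
    return np.sign(w) * (np.abs(w) > threshold)

def apply_device_nonidealities(w_ternary, sigma=0.1):
    """Multiplicative variation standing in for FeFET device-to-device
    variability (assumed noise model)."""
    return w_ternary * (1.0 + sigma * rng.standard_normal(w_ternary.shape))

def forward(x, w_shadow):
    """Forward pass through the 'device' weights; gradient descent would
    update the full-precision shadow copy (straight-through style,
    an assumption here)."""
    w_device = apply_device_nonidealities(ternarize(w_shadow))
    return x @ w_device

# Toy usage
w_shadow = rng.normal(0, 0.1, size=(16, 4))
outputs = forward(rng.random((8, 16)), w_shadow)
```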
  2. Advances in machine intelligence have sparked interest in hardware accelerators to implement these algorithms, yet embedded electronics have stringent power, area, and speed requirements that may limit nonvolatile memory (NVM) integration. In this context, the development of fast nanomagnetic neural networks using minimal training data is attractive. Here, we extend an inference-only proposal that uses the intrinsic physics of domain-wall MTJ (DW-MTJ) neurons for online learning to implement fully unsupervised pattern recognition, using winner-take-all networks that contain either random or plastic synapses (weights). Meanwhile, a read-out layer trains in a supervised fashion. We find our proposed design can approach state-of-the-art success on the task relative to competing memristive neural network proposals, while eliminating much of the area and energy overhead that would typically be required to build the neuronal layers with CMOS devices.
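A minimal winner-take-all (WTA) layer of the kind described, with plastic synapses updated by a simple Hebbian-style rule, might look like the sketch below. The update rule, learning rate, and weight normalization are illustrative assumptions rather than the paper's device-driven learning dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)

def wta_step(x, w, lr=0.05):
    """One unsupervised winner-take-all update: the neuron with the
    largest response wins and moves its weights toward the input
    (a simple Hebbian-style rule, assumed for illustration)."""
    responses = w @ x
    winner = int(np.argmax(responses))
    w[winner] += lr * (x - w[winner])        # plastic synapses
    w[winner] /= np.linalg.norm(w[winner])   # keep weights bounded
    return winner, w

# Toy usage: 10 WTA neurons clustering 64-dimensional patterns.
w = rng.random((10, 64))
w /= np.linalg.norm(w, axis=1, keepdims=True)
for _ in range(100):
    winner, w = wta_step(rng.random(64), w)
```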
  3. The spatiotemporal nature of neuronal behavior in spiking neural networks (SNNs) makes SNNs promising for edge applications that require high energy efficiency. To realize SNNs in hardware, spintronic neuron implementations can bring advantages of scalability and energy efficiency. Domain wall (DW)-based magnetic tunnel junction (MTJ) devices are well suited for probabilistic neural networks given their intrinsic integrate-and-fire behavior with tunable stochasticity. Here, we present a scaled DW-MTJ neuron with voltage-dependent firing probability. The measured behavior was used to simulate an SNN that attains learning accuracy comparable to that of an equivalent, but more complicated, multi-weight DW-MTJ device. The validation accuracy during training was also shown to be comparable to that of an ideal leaky integrate-and-fire device. However, during inference, the binary DW-MTJ neuron outperformed the other devices after Gaussian noise was introduced to the Fashion-MNIST classification task. This work shows that DW-MTJ devices can be used to construct noise-resilient networks suitable for neuromorphic computing on the edge.
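A neuron with voltage-dependent firing probability, of the general kind described, can be sketched as a leaky integrator whose probability of spiking grows with the accumulated state. The leak factor, sigmoid firing probability, and reset-on-fire below are assumptions for illustration, not the measured DW-MTJ device model.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class StochasticDWNeuron:
    """Integrate-and-fire neuron whose firing probability rises with the
    integrated input, loosely mimicking a DW-MTJ device (assumed model)."""

    def __init__(self, leak=0.95, gain=4.0, bias=2.0):
        self.state = 0.0      # stands in for domain-wall position
        self.leak = leak
        self.gain = gain
        self.bias = bias

    def step(self, input_current):
        self.state = self.leak * self.state + input_current
        p_fire = sigmoid(self.gain * self.state - self.bias)
        fired = rng.random() < p_fire      # tunable stochasticity
        if fired:
            self.state = 0.0               # reset after firing
        return fired

# Toy usage: drive the neuron with a constant input and count spikes.
neuron = StochasticDWNeuron()
spike_count = sum(neuron.step(0.3) for _ in range(200))
```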
  4. The constant drive to achieve higher performance in deep neural networks (DNNs) has led to the proliferation of very large models. Model training, however, requires intensive computation time and energy. Memristor-based compute-in-memory (CIM) modules can perform vector-matrix multiplication (VMM) in place and in parallel, and have shown great promise in DNN inference applications. However, CIM-based model training faces challenges due to non-linear weight updates, device variations, and low precision. In this work, a mixed-precision training scheme is experimentally implemented to mitigate these effects using a bulk-switching memristor-based CIM module. Low-precision CIM modules are used to accelerate the expensive VMM operations, while high-precision weight updates are accumulated in digital units. Memristor devices are only changed when the accumulated weight update value exceeds a pre-defined threshold. The proposed scheme is implemented with a system-on-chip of fully integrated analog CIM modules and digital sub-systems, showing fast convergence of LeNet training to 97.73% accuracy. The efficacy of training larger models is evaluated using realistic hardware parameters, verifying that CIM modules can enable efficient mixed-precision DNN training with accuracy comparable to full-precision software-trained models. Additionally, models trained on chip are inherently robust to hardware variations, allowing direct mapping to CIM inference chips without additional re-training.
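The mixed-precision scheme described above can be sketched as follows: low-precision analog weights perform the VMM, gradients are accumulated digitally at high precision, and a device is reprogrammed only when its accumulated update crosses a threshold. The number of conductance levels, quantization step, and threshold below are placeholders, not the paper's calibrated hardware parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

LEVELS = 16                  # assumed low-precision conductance levels
STEP = 2.0 / LEVELS          # conductance quantization step
THRESHOLD = STEP             # reprogram only after a full step accrues

def quantize(w):
    """Snap weights to the discrete conductance levels of the CIM array."""
    return np.clip(np.round(w / STEP) * STEP, -1.0, 1.0)

def mixed_precision_step(x, err, w_device, acc, lr=0.01):
    """Accumulate the high-precision update digitally; write to the
    memristor array only where the accumulator exceeds THRESHOLD."""
    grad = x.T @ err / x.shape[0]          # VMM done on the CIM module
    acc += lr * grad                       # high-precision digital unit
    write = np.abs(acc) >= THRESHOLD       # devices due for a write pulse
    w_device = quantize(w_device - np.sign(acc) * STEP * write)
    acc[write] = 0.0                       # clear written accumulators
    return w_device, acc

# Toy usage
w_device = quantize(rng.normal(0, 0.2, size=(8, 4)))
acc = np.zeros_like(w_device)
x, err = rng.random((16, 8)), rng.random((16, 4)) - 0.5
w_device, acc = mixed_precision_step(x, err, w_device, acc)
```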
  5. Biological memory structures impart enormous retention capacity while automatically providing vital functions for chronological information management and update resolution of domain and episodic knowledge. A crucial requirement for hardware realization of such cortical operations found in biology is to first design both Short-Term Memory (STM) and Long-Term Memory (LTM). Herein, these memory features are realized via a beyond-CMOS-based learning approach derived from the repeated input of information and retrieval of the encoded data. We first propose a new binary STM-LTM architecture with a composite synapse of a Spin Hall Effect-driven Magnetic Tunnel Junction (SHE-MTJ) and a capacitive memory bit-cell to mimic the behavior of biological synapses. This STM-LTM platform realizes memory potentiation through a continual update process using STM-to-LTM transfer, which is applied to neural networks based on the established capacitive crossbar. We then propose a hardware-enabled and customized STM-LTM transition algorithm for the platform that considers real hardware parameters. We validate the functionality of the design using SPICE simulations, which show that the proposed synapse has the potential to reach ~30.2 pJ energy consumption for STM-to-LTM transfer and 65 pJ during STM programming. We further analyze the correlation between energy, array size, and the STM-to-LTM threshold using the MNIST dataset.
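The STM-to-LTM transition can be sketched at a behavioral level as a per-synapse trace: repeated presentations of the same pattern reinforce a volatile short-term trace, and once the trace crosses a threshold the value is committed to nonvolatile long-term storage. The decay rate, threshold, and binary patterns below are illustrative assumptions, not the circuit-level parameters of the paper.

```python
import numpy as np

DECAY = 0.9        # assumed per-step decay of the volatile STM trace
LTM_THRESHOLD = 3  # assumed effective repetitions before transfer

def present(pattern, stm, ltm):
    """Present a binary pattern: reinforce the STM trace where the
    pattern is active, let it decay elsewhere, and transfer to LTM
    once the trace crosses the threshold (assumed transition rule)."""
    stm = DECAY * stm + pattern
    transfer = stm >= LTM_THRESHOLD
    ltm[transfer] = 1.0        # commit to nonvolatile storage
    stm[transfer] = 0.0        # free up the short-term cell
    return stm, ltm

# Toy usage: the repeated pattern ends up in LTM, the one-off does not.
stm, ltm = np.zeros(8), np.zeros(8)
repeated = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
one_off = np.array([0, 0, 0, 0, 0, 0, 1, 1], dtype=float)
stm, ltm = present(one_off, stm, ltm)
for _ in range(5):
    stm, ltm = present(repeated, stm, ltm)
```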