

Title: Lateral inhibition in magnetic domain wall racetrack arrays for neuromorphic computing
Neuromorphic computing captures the quintessential neural behaviors of the brain and is a promising candidate for beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. The neuronal lateral inhibition feature, closely associated with the biological receptive field, is crucial to neuronal competition in the nervous system as well as in its neuromorphic hardware counterpart. The domain wall - magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. This work discusses the lateral inhibition mechanism of the DW-MTJ neuron and shows by micromagnetic simulation that lateral inhibition is efficiently enhanced by the Dzyaloshinskii-Moriya interaction (DMI).
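The lateral inhibition described above can be illustrated with a toy rate model in which each neuron's domain wall position integrates its input drive while neurons whose walls lag behind are pushed back in proportion to their rivals' lead. This is a qualitative sketch, not a micromagnetic simulation; the `mobility`, `inhibition`, and `threshold` constants are invented for illustration and do not correspond to device parameters.

```python
import numpy as np

def simulate_lateral_inhibition(inputs, steps=200, dt=1.0,
                                mobility=1.0, inhibition=0.3, threshold=100.0):
    """Toy 1-D model: each neuron's DW position integrates its input drive,
    and a neuron whose wall is ahead pushes its lagging neighbours' walls
    back (a crude stand-in for repulsive dipolar/DMI-mediated coupling).
    All constants are illustrative, not fitted to any device."""
    x = np.zeros(len(inputs))          # DW positions along each track
    fired = np.full(len(inputs), -1)   # time step of firing, -1 = never fired
    for t in range(steps):
        drive = mobility * np.asarray(inputs, dtype=float)
        # lead[i, j] = x_i - x_j; neuron j is inhibited in proportion
        # to the total lead its competitors have over it
        lead = x[:, None] - x[None, :]
        inhib = inhibition * np.clip(lead, 0, None).sum(axis=0) / len(x)
        # walls move forward only (velocity clipped at zero)
        x += dt * np.clip(drive - inhib, 0, None)
        newly = (x >= threshold) & (fired < 0)
        fired[newly] = t
    return x, fired
```

With unequal inputs the leading neuron reaches the threshold while the laggards stall well short of it, which is the winner-take-all signature the abstract describes.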
Award ID(s):
1940788 1910800
NSF-PAR ID:
10212813
Author(s) / Creator(s):
Editor(s):
Drouhin, Henri-Jean M.; Wegrowe, Jean-Eric; Razeghi, Manijeh
Date Published:
Journal Name:
Proc. SPIE, Spintronics XIII
Volume:
11470
Issue:
1147011-1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Neuromorphic computing is a promising candidate for beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. Lateral inhibition and winner-take-all (WTA) features play a crucial role in neuronal competition in the nervous system as well as in neuromorphic hardware. The domain wall - magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. In this paper we show that lateral inhibition parameters modulate the neuron firing statistics in a DW-MTJ neuron array, thus emulating soft winner-take-all behavior and firing group selection.
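Soft-WTA selection of this kind can be sketched as a firing probability that is suppressed by the activity of stronger competitors, with a single inhibition parameter interpolating between independent firing (zero) and near-hard WTA (large). The sigmoidal form, the `inhibition` parameter, and the suppression rule below are illustrative assumptions, not the device model from the paper.

```python
import numpy as np

def soft_wta(activations, inhibition, rng=None):
    """Soft winner-take-all sketch: each neuron fires with a probability
    suppressed by the summed excess activation of its rivals. With
    inhibition=0 neurons fire independently; with large inhibition only
    the strongest neuron retains a non-negligible firing probability."""
    a = np.asarray(activations, dtype=float)
    # excess[i] = total amount by which competitors exceed neuron i
    excess = np.clip(a[None, :] - a[:, None], 0, None).sum(axis=1)
    p = 1.0 / (1.0 + np.exp(-(a - inhibition * excess)))
    rng = np.random.default_rng(rng)
    return (rng.random(len(a)) < p).astype(int)
```

Sweeping the inhibition parameter changes how many neurons fire on average, which is the firing-statistics modulation the abstract refers to.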
  2. The development of an efficient artificial neuron is impeded by the use of external CMOS circuits to perform leaking and lateral inhibition. The proposed leaky integrate-and-fire neuron based on the three-terminal magnetic tunnel junction (3T-MTJ) performs integration by pushing its domain wall (DW) with spin-transfer or spin-orbit torque. The leaking capability is achieved by pushing the neuron's DW in the direction opposite to integration, using a stray field from a hard ferromagnet or a non-uniform energy landscape resulting from shape or anisotropy variation. Firing is performed by the MTJ stack. Finally, analog lateral inhibition is achieved by repulsive dipolar-field coupling between neurons: an integrating neuron pushes slower neighboring neurons' DWs in the direction opposite to integration. Applying this lateral inhibition to a ten-neuron output layer within a neuromorphic crossbar structure enables the identification of handwritten digits with 94% accuracy.
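The integrate, leak, and fire behaviors described for the 3T-MTJ neuron can be sketched as a one-dimensional leaky integrate-and-fire model: input current pushes a wall coordinate forward, a constant leak term pushes it back, and the neuron fires when the wall reaches the MTJ at the end of the track. All constants are illustrative rather than fitted device parameters, and lateral inhibition is omitted here for brevity.

```python
class DWLIFNeuron:
    """Toy leaky integrate-and-fire neuron abstracting the 3T-MTJ device.
    The DW position plays the role of the membrane potential: input
    current pushes it forward (STT/SOT), a constant leak (stray field or
    energy-landscape tilt) pushes it back, and reaching the end of the
    track (the MTJ) is the firing event. Constants are illustrative."""

    def __init__(self, track_length=100.0, mobility=1.0, leak=0.2):
        self.track_length = track_length
        self.mobility = mobility
        self.leak = leak
        self.position = 0.0

    def step(self, current, dt=1.0):
        # net wall velocity: drive from input current minus constant leak
        v = self.mobility * current - self.leak
        # the wall cannot retreat past the start of the track
        self.position = max(0.0, self.position + v * dt)
        if self.position >= self.track_length:   # wall under the MTJ: fire
            self.position = 0.0                  # reset the wall
            return 1
        return 0
```

A sustained super-threshold current drives exactly one spike per traversal of the track, while a current weaker than the leak never fires, which is the leaky-integration behavior the abstract describes.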
  3. The spatiotemporal nature of neuronal behavior in spiking neural networks (SNNs) makes SNNs promising for edge applications that require high energy efficiency. To realize SNNs in hardware, spintronic neuron implementations can bring advantages of scalability and energy efficiency. Domain wall (DW)-based magnetic tunnel junction (MTJ) devices are well suited for probabilistic neural networks given their intrinsic integrate-and-fire behavior with tunable stochasticity. Here, we present a scaled DW-MTJ neuron with voltage-dependent firing probability. The measured behavior was used to simulate an SNN that attains learning accuracy comparable to that of an equivalent, but more complicated, multi-weight DW-MTJ device. The validation accuracy during training was also comparable to that of an ideal leaky integrate-and-fire device. However, during inference, the binary DW-MTJ neuron outperformed the other devices after Gaussian noise was introduced to the Fashion-MNIST classification task. This work shows that DW-MTJ devices can be used to construct noise-resilient networks suitable for neuromorphic computing on the edge.
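A voltage-dependent firing probability of the kind described here is commonly modeled as a sigmoid followed by a Bernoulli draw. The `v_half` and `slope` parameters below are illustrative placeholders, not the measured characteristics of the device in the paper.

```python
import numpy as np

def firing_probability(voltage, v_half=0.5, slope=10.0):
    """Sigmoidal voltage-to-probability transfer curve: probability 0.5
    at v_half, with 'slope' setting how sharply stochastic firing turns
    deterministic. Both parameters are illustrative assumptions."""
    return 1.0 / (1.0 + np.exp(-slope * (voltage - v_half)))

def stochastic_fire(voltage, rng=None, **kw):
    """Single Bernoulli spike draw at the voltage-dependent probability."""
    rng = np.random.default_rng(rng)
    return int(rng.random() < firing_probability(voltage, **kw))
```

Well below `v_half` the neuron almost never fires, well above it almost always; in between, firing is genuinely stochastic, which is the tunable stochasticity exploited for probabilistic networks.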
  4. In biological brains, recurrent connections play a crucial role in cortical computation, modulation of network dynamics, and communication. However, in recurrent spiking neural networks (RSNNs), recurrence is mostly constructed from random connections. How excitatory and inhibitory recurrent connections affect network responses, and what kinds of connectivity benefit learning performance, remain unclear. In this work, we propose a novel recurrent structure called the Laterally-Inhibited Self-Recurrent Unit (LISR), which consists of one excitatory neuron with a self-recurrent connection wired together with an inhibitory neuron through excitatory and inhibitory synapses. The self-recurrent connection of the excitatory neuron mitigates the information loss caused by the firing-and-resetting mechanism and maintains long-term neuronal memory. The lateral inhibition from the inhibitory neuron to the corresponding excitatory neuron, on the one hand, adjusts the firing activity of the latter; on the other hand, it acts as a forget gate that clears the excitatory neuron's memory. On speech and image datasets commonly used in neuromorphic computing, RSNNs based on the proposed LISR improve performance by up to 9.26% over feedforward SNNs trained by a state-of-the-art backpropagation method with similar computational costs.
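The LISR update can be sketched as a two-neuron discrete-time system: a leaky excitatory unit whose own spike is partly re-injected through a self-recurrent weight (mitigating the fire-and-reset information loss), paired with an inhibitory unit that is driven by the excitatory spike and, when it fires, damps and clears the excitatory membrane. The weights, leak factor, and reset rule below are illustrative assumptions, not the trained parameters or exact neuron model from the paper.

```python
def lisr_step(state, inp, w_self=0.5, w_ei=0.8, w_ie=0.9,
              leak=0.9, threshold=1.0):
    """One discrete-time update of a toy LISR pair (all constants are
    illustrative). Returns the new (excitatory, inhibitory) membrane
    state and the excitatory spike emitted this step."""
    x_e, x_i = state
    s_e = 1.0 if x_e >= threshold else 0.0   # excitatory spike
    s_i = 1.0 if x_i >= threshold else 0.0   # inhibitory spike
    # excitatory membrane: leak (with reset on spike), external input,
    # self-recurrent re-injection of its own spike, lateral inhibition
    x_e = leak * x_e * (1 - s_e) + inp + w_self * s_e - w_ie * s_i
    # inhibitory membrane: leak (with reset on spike), driven by the
    # excitatory spike through the E->I synapse
    x_i = leak * x_i * (1 - s_i) + w_ei * s_e
    return (x_e, x_i), s_e
```

Driving the pair with a constant input produces rhythmic firing rather than runaway activity: the self-recurrence sustains the excitatory neuron while the inhibitory partner periodically resets it, acting as the forget gate described above.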
  5. Due to their non-volatility and intrinsic current integration capabilities, spintronic devices that rely on domain wall (DW) motion through a free ferromagnetic track have garnered significant interest in the field of neuromorphic computing. Although a number of such devices have already been proposed, they require the use of external circuitry to implement several important neuronal behaviors. As such, they are likely to result in either a decrease in energy efficiency, an increase in fabrication complexity, or even both. To resolve this issue, we have proposed three individual neurons that are capable of performing these functionalities without the use of any external circuitry. To implement leaking, the first neuron uses a dipolar coupling field, the second uses an anisotropy gradient, and the third uses shape variations of the DW track. 