Title: Intrinsic Lateral Inhibition Facilitates Winner-Take-All in Domain Wall Racetrack Arrays for Neuromorphic Computing
Neuromorphic computing is a promising candidate for beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. Lateral inhibition and winner-take-all (WTA) behavior play a crucial role in neuronal competition in the nervous system as well as in neuromorphic hardware. The domain wall-magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. In this paper we show that lateral inhibition parameters modulate the neuron firing statistics in a DW-MTJ neuron array, thus emulating soft WTA behavior and firing group selection.
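A minimal behavioral sketch of this mechanism is given below: an array of integrating neurons whose normalized "DW positions" are pushed forward by their inputs and pushed back by neighbors whose walls are ahead, with a single lateral-inhibition strength parameter swept over three values. It is not a micromagnetic model of the device; the update rule, constants, and array size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def run_array(n_neurons=10, inhibition=0.0, t_steps=500):
        """Count how many neurons fire when each neuron's 'DW position' x is pushed
        forward by its input and pushed back by neighbours whose walls are ahead."""
        x = np.zeros(n_neurons)                          # normalized DW positions in [0, 1]
        drive = 0.004 + 0.002 * rng.random(n_neurons)    # per-neuron input strength
        fired = np.zeros(n_neurons, dtype=bool)
        for _ in range(t_steps):
            x += drive + 0.001 * rng.standard_normal(n_neurons)   # integration + noise
            ahead = np.clip(x[None, :] - x[:, None], 0.0, None)   # ahead[i, j] = how far j leads i
            x -= inhibition * ahead.sum(axis=1)                   # repulsion from leading walls
            x = np.clip(x, 0.0, 1.0)
            fired |= x >= 1.0                            # read-out: wall reached the far end
        return fired.sum()

    for g in (0.0, 0.005, 0.02):
        sizes = [run_array(inhibition=g) for _ in range(20)]
        print(f"inhibition = {g:.3f}: mean firing-group size = {np.mean(sizes):.1f}")

With these illustrative constants, raising the inhibition parameter shrinks the mean firing-group size from the whole array toward a few winners, the soft-WTA trend studied in the device array.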
Award ID(s):
1940788
NSF-PAR ID:
10400582
Author(s) / Creator(s):
; ; ; ; ; ;
Date Published:
Journal Name:
2022 IEEE International Symposium on Circuits and Systems (ISCAS)
Page Range / eLocation ID:
316 to 320
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Winner-take-all (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model wherein n randomly generated input spike trains compete with each other based on their underlying firing rates and k winners are supposed to be selected. We slot the time evenly, with each time slot of length 1 ms, and model the n input spike trains as n independent Bernoulli processes. We analytically characterize the minimum waiting time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the waiting time. We show that to guarantee a (minimax) decision error ≤ δ (where δ ∈ (0,1)), the waiting time of any WTA circuit is at least ((1 − δ) log(k(n − k) + 1) − 1) T_R, where R ⊆ (0,1) is a finite set of rates and T_R is a difficulty parameter of a WTA task with respect to the set R for independent input spike trains. Additionally, T_R is independent of δ, n, and k. We then design a simple WTA circuit whose waiting time is O((log(1/δ) + log(k(n − k))) T_R), provided that the local memory of each output neuron is sufficiently long. It turns out that for any fixed δ, this decision time is order-optimal (i.e., it matches the above lower bound up to a multiplicative constant factor) in terms of its scaling in n, k, and T_R.
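    As an illustration of this setting (not the paper's circuit or its constants), the sketch below models the n input spike trains as independent Bernoulli processes over 1 ms slots, reports the k largest spike counts after waiting T slots, and estimates the success probability by Monte Carlo; the rate set and waiting times are made-up values.

        import numpy as np

        rng = np.random.default_rng(1)

        def success_probability(rates, k, T, trials=2000):
            """P(the reported top-k are exactly the k highest-rate inputs)."""
            rates = np.asarray(rates)
            true_winners = set(np.argsort(rates)[-k:])
            hits = 0
            for _ in range(trials):
                counts = rng.binomial(T, rates)            # spike counts after T one-ms slots
                reported = set(np.argsort(counts)[-k:])    # ties broken arbitrarily by argsort
                hits += reported == true_winners
            return hits / trials

        rates = [0.30] * 6 + [0.45, 0.45]                  # n = 8 inputs, k = 2 high-rate winners
        for T in (50, 200, 800):
            p = success_probability(rates, k=2, T=T)
            print(f"T = {T:3d} ms  ->  estimated success probability {p:.2f}")

    Longer waiting times drive the empirical decision error δ down, mirroring the trade-off that the lower bound above quantifies.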
  2. Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model wherein n randomly generated input spike trains compete with each other based on their underlying firing rates, and k winners are supposed to be selected. We slot the time evenly, with each time slot of length 1 ms, and model the n input spike trains as n independent Bernoulli processes. We analytically characterize the minimum waiting time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the decision time. We show that to guarantee a (minimax) decision error ≤ δ (where δ ∈ (0,1)), the waiting time of any WTA circuit is at least ((1 − δ) log(k(n − k) + 1) − 1) T_R, where R ⊆ (0,1) is a finite set of rates, and T_R is a difficulty parameter of a WTA task with respect to the set R for independent input spike trains. Additionally, T_R is independent of δ, n, and k. We then design a simple WTA circuit whose waiting time is O((log(1/δ) + log(k(n − k))) T_R), provided that the local memory of each output neuron is sufficiently long. It turns out that for any fixed δ, this decision time is order-optimal (i.e., it matches the above lower bound up to a multiplicative constant factor) in terms of its scaling in n, k, and T_R.
  3. The development of an efficient artificial neuron is impeded by the use of external CMOS circuits to perform leaking and lateral inhibition. The proposed leaky integrate-and-fire neuron based on the three-terminal magnetic tunnel junction (3T-MTJ) performs integration by pushing its domain wall (DW) with spin-transfer or spin-orbit torque. The leaking capability is achieved by pushing the neurons' DWs in the direction opposite of integration, using a stray field from a hard ferromagnet or a non-uniform energy landscape resulting from shape or anisotropy variation. Firing is performed by the MTJ stack. Finally, analog lateral inhibition is achieved by repulsive dipolar-field coupling between neurons. An integrating neuron thus pushes slower neighboring neurons' DWs in the direction opposite of integration. Applying this lateral inhibition to a ten-neuron output layer within a neuromorphic crossbar structure enables the identification of handwritten digits with 94% accuracy.
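    As a behavioral illustration of the integrate / leak / fire cycle described above (not the device physics), the sketch below treats the DW position as a state variable that integrates the input drive, drifts backward by a constant leak, and fires and resets at the end of the track; the class name, leak rate, and drive values are assumptions made for the example.

        from dataclasses import dataclass

        @dataclass
        class DWLIFNeuron:
            pos: float = 0.0         # normalized DW position along the track
            leak: float = 0.002      # backward drift per step (stray field / energy landscape)
            threshold: float = 1.0   # position at which the MTJ read-out switches

            def step(self, drive: float, inhibition: float = 0.0) -> bool:
                """One time step: integrate the input, apply the leak and any
                lateral-inhibition push, fire and reset if the wall reaches the end."""
                self.pos = max(self.pos + drive - self.leak - inhibition, 0.0)
                if self.pos >= self.threshold:
                    self.pos = 0.0
                    return True
                return False

        strong, weak = DWLIFNeuron(), DWLIFNeuron()
        print("spikes, drive above leak:", sum(strong.step(0.010) for _ in range(1000)))
        print("spikes, drive below leak:", sum(weak.step(0.001) for _ in range(1000)))

    Lateral inhibition enters as the extra backward term in step(), the push that a faster-integrating neighbor would supply through its dipolar field.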
  4. In biological brains, recurrent connections play a crucial role in cortical computation, modulation of network dynamics, and communication. However, in recurrent spiking neural networks (SNNs), recurrence is mostly constructed from random connections. How excitatory and inhibitory recurrent connections affect network responses, and what kinds of connectivity benefit learning performance, remain unclear. In this work, we propose a novel recurrent structure called the Laterally-Inhibited Self-Recurrent Unit (LISR), which consists of one excitatory neuron with a self-recurrent connection wired together with an inhibitory neuron through excitatory and inhibitory synapses. The self-recurrent connection of the excitatory neuron mitigates the information loss caused by the firing-and-resetting mechanism and maintains long-term neuronal memory. The lateral inhibition from the inhibitory neuron to the corresponding excitatory neuron, on the one hand, adjusts the firing activity of the latter; on the other hand, it acts as a forget gate to clear the memory of the excitatory neuron. On speech and image datasets commonly used in neuromorphic computing, RSNNs based on the proposed LISR improve performance by up to 9.26% over feedforward SNNs trained by a state-of-the-art backpropagation method with similar computational costs.
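    A minimal discrete-time sketch of that pairing is given below: one excitatory LIF unit whose own previous spike feeds back through a self-recurrent weight, and an inhibitory partner that is driven by the excitatory unit's spikes and feeds back subtractive inhibition, playing the forget-gate role described above. The weights, leak, and input statistics are illustrative assumptions, not the paper's trained model.

        import numpy as np

        rng = np.random.default_rng(2)

        def run_lisr(inputs, w_self=0.5, w_ei=0.6, w_ie=0.8, leak=0.9, threshold=1.0):
            """One LISR pair: excitatory unit with self-recurrence, inhibitory partner."""
            v_e = v_i = 0.0
            s_e = s_i = 0.0            # spikes emitted on the previous step
            out = []
            for x in inputs:
                # excitatory unit: external input, plus its own previous spike through the
                # self-recurrent weight, minus lateral inhibition from its partner
                v_e = leak * v_e + x + w_self * s_e - w_ie * s_i
                # inhibitory unit is driven by the excitatory unit's spikes
                v_i = leak * v_i + w_ei * s_e
                s_e, s_i = float(v_e >= threshold), float(v_i >= threshold)
                if s_e:
                    v_e = 0.0          # fire-and-reset; the self-recurrent term restores part of it
                if s_i:
                    v_i = 0.0
                out.append(s_e)
            return np.array(out)

        inputs = (rng.random(200) < 0.3) * 0.8             # sparse input current
        print(f"output spike rate: {run_lisr(inputs).mean():.2f}")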
  5. An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes ∼ N log(N) time for N noisy candidate options) by a factor of N, the benchmark for parallel computation. Biologically plausible architectures for this task are winner-take-all (WTA) networks, where individual neurons inhibit each other so only those with the largest input remain active. We show that conventional WTA networks fail the parallelism benchmark and, worse, in the presence of noise, altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as N varies, the nWTA network achieves the parallelism benchmark. The network reproduces experimentally observed phenomena like Hick's law without needing an additional readout stage or adaptive N-dependent thresholds. Our work bridges scales by linking cellular nonlinearities to circuit-level decision-making, establishes that distributed computation saturating the parallelism benchmark is possible in networks of noisy, finite-memory neurons, and shows that Hick's law may be a symptom of near-optimal parallel decision-making with noisy input.
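    The sketch below is a toy rate model of that mechanism (not the paper's equations or parameter regime): units inhibit one another, but only units whose rate exceeds a second threshold theta contribute inhibition, and setting theta = 0 recovers a conventional WTA. The gains, thresholds, and input vector are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def nwta(inputs, theta=0.3, beta=2.0, dt=0.05, noise=0.01, steps=2000):
            """Leaky rate dynamics with thresholded mutual inhibition (theta = 0: plain WTA)."""
            r = np.zeros(len(inputs))
            for _ in range(steps):
                active = np.where(r > theta, r, 0.0)            # second nonlinearity
                inhibition = beta * (active.sum() - active)     # no self-inhibition
                drive = inputs - inhibition + noise * rng.standard_normal(len(inputs))
                r = np.clip(r + dt * (drive - r), 0.0, None)    # leaky integration of the drive
            return r

        inputs = np.array([1.0] + [0.7] * 9)                    # one best option among N = 10
        rates = nwta(inputs)
        print("winner:", int(np.argmax(rates)), "final rates:", np.round(rates, 2))

    With these toy values, the unit with the largest input crosses theta first, shuts the others out of the inhibition pool, and settles near its input, while the weakly active units never contribute inhibition.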