The development of an efficient artificial neuron is impeded by the use of external CMOS circuits to perform leaking and lateral inhibition. The proposed leaky integrate-and-fire neuron based on the three-terminal magnetic tunnel junction (3T-MTJ) performs integration by pushing its domain wall (DW) with spin-transfer or spin-orbit torque. Leaking is achieved by pushing the neuron's DW in the direction opposite to integration, using either the stray field from a hard ferromagnet or a non-uniform energy landscape resulting from shape or anisotropy variation. Firing is performed by the MTJ stack. Finally, analog lateral inhibition is achieved by repulsive dipolar-field coupling between neurons: an integrating neuron pushes slower neighboring neurons' DWs in the direction opposite to integration. Applying this lateral inhibition to a ten-neuron output layer within a neuromorphic crossbar structure enables the identification of handwritten digits with 94% accuracy.
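A minimal behavioral sketch of this mechanism may help; the model below is not the authors' micromagnetic simulation, and the track length, leak rate, and coupling constant are invented illustration values.

```python
import numpy as np

# Behavioral sketch of a DW-MTJ leaky integrate-and-fire neuron array.
# N neurons; each holds a normalized domain-wall position along its track.
N, FIRE_AT, LEAK, K_DIPOLE = 10, 1.0, 0.02, 0.05   # hypothetical values

pos = np.zeros(N)   # DW position of each neuron: 0 = reset, FIRE_AT = fire

def step(drive):
    """One timestep: integrate, leak, laterally inhibit, then fire/reset."""
    global pos
    pos += drive                        # spin-torque pushes each DW forward
    pos -= LEAK                         # stray-field / energy-landscape leak
    # Dipolar repulsion: the leading DW pushes slower neighbors' DWs backward,
    # in proportion to how far behind they are (the leader itself is unmoved).
    pos -= K_DIPOLE * (pos.max() - pos)
    pos = pos.clip(0.0, FIRE_AT)
    fired = pos >= FIRE_AT              # the MTJ stack reads out firing
    pos[fired] = 0.0                    # firing resets the DW to the track start
    return fired
```

Calling step with a per-neuron drive vector returns a boolean firing mask; the inhibition term retards every neuron except the current leader, which is the winner-take-all tendency the abstract describes.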
Intrinsic Lateral Inhibition Facilitates Winner-Take-All in Domain Wall Racetrack Arrays for Neuromorphic Computing
Neuromorphic computing is a promising candidate for beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. Lateral inhibition and winner-take-all (WTA) behavior play a crucial role in neuronal competition in the nervous system as well as in neuromorphic hardware. The domain wall-magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. In this paper we show that lateral inhibition parameters modulate the neuron firing statistics in a DW-MTJ neuron array, thus emulating soft WTA and firing-group selection.
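To illustrate the claimed parameter dependence, the self-contained sketch below sweeps a hypothetical inhibition-strength knob and counts how many neurons in a small array ever fire; all constants are invented, not fitted device quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def winners_ever_fired(k_inhibit, n=10, steps=500, leak=0.02, thresh=1.0):
    """Count how many DW-MTJ-like neurons ever fire at a given
    (hypothetical) lateral-inhibition strength."""
    pos = np.zeros(n)                    # normalized DW positions
    fired_ever = np.zeros(n, dtype=bool)
    for _ in range(steps):
        pos += rng.normal(0.03, 0.01, n).clip(min=0.0)  # noisy input drive
        pos -= leak                                     # leak term
        pos -= k_inhibit * (pos.max() - pos)            # leader inhibits laggards
        pos = pos.clip(0.0, thresh)
        fired = pos >= thresh
        fired_ever |= fired
        pos[fired] = 0.0                                # reset on firing
    return int(fired_ever.sum())

for k in (0.0, 0.05, 0.2):   # weak inhibition -> many winners; strong -> few
    print(f"k_inhibit={k}: {winners_ever_fired(k)} of 10 neurons fired")
```

Sweeping the knob from zero to strong coupling moves the array from independent firing toward hard WTA, with soft WTA (a small firing group) in between.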
- Award ID(s):
- 1940788
- PAR ID:
- 10400582
- Date Published:
- Journal Name:
- 2022 IEEE International Symposium on Circuits and Systems (ISCAS)
- Page Range / eLocation ID:
- 316 to 320
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model wherein n randomly generated input spike trains compete with each other based on their underlying firing rates, and k winners are supposed to be selected. We slot time evenly, with each time slot of length 1 ms, and model the n input spike trains as n independent Bernoulli processes. We analytically characterize the minimum waiting time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the decision time. We show that to guarantee a (minimax) decision error ≤ δ (where δ ∈ (0,1)), the waiting time of any WTA circuit is at least ((1 − δ) log(k(n − k) + 1) − 1) T_R, where R ⊆ (0,1) is a finite set of rates and T_R is a difficulty parameter of a WTA task with respect to the set R for independent input spike trains. Additionally, T_R is independent of δ, n, and k. We then design a simple WTA circuit whose waiting time is O((log(1/δ) + log k(n − k)) T_R), provided that the local memory of each output neuron is sufficiently long. It turns out that for any fixed δ, this decision time is order-optimal (i.e., it matches the above lower bound up to a multiplicative constant factor) in terms of its scaling in n, k, and T_R. (A Monte Carlo sketch of this model appears after this list.)
-
Drouhin, Henri-Jean M.; Wegrowe, Jean-Eric; Razeghi, Manijeh (Ed.) Neuromorphic computing captures the quintessential neural behaviors of the brain and is a promising candidate for beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. The neuronal lateral inhibition feature, closely associated with the biological receptive field, is crucial to neuronal competition in the nervous system as well as in its neuromorphic hardware counterpart. The domain wall-magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. This work discusses the lateral inhibition mechanism of the DW-MTJ neuron and shows by micromagnetic simulation that lateral inhibition is efficiently enhanced by the Dzyaloshinskii-Moriya interaction (DMI). (The DMI energy term is written out after this list.)
-
In biological brains, recurrent connections play a crucial role in cortical computation, modulation of network dynamics, and communication. However, in recurrent spiking neural networks (SNNs), recurrence is mostly constructed from random connections. How excitatory and inhibitory recurrent connections affect network responses, and what kinds of connectivity benefit learning performance, remain obscure. In this work, we propose a novel recurrent structure called the Laterally-Inhibited Self-Recurrent Unit (LISR), which consists of one excitatory neuron with a self-recurrent connection, wired together with an inhibitory neuron through excitatory and inhibitory synapses. The self-recurrent connection of the excitatory neuron mitigates the information loss caused by the firing-and-resetting mechanism and maintains long-term neuronal memory. The lateral inhibition from the inhibitory neuron to the corresponding excitatory neuron, on the one hand, adjusts the firing activity of the latter; on the other hand, it acts as a forget gate that clears the memory of the excitatory neuron. On speech and image datasets commonly used in neuromorphic computing, RSNNs based on the proposed LISR improve performance significantly, by up to 9.26%, over feedforward SNNs trained by a state-of-the-art backpropagation method with similar computational costs. (A minimal LISR sketch appears after this list.)
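For the k-WTA item above, a Monte Carlo sketch of the analyzed model is straightforward: n independent Bernoulli spike trains observed over 1 ms slots, with the k winners picked by spike count. The rates, horizons, and trial count below are arbitrary illustration values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def kwta_success_rate(rates, k, t_slots, trials=2000):
    """Empirical probability that counting spikes for t_slots (1 ms each)
    correctly identifies the k highest-rate Bernoulli spike trains."""
    rates = np.asarray(rates)
    true_winners = set(np.argsort(rates)[-k:])
    hits = 0
    for _ in range(trials):
        counts = rng.binomial(t_slots, rates)   # total spikes per train
        picked = set(np.argsort(counts)[-k:])   # top-k by observed count
        hits += picked == true_winners
    return hits / trials

# Example: n = 8 trains with rates from a finite set R = {0.1, 0.3}, k = 2.
rates = [0.1] * 6 + [0.3] * 2
for t in (10, 50, 200):
    print(f"{t:4d} slots -> success rate {kwta_success_rate(rates, 2, t):.3f}")
```

Longer waiting times raise the empirical success probability, which is the time-accuracy trade-off that the lower bound ((1 − δ) log(k(n − k) + 1) − 1) T_R quantifies.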
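The DW-MTJ/DMI item attributes enhanced lateral inhibition to the Dzyaloshinskii-Moriya interaction. For reference, the standard interfacial DMI energy density that enters such micromagnetic simulations is written below (film normal along z); this is textbook background, and the paper's specific geometry and D values are not reproduced here.

```latex
% Interfacial DMI energy density, with unit magnetization \mathbf{m} and
% DMI constant D (J/m^2); this term favors chiral Neel-type domain walls.
\[
  \varepsilon_{\mathrm{DMI}}
    = D \left[ m_z \,(\nabla \cdot \mathbf{m}) - (\mathbf{m} \cdot \nabla)\, m_z \right]
\]
```

By fixing the chirality of Néel-type walls, this term alters each wall's structure and stray-field profile, which plausibly strengthens the dipolar coupling through which the inhibition acts.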
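Finally, for the LISR item, here is a minimal discrete-time sketch of the described unit; the leak, threshold, and synaptic weights are invented, and the paper's actual neuron model and training setup are not reproduced.

```python
# Laterally-Inhibited Self-Recurrent (LISR) unit: one excitatory LIF neuron
# with a self-recurrent synapse, paired with an inhibitory neuron that both
# damps the excitatory neuron and "forgets" its membrane state.
DECAY, THRESH = 0.9, 1.0
W_SELF, W_EI, W_IE = 0.5, 0.8, 0.6   # hypothetical synaptic weights

v_e = v_i = 0.0   # membrane potentials (excitatory, inhibitory)
s_e = s_i = 0.0   # previous-step spikes (0.0 or 1.0)

def lisr_step(x):
    """One timestep of a single LISR unit driven by input current x."""
    global v_e, v_i, s_e, s_i
    # Self-recurrence feeds the excitatory neuron's own last spike back in,
    # preserving state otherwise lost to the firing-and-resetting mechanism.
    v_e = DECAY * v_e + x + W_SELF * s_e - W_IE * s_i
    s_e = float(v_e >= THRESH)
    if s_e:
        v_e = 0.0                     # fire and reset
    # The inhibitory neuron is driven by the excitatory spike; when it fires,
    # it also clears the excitatory membrane, acting as a forget gate.
    v_i = DECAY * v_i + W_EI * s_e
    s_i = float(v_i >= THRESH)
    if s_i:
        v_i = 0.0
        v_e = 0.0
    return s_e

# Example: constant drive produces spiking that is periodically gated off.
print([int(lisr_step(0.3)) for _ in range(20)])
```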