Title: CMOS-Free Magnetic Domain Wall Leaky Integrate-and-Fire Neurons with Intrinsic Lateral Inhibition
Spintronic devices, especially those based on the motion of a domain wall (DW) through a ferromagnetic track, have attracted significant interest in neuromorphic computing because of their non-volatility and intrinsic current-integration capabilities. Many spintronic neurons based on this technology have been proposed, but they require external circuitry or additional device layers to implement other important neuronal behaviors, which increases fabrication complexity and/or energy consumption. In this work, we discuss three neurons that implement these functions without the use of additional circuitry or material layers.
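The leaky integrate-and-fire behavior described above can be sketched in a toy model: the DW position stands in for the membrane potential, input current pushes the wall along the track, a leak term drifts it back, and reaching the end of the track emits a spike and resets the wall. All numbers below (leak rate, gain, threshold, drive current) are illustrative assumptions, not device parameters from the paper.

```python
def lif_step(x, i_in, leak=0.05, gain=1.0, x_th=1.0):
    """One timestep of a toy leaky integrate-and-fire neuron.

    x    : normalized DW position (membrane-potential analog), 0..x_th
    i_in : input current this step (arbitrary units)
    Returns (new_x, spiked).
    """
    x = x + gain * i_in - leak * x   # integrate input, leak toward rest
    if x >= x_th:                    # DW reaches track end -> spike
        return 0.0, True             # reset: wall returns to track start
    return max(x, 0.0), False

# Drive the neuron with a constant current and count output spikes.
x, spikes = 0.0, 0
for _ in range(200):
    x, s = lif_step(x, i_in=0.1)
    spikes += s
```

With this drive the steady-state position (gain/leak = 2.0) sits above threshold, so the neuron spikes periodically; lowering the current below leak/gain would make it leak away without ever firing.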
Award ID(s):
1910800
NSF-PAR ID:
10231016
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Symposium on Circuits and Systems
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Due to their non-volatility and intrinsic current-integration capabilities, spintronic devices that rely on domain wall (DW) motion through a free ferromagnetic track have garnered significant interest in the field of neuromorphic computing. Although a number of such devices have already been proposed, they require external circuitry to implement several important neuronal behaviors, which reduces energy efficiency, increases fabrication complexity, or both. To resolve this issue, we propose three neurons that perform these functionalities without any external circuitry. To implement leaking, the first neuron uses a dipolar coupling field, the second an anisotropy gradient, and the third shape variations of the DW track.
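A rough way to compare the three leaking mechanisms is as different restoring-drift profiles acting on the wall position when no input current flows. The magnitudes below are hypothetical, chosen only to illustrate that a dipolar coupling field or an anisotropy gradient gives a roughly position-independent drift, while a shaped (widening) track gives a drift that scales with position.

```python
def leak_velocity(x, mechanism):
    """Drift of the DW back toward the track start with no input.

    Illustrative leak profiles (all magnitudes are hypothetical):
      'dipolar'    : restoring dipolar field, ~constant drift
      'anisotropy' : force from a linear anisotropy gradient, ~constant
      'shape'      : widening track, restoring force grows with x
    """
    if mechanism == "dipolar":
        return -0.04
    if mechanism == "anisotropy":
        return -0.05
    if mechanism == "shape":
        return -0.08 * x
    raise ValueError(mechanism)

def relax(x0, mechanism, steps=50):
    """Let the DW position decay from x0 with no input current."""
    x = x0
    for _ in range(steps):
        x = max(0.0, x + leak_velocity(x, mechanism))
    return x
```

Under the constant-drift profiles the wall reaches the track start in finite time, whereas the position-dependent profile decays geometrically and only approaches it.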
  2. The spintronic stochastic spiking neuron (S3N) developed herein realizes the biologically mimetic stochastic spiking characteristics observed in in vivo cortical neurons, while operating several orders of magnitude more rapidly and exhibiting a favorable energy profile. This work leverages a novel probabilistic spintronic switching element that provides thermally driven, current-controlled tunable stochasticity in a compact, low-energy, and high-speed package. Simulation program with integrated circuit emphasis (SPICE) results indicate that the equivalent of 1 second of in vivo neuronal spiking characteristics can be generated on the order of nanoseconds, making extremely rapid emulation of in vivo neuronal behaviors feasible for future statistical models of cortical information processing. The results also indicate that the S3N can generate spikes on the order of ten picoseconds while dissipating only 0.6–9.6 μW, depending on the spiking rate. Additionally, the work demonstrates that an S3N can implement perceptron functionality, such as AND-gate- and OR-gate-based logic processing, and outlines future extensions to more advanced stochastic neuromorphic architectures.
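The current-controlled tunable stochasticity can be sketched as a sigmoidal switching probability: a larger control current makes the element more likely to emit a spike in each sampling window, so the mean spiking rate is tuned by the bias. The sigmoid gain and the bias currents below are assumptions for illustration, not measured S3N parameters.

```python
import math
import random

def spike_prob(i_ctrl, beta=6.0):
    """Illustrative sigmoidal switching probability vs. control current.

    beta (gain) is a hypothetical fit parameter, not a device value.
    """
    return 1.0 / (1.0 + math.exp(-beta * i_ctrl))

def spike_train(i_ctrl, n, seed=0):
    """Sample n stochastic spike/no-spike windows at a fixed bias."""
    rng = random.Random(seed)
    p = spike_prob(i_ctrl)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# A negative bias yields sparse spiking; a positive bias, dense spiking.
low = sum(spike_train(-0.5, 2000))
high = sum(spike_train(+0.5, 2000, seed=1))
```

Chaining such elements behind a weighted sum of inputs is what gives the perceptron-style AND/OR behavior mentioned above: the summed current sets the switching probability of the output element.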
  3. Abstract

    We present a functionally relevant, quantitative characterization of the neural circuitry of Drosophila melanogaster at the mesoscopic level of neuron types as classified exclusively based on potential network connectivity. Starting from a large neuron-to-neuron brain-wide connectome of the fruit fly, we use stochastic block modeling and spectral graph clustering to group neurons together into a common “cell class” if they connect to neurons of other classes according to the same probability distributions. We then characterize the connectivity-based cell classes with standard neuronal biomarkers, including neurotransmitters, developmental birthtimes, morphological features, spatial embedding, and functional anatomy. Mutual information indicates that connectivity-based classification reveals aspects of neurons that are not adequately captured by traditional classification schemes. Next, using graph theoretic and random walk analyses to identify neuron classes as hubs, sources, or destinations, we detect pathways and patterns of directional connectivity that potentially underpin specific functional interactions in the Drosophila brain. We uncover a core of highly interconnected dopaminergic cell classes functioning as the backbone communication pathway for multisensory integration. Additional predicted pathways pertain to the facilitation of circadian rhythmic activity, spatial orientation, fight-or-flight response, and olfactory learning. Our analysis provides experimentally testable hypotheses critically deconstructing complex brain function from organized connectomic architecture.
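The core idea of grouping neurons by connection statistics can be illustrated on a toy planted-partition graph: neurons in the same class connect with one probability, neurons in different classes with another, and a spectral step on the graph Laplacian recovers the classes. The graph below is synthetic with hypothetical probabilities, not fruit-fly connectome data, and uses a plain two-block Fiedler-vector split rather than the paper's full stochastic block modeling pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Planted two-block "connectome": neurons in the same class share
# connection statistics (probabilities below are hypothetical).
n = 40
labels_true = np.array([0] * (n // 2) + [1] * (n // 2))
p_in, p_out = 0.6, 0.05
P = np.where(labels_true[:, None] == labels_true[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                       # undirected, no self-loops

# Spectral step: the sign pattern of the Fiedler vector (second-smallest
# Laplacian eigenvector) recovers the two classes when the blocks are
# well separated.
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)
labels_est = (vecs[:, 1] > 0).astype(int)

# Agreement with the planted classes, up to a global label swap.
agree = max(np.mean(labels_est == labels_true),
            np.mean(labels_est != labels_true))
```

With a strong in-class/out-of-class probability gap the recovery is essentially exact; as the gap shrinks, the spectral split degrades, which is why the paper validates its connectivity-based classes against independent biomarkers.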

     
  4. This work proposes a domain-informed neural network architecture for experimental particle physics, using particle interaction localization with the time-projection chamber (TPC) technology for dark matter research as an example application. A key feature of the signals generated within the TPC is that they allow localization of particle interactions through a process called reconstruction (i.e., inverse-problem regression). While multilayer perceptrons (MLPs) have emerged as a leading contender for reconstruction in TPCs, such a black-box approach does not reflect prior knowledge of the underlying scientific processes. This paper looks anew at neural network-based interaction localization and encodes prior detector knowledge, in terms of both signal characteristics and detector geometry, into the feature encoding and the output layers of a multilayer (deep) neural network. The resulting neural network, termed Domain-informed Neural Network (DiNN), limits the receptive fields of the neurons in the initial feature encoding layers in order to account for the spatially localized nature of the signals produced within the TPC. This aspect of the DiNN, which has similarities with the emerging area of graph neural networks in that the neurons in the initial layers only connect to a handful of neurons in their succeeding layer, significantly reduces the number of parameters in the network in comparison to an MLP. In addition, to account for the detector geometry, the output layers of the network are modified using two geometric transformations to ensure the DiNN produces localizations within the interior of the detector. The end result is a neural network architecture that has 60% fewer parameters than an MLP but still achieves similar localization performance, and that provides a path to future architectural developments with improved performance because of its ability to encode additional domain knowledge into the architecture.
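Two of the DiNN's ingredients can be sketched independently: (i) first-layer units that connect only to a small window of adjacent sensors, which cuts the parameter count relative to a fully connected layer, and (ii) a geometric transform that squashes any unbounded 2-D output into the interior of a circular detector face. The layer sizes, window width, and tanh squashing below are illustrative assumptions, not the paper's exact construction.

```python
import math

def local_connections(n_sensors, fan_in):
    """First-layer wiring with limited receptive fields: each hidden
    unit sees only `fan_in` adjacent sensors, versus an MLP unit that
    sees all sensors (window width is an assumption)."""
    return [list(range(i, i + fan_in)) for i in range(n_sensors - fan_in + 1)]

def to_detector_interior(u, v, radius=1.0):
    """Illustrative geometric output transform: radially squash an
    unbounded 2-D network output into the open disk of radius `radius`
    so every localization lands inside the detector face."""
    r = math.hypot(u, v)
    scale = math.tanh(r) / r if r > 0 else 1.0
    return radius * scale * u, radius * scale * v

# Parameter count: fully connected 16 -> 16 vs. 14 local units of fan-in 3.
params_mlp = 16 * 16
params_local = len(local_connections(16, 3)) * 3
```

Even in this tiny example the locally connected layer uses a small fraction of the MLP's weights, which is the mechanism behind the reported 60% parameter reduction at full scale.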
  5. Advances in machine intelligence have sparked interest in hardware accelerators to implement these algorithms, yet embedded electronics have stringent power, area, and speed requirements that may limit nonvolatile memory (NVM) integration. In this context, the development of fast nanomagnetic neural networks using minimal training data is attractive. Here, we extend an inference-only proposal that uses the intrinsic physics of domain-wall MTJ (DW-MTJ) neurons for online learning to implement fully unsupervised pattern recognition, using winner-take-all networks that contain either random or plastic synapses (weights). Meanwhile, a read-out layer trains in a supervised fashion. We find our proposed design can approach state-of-the-art success on the task relative to competing memristive neural network proposals, while eliminating much of the area and energy overhead that would typically be required to build the neuronal layers with CMOS devices.
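The winner-take-all inference step can be sketched as: each neuron accumulates its weighted input currents, and only the most-excited neuron fires, suppressing the others. The weight matrix below is a hypothetical hand-picked example, not trained DW-MTJ synapse values.

```python
def winner_take_all(currents, weights):
    """One inference step of a hard winner-take-all layer: each neuron
    integrates its weighted input, and only the neuron with the largest
    accumulated value 'fires', inhibiting the rest."""
    sums = [sum(w * c for w, c in zip(row, currents)) for row in weights]
    winner = max(range(len(sums)), key=sums.__getitem__)
    return [1 if i == winner else 0 for i in range(len(sums))]

# Hypothetical synaptic weights: each neuron prefers one input channel.
W = [[0.9, 0.1, 0.0],
     [0.0, 0.8, 0.2],
     [0.1, 0.0, 0.9]]
out = winner_take_all([1.0, 0.0, 0.0], W)
```

In the proposed design this inhibition comes from the devices' intrinsic physics rather than from CMOS comparator circuitry, which is where the area and energy savings arise; plastic synapses would additionally update the winning row's weights after each presentation.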