Title: Voltage-Controlled Energy-Efficient Domain Wall Synapses With Stochastic Distribution of Quantized Weights in the Presence of Thermal Noise and Edge Roughness
We propose energy-efficient voltage-induced strain control of a domain wall (DW) in a perpendicularly magnetized nanoscale racetrack on a piezoelectric substrate that can implement a multistate synapse for neuromorphic computing platforms. Strain generated in the piezoelectric is mechanically transferred to the racetrack and modulates the perpendicular magnetic anisotropy (PMA) in a system with significant interfacial Dzyaloshinskii-Moriya interaction (DMI). When different voltages are applied (i.e., different strains are generated) in conjunction with spin-orbit torque (SOT) from a fixed current flowing in the heavy-metal layer for a fixed time, DWs are translated over different distances and thus implement different synaptic weights. Using micromagnetic simulations, we show that five-state and three-state synapses can be implemented in a racetrack modeled with natural edge roughness and room-temperature thermal noise. These simulations reveal interesting DW dynamics arising from interaction with roughness-induced pinning sites; consequently, notches need not be fabricated to implement multistate nonvolatile synapses. Such a strain-controlled synapse has an energy consumption of ~1 fJ and could thus be very attractive for implementing energy-efficient quantized neural networks, which have recently been shown to achieve classification accuracy nearly equivalent to that of full-precision neural networks.
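
As a rough illustration of how such quantized yet stochastic weights could arise, the sketch below (a toy model, not the authors' micromagnetic simulation) maps a voltage-set strain to an assumed strain-dependent DW velocity, adds Gaussian thermal jitter, and snaps the final DW position to roughness-induced pinning sites at a fixed pitch. Every numerical value (velocity law, pulse length, pitch, jitter) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mapping: a larger voltage-set strain modulates the effective PMA, letting
# the SOT-driven DW travel farther during the fixed current pulse. All numbers
# (velocity law, pulse length, pitch, jitter) are illustrative assumptions.
def dw_final_position(strain_mpa, pulse_ns=5.0, jitter_nm=8.0, pitch_nm=40.0):
    velocity = 20.0 + 0.6 * strain_mpa       # nm/ns, assumed strain dependence
    x = velocity * pulse_ns                  # ideal travel during the pulse
    x += rng.normal(0.0, jitter_nm)          # room-temperature thermal noise
    return pitch_nm * round(x / pitch_nm)    # DW settles at a roughness pinning site

# Five voltage levels -> five nominal strains -> a five-state synapse whose
# weights are quantized but stochastically distributed over nearby pinning sites.
for strain in np.linspace(0.0, 40.0, 5):
    positions = [dw_final_position(strain) for _ in range(1000)]
    levels, counts = np.unique(positions, return_counts=True)
    print(f"strain {strain:4.0f} MPa:", dict(zip(levels.tolist(), counts.tolist())))
```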
Award ID(s):
1815033 1954589
NSF-PAR ID:
10300352
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Electron Devices
ISSN:
0018-9383
Page Range / eLocation ID:
1 to 9
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    In in-sensor image preprocessing, the sensed image undergoes low-level processing such as denoising at the sensor end, similar to the retina of the human eye. Optoelectronic synapse devices are potential contenders for this purpose and for subsequent applications in artificial neural networks (ANNs). Optoelectronic synapses can offer image-preprocessing functionality at the pixel itself, termed in-pixel computing. Denoising is an important problem in image preprocessing, and several approaches have been used to denoise input images. Most of those approaches require external circuitry, and the others are efficient only when the noisy pixels have significantly lower intensity than the actual pattern pixels. In this work, we present the innate ability of an optoelectronic synapse array to perform denoising at the pixel itself once it is trained to memorize an image. The synapses consist of phototransistors with a bilayer MoS2 channel and a p-Si/PtTe2 buried gate electrode. Our 7 × 7 array shows excellent robustness to noise due to the interplay between long-term potentiation and short-term potentiation. This bio-inspired strategy enables removal of noise with higher intensity than the memorized pattern, without any external circuitry. Moreover, because these synapses respond distinctively to wavelengths from 300 nm in the ultraviolet to 2 µm in the infrared, the pixel array also denoises mixed-color interference. The "self-denoising" capability of such an artificial visual array could eliminate the need for raw data transmission and thus reduce subsequent image-processing steps for supervised learning.
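
A minimal sketch of the denoising mechanism described above, assuming a fixed non-volatile (LTP) weight for the memorized pixels and a simple exponentially decaying volatile (STP) response for all illuminated pixels; the thresholds, amplitudes, and decay constant are invented for illustration and do not come from the measured devices.

```python
import numpy as np

rng = np.random.default_rng(0)

# 7x7 memorized pattern (1 = trained pixel); all values are illustrative.
pattern = np.zeros((7, 7))
pattern[2:5, 2:5] = 1.0

ltp = 0.8 * pattern                                # non-volatile weight of trained pixels
noise = (rng.random((7, 7)) < 0.2).astype(float)   # random noisy pixels in the input
stp0 = 1.2 * np.clip(pattern + noise, 0.0, 1.0)    # volatile response; note the noise
                                                   # is brighter than the pattern itself
TAU_STP = 0.5                                      # s, assumed volatile decay constant

def readout(t):
    """Photoresponse t seconds after exposure: LTP persists, STP decays."""
    return ltp + stp0 * np.exp(-t / TAU_STP)

immediate = readout(0.0) > 0.9   # read right away: noise pixels still above threshold
delayed = readout(2.0) > 0.5     # read after the STP has decayed: pattern only
print("wrong pixels immediately :", int(np.sum(immediate != pattern.astype(bool))))
print("wrong pixels after decay :", int(np.sum(delayed != pattern.astype(bool))))
```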

     
  2. INTRODUCTION A brainwide, synaptic-resolution connectivity map—a connectome—is essential for understanding how the brain generates behavior. However, because of technological constraints, imaging entire brains with electron microscopy (EM) and reconstructing circuits from such datasets have been challenging. To date, complete connectomes have been mapped for only three organisms, each with several hundred brain neurons: the nematode C. elegans, the larva of the sea squirt Ciona intestinalis, and the larva of the marine annelid Platynereis dumerilii. Synapse-resolution circuit diagrams of larger brains, such as those of insects, fish, and mammals, have been approached by considering select subregions in isolation. However, neural computations span spatially dispersed but interconnected brain regions, and understanding any one computation requires the complete brain connectome with all its inputs and outputs.

    RATIONALE We therefore generated a connectome of an entire brain of a small insect, the larva of the fruit fly Drosophila melanogaster. This animal displays a rich behavioral repertoire, including learning, value computation, and action selection, and shares homologous brain structures with adult Drosophila and larger insects. Powerful genetic tools are available for selective manipulation or recording of individual neuron types. In this tractable model system, hypotheses about the functional roles of specific neurons and circuit motifs revealed by the connectome can therefore be readily tested.

    RESULTS The complete synaptic-resolution connectome of the Drosophila larval brain comprises 3016 neurons and 548,000 synapses. We performed a detailed analysis of the brain circuit architecture, including connection and neuron types, network hubs, and circuit motifs. Most of the brain's in-out hubs (73%) were postsynaptic to the learning center or presynaptic to the dopaminergic neurons that drive learning. We used graph spectral embedding to hierarchically cluster neurons based on synaptic connectivity into 93 neuron types, which were internally consistent based on other features, such as morphology and function. We developed an algorithm to track brainwide signal propagation across polysynaptic pathways and analyzed feedforward (from sensory to output) and feedback pathways, multisensory integration, and cross-hemisphere interactions. We found extensive multisensory integration throughout the brain and multiple interconnected pathways of varying depths from sensory neurons to output neurons, forming a distributed processing network. The brain had a highly recurrent architecture, with 41% of neurons receiving long-range recurrent input. However, recurrence was not evenly distributed and was especially high in areas implicated in learning and action selection. Dopaminergic neurons that drive learning are among the most recurrent neurons in the brain. Many contralateral neurons, which project across brain hemispheres, were in-out hubs and synapsed onto each other, facilitating extensive interhemispheric communication. We also analyzed interactions between the brain and nerve cord. We found that descending neurons targeted a small fraction of premotor elements that could play important roles in switching between locomotor states. A subset of descending neurons targeted low-order post-sensory interneurons, likely modulating sensory processing.
CONCLUSION The complete brain connectome of the Drosophila larva will be a lasting reference study, providing a basis for a multitude of theoretical and experimental studies of brain function. The approach and computational tools generated in this study will facilitate the analysis of future connectomes. Although the details of brain organization differ across the animal kingdom, many circuit architectures are conserved. As more brain connectomes of other organisms are mapped in the future, comparisons between them will reveal both common, and therefore potentially optimal, circuit architectures and the idiosyncratic ones that underlie behavioral differences between organisms. Some of the architectural features observed in the Drosophila larval brain, including multilayer shortcuts and prominent nested recurrent loops, are found in state-of-the-art artificial neural networks, where they can compensate for a lack of network depth and support arbitrary, task-dependent computations. Such features could therefore increase the brain's computational capacity, overcoming physiological constraints on the number of neurons. Future analysis of similarities and differences between brains and artificial neural networks may help in understanding brain computational principles and perhaps inspire new machine learning architectures. The connectome of the Drosophila larval brain: the morphologies of all brain neurons, reconstructed from a synapse-resolution EM volume, and the synaptic connectivity matrix of an entire brain. This connectivity information was used to hierarchically cluster all brain neurons into 93 cell types, which were internally consistent based on morphology and known function.
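
For readers unfamiliar with graph spectral embedding, the sketch below clusters a toy synaptic-connectivity matrix with planted neuron types; it is a generic spectral-embedding-plus-hierarchical-clustering recipe under assumed parameters, not the paper's actual pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)

# Toy directed synaptic-connectivity matrix: 60 neurons in 3 planted types, with
# denser within-type than between-type connections (all probabilities assumed).
n, k = 60, 3
labels_true = np.repeat(np.arange(k), n // k)
p = np.where(labels_true[:, None] == labels_true[None, :], 0.3, 0.03)
A = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(A, 0)

# Spectral embedding: leading singular vectors of the (symmetrized) adjacency
# give each neuron a low-dimensional coordinate based on who it connects to.
S = A + A.T
U, s, _ = np.linalg.svd(S)
embedding = U[:, :k] * s[:k]

# Hierarchical clustering in the embedded space recovers the planted types.
Z = linkage(embedding, method="ward")
labels_hat = fcluster(Z, t=k, criterion="maxclust")
for c in range(1, k + 1):
    print(f"cluster {c}: true types", np.unique(labels_true[labels_hat == c]))
```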
  3. Abstract

    In neuromorphic computing, artificial synapses provide a multi-weight (MW) conductance state that is set based on inputs from neurons, analogous to the brain. Herein, artificial synapses based on magnetic materials, using a magnetic tunnel junction (MTJ) and a magnetic domain wall (DW), are explored. By fabricating lithographic notches in a DW track underneath a single MTJ, 3–5 stable resistance states that can be repeatably controlled electrically using spin-orbit torque are achieved. The effect of geometry on synapse behavior is explored, showing that a trapezoidal device has asymmetric weight updates with high controllability, while a rectangular device is more stochastic but has stable resistance levels. The device data are input into neuromorphic computing simulators to show the usefulness of application-specific synaptic functions. Implementing an artificial neural network (NN) applied to streamed Fashion-MNIST data, the trapezoidal magnetic synapse can be used as a metaplastic function for efficient online learning. Implementing a convolutional NN for CIFAR-100 image recognition, the rectangular magnetic synapse achieves near-ideal inference accuracy, owing to the stability of its resistance levels. This work shows that MW magnetic synapses are a feasible technology for neuromorphic computing and provides design guidelines for emerging artificial synapse technologies.
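
The contrast between the two geometries can be caricatured with a toy state machine: each SOT pulse tries to move the DW one notch, and a direction-dependent failure probability stands in for stochastic depinning. The probabilities below are assumptions chosen only to mimic the qualitative asymmetric-versus-stochastic behavior, not fitted device data.

```python
import numpy as np

rng = np.random.default_rng(7)
N_STATES = 5   # the work reports 3-5 stable states; 5 is used here for illustration

def sot_pulse(state, direction, p_fail):
    """One SOT pulse tries to move the DW one notch; with probability p_fail
    the wall fails to depin and the resistance state is unchanged (toy model)."""
    if rng.random() < p_fail:
        return state
    return int(np.clip(state + direction, 0, N_STATES - 1))

# Assumed failure probabilities: trapezoidal -> asymmetric updates (potentiation
# easier than depression), rectangular -> symmetric but noisier updates between
# otherwise stable levels.
devices = {"trapezoidal": {+1: 0.05, -1: 0.35},
           "rectangular": {+1: 0.25, -1: 0.25}}

for name, p_fail in devices.items():
    state, trace = 0, [0]
    for direction in [+1] * 6 + [-1] * 6:   # 6 potentiating then 6 depressing pulses
        state = sot_pulse(state, direction, p_fail[direction])
        trace.append(state)
    print(f"{name:11s} state trace: {trace}")
```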

     
  4. Neuromorphic computing systems execute machine learning tasks designed with spiking neural networks. These systems are embracing non-volatile memory to implement high-density, low-energy synaptic storage. The elevated voltages and currents needed to operate non-volatile memories age the CMOS transistors in each neuron and synapse circuit in the hardware, drifting the transistors' parameters from their nominal values. If these circuits are used continuously for too long, the parameter drifts cannot be reversed, resulting in permanent degradation of circuit performance over time and eventually leading to hardware faults. Aggressive device scaling increases power density and temperature, which further accelerates the aging, challenging the reliable operation of neuromorphic systems. Existing reliability-oriented techniques periodically de-stress all neuron and synapse circuits in the hardware at fixed intervals, assuming worst-case operating conditions, without actually tracking their aging at run time. To de-stress these circuits, normal operation must be interrupted, which introduces latency in spike generation and propagation, impacting the inter-spike interval and hence performance (e.g., accuracy). We observe that, in contrast to long-term aging, which permanently damages the hardware, short-term aging in scaled CMOS transistors is mostly due to bias temperature instability. The latter is heavily workload-dependent and, more importantly, partially reversible. We propose a new architectural technique to mitigate aging-related reliability problems in neuromorphic systems by designing an intelligent run-time manager (NCRTM) that dynamically de-stresses neuron and synapse circuits in response to the short-term aging of their CMOS transistors during the execution of machine learning workloads, with the objective of meeting a reliability target. NCRTM de-stresses these circuits only when absolutely necessary, otherwise reducing the performance impact by scheduling de-stress operations off the critical path. We evaluate NCRTM with state-of-the-art machine learning workloads on neuromorphic hardware. Our results demonstrate that NCRTM significantly improves the reliability of neuromorphic hardware with marginal impact on performance.
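
A minimal sketch of the scheduling idea, assuming a normalized stress budget and a single recovery factor for the reversible (BTI-like) component; this is not the NCRTM implementation, just the decide-when-to-de-stress logic in miniature, with all thresholds and rates invented for illustration.

```python
# Toy run-time manager in the spirit of NCRTM: de-stress a circuit only when its
# estimated short-term (BTI-like, partially reversible) stress nears a reliability
# budget, preferring idle windows so spike traffic is not interrupted.
STRESS_BUDGET = 1.0   # normalized reliability target (assumed)
RECOVERY = 0.6        # fraction of short-term stress removed per de-stress (assumed)

class Circuit:
    def __init__(self, name):
        self.name = name
        self.stress = 0.0

    def run(self, activity):
        self.stress += 0.1 * activity    # workload-dependent stress accumulation

    def destress(self):
        self.stress *= 1.0 - RECOVERY    # partial recovery; costs latency if forced

def manager_step(circuit, activity, idle):
    circuit.run(activity)
    if circuit.stress >= STRESS_BUDGET:      # reliability target at risk:
        circuit.destress()                   # must interrupt (on the critical path)
        return "forced de-stress"
    if idle and circuit.stress >= 0.7 * STRESS_BUDGET:  # cheap: off the critical path
        circuit.destress()
        return "opportunistic de-stress"
    return "run"

neuron = Circuit("neuron0")
schedule = [(1.0, False)] * 8 + [(0.2, True)] * 2 + [(1.0, False)] * 4
for t, (activity, idle) in enumerate(schedule):
    print(t, f"stress={neuron.stress:.2f}", manager_step(neuron, activity, idle))
```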
  5. Hydrogen tunneling plays a critical role in many biologically and chemically important processes. The nuclear-electronic orbital multistate density functional theory (NEO-MSDFT) method was developed to describe hydrogen transfer systems. In this approach, the transferring proton is treated quantum mechanically on the same level as the electrons within multicomponent DFT, and a nonorthogonal configuration interaction scheme is used to produce delocalized vibronic states from localized vibronic states. The NEO-MSDFT method has been shown to provide accurate hydrogen tunneling splittings for fixed molecular systems. Herein, the NEO-MSDFT analytical gradients for both ground and excited vibronic states are derived and implemented. The analytical gradients and semi-numerical Hessians are used to optimize and characterize equilibrium and transition-state geometries and to generate minimum energy paths (MEPs) for proton transfer in the deprotonated acetylene dimer and in malonaldehyde. The barriers along the resulting MEPs are lower when the transferring proton is quantized because the NEO-MSDFT method inherently includes the zero-point energy of the transferring proton. Analysis of the proton densities along the MEPs illustrates that the proton density can exhibit symmetric or asymmetric bilobal character associated with symmetric or slightly asymmetric double-well potential energy surfaces and hydrogen tunneling. Analysis of the contributions to the intrinsic reaction coordinate reveals that changes in the C–O bond lengths drive proton transfer in malonaldehyde. This work provides the foundation for future reaction-path studies and direct nonadiabatic dynamics simulations of a wide range of hydrogen transfer reactions.
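
The zero-point-energy argument can be made concrete on a toy 1D symmetric double well standing in for the proton-transfer coordinate; this is not a NEO-MSDFT calculation, and the parameters, mass, and units are arbitrary illustrative choices.

```python
import numpy as np

# Toy 1D symmetric double well standing in for the proton-transfer coordinate:
# V(x) = a*x^4 - b*x^2, minima at +/- sqrt(b/2a), barrier top at x = 0.
a, b = 1.0, 2.0                             # arbitrary units (assumed)

def V(x):
    return a * x**4 - b * x**2

x_min = np.sqrt(b / (2.0 * a))
classical_barrier = V(0.0) - V(x_min)       # = b**2 / (4*a)

# Harmonic zero-point energy in each well: ZPE = (1/2)*hbar*omega,
# omega = sqrt(V''(x_min)/m). Mass and hbar are toy values, not real constants.
m, hbar = 10.0, 1.0
curvature = 12.0 * a * x_min**2 - 2.0 * b   # V''(x_min) = 4*b
zpe = 0.5 * hbar * np.sqrt(curvature / m)

# Quantizing the proton raises each well floor by its ZPE, so the effective
# barrier along the path is lower than the classical one, as the abstract notes.
print(f"classical barrier : {classical_barrier:.3f}")
print(f"well zero-point E : {zpe:.3f}")
print(f"effective barrier : {classical_barrier - zpe:.3f}")
```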

     