Physical Reservoir Computing (PRC) is an unconventional computing paradigm that exploits the nonlinear dynamics of reservoir blocks to perform temporal data classification and prediction tasks. Here, we show with simulations that patterned thin films hosting skyrmions can implement energy-efficient straintronic reservoir computing (RC) in the presence of room-temperature thermal perturbations. This RC block is based on strain-induced nonlinear breathing dynamics of skyrmions, which are coupled to each other through dipole and spin-wave interactions. The nonlinear and coupled magnetization dynamics were exploited to perform temporal data classification and prediction. Two performance metrics, namely Short-Term Memory (STM) and Parity Check (PC) capacity, are studied and shown to be promising (4.39 and 4.62, respectively); in addition, the system classifies sine and square waves with 100% accuracy. These results demonstrate the potential of such a skyrmion-based PRC. Furthermore, our study shows that nonlinear magnetization dynamics and interaction through spin-wave and dipole coupling have a strong influence on STM and PC capacity, thus explaining the role of physical interactions in a dynamical system in its ability to perform RC.
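The STM and PC capacity metrics above can be sketched for a generic reservoir. The following toy stand-in drives a random echo-state network rather than the skyrmion micromagnetic model, and all parameters (reservoir size, spectral radius, delay range, washout) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def capacity(states, targets):
    """Squared correlation between a target sequence and a linear readout
    fitted by least squares on the reservoir states."""
    X = np.hstack([states, np.ones((len(states), 1))])  # add bias column
    w = np.linalg.lstsq(X, targets, rcond=None)[0]
    c = np.corrcoef(X @ w, targets)[0, 1]
    return c * c

# Toy stand-in reservoir: a random echo-state network (not the skyrmion model).
N, T = 50, 2000
Win = rng.uniform(-1, 1, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1
u = rng.integers(0, 2, T).astype(float)        # random binary drive
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])
    states[t] = x

washout = 100
# STM: recall the input delayed by d; PC: reproduce the parity of the
# last d+1 inputs (one common convention); sum capacities over delays.
stm = sum(capacity(states[washout:], u[washout - d:T - d]) for d in range(1, 11))
pc = 0.0
for d in range(1, 11):
    parity = np.array([u[t - d:t + 1].sum() % 2 for t in range(washout, T)])
    pc += capacity(states[washout:], parity)
print(round(stm, 2), round(pc, 2))
```

The readout is linear, so the capacities isolate what the reservoir's own nonlinearity and memory contribute.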
Voltage-Controlled Energy-Efficient Domain Wall Synapses With Stochastic Distribution of Quantized Weights in the Presence of Thermal Noise and Edge Roughness
We propose energy-efficient voltage-induced strain control of a domain wall (DW) in a perpendicularly magnetized nanoscale racetrack on a piezoelectric substrate that can implement a multistate synapse for neuromorphic computing platforms. Here, strain generated in the piezoelectric is mechanically transferred to the racetrack and modulates the perpendicular magnetic anisotropy (PMA) in a system that has significant interfacial Dzyaloshinskii-Moriya interaction (DMI). When different voltages are applied (i.e., different strains are generated) in conjunction with spin-orbit torque (SOT) due to a fixed current flowing in the heavy-metal layer for a fixed time, DWs are translated to different distances and implement different synaptic weights. We have shown using micromagnetic simulations that five-state and three-state synapses can be implemented in a racetrack that is modeled with the inclusion of natural edge roughness and room-temperature thermal noise. These simulations show interesting DW dynamics due to interaction with roughness-induced pinning sites. Thus, notches need not be fabricated to implement multistate nonvolatile synapses. Such a strain-controlled synapse has an energy consumption of ~1 fJ and could thus be very attractive for implementing energy-efficient quantized neural networks, which have recently been shown to achieve classification accuracy nearly equivalent to that of full-precision neural networks.
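The multistate weights described above can be sketched as a simple quantizer: each strain voltage selects one of a few discrete DW positions, i.e., one of a few weight levels. This is a minimal sketch of the quantization step only; the level count and weight range are illustrative assumptions, and the DW physics itself is not modeled:

```python
import numpy as np

def quantize_weights(w, n_states=5, w_min=-1.0, w_max=1.0):
    """Snap continuous weights onto n_states evenly spaced levels,
    mimicking the discrete DW positions set by the strain voltage."""
    levels = np.linspace(w_min, w_max, n_states)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)  # nearest level
    return levels[idx]

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, 8)        # continuous target weights
wq = quantize_weights(w, n_states=5)
print(wq)
```

In the device, thermal noise and pinning-site interactions would make the level actually reached stochastic; here the mapping is deterministic for clarity.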
- PAR ID: 10300352
- Date Published:
- Journal Name: IEEE Transactions on Electron Devices
- ISSN: 0018-9383
- Page Range / eLocation ID: 1 to 9
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Neuronal synapses transmit electrochemical signals between cells through the coordinated action of presynaptic vesicles, ion channels, scaffolding and adapter proteins, and membrane receptors. In situ structural characterization of numerous synaptic proteins simultaneously through multiplexed imaging facilitates a bottom-up approach to synapse classification and phenotypic description. Objective automation of efficient and reliable synapse detection within these datasets is essential for the high-throughput investigation of synaptic features. Convolutional neural networks can solve this generalized problem of synapse detection; however, these architectures require large numbers of training examples to optimize their thousands of parameters. We propose DoGNet, a neural network architecture that closes the gap between classical computer-vision blob detectors, such as Difference-of-Gaussians (DoG) filters, and modern convolutional networks. DoGNet is optimized to analyze highly multiplexed microscopy data. Its small number of training parameters allows DoGNet to be trained with few examples, which facilitates its application to new datasets without overfitting. We evaluate the method on multiplexed fluorescence imaging data from both primary mouse neuronal cultures and mouse cortex tissue slices. We show that DoGNet outperforms convolutional networks with a low-to-moderate number of training examples, and that DoGNet transfers efficiently between datasets collected by separate research groups. DoGNet synapse localizations can then be used to guide the segmentation of individual synaptic protein locations and spatial extents, revealing their spatial organization and relative abundances within individual synapses. The source code is publicly available: https://github.com/kulikovv/dognet.
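The classical Difference-of-Gaussians detector that DoGNet builds on can be sketched in a few lines: band-pass the image with two Gaussians of different widths, then keep local maxima above a threshold. This is the generic DoG primitive, not DoGNet itself; the sigmas, window size, and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_detect(img, sigma=2.0, k=1.6, thresh=0.1):
    """Difference-of-Gaussians blob detector: band-pass filter the image,
    then keep local maxima whose response exceeds a threshold."""
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > thresh)
    return np.argwhere(peaks)  # (row, col) coordinates of detections

# Synthetic "synapse": one bright Gaussian spot on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / (2 * 3.0 ** 2))
print(dog_detect(img))
```

DoGNet's contribution is to make the Gaussian parameters learnable and stack such filters, which is why it needs so few training examples compared to a generic CNN.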
-
Implementing scalable and effective synaptic networks will enable neuromorphic computing to deliver on its promise of revolutionizing computing. RRAM represents the most promising technology for realizing the fully connected synapse network: by using programmable resistive elements as weights, RRAM can modulate the strength of synapses in a neural network architecture. Oscillatory Neural Networks (ONNs) that are based on phase-locked-loop (PLL) neurons are compatible with resistive synapses but otherwise rather impractical. In this paper, a PLL-free ONN is implemented in 28 nm CMOS and compared to its PLL-based counterpart. Our silicon results show that the PLL-free architecture is compatible with resistive synapses, addresses practical implementation issues for improved robustness, and demonstrates favorable energy consumption compared to state-of-the-art NNs.
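The computational principle behind ONNs can be sketched with Kuramoto-style phase dynamics, where a coupling matrix plays the role of the programmable resistive weights. This is a behavioral toy model only, not the 28 nm circuit; the coupling strength, step size, and oscillator count are illustrative assumptions:

```python
import numpy as np

def onn_step(theta, K, dt=0.01):
    """One Euler step of Kuramoto-style phase dynamics; K[i, j] plays the
    role of a programmable resistive coupling between oscillators i and j."""
    diff = theta[None, :] - theta[:, None]  # diff[i, j] = theta_j - theta_i
    return theta + dt * (K * np.sin(diff)).sum(axis=1)

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 4)   # random initial phases
K = np.full((4, 4), 1.0)
np.fill_diagonal(K, 0.0)               # no self-coupling
for _ in range(5000):
    theta = onn_step(theta, K)
# Spread of relative phases after settling (small when phase-locked).
spread = np.ptp(np.angle(np.exp(1j * (theta - theta[0]))))
print(spread)
```

With uniform positive coupling the phases lock together; in an ONN, patterned couplings instead make different phase configurations the attractors, which is how the network stores and retrieves states.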
-
Driven by the expansion of the Internet of Things (IoT) and Cyber-Physical Systems (CPS), there is an increasing demand to process streams of temporal data on embedded devices with limited energy and power resources. Among potential solutions, neuromorphic computing with spiking neural networks (SNNs), which mimic the behavior of the brain, has recently been placed at the forefront. Encoding information into sparse and distributed spike events enables low-power implementations, and the complex spatiotemporal dynamics of synapses and neurons enable SNNs to detect temporal patterns. However, most existing hardware SNN implementations use simplified neuron and synapse models that ignore synapse dynamics, which are critical for temporal pattern detection and other applications that require temporal dynamics. To adopt a more realistic synapse model in a neuromorphic platform, its significant computation overhead must be addressed. In this work, we propose an FPGA-based SNN with biologically realistic neuron and synapse models for temporal information processing. An encoding scheme to convert continuous real-valued information into sparse spike events is presented, along with an event-driven implementation of the synapse dynamics model and a hardware design optimized to exploit the sparsity. Finally, we train the SNN on various temporal pattern-learning tasks and evaluate its performance and efficiency against rate-based models and artificial neural networks on different embedded platforms. Experiments show that our work achieves a 10X speedup and a 196X gain in energy efficiency compared with a GPU.
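One common way to turn a continuous real-valued signal into sparse spike events is delta modulation: emit a signed event only when the signal drifts more than a threshold from the last encoded level. The paper's specific encoding scheme is not detailed in the abstract, so this is a generic sketch with an assumed threshold, not the authors' method:

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Delta-modulation spike encoding: emit (+1)/(-1) events only when the
    signal moves more than `threshold` away from the last encoded level."""
    level, events = signal[0], []
    for t, s in enumerate(signal):
        while s - level > threshold:
            level += threshold
            events.append((t, +1))   # UP spike at time step t
        while level - s > threshold:
            level -= threshold
            events.append((t, -1))   # DOWN spike at time step t
    return events

t = np.linspace(0, 1, 200)
spikes = delta_encode(np.sin(2 * np.pi * t))
print(len(spikes), len(t))  # far fewer events than samples
```

Because events fire only on change, slowly varying inputs produce very few spikes, which is exactly the sparsity an event-driven hardware pipeline exploits.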
-
Introduction: For artificial synapses whose strengths are assumed to be bounded and can only be updated with finite precision, achieving optimal memory consolidation using primitives from classical physics leads to synaptic models that are too complex to be scaled in silico. Here we report that a relatively simple differential device that operates using the physics of Fowler-Nordheim (FN) quantum-mechanical tunneling can achieve tunable memory-consolidation characteristics with different plasticity-stability trade-offs. Methods: A prototype FN-synapse array was fabricated in a standard silicon process and used both to verify the optimal memory-consolidation characteristics and to estimate the parameters of an FN-synapse analytical model. The analytical model was then used for large-scale memory consolidation and continual-learning experiments. Results: We show that, compared to other physical implementations of synapses for memory consolidation, the operation of the FN-synapse is near-optimal in terms of synaptic lifetime and consolidation properties. We also demonstrate that a network comprising FN-synapses outperforms a comparable elastic weight consolidation (EWC) network on some benchmark continual-learning tasks. Discussion: With an energy footprint of femtojoules per synaptic update, we believe the proposed FN-synapse provides an ultra-energy-efficient approach for implementing both synaptic memory consolidation and continual learning on a physical device.
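The long synaptic lifetime of an FN-tunneling node comes from its slow, self-limiting relaxation: the stored state decays roughly as the inverse logarithm of time, far slower than an exponential leak. The following toy model illustrates only that qualitative behavior; the functional form's offsets and scale are illustrative assumptions, not the fabricated device's fitted parameters:

```python
import numpy as np

def fn_state(t, k1=1.0, t0=1.0):
    """Idealized FN-tunneling node: the floating-gate state relaxes
    ~ 1/log(t), i.e., ever more slowly as time goes on (toy model)."""
    return k1 / np.log(t + t0 + np.e)  # offset keeps log > 1 for t >= 0

t = np.logspace(0, 6, 7)  # 1 s ... 10^6 s (arbitrary units)
print(np.round(fn_state(t), 3))
```

Each update perturbs this slow trajectory, and the differential pairing of two such nodes sets the plasticity-stability trade-off the abstract refers to.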