Spiking Neural Networks (SNNs) are energy-efficient artificial neural network models that can carry out data-intensive applications. Energy consumption, latency, and memory bottlenecks are among the major issues that arise in machine learning applications due to their data-demanding nature. Memristor-enabled Computing-In-Memory (CIM) architectures tackle the memory-wall issue by eliminating the energy- and time-consuming movement of data. In this work, we develop a scalable CIM-based SNN architecture with our fabricated two-layer memristor crossbar array. In addition to enhanced heat-dissipation capability, our memristor exhibits substantial improvements of 10% to 66% in design area, power, and latency compared with state-of-the-art memristors. The design incorporates an inter-spike interval (ISI) encoding scheme, chosen for its high information density, to convert incoming input signals into spikes, and a time-to-first-spike (TTFS) output-processing stage, chosen for its energy efficiency, to carry out the final classification. Combining ISI, CIM, and TTFS, the network achieves a competitive inference speed of 2 μs/image and successfully classifies handwritten digits at 2.9 mW of power and 2.51 pJ of energy per spike. With the ISI encoding scheme, the proposed architecture achieves ∼10% higher accuracy on the MNIST dataset than other encoding schemes.
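The two signal-level ideas in this abstract, ISI input encoding and TTFS output readout, can be sketched in a few lines. This is an illustrative toy, not the paper's circuit: the interval range, time window, and pixel scaling below are assumptions chosen only to show the encode/decode direction (brighter pixel → shorter inter-spike interval; earliest-firing output neuron → predicted class).

```python
import numpy as np

def isi_encode(pixels, t_max=100.0, isi_min=2.0, isi_max=20.0):
    """ISI-encoding sketch: map each pixel intensity in [0, 1] to a
    spike train whose inter-spike interval shrinks as intensity grows.
    Constants are illustrative, not taken from the paper."""
    trains = []
    for p in pixels:
        if p <= 0:
            trains.append([])  # a dark pixel emits no spikes
            continue
        isi = isi_max - p * (isi_max - isi_min)  # bright -> short interval
        trains.append(list(np.arange(isi, t_max, isi)))
    return trains

def ttfs_readout(first_spike_times):
    """TTFS decoding sketch: the output neuron that fires earliest
    determines the predicted class."""
    return int(np.argmin(first_spike_times))
```

A bright pixel (intensity 1.0) spikes every 2 time units while a dim one spikes rarely, which is the "high information density" property the abstract attributes to ISI coding; TTFS then needs only one spike per output neuron to decide.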
Realizing Behavior Level Associative Memory Learning Through Three-Dimensional Memristor-Based Neuromorphic Circuits
Associative memory is a widespread self-learning mechanism in biological organisms that enables the nervous system to remember the relationship between two concurrent events. The significance of rebuilding associative memory at the behavior level is not only to reveal a way of designing a brain-like self-learning neuromorphic system but also to explore a method of understanding the learning mechanism of a nervous system. In this paper, associative memory learning at the behavior level is realized that successfully associates concurrent visual and auditory information (the pronunciation and image of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) together instead of relating multiple analog signals; in this way, the information carried and preprocessed by these ANNs can be associated. A new neuron, the signal intensity encoding neuron (SIEN), is designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. The spiking signals are then correlated by an associative neural network implemented with a three-dimensional (3-D) memristor array. Furthermore, the selector devices that limit the design area in traditional memristor cells are avoided by our novel memristor weight-updating scheme. With the novel SIENs, the 3-D memristive synapse, and the proposed weight-updating scheme, simulation results demonstrate that our associative memory learning method and the corresponding circuit implementation successfully associate the pronunciation and image of digits, mimicking human-like associative memory learning behavior.
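The SIEN idea described above is a dual encoding: one ANN output score controls both how often and how strongly the neuron spikes. The sketch below is a hypothetical behavioral model of that mapping (the frequency range, time window, and amplitude scale are invented for illustration; the paper's SIEN is an analog circuit, not this function).

```python
import numpy as np

def sien_encode(ann_output, t_window=1000.0, f_min=1.0, f_max=20.0, a_max=1.0):
    """SIEN-style encoding sketch: map an ANN output score in [0, 1]
    to both the frequency and the amplitude of an analog spike train.
    Times are in ms, frequencies in Hz; all ranges are illustrative."""
    score = float(np.clip(ann_output, 0.0, 1.0))
    freq = f_min + score * (f_max - f_min)   # stronger score -> faster spiking
    amplitude = score * a_max                # ...and taller spikes
    spike_times = np.arange(0.0, t_window, 1000.0 / freq)  # period = 1000/freq ms
    return spike_times, amplitude
```

Encoding intensity into two correlated spike properties at once gives the downstream associative network a redundant, analog-friendly signal to correlate.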
- Award ID(s):
- 1750450
- PAR ID:
- 10128769
- Date Published:
- Journal Name:
- IEEE Transactions on Emerging Topics in Computational Intelligence
- ISSN:
- 2471-285X
- Page Range / eLocation ID:
- 1 to 11
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Machado, Pedro (Ed.) This study emulates associative learning in rodents by using a neuromorphic robot navigating an open-field arena. The goal is to investigate how biologically inspired neural models can reproduce animal-like learning behaviors in real-world robotic systems. We constructed a neuromorphic robot by deploying computational models of spatial and sensory neurons onto a mobile platform. Different coding schemes were implemented: rate coding for vibration signals and population coding for visual signals. The associative learning model employs 19 spiking neurons and follows Hebbian plasticity principles to associate visual cues with favorable or unfavorable locations. Our robot successfully replicated classical rodent associative learning behavior by memorizing causal relationships between environmental cues and spatial outcomes. The robot's self-learning capability emerged from repeated exposure and synaptic weight adaptation, without the need for labeled training data. Experiments confirmed functional learning behavior across multiple trials. This work provides a novel embodied platform for memory and learning research beyond traditional animal models. By embedding biologically inspired learning mechanisms into a real robot, we demonstrate how spatial memory can be formed and expressed through sensorimotor interactions. The model's compact structure (19 neurons) illustrates a minimal yet functional learning network, and the study outlines principles for synaptic weight and threshold design, guiding future development of more complex neuromorphic systems.
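The two ingredients this abstract names, population coding of visual cues and Hebbian weight adaptation, compose naturally. The sketch below is a minimal illustration under assumed parameters (Gaussian tuning curves, a learning rate, and a weight cap are all invented); it is not the 19-neuron network from the study.

```python
import numpy as np

def population_code(value, n_neurons=9, sigma=0.15):
    """Population-coding sketch: a scalar cue (e.g., a normalized visual
    bearing in [0, 1]) drives a bank of neurons with Gaussian tuning
    curves; each neuron's rate peaks at its preferred value."""
    preferred = np.linspace(0.0, 1.0, n_neurons)
    return np.exp(-((value - preferred) ** 2) / (2 * sigma ** 2))

def hebbian_step(w, pre, post, lr=0.1, w_max=1.0):
    """One Hebbian update: weights grow where pre- and post-synaptic
    activity coincide, clipped to w_max."""
    return np.clip(w + lr * np.outer(pre, post), 0.0, w_max)
```

Repeatedly pairing a cue's population response with a "favorable location" neuron strengthens exactly the synapses from the neurons tuned to that cue, which is the associative mechanism the study exploits.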
-
Deep learning accomplishes remarkable success through training with massively labeled datasets. However, the high demands on the datasets impede the feasibility of deep learning in edge computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationship between concurrent events. This learning paradigm is referred to as associative memory. The successful implementation of associative memory potentially achieves self-learning schemes analogous to animals and thus resolves these challenges of deep learning. State-of-the-art implementations of associative memory have been limited to small-scale, offline paradigms. Thus, in this work, we implement associative memory learning with an Unmanned Ground Vehicle (UGV) and neuromorphic chips (Intel Loihi) in an online learning scenario. Our system reproduces the classic associative memory experiment in rats. Specifically, our system successfully reproduces fear conditioning with no pretraining procedure and no labeled datasets. In our experiments, the UGV serves as a substitute for the rats: it autonomously memorizes the cause-and-effect of the light stimulus and vibration stimulus, then exhibits a movement response. During associative memory learning, the synaptic weights are updated by Hebbian learning. The Intel Loihi chip is integrated with our online learning system to process visual signals. Its average power usage is 30 mW for computing logic and 29 mW for memory.
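The fear-conditioning logic described here, a light stimulus (conditioned) repeatedly paired with a vibration stimulus (unconditioned) until the light alone triggers the movement response, reduces to a small Hebbian loop. The sketch below is a behavioral caricature with invented constants, not the Loihi implementation.

```python
def condition(pairings, lr=0.25, threshold=0.5):
    """Fear-conditioning sketch: the vibration (US) synapse is innately
    strong enough to trigger movement; the light (CS) synapse is plastic
    and grows whenever the light co-occurs with a movement response
    (Hebbian: pre and post both active). Constants are illustrative."""
    w_light = 0.0       # plastic CS -> movement weight
    w_vibration = 1.0   # innate US -> movement weight
    for light, vibration in pairings:
        drive = w_light * light + w_vibration * vibration
        moved = drive >= threshold
        if light and moved:
            w_light = min(1.0, w_light + lr)
    return w_light

def responds_to_light_alone(w_light, threshold=0.5):
    """After conditioning, does the CS by itself cross threshold?"""
    return w_light >= threshold
```

A few paired trials are enough for the plastic weight to cross threshold, matching the "no pretraining, no labels" claim: the association emerges purely from co-occurrence.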
-
Artificial synaptic devices are the essential hardware component in emerging neuromorphic computing systems, mimicking biological synapses and brain functions. When made from natural organic materials such as protein and carbohydrate, they have the potential to improve sustainability and reduce electronic waste by enabling environmentally friendly disposal. In this paper, a new natural-organic-memristor-based artificial synaptic device is reported, with the memristive film processed from a honey and carbon nanotube (CNT) admixture, that is, a honey-CNT memristor. Optical microscopy, scanning electron microscopy, and micro-Raman spectroscopy are employed to analyze the morphology and chemical structure of the honey-CNT film. The device demonstrates analog memristive potentiation and depression; the mechanism governing these functions is explained by the formation and dissolution of conductive paths due to electrochemical metal filaments, which are assisted by CNT clusters and bundles in the honey-CNT film. The honey-CNT memristor successfully emulates synaptic functionalities such as short-term plasticity and its transition to long-term plasticity for memory rehearsal, spatial summation, and shunting inhibition, and, for the first time, the classical conditioning behavior for associative learning by mimicking Pavlov's dog experiment. All these results indicate that the honey-CNT-memristor-based artificial synaptic device is promising for energy-efficient and eco-friendly neuromorphic systems.
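The short-term-to-long-term plasticity transition described above can be caricatured as a conductance that decays after a few pulses but is retained once rehearsal pushes it past a consolidation threshold. This is a toy behavioral model with invented constants, not a fit to the measured honey-CNT device.

```python
def stimulate(g, pulses, dg=0.1, decay=0.5, ltp_threshold=0.4):
    """STP-to-LTP sketch: each pulse potentiates conductance g; if the
    stimulation stops below ltp_threshold the gain partially decays
    (short-term plasticity), otherwise it is retained (long-term
    plasticity). All constants are illustrative, not device values."""
    for _ in range(pulses):
        g = min(1.0, g + dg)       # potentiation per voltage pulse
    if g < ltp_threshold:
        g *= decay                 # weak memory fades without rehearsal
    return g
```

This rehearsal-dependent retention is also the mechanism behind the Pavlovian demonstration: only repeated pairing drives the associative synapse into the retained regime.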
-
The constant drive to achieve higher performance in deep neural networks (DNNs) has led to the proliferation of very large models. Model training, however, requires intensive computation time and energy. Memristor-based compute-in-memory (CIM) modules can perform vector-matrix multiplication (VMM) in place and in parallel, and have shown great promise in DNN inference applications. However, CIM-based model training faces challenges due to non-linear weight updates, device variations, and low precision. In this work, a mixed-precision training scheme is experimentally implemented to mitigate these effects using a bulk-switching memristor-based CIM module. Low-precision CIM modules accelerate the expensive VMM operations, while high-precision weight updates are accumulated in digital units. Memristor devices are only changed when the accumulated weight update exceeds a pre-defined threshold. The proposed scheme is implemented with a system-on-chip of fully integrated analog CIM modules and digital sub-systems, showing fast convergence of LeNet training to 97.73%. The efficacy of training larger models is evaluated using realistic hardware parameters, verifying that CIM modules can enable efficient mixed-precision DNN training with accuracy comparable to full-precision software-trained models. Additionally, models trained on chip are inherently robust to hardware variations, allowing direct mapping to CIM inference chips without additional re-training.
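The update rule in this abstract is concrete enough to sketch: gradients are accumulated at high precision in a digital buffer, and a device is programmed by one coarse conductance step only when its accumulated update crosses a threshold, with the residual kept in the buffer. The function below is an illustrative model under assumed constants, not the chip's controller.

```python
import numpy as np

def apply_mixed_precision_update(device_weights, accumulator, gradient,
                                 lr=0.01, threshold=0.05, step=0.05):
    """Mixed-precision update sketch: accumulate -lr * gradient in a
    high-precision buffer; program a device by one coarse +/- step only
    when |accumulator| crosses the threshold, and keep the residual.
    Constants are illustrative, not hardware values."""
    accumulator = accumulator - lr * gradient            # high-precision accumulation
    program = np.abs(accumulator) >= threshold           # which devices to write
    delta = np.where(program, np.sign(accumulator) * step, 0.0)
    device_weights = device_weights + delta              # rare, coarse device writes
    accumulator = accumulator - delta                    # retain the residual
    return device_weights, accumulator
```

The design choice this captures: most small gradient steps never touch a memristor, which sidesteps non-linear and noisy analog updates while still converging, since no update information is discarded.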