This work reports a spiking neuromorphic architecture for associative memory, simulated in a SPICE environment using recently reported gated-RRAM (resistive random-access memory) devices as synapses alongside complementary metal-oxide-semiconductor (CMOS) neurons. The network uses a Verilog-A model to capture the behavior of the gated-RRAM devices within the architecture, with model parameters obtained from experimental gated-RRAM devices fabricated and tested in this work. Using these devices in tandem with CMOS neuron circuitry, our results indicate that the proposed architecture can learn an association in real time and retrieve the learned association when given incomplete information. These results show the promise of gated-RRAM devices for associative memory tasks within a spiking neuromorphic framework.
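The study itself is carried out in SPICE with a Verilog-A device model; purely as a behavioral illustration of the idea, the minimal Python sketch below drives a leaky integrate-and-fire neuron through conductance-based synapses standing in for the gated-RRAM devices. Every value here (conductance bounds, time constants, learning increment) is a hypothetical placeholder, not a fitted device parameter.

```python
# Behavioral sketch only (the paper works in SPICE with a Verilog-A model):
# a leaky integrate-and-fire neuron driven by conductance-based synapses
# standing in for gated-RRAM devices. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 4
g = rng.uniform(1e-6, 1e-5, n_inputs)  # synaptic conductances (S), RRAM stand-ins
g_min, g_max = 1e-6, 1e-4              # assumed device conductance bounds
v, v_rest, v_th = 0.0, 0.0, 1.0        # membrane potential, rest, threshold (a.u.)
tau, dt, r_in = 20e-3, 1e-3, 1e6       # membrane time constant, time step, gain

for step in range(500):
    spikes_in = rng.random(n_inputs) < 0.2           # random presynaptic spikes
    i_syn = float(np.dot(g, spikes_in))              # current through RRAM synapses
    v += (dt / tau) * ((v_rest - v) + r_in * i_syn)  # leaky integration
    if v >= v_th:                                    # postsynaptic spike and reset
        v = v_rest
        # Hebbian-style potentiation of the synapses active at the spike,
        # clipped to the assumed device bounds:
        g = np.clip(g + 5e-7 * spikes_in, g_min, g_max)
```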
Semantically-correlated memories in a dense associative model
I introduce a novel associative memory model named Correlated Dense Associative Memory (CDAM), which integrates both auto- and hetero-association in a unified framework for continuous-valued memory patterns. Employing an arbitrary graph structure to semantically link memory patterns, CDAM is theoretically and numerically analysed, revealing four distinct dynamical modes: auto-association, narrow hetero-association, wide hetero-association, and neutral quiescence. Drawing inspiration from inhibitory modulation studies, I employ anti-Hebbian learning rules to control the range of hetero-association, extract multi-scale representations of community structures in graphs, and stabilise the recall of temporal sequences. Experimental demonstrations showcase CDAM’s efficacy in handling real-world data, replicating a classical neuroscience experiment, performing image retrieval, and simulating arbitrary finite automata.
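As a rough sketch of the general mechanism, not the paper's exact equations, the following shows a modern-Hopfield-style retrieval step in which a graph over the stored patterns blends auto- and hetero-association. The `mix` knob and the blending of the identity with the adjacency matrix are assumptions for illustration.

```python
# Assumed-form sketch of graph-modulated dense associative recall (not the
# paper's exact CDAM update). Rows of Xi are stored continuous-valued
# patterns; G is an arbitrary graph adjacency that semantically links them.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recall(query, Xi, G, beta=8.0, mix=0.5, steps=5):
    """mix is a hypothetical knob: 0 gives pure auto-association, larger
    values shift retrieval toward graph neighbours (hetero-association)."""
    M = (1.0 - mix) * np.eye(len(Xi)) + mix * G  # blend self-recall with the graph
    x = query.astype(float).copy()
    for _ in range(steps):
        p = softmax(beta * (Xi @ x))             # similarity to each stored pattern
        x = Xi.T @ (M.T @ p)                     # graph-weighted pattern readout
    return x

rng = np.random.default_rng(0)
Xi = rng.standard_normal((5, 16))                # five stored patterns
G = np.zeros((5, 5)); G[0, 1] = G[1, 0] = 1.0    # patterns 0 and 1 are linked
x_auto = recall(Xi[0] + 0.1 * rng.standard_normal(16), Xi, G, mix=0.0)  # ~ Xi[0]
x_hetero = recall(Xi[0], Xi, G, mix=1.0)                                # ~ Xi[1]
```

With `mix=1.0` the trajectory alternates between linked patterns, which is one way hetero-association along a graph can support recall of temporal sequences.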
- Award ID(s): 1929284
- PAR ID: 10537509
- Publisher / Repository: Proceedings of Machine Learning Research
- Date Published:
- Volume: 235
- Page Range / eLocation ID: 4936-4970
- Format(s): Medium: X
- Location: Proceedings of the 41st International Conference on Machine Learning
- Sponsoring Org: National Science Foundation
More Like this
Large Language Models (LLMs) have the capacity to store and recall facts. Through experimentation with open-source models, we observe that this ability to retrieve facts can be easily manipulated by changing contexts, even without altering their factual meanings. These findings suggest that LLMs might behave like an associative memory model in which certain tokens in the context serve as clues to retrieving facts. We mathematically explore this property by studying how transformers, the building blocks of LLMs, can complete such memory tasks. We study a simple latent concept association problem with a one-layer transformer, and we show theoretically and empirically that the transformer gathers information using self-attention and uses the value matrix for associative memory.
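A toy version of this view, assumed rather than taken from the paper's construction, treats one attention head as a key-value associative memory: a cue token's query matches a stored key, and the value matrix returns the associated fact.

```python
# Toy illustration (assumed, not the paper's construction): one attention
# head acting as a key-value associative memory, where a cue token matches
# a stored key and the value matrix returns the fact.
import numpy as np

def attention_lookup(q, K, V):
    scores = K @ q                        # match the cue against stored keys
    w = np.exp(scores - scores.max())
    w /= w.sum()                          # softmax attention weights
    return w @ V                          # weighted readout of stored values

rng = np.random.default_rng(1)
keys = rng.standard_normal((3, 8))        # context "clue" tokens
vals = rng.standard_normal((3, 8))        # the facts associated with them
cue = 8.0 * keys[1]                       # a strong cue for the second key
retrieved = attention_lookup(cue, keys, vals)
print(np.linalg.norm(retrieved - vals[1]))  # near zero: the fact is recalled
```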
Deep learning accomplishes remarkable success through training on massively labeled datasets. However, this heavy demand for labeled data impedes deep learning in edge-computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between concurrent events. This learning paradigm is referred to as associative memory. A successful implementation of associative memory could achieve self-learning schemes analogous to those of animals and resolve these challenges of deep learning. State-of-the-art implementations of associative memory have been limited to small-scale, offline paradigms. In this work, we therefore implement associative memory learning with an Unmanned Ground Vehicle (UGV) and a neuromorphic chip (Intel Loihi) in an online learning scenario. Our system reproduces classic associative memory experiments in rats; specifically, it reproduces fear conditioning with no pretraining procedure and no labeled datasets. In our experiments, the UGV serves as a substitute for the rat: it autonomously memorizes the cause-and-effect relationship between a light stimulus and a vibration stimulus, then exhibits a movement response. During associative memory learning, the synaptic weights are updated by Hebbian learning. The Intel Loihi chip is integrated into our online learning system for processing visual signals; its average power usage is 30 mW for computing logic and 29 mW for memory.
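A drastically simplified Hebbian sketch of the conditioning loop described above (not the Loihi implementation; the learning rate and threshold are hypothetical): a light-to-response weight grows whenever light and the response co-occur, until light alone triggers the response.

```python
# Drastically simplified Hebbian sketch of the conditioning described above
# (not the Loihi implementation; learning rate and threshold are hypothetical).
w = 0.0                                   # light -> response weight, initially silent
eta, threshold = 0.2, 0.5

for trial in range(10):                   # training: light and vibration paired
    light, vibration = 1.0, 1.0
    response = 1.0 if (vibration > 0 or w * light >= threshold) else 0.0
    w += eta * light * response           # Hebbian update: co-activity strengthens w

# Testing: light alone now triggers the conditioned response.
light, vibration = 1.0, 0.0
response = 1.0 if (vibration > 0 or w * light >= threshold) else 0.0
print(response)                           # 1.0
```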
How memories are used by the brain to guide future action is poorly understood. In olfactory associative learning in Drosophila, multiple compartments of the mushroom body act in parallel to assign a valence to a stimulus. Here, we show that appetitive memories stored in different compartments induce different levels of upwind locomotion. Using a photoactivation screen of a new collection of split-GAL4 drivers and EM connectomics, we identified a cluster of neurons postsynaptic to the mushroom body output neurons (MBONs) that can trigger robust upwind steering. These UpWind Neurons (UpWiNs) integrate inhibitory and excitatory synaptic inputs from MBONs of appetitive and aversive memory compartments, respectively. After formation of an appetitive memory, UpWiNs acquire an enhanced response to reward-predicting odors as the response of the inhibitory presynaptic MBON undergoes depression. Blocking UpWiNs impaired appetitive memory and reduced upwind locomotion during retrieval. Photoactivation of UpWiNs also increased the chance of returning to a location where activation was terminated, suggesting an additional role in olfactory navigation. Thus, our results provide insight into how learned abstract valences are gradually transformed into concrete memory-driven actions through divergent and convergent networks, a neuronal architecture commonly found in both vertebrate and invertebrate brains.
Associative memory is a widespread self-learning mechanism in living organisms that enables the nervous system to remember the relationship between two concurrent events. Rebuilding associative memory at the behavioral level not only reveals a way of designing a brain-like self-learning neuromorphic system but also offers a means of understanding the learning mechanisms of the nervous system. In this paper, associative memory learning at the behavioral level is realized that successfully associates concurrent visual and auditory information (the pronunciations and images of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) together, instead of relating multiple analog signals, so that the information carried and preprocessed by these ANNs can be associated. A neuron, named the signal-intensity encoding neuron (SIEN), has been designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. The spiking signals are then correlated with an associative neural network implemented in a three-dimensional (3-D) memristor array. Furthermore, our novel memristor weight-updating scheme avoids the selector devices that limit design area in traditional memristor cells. With the novel SIENs, the 3-D memristive synapse, and the proposed weight-updating scheme, simulation results demonstrate that our associative memory learning method and the corresponding circuit implementation successfully associate the pronunciations and images of digits, mimicking human-like associative memory learning behavior.
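As an illustration of the encoding idea only, the assumed sketch below maps an ANN output value to a spike train whose frequency scales with that value; the SIEN circuit itself also encodes magnitude, and the Poisson form and `f_max` used here are hypothetical.

```python
# Assumed-form sketch of the signal-intensity encoding idea (not the SIEN
# circuit, which also encodes magnitude): an ANN output in [0, 1] becomes a
# spike train whose frequency scales with the value.
import numpy as np

def encode_spike_train(value, duration=1.0, f_max=100.0, dt=1e-3, rng=None):
    """Map an ANN output value in [0, 1] to a Poisson spike train whose
    rate is value * f_max spikes per second (f_max is hypothetical)."""
    rng = rng if rng is not None else np.random.default_rng()
    n_steps = int(duration / dt)
    return rng.random(n_steps) < value * f_max * dt   # boolean spike raster

train = encode_spike_train(0.9, rng=np.random.default_rng(0))
print(train.sum())   # roughly 90 spikes in one second for a 0.9 output
```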