

Title: Reproducing Fear Conditioning of Rats with Unmanned Ground Vehicles and Neuromorphic Systems
Deep learning achieves remarkable success by training on massively labeled datasets. However, this heavy demand for data impedes the feasibility of deep learning in edge-computing scenarios and exposes it to data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between concurrent events, a learning paradigm referred to as associative memory. A successful implementation of associative memory could achieve self-learning schemes analogous to those of animals and thereby resolve these challenges of deep learning. State-of-the-art implementations of associative memory, however, are limited to small-scale, offline paradigms. In this work, we therefore implement associative memory learning with an Unmanned Ground Vehicle (UGV) and a neuromorphic chip (Intel Loihi) in an online learning scenario. Our system reproduces classic associative memory experiments in rats; specifically, it reproduces fear conditioning with no pretraining procedure and no labeled dataset. In our experiments, the UGV serves as a substitute for the rat: it autonomously memorizes the cause-and-effect relationship between a light stimulus and a vibration stimulus, then exhibits a movement response. During associative memory learning, the synaptic weights are updated by Hebbian learning. The Intel Loihi chip is integrated into our online learning system to process visual signals; its average power usage is 30 mW for computing logic and 29 mW for memory.
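As a rough illustration of the Hebbian update mentioned in the abstract, the sketch below pairs a light (conditioned) input with a vibration (unconditioned) input at a single response neuron. This is a minimal sketch under assumed parameters, not the paper's implementation: the learning rate, threshold, and variable names are all illustrative.

```python
# Minimal Hebbian associative-memory sketch (illustrative, not the paper's code).
# One response neuron receives two inputs: an unconditioned stimulus (US,
# vibration) with a fixed strong weight, and a conditioned stimulus (CS, light)
# whose weight starts at zero and is strengthened by Hebbian learning.

ETA = 0.2          # learning rate (assumed value)
THRESHOLD = 0.5    # firing threshold of the response neuron (assumed value)

w_us, w_cs = 1.0, 0.0   # synaptic weights for vibration and light

def response(us, cs):
    """Binary response of the output neuron to the two stimuli."""
    return 1.0 if us * w_us + cs * w_cs > THRESHOLD else 0.0

# Conditioning: present light and vibration together; Hebb's rule strengthens
# the CS synapse whenever pre- and post-synaptic activity co-occur.
for _ in range(10):
    us, cs = 1.0, 1.0
    post = response(us, cs)
    w_cs = min(1.0, w_cs + ETA * cs * post)   # dw = eta * pre * post, clipped

# After repeated pairing, the light alone evokes the (movement) response.
print(response(us=0.0, cs=1.0))  # 1.0
```

After conditioning, the conditioned-stimulus weight alone exceeds the threshold, which is the behavioral signature of fear conditioning: a response to the formerly neutral stimulus.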
Award ID(s):
2245712
PAR ID:
10516637
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE International Symposium on Quality Electronic Design
ISSN:
1948-3295
ISBN:
979-8-3503-3475-3
Page Range / eLocation ID:
1 to 7
Subject(s) / Keyword(s):
Associative Memory; Hebbian Learning; Neuromorphic Computing; Unmanned Ground Vehicles
Format(s):
Medium: X
Location:
San Francisco, CA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Fear conditioning is a behavioral paradigm of learning to predict aversive events. It is a form of associative learning that memorizes the association between an undesirable stimulus (e.g., an electric shock) and a neutral stimulus (e.g., a tone), resulting in a fear response (such as running away) to the originally neutral stimulus. The association of concurrent events is implemented by strengthening the synaptic connection between the corresponding neurons. In this paper, with an analogous methodology, we reproduce the classic fear conditioning experiment of rats using a mobile robot and a neuromorphic system. In our design, the acceleration from a vibration platform substitutes for the undesirable stimulus in rats, while the brightness of light (dark vs. light) serves as the neutral stimulus, analogous to the neutral tone in rat fear conditioning experiments. The brightness of the light is processed with sparse coding on the Intel Loihi chip. The simulation and experimental results demonstrate that our neuromorphic robot successfully reproduces, for the first time, the fear conditioning experiment of rats with a mobile robot. The work exhibits a potential online learning paradigm that requires no labeled data: the mobile robot directly memorizes events by interacting with its surroundings, which is essentially different from data-driven methods.
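The abstract above notes that light brightness is converted into spiking activity for the Loihi chip. As a hedged illustration of one common neuromorphic front end, the sketch below rate-codes a scalar brightness into a Poisson spike train; the function name and parameters are assumptions, and the paper itself uses sparse coding rather than this simple scheme.

```python
import random

# Illustrative rate coding of brightness into a spike train (an assumption,
# not the paper's sparse-coding scheme): brighter input -> denser spikes.

def poisson_spikes(brightness, n_steps=1000, max_rate=0.2, seed=0):
    """Return a binary spike train whose rate scales with brightness in [0, 1]."""
    rng = random.Random(seed)
    p = max(0.0, min(1.0, brightness)) * max_rate  # spike probability per step
    return [1 if rng.random() < p else 0 for _ in range(n_steps)]

dark = poisson_spikes(0.1)
light = poisson_spikes(0.9)
# The brighter stimulus yields a denser spike train, which downstream spiking
# neurons can threshold to produce a discrete "light present" event.
print(sum(dark), sum(light))
```

Because both trains share the same random sequence, every spike in the dark train also appears in the light train, making the dark/light distinction easy for a thresholding neuron to read out.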
  2. Deep neural networks (DNNs) have achieved remarkable success in various cognitive tasks through training on extensive labeled datasets. However, this heavy reliance on labeled data poses challenges for DNNs in energy-constrained scenarios, such as on the Moon. In contrast, animals exhibit a self-learning capability by interacting with their surroundings and memorizing concurrent events without annotated data, a process known as associative learning. A classic example is a rat memorizing desired and undesired stimuli while exploring a T-maze. The successful implementation of associative learning aims to replicate the self-learning mechanisms observed in animals, addressing challenges in data-constrained environments. While current implementations of associative learning are predominantly small-scale and offline, this work pioneers associative learning in a robot equipped with a neuromorphic chip, specifically for online learning in a T-maze. The system successfully replicates the classic associative learning observed in rodents, using a neuromorphic robot as a substitute for the rodent. The neuromorphic robot autonomously learns the cause-and-effect relationship between audio and visual stimuli.
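The audio-visual association described above can be illustrated with a classic hetero-associative memory: a Hebbian outer-product weight matrix links an audio pattern to a visual pattern, so presenting the audio cue alone recalls its visual pair. This is a textbook sketch under assumed bipolar codes, not the paper's robot implementation.

```python
# Hetero-associative memory sketch (illustrative patterns, not the paper's data).
# Bipolar (+1/-1) codes stand in for preprocessed audio and visual stimuli.

audio = [1, -1, 1, -1, 1]    # hypothetical code for the audio stimulus
visual = [1, 1, -1, -1]      # hypothetical code for the visual stimulus

# One-shot Hebbian association: W[i][j] = visual[i] * audio[j] (outer product).
W = [[v * a for a in audio] for v in visual]

def recall(cue):
    """Drive the visual layer with an audio cue through W and threshold."""
    return [1 if sum(w * c for w, c in zip(row, cue)) > 0 else -1 for row in W]

# Cueing with the audio pattern alone recovers its associated visual pattern.
print(recall(audio) == visual)  # True
```

The recall works because the cue's dot product with itself is positive, so each output unit's sign reduces to the stored visual bit; this is the same correlation principle a Hebbian spiking implementation exploits.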
  3. A variety of advanced machine learning and deep learning algorithms achieve state-of-the-art performance on various temporal processing tasks, but these methods are heavily energy inefficient: they run mainly on power-hungry CPUs and GPUs. Computing with spiking networks, on the other hand, has been shown to be energy efficient on specialized neuromorphic hardware, e.g., Loihi, TrueNorth, and SpiNNaker. In this work, we present two architectures of spiking models, inspired by the theory of Reservoir Computing and Legendre Memory Units, for the Time Series Classification (TSC) task. Our first spiking architecture is closer to the general Reservoir Computing architecture, and we successfully deploy it on Loihi; the second spiking architecture differs from the first by the inclusion of non-linearity in the readout layer. Our second model (trained with the Surrogate Gradient Descent method) shows that non-linear decoding of the linearly extracted temporal features through spiking neurons not only achieves promising results but also offers low computation overhead by significantly reducing the number of neurons compared to popular LSM-based models: more than a 40x reduction with respect to the recent spiking model we compare with. We experiment on five TSC datasets and achieve new state-of-the-art spiking results (as much as a 28.607% accuracy improvement on one of the datasets), thereby showing the potential of our models to address TSC tasks in a green, energy-efficient manner. In addition, we perform energy profiling and comparison on Loihi and a CPU to support our claims.
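The reservoir idea underlying the abstract above can be sketched in a few lines: a fixed random recurrent layer expands an input time series into a high-dimensional state, and only a linear readout would be trained on that state. The sketch below uses leaky tanh units rather than spiking neurons, and all sizes and weight ranges are assumptions, not the paper's models.

```python
import math
import random

# Minimal reservoir-computing sketch (illustrative; the paper uses spiking
# neurons on Loihi, whereas this uses leaky tanh units for brevity).

random.seed(0)
N = 20                                              # reservoir size (assumed)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W_res = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
LEAK = 0.3                                          # leak rate (assumed)

def run_reservoir(series):
    """Return the final reservoir state after driving it with a time series."""
    x = [0.0] * N
    for u in series:
        pre = [W_in[i] * u + sum(W_res[i][j] * x[j] for j in range(N))
               for i in range(N)]
        x = [(1 - LEAK) * x[i] + LEAK * math.tanh(pre[i]) for i in range(N)]
    return x

# Different temporal patterns leave different traces in the state; a trained
# linear (or, as in the paper's second model, non-linear) readout classifies them.
state_a = run_reservoir([0, 1, 0, 1, 0, 1])
state_b = run_reservoir([1, 1, 1, 0, 0, 0])
print(state_a != state_b)  # True
```

Because the recurrent weights stay fixed, only the readout needs training, which is what makes reservoir approaches cheap to deploy on neuromorphic hardware.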
  4. The ability to acquire information about the environment through social observation or instruction is an essential form of learning in humans and other animals. Here, we assessed the ability of rats to acquire an association between a light stimulus and the presentation of a reward that is either hidden (sucrose solution) or visible (food pellet) via observation of a trained demonstrator. Subsequent training of observers on the light-reward association indicated that while observation alone was not sufficient for observers to acquire the association, contact with the reward location was higher in observers that were paired with a demonstrator. However, this was only true when the light cue predicted a sucrose reward. Additionally, we found that in the visible reward condition, levels of demonstrator orienting and food cup contact during the observation period tended to be positively correlated with the corresponding behaviour of their observer. This relationship was only seen during later sessions of observer training. Together, these results suggest that while our models were not sufficient to induce associative learning through observation alone, demonstrator behaviour during observation did influence how their paired observer's behavioural response to the cue evolved over the course of direct individual training. 
  5. Associative memory is a widespread self-learning method in biological organisms that enables the nervous system to remember the relationship between two concurrent events. The significance of rebuilding associative memory at the behavioral level is not only to reveal a way of designing brain-like self-learning neuromorphic systems but also to explore a method of comprehending the learning mechanism of a nervous system. In this paper, associative memory learning at the behavioral level is realized that successfully associates concurrent visual and auditory information (the pronunciation and image of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) together instead of relating multiple analog signals, so that the information carried and preprocessed by these ANNs can be associated. A new type of neuron, named the signal intensity encoding neuron (SIEN), has been designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. The spiking signals are then correlated with an associative neural network implemented with a three-dimensional (3-D) memristor array. Furthermore, our novel memristor weight-updating scheme avoids the selector devices that limit the design area in traditional memristor cells. With the novel SIENs, the 3-D memristive synapse, and the proposed memristor weight-updating scheme, simulation results demonstrate that our associative memory learning method and its circuit implementation successfully associate the pronunciation and image of digits, mimicking human-like associative memory learning behavior.
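The signal-intensity-encoding idea described above (mapping an ANN output to both the magnitude and the frequency of a spike train) can be sketched as follows. Every name, range, and mapping here is a hypothetical illustration, not the SIEN circuit from the paper.

```python
# Hypothetical intensity-encoding sketch: a scalar ANN output (e.g., a class
# confidence in [0, 1]) is mapped to the amplitude and frequency of a spike
# train. Parameter names and the linear mappings are assumptions.

def encode_intensity(confidence, duration=100, base_period=20):
    """Stronger ANN outputs -> larger and more frequent spikes."""
    c = max(0.0, min(1.0, confidence))
    period = max(1, round(base_period * (1.0 - 0.9 * c)))  # higher c -> shorter period
    amplitude = 0.5 + 0.5 * c                              # higher c -> bigger spike
    return [amplitude if t % period == 0 else 0.0 for t in range(duration)]

weak = encode_intensity(0.1)
strong = encode_intensity(0.9)
# The strong output spikes more often and with larger amplitude, so a
# downstream associative network can correlate it with a concurrent stimulus.
print(max(strong) > max(weak))  # True
```

Encoding intensity in both amplitude and rate gives the downstream associative layer two redundant cues about how confident each ANN is, which is the property the abstract attributes to the SIENs.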