Title: Exploring Associative Learning of Audio and Color Stimuli with Neuromorphic Robots in a T-Maze
Deep neural networks (DNNs) have achieved remarkable success on various cognitive tasks through training on extensive labeled datasets. However, this heavy reliance on labeled data poses challenges for DNNs in energy-constrained scenarios, such as on the moon. In contrast, animals exhibit a self-learning capability: they interact with their surroundings and memorize concurrent events without annotated data, a process known as associative learning. A classic example of associative learning is a rat memorizing desired and undesired stimuli while exploring a T-maze. Implementing associative learning aims to replicate the self-learning mechanisms observed in animals, addressing the challenges of data-constrained environments. While current implementations of associative learning are predominantly small-scale and offline, this work pioneers associative learning on a robot equipped with a neuromorphic chip, specifically for online learning in a T-maze. The system replicates the classic associative learning observed in rodents, using a neuromorphic robot as a substitute for the rodent: the robot autonomously learns the cause-and-effect relationship between audio and visual stimuli.
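The abstract does not specify the implementation, but the associative mechanism it describes can be illustrated with a minimal Hebbian sketch: an audio cue heard at the maze junction is repeatedly paired with the visual outcome found in the chosen arm, and co-activation strengthens the connection until the cue alone predicts the outcome. All stimulus names, weights, and parameters below are illustrative assumptions, not the authors' code.

```python
# Minimal, illustrative sketch of Hebbian audio-visual association in a T-maze.
# Stimuli, weights, and parameters are assumptions for illustration only.
import numpy as np

AUDIO_CUES = ["tone_A", "tone_B"]            # conditioned stimuli heard at the maze junction
VISUAL_OUTCOMES = ["green", "red"]           # unconditioned stimuli seen in the chosen arm
# Ground-truth pairing used to generate trials (tone_A -> green, tone_B -> red).
PAIRING = {"tone_A": "green", "tone_B": "red"}

# Synaptic weights from audio neurons to visual-outcome neurons.
w = np.zeros((len(AUDIO_CUES), len(VISUAL_OUTCOMES)))
lr = 0.1  # Hebbian learning rate

def one_hot(name, names):
    v = np.zeros(len(names))
    v[names.index(name)] = 1.0
    return v

# Training: the robot explores, hears a tone, then observes the arm's color.
for trial in range(20):
    tone = AUDIO_CUES[trial % 2]
    color = PAIRING[tone]
    pre = one_hot(tone, AUDIO_CUES)          # audio (pre-synaptic) activity
    post = one_hot(color, VISUAL_OUTCOMES)   # visual (post-synaptic) activity
    w += lr * np.outer(pre, post)            # Hebbian co-activation update

# Testing: from the tone alone, predict which outcome (and hence which arm) to expect.
for tone in AUDIO_CUES:
    pred = VISUAL_OUTCOMES[int(np.argmax(one_hot(tone, AUDIO_CUES) @ w))]
    print(f"{tone} -> expects {pred}")
```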
Award ID(s):
2245712
PAR ID:
10633089
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
IntechOpen
Date Published:
ISBN:
978-0-85466-224-1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deep learning achieves remarkable success through training on massively labeled datasets. However, the high demands on these datasets impede the feasibility of deep learning in edge-computing scenarios and expose it to data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between concurrent events, a learning paradigm referred to as associative memory. Successfully implementing associative memory could enable self-learning schemes analogous to those of animals and resolve these challenges of deep learning. State-of-the-art implementations of associative memory are limited to small-scale, offline paradigms. In this work, we therefore implement associative memory learning with an Unmanned Ground Vehicle (UGV) and a neuromorphic chip (Intel Loihi) in an online learning scenario. Our system reproduces classic associative memory in rats; specifically, it reproduces fear conditioning with no pretraining procedure and no labeled datasets. In our experiments, the UGV serves as a substitute for the rat: it autonomously memorizes the cause-and-effect relationship between a light stimulus and a vibration stimulus, and then exhibits a movement response. During associative memory learning, the synaptic weights are updated by Hebbian learning. The Intel Loihi chip is integrated into our online learning system to process visual signals; its average power usage is 30 mW for computing logic and 29 mW for memory.
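As a rough, assumed illustration of the fear-conditioning dynamic described above (not the Loihi implementation): the vibration stimulus unconditionally drives the movement-response neuron, and Hebbian updates during paired presentations strengthen the light-to-movement synapse until the light alone crosses the response threshold.

```python
# Toy sketch of the described conditioning dynamic: vibration (US) always drives
# the movement neuron; repeated pairing strengthens the light -> movement synapse
# until light alone triggers the response. All values are illustrative only.
light_to_move = 0.0       # plastic synapse (conditioned stimulus pathway)
vib_to_move = 1.0         # strong, fixed synapse (unconditioned stimulus pathway)
lr, threshold = 0.15, 0.5

def movement_neuron(light, vibration):
    """Return (fires, activation) for the movement-response neuron."""
    activation = light * light_to_move + vibration * vib_to_move
    return activation >= threshold, activation

# Conditioning: light and vibration occur together; a Hebbian update is applied
# when the post-synaptic movement neuron fires while the light input is active.
for trial in range(5):
    fires, _ = movement_neuron(light=1.0, vibration=1.0)
    if fires:
        light_to_move += lr * 1.0 * 1.0   # dw = lr * pre * post

# Test: light alone now exceeds the threshold, so the robot exhibits the response.
fires, act = movement_neuron(light=1.0, vibration=0.0)
print(f"light alone -> activation {act:.2f}, move = {fires}")
```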
  2. Machado, Pedro (Ed.)
    This study emulates associative learning in rodents by using a neuromorphic robot navigating an open-field arena. The goal is to investigate how biologically inspired neural models can reproduce animal-like learning behaviors in real-world robotic systems. We constructed a neuromorphic robot by deploying computational models of spatial and sensory neurons onto a mobile platform. Different coding schemes—rate coding for vibration signals and population coding for visual signals—were implemented. The associative learning model employs 19 spiking neurons and follows Hebbian plasticity principles to associate visual cues with favorable or unfavorable locations. Our robot successfully replicated classical rodent associative learning behavior by memorizing causal relationships between environmental cues and spatial outcomes. The robot’s self-learning capability emerged from repeated exposure and synaptic weight adaptation, without the need for labeled training data. Experiments confirmed functional learning behavior across multiple trials. This work provides a novel embodied platform for memory and learning research beyond traditional animal models. By embedding biologically inspired learning mechanisms into a real robot, we demonstrate how spatial memory can be formed and expressed through sensorimotor interactions. The model’s compact structure (19 neurons) illustrates a minimal yet functional learning network, and the study outlines principles for synaptic weight and threshold design, guiding future development of more complex neuromorphic systems. 
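The abstract names the two coding schemes but not their parameters; the sketch below illustrates, under assumed values, how a vibration amplitude could be rate-coded as a Poisson spike count and how a visual signal could be population-coded over a set of tuned neurons.

```python
# Illustrative sketch of the two coding schemes named in the abstract.
# Window length, neuron counts, and tuning widths are assumptions, not the
# paper's actual parameters.
import numpy as np

rng = np.random.default_rng(1)

def rate_code(vibration_amplitude, max_rate_hz=100.0, window_s=0.5):
    """Encode a scalar vibration amplitude (0..1) as a Poisson spike count."""
    rate = max_rate_hz * np.clip(vibration_amplitude, 0.0, 1.0)
    return rng.poisson(rate * window_s)

def population_code(hue, n_neurons=8, sigma=0.08):
    """Encode a visual hue (0..1) as graded activity over neurons with Gaussian tuning."""
    preferred = np.linspace(0.0, 1.0, n_neurons)   # each neuron's preferred hue
    return np.exp(-((hue - preferred) ** 2) / (2 * sigma ** 2))

print("vibration spikes:", rate_code(0.7))
print("visual population:", np.round(population_code(0.3), 2))
```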
  3. Fear conditioning is a behavioral paradigm of learning to predict aversive events. It is a form of associative learning that associates an undesirable stimulus (e.g., an electrical shock) with a neutral stimulus (e.g., a tone), so that the originally neutral stimulus elicits a fear response (such as running away). The association of concurrent events is implemented by strengthening the synaptic connection between the corresponding neurons. In this paper, with an analogous methodology, we reproduce the classic fear conditioning experiment on rats using a mobile robot and a neuromorphic system. In our design, the acceleration from a vibration platform substitutes for the undesirable stimulus, while the brightness of light (dark vs. light) serves as the neutral stimulus, analogous to the neutral tone in rat fear conditioning experiments. The brightness of the light is processed with sparse coding on the Intel Loihi chip. The simulation and experimental results demonstrate that our neuromorphic robot, for the first time, successfully reproduces the fear conditioning experiment of rats on a mobile robot. The work exhibits a potential online learning paradigm that requires no labeled data: the mobile robot memorizes events directly by interacting with its surroundings, which is essentially different from data-driven methods.
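The abstract does not detail how brightness is sparsely encoded on Loihi; as one hedged illustration of a sparse representation, the sketch below uses a k-winners-take-all style encoding in which only the few neurons tuned closest to the measured brightness are allowed to spike. This is a stand-in technique, not the paper's encoder.

```python
# Hedged illustration of a sparse encoding of brightness: a k-winners-take-all
# style sparsification over brightness-tuned neurons. Neuron count and k are
# assumptions; the actual on-chip encoding is not specified in the abstract.
import numpy as np

def sparse_brightness_code(brightness, n_neurons=16, k=2):
    """Map a brightness reading in [0, 1] to a k-sparse binary spike vector."""
    preferred = np.linspace(0.0, 1.0, n_neurons)
    similarity = -np.abs(brightness - preferred)    # closer preference -> higher score
    winners = np.argsort(similarity)[-k:]           # keep only the k best-matching neurons
    spikes = np.zeros(n_neurons, dtype=int)
    spikes[winners] = 1
    return spikes

print("dark :", sparse_brightness_code(0.1))
print("light:", sparse_brightness_code(0.9))
```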
  4. Zhang, Yanqing (Ed.)
    Learning from complex, multidimensional data has become central to computational mathematics, and among the most successful high-dimensional function approximators are deep neural networks (DNNs). Training DNNs is posed as an optimization problem to learn network weights or parameters that well-approximate a mapping from input to target data. Multiway data or tensors arise naturally in myriad ways in deep learning, in particular as input data and as high-dimensional weights and features extracted by the network, with the latter often being a bottleneck in terms of speed and memory. In this work, we leverage tensor representations and processing to efficiently parameterize DNNs when learning from high-dimensional data. We propose tensor neural networks (t-NNs), a natural extension of traditional fully-connected networks, that can be trained efficiently in a reduced, yet more powerful parameter space. Our t-NNs are built upon matrix-mimetic tensor-tensor products, which retain algebraic properties of matrix multiplication while capturing high-dimensional correlations. Mimeticity enables t-NNs to inherit desirable properties of modern DNN architectures. We exemplify this by extending recent work on stable neural networks, which interpret DNNs as discretizations of differential equations, to our multidimensional framework. We provide empirical evidence of the parametric advantages of t-NNs on dimensionality reduction using autoencoders and classification using fully-connected and stable variants on benchmark imaging datasets MNIST and CIFAR-10. 
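The matrix-mimetic tensor-tensor product that t-NN layers build on is commonly the t-product of Kilmer and Martin: transform along the third mode, multiply the frontal slices, and transform back. The minimal sketch below follows that standard construction; the toy shapes and the "tensor layer" framing are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the t-product (Kilmer-Martin): DFT along the third mode,
# facewise matrix products in the transform domain, then an inverse DFT.
import numpy as np

def t_product(A, B):
    """t-product of A (m x p x n) with B (p x q x n), returning an m x q x n tensor."""
    assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
    A_hat = np.fft.fft(A, axis=2)             # transform along the tube dimension
    B_hat = np.fft.fft(B, axis=2)
    n = A.shape[2]
    C_hat = np.empty((A.shape[0], B.shape[1], n), dtype=complex)
    for i in range(n):                         # multiply corresponding frontal slices
        C_hat[:, :, i] = A_hat[:, :, i] @ B_hat[:, :, i]
    return np.real(np.fft.ifft(C_hat, axis=2))

# A toy "tensor layer": a weight tensor W acts on multiway features X via the t-product.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3, 5))             # illustrative weight tensor
X = rng.standard_normal((3, 2, 5))             # illustrative multiway input features
print("layer output shape:", t_product(W, X).shape)   # -> (4, 2, 5)
```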
  5.
    Abstract: Modeling student learning processes is highly complex, since learning is influenced by many factors such as motivation and learning habits. The high volume of features and tools provided by computer-based learning environments further confounds the task of tracking student knowledge. Deep learning models such as Long Short-Term Memory (LSTM) networks and classic Markovian models such as Bayesian Knowledge Tracing (BKT) have been successfully applied to student modeling. However, much of this prior work is designed to handle sequences of events with discrete timesteps rather than the continuous nature of time. Given that the time elapsed between successive elements in a student's trajectory can vary from seconds to days, we apply a Time-aware LSTM (T-LSTM) to model the dynamics of the student knowledge state in continuous time. We investigate the effectiveness of T-LSTM on two domains with very different characteristics. One involves an open-ended programming environment where students can self-pace their progress; here T-LSTM is compared against LSTM, Recent Temporal Pattern Mining, and classic Logistic Regression (LR) on the early prediction of student success. The other involves a classic tutor-driven intelligent tutoring system (ITS) where the tutor scaffolds student learning step by step; here T-LSTM is compared with LSTM, LR, and BKT on the early prediction of student learning gains. Our results show that T-LSTM significantly outperforms the other methods in the self-paced, open-ended programming environment, while on the tutor-driven ITS it ties with LSTM and outperforms both LR and BKT. In other words, while time irregularity exists in both datasets, T-LSTM works significantly better than the other student models when the pace is driven by students; when the irregularity results from the tutor, T-LSTM is not superior to the other models, but its performance is not hurt either.
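The abstract does not give the T-LSTM equations; the sketch below follows the commonly used formulation (Baytas et al., 2017), in which the short-term component of the previous cell memory is discounted by a decreasing function of the elapsed time before the standard LSTM gates are applied. Dimensions and weights are random placeholders, not the trained models from the study.

```python
# Hedged sketch of one time-aware LSTM step: decay the short-term part of the
# previous cell memory by a function of the elapsed time, then apply the usual
# LSTM gates. Sizes and weights below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 6, 4                                   # input and hidden sizes (illustrative)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Randomly initialized parameters standing in for learned weights.
W = {g: rng.standard_normal((d_h, d_in + d_h)) * 0.1 for g in "ifoc"}
b = {g: np.zeros(d_h) for g in "ifoc"}
W_d, b_d = rng.standard_normal((d_h, d_h)) * 0.1, np.zeros(d_h)  # memory-decomposition weights

def t_lstm_step(x, h_prev, c_prev, delta_t):
    """One T-LSTM step: delta_t is the time elapsed since the previous event."""
    c_short = np.tanh(W_d @ c_prev + b_d)          # short-term component of the memory
    c_long = c_prev - c_short                      # long-term component is kept intact
    decay = 1.0 / np.log(np.e + delta_t)           # monotonically decreasing time decay
    c_adj = c_long + decay * c_short               # time-adjusted previous memory
    z = np.concatenate([x, h_prev])
    i, f, o = (sigmoid(W[g] @ z + b[g]) for g in "ifo")
    c_tilde = np.tanh(W["c"] @ z + b["c"])
    c = f * c_adj + i * c_tilde                    # standard LSTM memory update
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
for dt in (1.0, 30.0, 86400.0):                    # seconds since the previous event
    x = rng.standard_normal(d_in)                  # the event's feature vector
    h, c = t_lstm_step(x, h, c, dt)
print("hidden state after 3 irregularly spaced events:", np.round(h, 3))
```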