Title: Layerwise Hebbian/anti-Hebbian (HaH) Learning In Deep Networks: A Neuro-inspired Approach To Robustness
Award ID(s):
1909320 2224263
PAR ID:
10359437
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ICML 2022 Workshop on New Frontiers in Adversarial Machine Learning
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding how neural circuits generate sequential activity is a longstanding challenge. While foundational theoretical models have shown how sequences can be stored as memories in neural networks with Hebbian plasticity rules, these models considered only a narrow range of Hebbian rules. Here, we introduce a model for arbitrary Hebbian plasticity rules, capturing the diversity of spike-timing-dependent synaptic plasticity seen in experiments, and show how the choice of these rules and of neural activity patterns influences sequence memory formation and retrieval. In particular, we derive a general theory that predicts the tempo of sequence replay. This theory lays a foundation for explaining how cortical tutor signals might give rise to motor actions that eventually become “automatic.” Our theory also captures the impact of changing the tempo of the tutor signal. Beyond shedding light on biological circuits, this theory has relevance in artificial intelligence by laying a foundation for frameworks whereby slow and computationally expensive deliberation can be stored as memories and eventually replaced by inexpensive recall. 
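The sequence-memory mechanism described above can be illustrated with the classical asymmetric (temporal) Hebbian rule, in which synapses from the pattern active at one step onto the pattern active at the next step are potentiated. The sketch below is illustrative only: the network size, binary sign-activation dynamics, and random patterns are assumptions, not the specific model family the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 5  # neurons, sequence length (illustrative sizes)

# Random binary patterns forming the sequence to be stored
xi = rng.choice([-1.0, 1.0], size=(T, N))

# Asymmetric Hebbian rule: strengthen synapses from the pattern at
# step t onto the pattern at step t+1, so recall advances the sequence.
W = np.zeros((N, N))
for t in range(T - 1):
    W += np.outer(xi[t + 1], xi[t]) / N

# Replay: cue the network with the first pattern and iterate
x = xi[0].copy()
replay = [x]
for _ in range(T - 1):
    x = np.sign(W @ x)
    replay.append(x)

# Overlap of each replayed state with the stored pattern at that step
overlaps = [float(r @ p) / N for r, p in zip(replay, xi)]
print(overlaps)  # each overlap should be close to 1.0
```

Because the weight matrix is asymmetric, the fixed points of a symmetric Hopfield network are replaced by a directed flow through the stored states, which is what makes sequence replay (rather than static recall) possible.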
  2. A brain-computer interface (BCI) translates brain signals into executable actions by establishing direct communication between the human brain and external devices. Brain activity recorded through electroencephalography (EEG) is generally contaminated with both physiological and non-physiological artifacts, which significantly hinder BCI performance. Artifact subspace reconstruction (ASR) is a well-known statistical technique that automatically removes artifact components by determining a rejection threshold from an initial reference EEG segment in multichannel EEG recordings. In real-world applications, this fixed threshold may limit the efficacy of artifact correction, especially when the quality of the reference data is poor. This study proposes an adaptive online ASR technique that integrates Hebbian/anti-Hebbian neural networks into the ASR algorithm: principal subspace projection ASR (PSP-ASR) and principal subspace whitening ASR (PSW-ASR), which segmentwise self-organize the artifact subspace by updating the synaptic weights according to Hebbian and anti-Hebbian learning rules. The effectiveness of the proposed algorithms is compared to conventional ASR approaches on a benchmark EEG dataset and three BCI frameworks, namely steady-state visual evoked potential (SSVEP), rapid serial visual presentation (RSVP), and motor imagery (MI), by evaluating the root-mean-square error (RMSE), signal-to-noise ratio (SNR), Pearson correlation, and classification accuracy. The results demonstrate that the PSW-ASR algorithm effectively removes EEG artifacts and retains activity-specific brain signals compared to the PSP-ASR, standard ASR (Init-ASR), and moving-window ASR (MW-ASR) methods, thereby enhancing SSVEP, RSVP, and MI BCI performance. Finally, our empirical results for the PSW-ASR algorithm suggest an aggressive cutoff range of c = 1-10 for activity-specific BCI applications and a moderate range for the benchmark dataset and general BCI applications.
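The whitening component of a network like PSW-ASR can be sketched with a classical anti-Hebbian lateral-inhibition circuit: fast neural dynamics equilibrate to y = M⁻¹x, and the slow anti-Hebbian update drives the output covariance toward the identity. This is a generic whitening circuit under assumed toy inputs (correlated Gaussians standing in for multichannel EEG), not the paper's actual PSW-ASR algorithm; the learning rate and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_steps = 4, 50000
eta = 0.002  # hypothetical learning rate

# Correlated Gaussian inputs (a stand-in for multichannel EEG segments)
A = rng.normal(size=(d, d))
C = A @ A.T / d + 0.5 * np.eye(d)  # input covariance (well conditioned)
Lc = np.linalg.cholesky(C)

M = np.eye(d)  # lateral (inhibitory) synaptic weights
for _ in range(n_steps):
    x = Lc @ rng.normal(size=d)
    y = np.linalg.solve(M, x)  # equilibrium of fast dynamics dy/dt = x - M y
    # Anti-Hebbian update: grow inhibition where outputs are correlated,
    # with a decay term that sets the fixed point <y y^T> = I
    M += eta * (np.outer(y, y) - np.eye(d))

# After learning, outputs are approximately whitened
Y = np.linalg.solve(M, Lc @ rng.normal(size=(d, 2000)))
print(np.round(np.cov(Y), 2))  # approximately the identity matrix
```

At the fixed point of the update, M converges toward the symmetric square root of the input covariance, so the output covariance M⁻¹CM⁻¹ is the identity; projecting onto or rejecting the leading directions of such a self-organized subspace is the intuition behind the PSP/PSW variants described above.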