Title: Adversary on Multimodal BCI-based Classification
Neural networks (NNs) have been adopted in brain-computer interfaces (BCIs) to encode brain signals acquired using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). However, NN models have been found to be vulnerable to adversarial examples, i.e., samples corrupted with imperceptible noise. Once attacked, such models could impact medical diagnosis and patients' quality of life. While early work focused on interference from external devices at the time of signal acquisition, recent research has shifted to attacks on the collected signals, features, and learning models under various attack modes (e.g., white-, grey-, and black-box). However, existing work considers only single-modality attacks and ignores the topological relationships among different observations, e.g., samples with strong similarities. Different from previous approaches, we introduce graph neural networks (GNNs) to multimodal BCI-based classification and explore their performance and robustness against adversarial attacks. This study evaluates the robustness of NN models with and without graph knowledge on both single-modal and multimodal data.
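To make the threat model concrete, below is a minimal, hedged sketch of a white-box, gradient-sign (FGSM-style) attack on a toy classifier over fused EEG/fNIRS features. The feature dimensions, model architecture, and noise budget are illustrative assumptions and do not reproduce the paper's models or data.

```python
# Hedged sketch of a white-box, FGSM-style adversarial attack on a toy classifier
# over fused EEG/fNIRS features. Dimensions, architecture, and epsilon are assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_eeg, n_fnirs, n_classes = 64, 32, 2                     # assumed feature dimensions
model = nn.Sequential(                                     # stand-in for an NN/GNN encoder
    nn.Linear(n_eeg + n_fnirs, 128), nn.ReLU(),
    nn.Linear(128, n_classes),
)

x = torch.randn(8, n_eeg + n_fnirs, requires_grad=True)   # fused EEG+fNIRS samples (toy)
y = torch.randint(0, n_classes, (8,))

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.05                                                 # "imperceptible" noise budget (assumed)
x_adv = (x + eps * x.grad.sign()).detach()                 # step along the sign of the loss gradient

with torch.no_grad():
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The same gradient-based recipe applies whether the classifier is a plain NN or a GNN; what changes is whether the perturbation must also respect the graph structure linking observations.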
Award ID(s):
2050972 2024418
PAR ID:
10403080
Author(s) / Creator(s):
Date Published:
Journal Name:
11th International IEEE EMBS Conference on Neural Engineering
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Applications of multimodal neuroimaging techniques, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), have gained prominence in recent years, and they are widely used in brain–computer interface (BCI) and neuropathological diagnosis applications. Most existing approaches assume observations are independent and identically distributed (i.i.d.) and ignore the differences among subjects. It has been challenging to model subject groups so as to maintain topological information (e.g., patient graphs) while fusing BCI signals for discriminant feature learning. In this article, we introduce a topology-aware graph-based multimodal fusion (TaGMF) framework to classify amyotrophic lateral sclerosis (ALS) and healthy subjects. Our framework is built on graph neural networks (GNNs) but with two unique contributions. First, a novel topology-aware graph (TaG) is proposed to model subject groups by considering: 1) intersubject; 2) intrasubject; and 3) intergroup relations. Second, the learned representations of each subject's EEG and fNIRS signals allow for exploration of different fusion strategies along with the TaGMF optimization. Our analysis demonstrates the effectiveness of our graph-based fusion approach in multimodal classification, achieving a 22.6% performance improvement over classical approaches.
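The following is a rough, self-contained sketch of the topology-aware graph idea (hypothetical dimensions and group assignments; not the authors' TaGMF implementation): subjects become nodes, edge weights encode inter-subject, intra-subject, and inter-group relations, and a single hand-rolled graph-convolution step aggregates concatenated EEG and fNIRS embeddings.

```python
# Hedged sketch of a topology-aware subject graph in the spirit of TaGMF.
# All sizes, group labels, and edge weights are illustrative assumptions.
import torch

torch.manual_seed(0)
n_subjects, d = 6, 16
groups = torch.tensor([0, 0, 0, 1, 1, 1])        # e.g. ALS vs. healthy (assumed split)

eeg = torch.randn(n_subjects, d)                 # per-subject EEG embedding (toy)
fnirs = torch.randn(n_subjects, d)               # per-subject fNIRS embedding (toy)
x = torch.cat([eeg, fnirs], dim=1)               # simple concatenation fusion

# Adjacency: connect subjects within the same group (inter-subject relations),
# add a weaker hypothetical inter-group link, and self-loops for intra-subject relations.
A = (groups[:, None] == groups[None, :]).float()
A[0, 3] = A[3, 0] = 0.5                          # hypothetical inter-group relation
A.fill_diagonal_(1.0)                            # self-loop ~ intra-subject relation

deg = A.sum(1)
A_norm = A / deg[:, None]                        # row-normalized aggregation

W = torch.nn.Linear(2 * d, 2)                    # 2 classes: ALS vs. healthy
logits = W(A_norm @ x)                           # one graph-convolution step
print(logits.shape)                              # (n_subjects, 2)
```

In a real system the adjacency would be derived from subject metadata and learned similarity rather than hand-set weights, and the fusion step would be optimized jointly with the GNN.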
  2. The burgeoning fields of machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems across various domains. However, their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications. In this study, we present a comparative analysis of the vulnerability of ML and QML models, specifically conventional neural networks (NN) and quantum neural networks (QNN), to adversarial attacks using a malware dataset. We utilize a software supply chain attack dataset known as ClaMP and develop two distinct models for QNN and NN, employing Pennylane for quantum implementations and TensorFlow and Keras for traditional implementations. Our methodology involves crafting adversarial samples by introducing random noise to a small portion of the dataset and evaluating the impact on the models’ performance using accuracy, precision, recall, and F1 score metrics. Based on our observations, both ML and QML models exhibit vulnerability to adversarial attacks. While the QNN’s accuracy decreases more significantly compared to the NN after the attack, it demonstrates better performance in terms of precision and recall, indicating higher resilience in detecting true positives under adversarial conditions. We also find that adversarial samples crafted for one model type can impair the performance of the other, highlighting the need for robust defense mechanisms. Our study serves as a foundation for future research focused on enhancing the security and resilience of ML and QML models, particularly QNN, given its recent advancements. A more extensive range of experiments will be conducted to better understand the performance and robustness of both models in the face of adversarial attacks. 
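As a concrete illustration of the attack setup described above, the sketch below perturbs a small fraction of a synthetic stand-in for the ClaMP features with random noise and reports accuracy, precision, recall, and F1; the data, model size, perturbed fraction, and noise scale are all assumptions rather than the study's actual configuration.

```python
# Hedged sketch: random-noise adversarial samples on a synthetic stand-in for ClaMP,
# with a small scikit-learn MLP in place of the NN/QNN models. All settings assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                  # placeholder for ClaMP features
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # placeholder malware/benign label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)

# Perturb a small fraction of the test set with random noise (assumed 20%, sigma = 1.0).
X_adv = X_te.copy()
idx = rng.choice(len(X_adv), size=int(0.2 * len(X_adv)), replace=False)
X_adv[idx] += rng.normal(scale=1.0, size=X_adv[idx].shape)

for name, data in [("clean", X_te), ("perturbed", X_adv)]:
    pred = clf.predict(data)
    print(name,
          f"acc={accuracy_score(y_te, pred):.2f}",
          f"prec={precision_score(y_te, pred):.2f}",
          f"rec={recall_score(y_te, pred):.2f}",
          f"f1={f1_score(y_te, pred):.2f}")
```

A quantum counterpart would follow the same evaluation loop, swapping the classifier for a variational circuit (e.g., built with Pennylane), which is why the metric comparison transfers directly between the NN and QNN settings.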
  3. The potential of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) combined with topological information about participants is often left unexplored in brain-computer interface (BCI) systems. In addition, the joint use of these modalities in multimodal analysis to improve BCI performance has not been fully examined. This study first presents a multimodal data fusion framework to exploit and decode the complementary, synergistic properties of multimodal neural signals. Moreover, the relations among different subjects and their observations also play critical roles in classifying unknown subjects. We developed a context-aware graph neural network (GNN) model utilizing pairwise relationships among participants to investigate performance on an auditory task classification. We explored standard and deviant auditory EEG and fNIRS data in which each subject performed an auditory oddball task over multiple trials, with each trial treated as a context-aware node in our graph construction. In experiments, our multimodal data fusion strategy showed an improvement of up to 8.40% via SVM and 2.02% via GNN, compared to single-modal EEG or fNIRS. In addition, our context-aware GNN achieved 5.3%, 4.07%, and 4.53% higher accuracy for EEG-, fNIRS-, and multimodal-data-based experiments, respectively, compared to the baseline models.
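A minimal sketch of the single-modality versus fused-feature comparison follows, using synthetic trial features and an SVM baseline; the actual auditory-oddball features, graph construction, and GNN model are not reproduced, and all numbers it prints are illustrative.

```python
# Hedged sketch: compare an SVM on EEG-only, fNIRS-only, and concatenated (fused)
# trial features. Data are synthetic stand-ins for standard/deviant oddball trials.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 300
y = rng.integers(0, 2, n_trials)                            # standard vs. deviant (toy labels)
eeg = rng.normal(size=(n_trials, 30)) + 0.4 * y[:, None]    # weakly informative EEG features
fnirs = rng.normal(size=(n_trials, 10)) + 0.4 * y[:, None]  # weakly informative fNIRS features

for name, feats in [("EEG", eeg), ("fNIRS", fnirs),
                    ("fused", np.hstack([eeg, fnirs]))]:
    acc = cross_val_score(SVC(kernel="rbf"), feats, y, cv=5).mean()
    print(f"{name:>6}: {acc:.3f}")
```

The context-aware GNN variant would additionally link trials of the same (and similar) participants as graph edges, so each node's prediction can borrow statistical strength from its neighbors.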
  4. Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been criticized. Prior studies show that unnoticeable modifications to graph topology or nodal features can significantly reduce the performance of GNNs. Designing graph neural networks that are robust against poisoning attacks is very challenging, and several efforts have been made. Existing work aims at reducing the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal since it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge to train the ability to detect adversarial edges, thereby elevating the robustness of GNNs. However, this potential of clean graphs is neglected by existing work. To this end, we investigate a novel problem of improving the robustness of GNNs against poisoning attacks by exploiting clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
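Below is a rough sketch of the penalized-aggregation idea (not the published PA-GNN code): per-edge attention logits are computed, and edges suspected of being adversarial receive an extra negative penalty so that their aggregation weights shrink after the softmax. The edge labels, penalty strength, and dimensions are all assumed.

```python
# Hedged sketch of penalized attention aggregation over a toy graph.
# "suspected" marks edges assumed adversarial; in PA-GNN this signal is learned
# from perturbed clean graphs rather than given directly.
import torch

torch.manual_seed(0)
n, d = 5, 8
x = torch.randn(n, d)                                       # node features
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [0, 4]])  # toy edge list (src, dst)
suspected = torch.tensor([0., 0., 1., 0., 1.])              # 1 = edge flagged as adversarial (assumed)

a = torch.nn.Linear(2 * d, 1)                               # attention scoring function
src, dst = edges[:, 0], edges[:, 1]
logits = a(torch.cat([x[src], x[dst]], dim=1)).squeeze(-1)
logits = logits - 5.0 * suspected                           # penalty lowers adversarial attention

# Softmax-normalize attention over the incoming edges of each destination node.
alpha = torch.zeros_like(logits)
for node in dst.unique():
    mask = dst == node
    alpha[mask] = torch.softmax(logits[mask], dim=0)

out = torch.zeros(n, d)
out.index_add_(0, dst, alpha[:, None] * x[src])             # penalized message aggregation
print(alpha)
```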
  5. Neural networks (NNs) are increasingly employed in safety-critical systems. It is therefore necessary to ensure that these NNs are robust against malicious interference in the form of adversarial attacks, which cause an NN to misclassify inputs. Many proposed defenses against such attacks incorporate randomness in order to make it harder for an attacker to find small input modifications that result in misclassification. Stochastic computing (SC) is a type of approximate computing based on pseudo-random bit-streams that has been successfully used to implement convolutional neural networks (CNNs). Previous results have suggested that such stochastic CNNs (SCNNs) are partially robust against adversarial attacks. In this work, we demonstrate that SCNNs do indeed possess inherent protection against some powerful adversarial attacks. Our results show that the white-box C&W attack is up to 16x less successful against an SCNN than against an equivalent binary NN, and the Boundary Attack even fails to generate adversarial inputs in many cases.
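To illustrate the stochastic-computing principle that SCNNs build on, the short sketch below encodes values in [0, 1] as pseudo-random bit-streams and shows that multiplication reduces to a bitwise AND of independent streams; the stream length and example values are arbitrary, and the CNN itself is not shown.

```python
# Hedged sketch of unipolar stochastic computing: a value in [0, 1] becomes a
# pseudo-random bit-stream, and multiplying two values reduces to a bitwise AND.
import numpy as np

rng = np.random.default_rng(0)
stream_len = 4096                                # longer streams -> lower approximation error

def to_stream(p, n=stream_len):
    """Encode probability p as a random bit-stream whose mean is approximately p."""
    return rng.random(n) < p

a, b = 0.7, 0.4
prod_stream = to_stream(a) & to_stream(b)        # AND of independent streams multiplies values
print("exact:", a * b, "stochastic estimate:", prod_stream.mean())
```

The randomness of the bit-streams is exactly what the cited work credits with blunting gradient- and query-based attacks: repeated evaluations of the same input yield slightly different internal values, which degrades the precise feedback such attacks rely on.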