

Search for: All records

Award ID contains: 2024418

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Multi-view learning is a rapidly evolving research area focused on developing diverse learning representations. In neural data analysis, this approach holds immense potential by capturing spatial, temporal, and frequency features. Despite its promise, the application of multi-view learning to functional near-infrared spectroscopy (fNIRS) has remained largely unexplored. This study addresses this gap by introducing fNIRSNET, a novel framework that generates and fuses multi-view spatio-temporal representations using convolutional neural networks. It investigates the combined informational strength of oxygenated (HbO2) and deoxygenated (HbR) hemoglobin signals, further extending these capabilities by integrating with electroencephalography (EEG) networks to achieve robust multimodal classification. Experiments involved classifying neural responses to auditory stimuli in nine healthy participants. fNIRS signals were decomposed into HbO2/HbR concentration changes, resulting in Parallel and Merged input types. We evaluated four input types across three data compositions: balanced, subject, and complete datasets. We compared fNIRSNET's performance with eight baseline classification models and merged it with four common EEG networks to assess the efficacy of combined features for multimodal classification. Compared to the baselines, fNIRSNET with the Merged input type achieved the highest accuracies of 83.22%, 81.18%, and 91.58% on the balanced, subject, and complete datasets, respectively. On the complete set, the approach effectively mitigated class-imbalance issues, achieving a sensitivity of 83.58% and a specificity of 95.42%. Multimodal fusion of EEG networks and fNIRSNET outperformed single-modality performance, with the highest accuracy of 87.15% on balanced data. Overall, this study introduces an innovative fusion approach for decoding fNIRS data and illustrates its integration with established EEG networks to enhance performance.
    Free, publicly-accessible full text available November 1, 2025
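The Parallel and Merged input types described in the first abstract can be sketched as simple tensor layouts. This is a minimal illustration, not the authors' implementation; the channel and sample counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 36 fNIRS channels, 256 time samples per trial.
n_channels, n_samples = 36, 256
hbo = rng.standard_normal((n_channels, n_samples))  # HbO2 concentration changes
hbr = rng.standard_normal((n_channels, n_samples))  # HbR concentration changes

# "Parallel" input: HbO2 and HbR kept as separate views,
# e.g., fed to two parallel CNN branches before fusion.
parallel = (hbo[np.newaxis], hbr[np.newaxis])

# "Merged" input: HbO2 and HbR stacked into one two-channel tensor,
# e.g., fed to a single CNN as an image with two input channels.
merged = np.stack([hbo, hbr], axis=0)

print(parallel[0].shape, merged.shape)  # (1, 36, 256) (2, 36, 256)
```

The Merged layout lets the first convolutional layer mix HbO2 and HbR directly, while the Parallel layout defers fusion to a later stage of the network.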
  2. Applications of multimodal neuroimaging techniques, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), have gained prominence in recent years, and they are widely practiced in brain–computer interface (BCI) and neuropathological diagnosis applications. Most existing approaches assume observations are independent and identically distributed (i.i.d.), as shown in the top section of the right figure, yet ignore differences among subjects. It has been challenging to model subject groups in a way that maintains topological information (e.g., patient graphs) while fusing BCI signals for discriminant feature learning. In this article, we introduce a topology-aware graph-based multimodal fusion (TaGMF) framework to classify amyotrophic lateral sclerosis (ALS) and healthy subjects, illustrated in the lower section of the right image. Our framework is built on graph neural networks (GNNs) but makes two unique contributions. First, a novel topology-aware graph (TaG) is proposed to model subject groups by considering: 1) intersubject; 2) intrasubject; and 3) intergroup relations. Second, the learned representations of the EEG and fNIRS signals of each subject allow for exploration of different fusion strategies along with the TaGMF optimizations. Our analysis demonstrates the effectiveness of our graph-based fusion approach in multimodal classification, achieving a 22.6% performance improvement over classical approaches.
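The three relation types that define the topology-aware graph (intrasubject, intersubject, intergroup) can be sketched as an adjacency-matrix construction. The node layout, groups, and edge weights here are hypothetical illustrations, not the TaG construction from the paper.

```python
import numpy as np

# Hypothetical cohort: each node is one observation; each node belongs to
# a subject, and each subject to a group (1 = ALS, 0 = healthy control).
subject_of_node = np.array([0, 0, 1, 1, 2, 2])
group_of_subject = np.array([1, 1, 0])

n = subject_of_node.size
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        si, sj = subject_of_node[i], subject_of_node[j]
        if si == sj:
            A[i, j] = 1.0   # intrasubject relation (same subject)
        elif group_of_subject[si] == group_of_subject[sj]:
            A[i, j] = 0.5   # intersubject relation (same group)
        else:
            A[i, j] = 0.1   # intergroup relation (across groups)
```

A GNN operating on this adjacency can then weight message passing differently for within-subject, within-group, and cross-group neighbors.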
  3. Multimodal neuroimaging using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) provides complementary views of cortical processes, including those related to auditory processing. However, current multimodal approaches often overlook potential insights that can be gained from nonlinear interactions between electrical and hemodynamic signals. Here, we explore electro-vascular phase-amplitude coupling (PAC) between low-frequency hemodynamic and high-frequency electrical oscillations during an auditory task. We further apply a temporally embedded canonical correlation analysis (tCCA)-general linear model (GLM)-based correction approach to reduce the possible effect of systemic physiology on fNIRS recordings. Before correction, we observed significant PAC between fNIRS and broadband EEG in the frontal region (p ≪ 0.05), β (p ≪ 0.05) and γ (p = 0.010) in the left temporal/temporoparietal (left auditory; LA) region, and γ (p = 0.032) in the right temporal/temporoparietal (right auditory; RA) region across the entire dataset. Significant differences in PAC across conditions (task versus silence) were observed in LA (p = 0.023) and RA (p = 0.049) γ sub-bands and in lower frequency (5-20 Hz) frontal activity (p = 0.005). After correction, significant fNIRS-γ-band PAC was observed in the frontal (p = 0.021) and LA (p = 0.025) regions, while fNIRS-α (p = 0.003) and fNIRS-β (p = 0.041) PAC were observed in RA. Decreased frontal γ-band (p = 0.008) and increased β-band (p ≪ 0.05) PAC were observed during the task. These outcomes represent the first characterization of electro-vascular PAC between fNIRS and EEG signals during an auditory task, providing insights into electro-vascular coupling in auditory processing. 
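The phase-amplitude coupling measure in the third abstract can be sketched with a mean-vector-length (Canolty-style) modulation index on synthetic data. The signal frequencies here are illustrative stand-ins (real hemodynamic fluctuations are slower than the 0.5 Hz used here), and this is not the authors' analysis pipeline.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 20, 1 / fs)            # 20 s of synthetic data (even length)

# Synthetic coupling: a 0.5 Hz slow oscillation whose phase modulates the
# amplitude of a 40 Hz gamma-band oscillation.
slow = np.sin(2 * np.pi * 0.5 * t)
fast = (1 + 0.8 * slow) * np.sin(2 * np.pi * 40 * t)

def analytic(x):
    """Analytic signal via the FFT (even-length input assumed)."""
    n = x.size
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

phase = np.angle(analytic(slow))        # phase of the slow signal
amp = np.abs(analytic(fast))            # envelope of the fast signal

# Mean-vector-length PAC: large when gamma amplitude tracks slow phase.
pac = np.abs(np.mean(amp * np.exp(1j * phase)))

# Uncoupled control: constant-amplitude gamma yields near-zero PAC.
amp0 = np.abs(analytic(np.sin(2 * np.pi * 40 * t)))
pac0 = np.abs(np.mean(amp0 * np.exp(1j * phase)))
```

Comparing `pac` against `pac0` (or against a surrogate distribution) is what turns the raw modulation index into a statistical test of coupling.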
  4. The prospect of combining electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) with topological information about participants is often left unexplored in most brain-computer interface (BCI) systems. Additionally, the combined use of these modalities in multimodal analysis, leveraging multiple brain signals to improve BCI performance, has not been fully examined. This study first presents a multimodal data fusion framework to exploit and decode the complementary synergistic properties of multimodal neural signals. Moreover, the relations among different subjects and their observations also play critical roles in classifying unknown subjects. We developed a context-aware graph neural network (GNN) model that utilizes the pairwise relationships among participants to investigate performance on an auditory task classification. We explored standard and deviant auditory EEG and fNIRS data, where each subject performed an auditory oddball task over multiple trials, which are regarded as context-aware nodes in our graph construction. In experiments, our multimodal data fusion strategy showed an improvement of up to 8.40% via SVM and 2.02% via GNN, compared to single-modal EEG or fNIRS. In addition, our context-aware GNN achieved 5.3%, 4.07%, and 4.53% higher accuracy on the EEG, fNIRS, and multimodal experiments, respectively, compared to the baseline models.
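Treating trials as graph nodes, as the fourth abstract describes, comes down to propagating node features over a trial-level adjacency. Below is one generic GCN-style propagation step on a hypothetical graph; the sizes, edges, and single-layer form are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical graph: 6 trial nodes with 4-d fused EEG+fNIRS features;
# edges connect trials that share a subject (the "context" relation).
X = rng.standard_normal((6, 4))
A = np.array([[0, 1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 0, 0, 0]], dtype=float)

# One GCN-style propagation step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
A_hat = A + np.eye(6)                               # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = rng.standard_normal((4, 2))                     # learned weights (random here)
H = np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```

After propagation, each trial's embedding mixes in features from same-subject trials, which is the mechanism by which context can improve classification of unknown subjects.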
  5. Neural networks (NNs) have been adopted by brain-computer interfaces (BCIs) to encode brain signals acquired using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). However, NN models have been found to be vulnerable to adversarial examples, i.e., corrupted samples with imperceptible noise. Once attacked, such models could impact medical diagnosis and patients' quality of life. While early work focused on interference from external devices at the time of signal acquisition, recent research has shifted to attacks on collected signals, features, and learning models under various attack modes (e.g., white-, grey-, and black-box). However, existing work only considers single-modality attacks and ignores the topological relationships among different observations, e.g., samples with strong similarities. Different from previous approaches, we introduce graph neural networks (GNNs) for multimodal BCI-based classification and explore their performance and robustness against adversarial attacks. This study evaluates the robustness of NN models with and without graph knowledge on both single-modal and multimodal data.
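The white-box attacks mentioned in the last abstract are typified by the fast gradient sign method (FGSM). A minimal sketch on a stand-in linear classifier follows; the model, feature dimension, and epsilon are hypothetical, and the abstract does not specify which attack the study uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: logistic regression on
# 8-d features (e.g., fused EEG/fNIRS features for one trial).
w = rng.standard_normal(8)
b = 0.0
x = rng.standard_normal(8)
y = 1.0                                   # true class label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy loss w.r.t. the *input* x.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: a small, bounded step in the direction of increasing loss,
# producing an "imperceptibly" perturbed sample.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)            # confidence in the true class drops
```

Each coordinate of the perturbation is bounded by eps, which is what makes the corrupted sample hard to distinguish from the original while still degrading the classifier.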