Most brain-computer interface (BCI) systems leave the potential of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) in the presence of topological information about participants unexplored. Likewise, the joint use of these modalities in multimodal analysis, combining multiple brain signals to improve BCI performance, has not been fully examined. This study first presents a multimodal data fusion framework to exploit and decode the complementary synergistic properties of multimodal neural signals. Moreover, the relations among different subjects and their observations also play critical roles in classifying unknown subjects. We developed a context-aware graph neural network (GNN) model that utilizes the pairwise relationships among participants to investigate performance on an auditory task classification. We explored standard and deviant auditory EEG and fNIRS data in which each subject performed an auditory oddball task and contributed multiple trials, regarded as context-aware nodes in our graph construction. In experiments, our multimodal data fusion strategy showed an improvement of up to 8.40% via SVM and 2.02% via GNN, compared to single-modal EEG or fNIRS. In addition, our context-aware GNN achieved 5.3%, 4.07%, and 4.53% higher accuracy for the EEG, fNIRS, and multimodal experiments, respectively, compared to the baseline models.
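No implementation accompanies this abstract, but the core idea (trials as context-aware graph nodes classified with a GNN) can be sketched with a minimal graph convolution in PyTorch. The layer is a generic GCN update, not the authors' architecture; the node count, feature dimension, and similarity threshold below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """Generic graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        return torch.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ self.lin(h))

# Hypothetical graph: 90 trial nodes, each a 64-dim fused EEG+fNIRS feature vector.
feats = torch.randn(90, 64)
# Context-aware edges: link trials whose pairwise feature similarity is high.
sim = F.cosine_similarity(feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)
adj = (sim > 0.5).float()                           # illustrative threshold

hidden = GCNLayer(64, 32)(feats, adj)               # node embeddings
logits = nn.Linear(32, 2)(hidden)                   # standard vs. deviant per node
print(logits.shape)                                 # torch.Size([90, 2])
```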
Multimodal fusion of EEG-fNIRS: a mutual information-based hybrid classification framework
Multimodal data fusion is one of the primary current neuroimaging research directions for overcoming the fundamental limitations of individual modalities by exploiting complementary information across modalities. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are especially compelling modalities due to their potentially complementary features reflecting the electro-hemodynamic characteristics of neural responses. However, current multimodal studies lack a comprehensive, systematic approach to properly merge the complementary features of their multimodal data. Identifying a systematic approach to fusing EEG-fNIRS data and exploiting their complementary potential is crucial for improving performance. This paper proposes a framework for classifying fused EEG-fNIRS data at the feature level, relying on a mutual information-based feature selection approach that accounts for the complementarity between features. The goal is to optimize the complementarity, redundancy, and relevance of multimodal features with respect to the class labels, here a pathological condition versus healthy control. Nine amyotrophic lateral sclerosis (ALS) patients and nine controls underwent multimodal data recording during a visuo-mental task. Multiple spectral and temporal features were extracted and fed to a feature selection algorithm followed by a classifier, which selected an optimized subset of features through a cross-validation process. The results demonstrated considerably improved hybrid classification performance compared to the individual modalities and to conventional classification without feature selection, suggesting the potential efficacy of our proposed framework for wider neuro-clinical applications.
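As a rough, generic illustration of the mutual information-based selection described above (not the authors' exact criterion), the greedy loop below scores each candidate feature by its relevance to the class labels minus its average redundancy with the features already chosen, an mRMR-style trade-off; the data matrix and selection size are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, n_select):
    """Greedy max-relevance, min-redundancy feature selection."""
    relevance = mutual_info_classif(X, y)          # I(f_j; y) for every feature
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        best, best_score = None, -np.inf
        for j in remaining:
            if selected:
                # average I(f_j; f_k) over the features already chosen
                redundancy = np.mean(
                    [mutual_info_regression(X[:, [j]], X[:, k])[0] for k in selected]
                )
            else:
                redundancy = 0.0
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical fused matrix: rows = trials, columns = EEG + fNIRS features.
X = np.random.randn(120, 40)
y = np.random.randint(0, 2, 120)                   # ALS patient vs. healthy control
print(mrmr_select(X, y, n_select=5))
```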
- Award ID(s): 1913492
- PAR ID: 10215403
- Publisher / Repository: Optical Society of America
- Date Published:
- Journal Name: Biomedical Optics Express
- Volume: 12
- Issue: 3
- ISSN: 2156-7085
- Format(s): Medium: X; Size: Article No. 1635
- Sponsoring Org: National Science Foundation
More Like this
-
Multi-view learning is a rapidly evolving research area focused on developing diverse learning representations. In neural data analysis, this approach holds immense potential by capturing spatial, temporal, and frequency features. Despite its promise, multi-view application to functional near-infrared spectroscopy (fNIRS) has remained largely unexplored. This study addresses this gap by introducing fNIRSNET, a novel framework that generates and fuses multi-view spatio-temporal representations using convolutional neural networks. It investigates the combined informational strength of oxygenated (HbO2) and deoxygenated (HbR) hemoglobin signals, further extending these capabilities by integrating with electroencephalography (EEG) networks to achieve robust multimodal classification. Experiments involved classifying neural responses to auditory stimuli with nine healthy participants. fNIRS signals were decomposed into HbO2/HbR concentration changes, resulting in Parallel and Merged input types. We evaluated four input types across three data compositions: balanced, subject, and complete datasets. fNIRSNET's performance was compared with eight baseline classification models, and it was merged with four common EEG networks to assess the efficacy of combined features for multimodal classification. Compared to baselines, fNIRSNET using the Merged input type achieved the highest accuracy of 83.22%, 81.18%, and 91.58% for the balanced, subject, and complete datasets, respectively. In the complete set, the approach effectively mitigated class-imbalance issues, achieving a sensitivity of 83.58% and a specificity of 95.42%. Multimodal fusion of EEG networks and fNIRSNET outperformed single-modality performance with the highest accuracy of 87.15% on balanced data. Overall, this study introduces an innovative fusion approach for decoding fNIRS data and illustrates its integration with established EEG networks to enhance performance.
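The fNIRSNET architecture itself is not reproduced here, but the Merged versus Parallel input types can be illustrated with a toy convolutional model in PyTorch: Merged stacks HbO2 and HbR as two input channels of one branch, while Parallel runs one branch per chromophore and concatenates the resulting features. All shapes and layer sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy inputs: batch of 8 trials, 24 fNIRS channels, 100 time samples per chromophore.
hbo = torch.randn(8, 1, 24, 100)   # oxygenated hemoglobin (HbO2)
hbr = torch.randn(8, 1, 24, 100)   # deoxygenated hemoglobin (HbR)

# "Merged": HbO2 and HbR stacked as two input channels of a single branch.
merged_branch = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=(1, 11), padding=(0, 5)),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
merged_feats = merged_branch(torch.cat([hbo, hbr], dim=1))             # -> (8, 16)

# "Parallel": one branch per chromophore, features concatenated afterwards.
def make_branch():
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=(1, 11), padding=(0, 5)),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

hbo_branch, hbr_branch = make_branch(), make_branch()
parallel_feats = torch.cat([hbo_branch(hbo), hbr_branch(hbr)], dim=1)  # -> (8, 16)

classifier = nn.Linear(16, 2)       # e.g., standard vs. deviant auditory response
print(classifier(merged_feats).shape, classifier(parallel_feats).shape)
```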
-
Background: Machine learning is a promising tool for biomarker-based diagnosis of Alzheimer's disease (AD). Performing multimodal feature selection and studying the interaction between biological and clinical AD can help to improve the performance of diagnosis models. Objective: This study aims to formulate a feature-ranking metric based on the mutual information index to assess the relevance and redundancy of regional biomarkers and improve AD classification accuracy. Methods: From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 722 participants with three modalities, including florbetapir-PET, flortaucipir-PET, and MRI, were studied. The multivariate mutual information metric was utilized to capture the redundancy and complementarity of the predictors and develop a feature-ranking approach. This was followed by evaluating the capability of single-modal and multimodal biomarkers in predicting the cognitive stage. Results: Although amyloid-β deposition is an earlier event in the disease trajectory, tau PET with feature selection yielded a higher early-stage classification F1-score (65.4%) compared to amyloid-β PET (63.3%) and MRI (63.2%). The SVC multimodal scenario with feature selection improved the F1-score to 70.0% and 71.8% for the early and late stages, respectively. When age and risk factors were included, the scores improved by 2 to 4%. The Amyloid-Tau-Neurodegeneration [AT(N)] framework helped to interpret the classification results for different biomarker categories. Conclusion: The results underscore the utility of a novel feature selection approach to reduce the dimensionality of multimodal datasets and enhance model performance. The AT(N) biomarker framework can help to explore misclassified cases by revealing the relationship between neuropathological biomarkers and cognition.
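As a generic sketch of the "feature selection followed by a classifier, cross-validated" workflow this abstract describes (not ADNI-specific code), scikit-learn can wire a mutual-information ranking into an SVC so that selection happens inside each fold; the data, k, and fold count are placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical multimodal matrix: regional amyloid-PET, tau-PET, and MRI features.
X = np.random.randn(200, 60)
y = np.random.randint(0, 2, 200)     # e.g., early-stage vs. cognitively normal

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif, k=15)),  # MI-based feature ranking
    ("clf", SVC(kernel="rbf")),
])
# Selection runs inside each fold, so the ranking never sees the test labels.
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
print(f"mean F1: {scores.mean():.3f}")
```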
-
Applications of multimodal neuroimaging techniques, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), have gained prominence in recent years, and they are widely practiced in brain–computer interface (BCI) and neuropathological diagnosis applications. Most existing approaches assume observations are independent and identically distributed (i.i.d.) and ignore differences among subjects. It has been challenging to model subject groups so as to maintain topological information (e.g., patient graphs) while fusing BCI signals for discriminant feature learning. In this article, we introduce a topology-aware graph-based multimodal fusion (TaGMF) framework to classify amyotrophic lateral sclerosis (ALS) patients and healthy subjects. Our framework is built on graph neural networks (GNNs) but makes two unique contributions. First, a novel topology-aware graph (TaG) is proposed to model subject groups by considering: 1) intersubject; 2) intrasubject; and 3) intergroup relations. Second, the learned representations of the EEG and fNIRS signals of each subject allow for explorations of different fusion strategies along with the TaGMF optimizations. Our analysis demonstrates the effectiveness of our graph-based fusion approach in multimodal classification, achieving a 22.6% performance improvement over classical approaches.
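The TaG construction is described only at a high level here; a minimal sketch might type the edges as follows, reading intrasubject links as trials of the same subject, intersubject links as different subjects within a group, and intergroup links as spanning the ALS/control groups. Both this mapping and the roster below are assumptions for illustration.

```python
import itertools

# Hypothetical roster: subject id -> (group label, number of recorded trials).
subjects = {0: ("ALS", 3), 1: ("ALS", 3), 2: ("control", 3), 3: ("control", 3)}

nodes, edges = [], []
for subj, (group, n_trials) in subjects.items():
    for t in range(n_trials):
        nodes.append((subj, group, t))            # one node per trial

for (i, a), (j, b) in itertools.combinations(enumerate(nodes), 2):
    if a[0] == b[0]:
        edges.append((i, j, "intrasubject"))      # trials of the same subject
    elif a[1] == b[1]:
        edges.append((i, j, "intersubject"))      # same group, different subjects
    else:
        edges.append((i, j, "intergroup"))        # across ALS/control groups

print(len(nodes), "nodes,", len(edges), "typed edges")
```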
-
Multimodal neuroimaging using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) provides complementary views of cortical processes, including those related to auditory processing. However, current multimodal approaches often overlook potential insights that can be gained from nonlinear interactions between electrical and hemodynamic signals. Here, we explore electro-vascular phase-amplitude coupling (PAC) between low-frequency hemodynamic and high-frequency electrical oscillations during an auditory task. We further apply a temporally embedded canonical correlation analysis (tCCA)-general linear model (GLM)-based correction approach to reduce the possible effect of systemic physiology on fNIRS recordings. Before correction, we observed significant PAC between fNIRS and broadband EEG in the frontal region (p ≪ 0.05), β (p ≪ 0.05) and γ (p = 0.010) in the left temporal/temporoparietal (left auditory; LA) region, and γ (p = 0.032) in the right temporal/temporoparietal (right auditory; RA) region across the entire dataset. Significant differences in PAC across conditions (task versus silence) were observed in LA (p = 0.023) and RA (p = 0.049) γ sub-bands and in lower frequency (5-20 Hz) frontal activity (p = 0.005). After correction, significant fNIRS-γ-band PAC was observed in the frontal (p = 0.021) and LA (p = 0.025) regions, while fNIRS-α (p = 0.003) and fNIRS-β (p = 0.041) PAC were observed in RA. Decreased frontal γ-band (p = 0.008) and increased β-band (p ≪ 0.05) PAC were observed during the task. These outcomes represent the first characterization of electro-vascular PAC between fNIRS and EEG signals during an auditory task, providing insights into electro-vascular coupling in auditory processing.
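This abstract does not give the PAC estimator in closed form; a common choice is the mean-vector-length modulation index, in which the phase of the band-limited slow (fNIRS) signal weights the envelope of band-limited fast (EEG) activity. The sketch below applies that convention to synthetic signals; the sampling rate and band edges are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=2):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 250.0                                    # assumed common sampling rate
t = np.arange(0, 120, 1 / fs)
slow = np.sin(2 * np.pi * 0.1 * t)            # synthetic hemodynamic oscillation
# Synthetic EEG whose gamma-band amplitude is modulated by the slow signal.
eeg = (1 + 0.5 * slow) * np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.randn(t.size)

phase = np.angle(hilbert(bandpass(slow, 0.05, 0.2, fs)))   # fNIRS-band phase
amp = np.abs(hilbert(bandpass(eeg, 30, 50, fs)))           # gamma-band envelope

# Mean-vector-length modulation index: |mean(A(t) * exp(i * phi(t)))|.
pac = np.abs(np.mean(amp * np.exp(1j * phase)))
print(f"PAC (mean vector length): {pac:.4f}")
```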