The potential of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) combined with topological information about participants is often left unexplored in brain-computer interface (BCI) systems. Likewise, the joint use of these modalities in multimodal analysis, exploiting multiple brain signals to improve BCI performance, has not been fully examined. This study first presents a multimodal data fusion framework to exploit and decode the complementary, synergistic properties of multimodal neural signals. Moreover, the relations among different subjects and their observations also play a critical role in classifying unknown subjects. We developed a context-aware graph neural network (GNN) model that utilizes the pairwise relationships among participants to investigate performance on an auditory task classification. We explored standard and deviant auditory EEG and fNIRS data in which each subject performed an auditory oddball task and contributed multiple trials, treated as context-aware nodes in our graph construction. In experiments, our multimodal data fusion strategy showed an improvement of up to 8.40% via SVM and 2.02% via GNN, compared to single-modal EEG or fNIRS. In addition, our context-aware GNN achieved 5.3%, 4.07%, and 4.53% higher accuracy for the EEG-, fNIRS-, and multimodal-data-based experiments, respectively, compared to the baseline models.
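The abstract does not specify the GNN architecture, so the following is only a minimal sketch of the general idea: trials pooled across participants become nodes, edges encode pairwise similarity between trial feature vectors, and a two-layer graph convolutional network (GCN) produces per-node class scores. The cosine-similarity graph construction, layer sizes, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal context-aware GCN sketch (illustrative; not the authors' model).
# Nodes = trials pooled across participants, edges = pairwise feature similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_similarity_adjacency(features: torch.Tensor, threshold: float = 0.8) -> torch.Tensor:
    """Connect trials whose cosine similarity exceeds `threshold` (assumed heuristic)."""
    normed = F.normalize(features, dim=1)
    sim = normed @ normed.T
    adj = (sim > threshold).float()
    adj.fill_diagonal_(1.0)                                       # keep self-loops
    deg_inv_sqrt = adj.sum(dim=1).clamp(min=1.0).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]    # symmetric normalization

class ContextAwareGCN(nn.Module):
    """Two-layer GCN over the trial graph; output is a per-node class score."""
    def __init__(self, in_dim: int, hidden_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.lin1(x))                       # propagate features over the graph
        return adj_norm @ self.lin2(h)

# Example: 200 trials, each with a 32-dimensional fused EEG-fNIRS feature vector.
x = torch.randn(200, 32)
adj = build_similarity_adjacency(x)
logits = ContextAwareGCN(in_dim=32)(x, adj)                       # shape: (200, 2)
```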
Investigation of electro-vascular phase-amplitude coupling during an auditory task
Multimodal neuroimaging using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) provides complementary views of cortical processes, including those related to auditory processing. However, current multimodal approaches often overlook potential insights that can be gained from nonlinear interactions between electrical and hemodynamic signals. Here, we explore electro-vascular phase-amplitude coupling (PAC) between low-frequency hemodynamic and high-frequency electrical oscillations during an auditory task. We further apply a temporally embedded canonical correlation analysis (tCCA)-general linear model (GLM)-based correction approach to reduce the possible effect of systemic physiology on fNIRS recordings. Before correction, we observed significant PAC between fNIRS and broadband EEG in the frontal region (p ≪ 0.05), between fNIRS and the β (p ≪ 0.05) and γ (p = 0.010) bands in the left temporal/temporoparietal (left auditory; LA) region, and between fNIRS and the γ band (p = 0.032) in the right temporal/temporoparietal (right auditory; RA) region across the entire dataset. Significant differences in PAC across conditions (task versus silence) were observed in the LA (p = 0.023) and RA (p = 0.049) γ sub-bands and in lower-frequency (5-20 Hz) frontal activity (p = 0.005). After correction, significant fNIRS-γ-band PAC was observed in the frontal (p = 0.021) and LA (p = 0.025) regions, while fNIRS-α (p = 0.003) and fNIRS-β (p = 0.041) PAC were observed in RA. Decreased frontal γ-band (p = 0.008) and increased β-band (p ≪ 0.05) PAC were observed during the task. These outcomes represent the first characterization of electro-vascular PAC between fNIRS and EEG signals during an auditory task, providing insights into electro-vascular coupling in auditory processing.
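As a concrete illustration of electro-vascular PAC, the sketch below couples the phase of a slow fNIRS hemodynamic signal to the amplitude envelope of band-passed EEG and summarizes the coupling with a mean-vector-length modulation index. The filter bands, Hilbert-transform approach, and mean-vector-length metric are assumptions made for illustration; the abstract does not state which PAC estimator or bands were used.

```python
# Illustrative electro-vascular PAC sketch (assumed mean-vector-length estimator).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x: np.ndarray, lo: float, hi: float, fs: float, order: int = 4) -> np.ndarray:
    """Zero-phase band-pass filter (second-order sections for numerical stability)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(fnirs: np.ndarray, eeg: np.ndarray, fs_fnirs: float, fs_eeg: float,
            phase_band=(0.01, 0.1), amp_band=(30.0, 50.0)) -> float:
    """Mean vector length between fNIRS phase and EEG gamma amplitude (Canolty-style)."""
    # Phase of the slow hemodynamic oscillation.
    phase = np.angle(hilbert(bandpass(fnirs, *phase_band, fs_fnirs)))
    # Amplitude envelope of high-frequency EEG, resampled onto the fNIRS time base.
    amp = np.abs(hilbert(bandpass(eeg, *amp_band, fs_eeg)))
    amp = np.interp(np.linspace(0, 1, phase.size), np.linspace(0, 1, amp.size), amp)
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Example with surrogate data: 5 minutes of fNIRS at 10 Hz and EEG at 250 Hz.
rng = np.random.default_rng(0)
pac = pac_mvl(rng.standard_normal(3000), rng.standard_normal(75000),
              fs_fnirs=10.0, fs_eeg=250.0)
```

In practice the raw coupling value would be compared against a surrogate distribution (e.g., phase-shuffled data) to obtain the kind of p-values reported above.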
- Award ID(s): 2024418
- PAR ID: 10544928
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Computers in Biology and Medicine
- Volume: 169
- Issue: C
- ISSN: 0010-4825
- Page Range / eLocation ID: 107902
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Multimodal data fusion is one of the current primary neuroimaging research directions for overcoming the fundamental limitations of individual modalities by exploiting complementary information from different modalities. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are especially compelling modalities due to their potentially complementary features reflecting the electro-hemodynamic characteristics of neural responses. However, current multimodal studies lack a comprehensive, systematic approach to properly merge the complementary features of their multimodal data. Identifying a systematic approach to properly fuse EEG-fNIRS data and exploit their complementary potential is crucial to improving performance. This paper proposes a framework for classifying fused EEG-fNIRS data at the feature level, relying on a mutual information-based feature selection approach that accounts for the complementarity between features. The goal is to optimize the complementarity, redundancy, and relevance of multimodal features with respect to the class labels, i.e., belonging to a pathological condition or healthy control. Nine amyotrophic lateral sclerosis (ALS) patients and nine controls underwent multimodal data recording during a visuo-mental task. Multiple spectral and temporal features were extracted and fed to a feature selection algorithm followed by a classifier, which selected the optimized subset of features through a cross-validation process. The results demonstrated considerably improved hybrid classification performance compared to the individual modalities and to conventional classification without feature selection, suggesting the potential efficacy of the proposed framework for wider neuro-clinical applications.
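A minimal sketch of the general approach, feature-level fusion of EEG and fNIRS features followed by mutual information-based selection and an SVM, is shown below. It uses scikit-learn's `mutual_info_classif` as a stand-in relevance criterion; the paper's specific complementarity/redundancy formulation is not reproduced here, and the feature dimensions and trial counts are placeholders.

```python
# Feature-level EEG-fNIRS fusion with mutual information-based selection (sketch).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 18                                    # e.g., 9 patients + 9 controls
eeg_feats = rng.standard_normal((n_subjects, 40))  # placeholder spectral/temporal EEG features
fnirs_feats = rng.standard_normal((n_subjects, 20))# placeholder HbO2/HbR features
X = np.hstack([eeg_feats, fnirs_feats])            # feature-level fusion
y = np.array([0] * 9 + [1] * 9)                    # patient vs. control labels

# Rank fused features by mutual information with the labels, keep the top k,
# then classify with an SVM inside a cross-validation loop.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=15),
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(clf, X, y, cv=3)
print("cross-validated accuracy:", scores.mean())
```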
Two stages of the creative writing process were characterized through mobile scalp electroencephalography (EEG) in a 16-week creative writing workshop. Portable dry EEG systems (four channels: TP09, AF07, AF08, TP10) with synchronized head acceleration, video recordings, and journal entries recorded the mobile brain-body activity of Spanish heritage students. Each student's brain-body activity was recorded as they experienced spaces in Houston, Texas (the "Preparation" stage), and while they worked on their creative texts (the "Generation" stage). We used Generalized Partial Directed Coherence (gPDC) to compare functional connectivity between the two stages. There was a trend of higher gPDC in the Preparation stage from right temporo-parietal (TP10) to left anterior-frontal (AF07) scalp areas within 1–50 Hz, not reaching statistical significance. The opposite directionality was found for the Generation stage, with statistically significant differences (p < 0.05) restricted to the delta band (1–4 Hz). Statistically higher gPDC was observed for the inter-hemispheric connections AF07–AF08 in the delta and theta bands (1–8 Hz), and from AF08 to TP09 in the alpha and beta (8–30 Hz) bands. The left anterior-frontal (AF07) recordings showed higher power localized to the gamma band (32–50 Hz) for the Generation stage. An ancillary analysis of Sample Entropy did not show significant differences. The information transfer from anterior-frontal to temporal-parietal areas of the scalp may reflect multisensory interpretation during the Preparation stage, while brain signals originating at temporal-parietal sites and directed toward frontal locations during the Generation stage may reflect the final decision-making process of translating the multisensory experience into a creative text.
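For reference, a compact sketch of generalized partial directed coherence computed from a fitted multivariate autoregressive (MVAR) model is given below, following the standard Baccalá-Sameshima formulation. The model order, channel count, and use of statsmodels' VAR estimator are illustrative assumptions rather than the authors' pipeline.

```python
# Generalized partial directed coherence (gPDC) from an MVAR model (sketch).
import numpy as np
from statsmodels.tsa.api import VAR

def gpdc(data: np.ndarray, order: int = 5, n_freqs: int = 64) -> np.ndarray:
    """data: (n_samples, n_channels). Returns gPDC with shape (n_freqs, to, from)."""
    results = VAR(data).fit(maxlags=order)
    coefs = results.coefs                        # (order, k, k) AR coefficient matrices
    sigma2 = np.diag(results.sigma_u)            # residual variances per channel
    k = data.shape[1]
    freqs = np.linspace(0, 0.5, n_freqs)         # normalized frequency (cycles/sample)
    out = np.zeros((n_freqs, k, k))
    for fi, f in enumerate(freqs):
        # A(f) = I - sum_r A_r * exp(-i 2 pi f r)
        A_f = np.eye(k, dtype=complex)
        for r in range(coefs.shape[0]):
            A_f -= coefs[r] * np.exp(-2j * np.pi * f * (r + 1))
        num = np.abs(A_f) ** 2 / sigma2[:, None]                  # |A_ij(f)|^2 / sigma_i^2
        out[fi] = np.sqrt(num / num.sum(axis=0, keepdims=True))   # normalize over targets i
    return out

# Example: 4-channel surrogate EEG, 2000 samples.
rng = np.random.default_rng(0)
g = gpdc(rng.standard_normal((2000, 4)))
```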
Multi-view learning is a rapidly evolving research area focused on developing diverse learning representations. In neural data analysis, this approach holds immense potential by capturing spatial, temporal, and frequency features. Despite its promise, multi-view learning has remained largely unexplored for functional near-infrared spectroscopy (fNIRS). This study addresses this gap by introducing fNIRSNET, a novel framework that generates and fuses multi-view spatio-temporal representations using convolutional neural networks. It investigates the combined informational strength of oxygenated (HbO2) and deoxygenated (HbR) hemoglobin signals and further extends these capabilities by integrating with electroencephalography (EEG) networks to achieve robust multimodal classification. Experiments involved classifying neural responses to auditory stimuli in nine healthy participants. fNIRS signals were decomposed into HbO2/HbR concentration changes, resulting in Parallel and Merged input types. We evaluated four input types across three data compositions: balanced, subject, and complete datasets. fNIRSNET's performance was compared with eight baseline classification models, and the network was merged with four common EEG networks to assess the efficacy of combined features for multimodal classification. Compared to the baselines, fNIRSNET using the Merged input type achieved the highest accuracies of 83.22%, 81.18%, and 91.58% for the balanced, subject, and complete datasets, respectively. On the complete set, the approach effectively mitigated class imbalance, achieving a sensitivity of 83.58% and a specificity of 95.42%. Multimodal fusion of EEG networks and fNIRSNET outperformed single-modality performance, with the highest accuracy of 87.15% on balanced data. Overall, this study introduces an innovative fusion approach for decoding fNIRS data and illustrates its integration with established EEG networks to enhance performance.
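The exact fNIRSNET architecture is not described in the abstract; the snippet below is only a generic spatio-temporal CNN over stacked HbO2/HbR channel-by-time inputs (the "Merged" idea), with all layer sizes chosen arbitrarily for illustration.

```python
# Generic HbO2/HbR spatio-temporal CNN sketch (not the published fNIRSNET).
import torch
import torch.nn as nn

class HbCNN(nn.Module):
    def __init__(self, n_channels: int = 16, n_times: int = 100, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 2, channels, time) with HbO2 and HbR stacked as two planes.
            nn.Conv2d(2, 8, kernel_size=(1, 7), padding=(0, 3)),   # temporal convolution
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),         # spatial convolution
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * (n_times // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: 8 trials, 16 optode channels, 100 time points per trial.
x = torch.randn(8, 2, 16, 100)
logits = HbCNN()(x)          # shape: (8, 2)
```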
Abstract: Rhythm perception depends on the ability to predict the onset of rhythmic events. Previous studies indicate that beta band modulation is involved in predicting the onset of auditory rhythmic events (Fujioka et al., 2009, 2012; Snyder & Large, 2005). We sought to determine whether similar processes are recruited for the prediction of visual rhythms by investigating whether beta band activity plays a role in rhythm perception in a modality-dependent manner. We examined electroencephalography time–frequency neural correlates of prediction using an omission paradigm with auditory and visual rhythms. By using omissions, we can separate predictive timing activity from stimulus-driven activity. We hypothesized that there would be modality-independent markers of rhythm prediction in induced beta band oscillatory activity, and our results support this hypothesis. We find induced and evoked predictive timing in both auditory and visual modalities. Additionally, we performed an exploratory independent-components-based spatial clustering analysis and describe all resulting clusters. This analysis reveals that there may be overlapping networks of predictive beta activity, based on common activation in the parietal and right frontal regions; auditory-specific predictive beta in bilateral sensorimotor regions; and visually specific predictive beta in midline central and bilateral temporal/parietal regions. This analysis also shows evoked predictive beta activity in the left sensorimotor region specific to auditory rhythms and implicates modality-dependent networks for auditory and visual rhythm perception.
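As a rough illustration of the kind of time-frequency analysis described, the sketch below computes induced beta-band power around omission onsets by subtracting the evoked (trial-averaged) response before a Morlet-wavelet decomposition with MNE. The epoching window, sampling rate, and wavelet parameters are placeholder assumptions, not the study's settings.

```python
# Induced beta-band power around omissions via Morlet wavelets (illustrative).
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 250.0
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 4, 500))        # (n_trials, n_channels, n_times), placeholder

# Induced activity: remove the phase-locked (evoked) component before the TFR.
induced = epochs - epochs.mean(axis=0, keepdims=True)

freqs = np.arange(13.0, 31.0)                     # beta band, 13-30 Hz
power = tfr_array_morlet(induced, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="power")
beta_power = power.mean(axis=(0, 2))              # average over trials and frequencies
print(beta_power.shape)                           # (n_channels, n_times)
```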