

Title: EPViz: A flexible and lightweight visualizer to facilitate predictive modeling for multi-channel EEG

Scalp electroencephalography (EEG) is one of the most popular noninvasive modalities for studying real-time neural phenomena. While traditional EEG studies have focused on identifying group-level statistical effects, the rise of machine learning has prompted a shift in computational neuroscience towards spatio-temporal predictive analyses. We introduce a novel open-source viewer, the EEG Prediction Visualizer (EPViz), to aid researchers in developing, validating, and reporting their predictive modeling outputs. EPViz is a lightweight, standalone software package developed in Python. Beyond viewing and manipulating EEG data, EPViz allows researchers to load a PyTorch deep learning model, apply it to EEG features, and overlay the resulting channel-wise or subject-level temporal predictions on the original time series. These results can be saved as high-resolution images for use in manuscripts and presentations. EPViz also provides valuable tools for clinician-scientists, including spectrum visualization, computation of basic data statistics, and annotation editing. Finally, we have included a built-in EDF anonymization module to facilitate sharing of clinical data. Taken together, EPViz fills a much-needed gap in EEG visualization. Our user-friendly interface and rich collection of features may also help to promote collaboration between engineers and clinicians.
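
The overlay workflow described in the abstract can be approximated outside the GUI with standard Python tooling. The following is a minimal sketch and not EPViz's actual API: the EDF filename, the TorchScript model file, its per-channel output shape, and the one-second windowing are all assumptions made for illustration.

```python
# Rough sketch of the overlay workflow (not EPViz's API): read an EDF,
# run a saved PyTorch model on fixed windows, and plot per-channel
# predictions on top of the raw trace.
import mne
import numpy as np
import torch
import matplotlib.pyplot as plt

raw = mne.io.read_raw_edf("recording.edf", preload=True)   # hypothetical file
data = raw.get_data()                                       # (n_channels, n_samples)
sfreq = int(raw.info["sfreq"])

model = torch.jit.load("predictor.pt")                      # hypothetical TorchScript model
model.eval()

win = sfreq                                                  # 1 s non-overlapping windows
scores = []
with torch.no_grad():
    for start in range(0, data.shape[1] - win + 1, win):
        x = torch.tensor(data[:, start:start + win], dtype=torch.float32).unsqueeze(0)
        scores.append(torch.sigmoid(model(x)).squeeze(0).numpy())  # assumed shape (n_channels,)
scores = np.stack(scores)                                    # (n_windows, n_channels)

# Overlay channel 0's prediction trace on its (amplitude-scaled) EEG signal
t = np.arange(data.shape[1]) / sfreq
plt.plot(t, data[0] / np.abs(data[0]).max(), lw=0.5, label="EEG ch 0 (scaled)")
plt.step(np.arange(len(scores)) * win / sfreq, scores[:, 0], color="r", label="model output")
plt.xlabel("time (s)"); plt.legend(); plt.show()
```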

 
Award ID(s):
2322823, 1845430
PAR ID:
10492625
Author(s) / Creator(s):
Editor(s):
M, Murugappan
Publisher / Repository:
PLOS ONE
Date Published:
Journal Name:
PLOS ONE
Volume:
18
Issue:
2
ISSN:
1932-6203
Page Range / eLocation ID:
e0282268
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Recent advancements in machine learning and deep learning (DL)-based neural decoders have significantly improved decoding capabilities using scalp electroencephalography (EEG). However, the interpretability of DL models remains an under-explored area. In this study, we compared multiple model explanation methods to identify the most suitable method for EEG and understand when some of these approaches might fail. A simulation framework was developed to evaluate the robustness and sensitivity of twelve back-propagation-based visualization methods by comparing their outputs to ground-truth features. Multiple methods tested here showed reliability issues after randomizing either model weights or labels: e.g., the saliency approach, which is the most widely used visualization technique in EEG, was neither class- nor model-specific. We found that DeepLift was consistently accurate and robust in detecting the three key attributes tested here (temporal, spatial, and spectral precision). Overall, this study provides a review of model explanation methods for DL-based neural decoders and recommendations to understand when some of these methods fail and what they can capture in EEG.

     
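As a concrete illustration of the comparison described above, the sketch below runs DeepLift and plain saliency from the Captum library on a toy EEG decoder; the network, input shape (batch, channels, time), and target class are placeholders rather than the study's models or simulation framework.

```python
# Illustrative comparison of DeepLift and saliency attributions on a toy
# EEG decoder using Captum; all shapes and the target class are placeholders.
import torch
import torch.nn as nn
from captum.attr import DeepLift, Saliency

class ToyDecoder(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 8, kernel_size=32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, n_classes))

    def forward(self, x):              # x: (batch, channels, time)
        return self.net(x)

model = ToyDecoder().eval()
x = torch.randn(4, 19, 256)            # stand-in EEG batch

attr_dl = DeepLift(model).attribute(x, target=1)    # attribution per channel/time sample
attr_sal = Saliency(model).attribute(x, target=1)   # gradient baseline for comparison

# Summarize channel importance by averaging absolute attributions over time
channel_importance = attr_dl.abs().mean(dim=-1)      # (batch, channels)
print(channel_importance.shape, attr_sal.shape)
```
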
  2. Abstract

    Objective

    Anterior temporal lobectomy (ATL) is a widely performed and successful intervention for drug‐resistant temporal lobe epilepsy (TLE). However, up to one third of patients experience seizure recurrence within 1 year after ATL. Despite the extensive literature on presurgical electroencephalography (EEG) and magnetic resonance imaging (MRI) abnormalities to prognosticate seizure freedom following ATL, the value of quantitative analysis of visually reviewed normal interictal EEG in such prognostication remains unclear. In this retrospective multicenter study, we investigate whether machine learning analysis of normal interictal scalp EEG studies can inform the prediction of postoperative seizure freedom outcomes in patients who have undergone ATL.

    Methods

    We analyzed normal presurgical scalp EEG recordings from 41 Mayo Clinic (MC) and 23 Cleveland Clinic (CC) patients. We used an unbiased automated algorithm to extract eyes‐closed awake epochs from scalp EEG studies that were free of any epileptiform activity and then extracted spectral EEG features representing (a) spectral power and (b) interhemispheric spectral coherence in frequencies between 1 and 25 Hz across several brain regions. We analyzed the differences between the seizure‐free and non–seizure‐free patients and employed a Naïve Bayes classifier using multiple spectral features to predict surgery outcomes (an illustrative sketch of this style of pipeline follows this abstract). We trained the classifier using a leave‐one‐patient‐out cross‐validation scheme within the MC data set and then tested using the out‐of‐sample CC data set. Finally, we compared the predictive performance of normal scalp EEG‐derived features against MRI abnormalities.

    Results

    We found that several spectral power and coherence features differed significantly between surgical outcome groups and that these differences were most pronounced in the 10–25 Hz range. The Naïve Bayes classification based on those features predicted 1‐year seizure freedom following ATL with area under the curve (AUC) values of 0.78 and 0.76 for the MC and CC data sets, respectively. Subsequent analyses revealed that (a) interhemispheric spectral coherence features in the 10–25 Hz range provided better predictability than other combinations and (b) normal scalp EEG‐derived features provided superior and potentially distinct predictive value when compared with MRI abnormalities (>10% higher F1 score).

    Significance

    These results support that quantitative analysis of even a normal presurgical scalp EEG may help prognosticate seizure freedom following ATL in patients with drug‐resistant TLE. Although the mechanism for this result is not known, the scalp EEG spectral and coherence properties predicting seizure freedom may represent activity arising from the neocortex or the networks responsible for temporal lobe seizure generation within vs outside the margins of an ATL.

     
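The Methods above describe spectral power and interhemispheric coherence features feeding a Naive Bayes classifier under leave-one-patient-out cross-validation. The sketch below shows that style of pipeline on synthetic data; the single channel pair, epoch length, and band edges are assumptions, not the study's settings.

```python
# Synthetic-data sketch of the described pipeline: Welch band power plus
# interhemispheric coherence in 1-25 Hz, classified with Gaussian Naive
# Bayes under leave-one-patient-out cross-validation.
import numpy as np
from scipy.signal import welch, coherence
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
fs = 256
epochs = rng.standard_normal((41, 2, 10 * fs))   # (patients, left/right channel pair, samples)
labels = rng.integers(0, 2, size=41)             # 1 = seizure-free, 0 = not seizure-free

features = []
for left, right in epochs:
    freqs, p_left = welch(left, fs=fs, nperseg=fs)
    _, p_right = welch(right, fs=fs, nperseg=fs)
    _, coh = coherence(left, right, fs=fs, nperseg=fs)
    band = (freqs >= 1) & (freqs <= 25)
    features.append(np.r_[p_left[band], p_right[band], coh[band]])
X = np.asarray(features)

# Leave-one-patient-out: each patient is its own group
scores = cross_val_score(GaussianNB(), X, labels, groups=np.arange(len(labels)),
                         cv=LeaveOneGroupOut())
print("leave-one-patient-out accuracy:", scores.mean())
```
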
  3. Machine learning techniques have proven to be a useful tool in cognitive neuroscience. However, their implementation in scalp‐recorded electroencephalography (EEG) is relatively limited. To address this, we present three analyses using data from a previous study that examined event‐related potential (ERP) responses to a wide range of naturally‐produced speech sounds. First, we explore which features of the EEG signal best maximize machine learning accuracy for a voicing distinction, using a support vector machine (SVM). We manipulate three dimensions of the EEG signal as input to the SVM: number of trials averaged, number of time points averaged, and polynomial fit. We discuss the trade‐offs in using different feature sets and offer some recommendations for researchers using machine learning. Next, we use SVMs to classify specific pairs of phonemes, finding that we can detect differences in the EEG signal that are not otherwise detectable using conventional ERP analyses. Finally, we characterize the timecourse of phonetic feature decoding across three phonological dimensions (voicing, manner of articulation, and place of articulation), and find that voicing and manner are decodable from neural activity, whereas place of articulation is not. This set of analyses addresses both practical considerations in the application of machine learning to EEG, particularly for speech studies, and also sheds light on current issues regarding the nature of perceptual representations of speech. 
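
A minimal sketch of the style of SVM analysis described above, on synthetic data: features are built by averaging trials, subsampling time points, or summarizing each channel's waveform with low-order polynomial coefficients before classifying voicing with a linear SVM. All shapes and parameter values are illustrative assumptions, not the authors' settings.

```python
# Synthetic sketch of the feature manipulations described above: average
# trials, subsample time points, or summarize each channel with polynomial
# coefficients, then classify voicing with a linear SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_obs, n_trials, n_chan, n_time = 40, 10, 32, 200
eeg = rng.standard_normal((n_obs, n_trials, n_chan, n_time))
voicing = rng.integers(0, 2, size=n_obs)             # 0 = voiceless, 1 = voiced

def make_features(x, n_avg_trials=10, step=5, poly_deg=None):
    """Average trials, then either subsample time points or fit a polynomial per channel."""
    avg = x[:, :n_avg_trials].mean(axis=1)            # (obs, channels, time)
    if poly_deg is not None:
        t = np.arange(avg.shape[-1])
        coefs = np.stack([np.polyfit(t, ch, poly_deg) for obs in avg for ch in obs])
        return coefs.reshape(avg.shape[0], -1)        # (obs, channels * (deg + 1))
    return avg[:, :, ::step].reshape(avg.shape[0], -1)

for feats in (make_features(eeg), make_features(eeg, poly_deg=3)):
    acc = cross_val_score(SVC(kernel="linear"), feats, voicing, cv=5).mean()
    print(feats.shape, "cross-validated accuracy:", round(float(acc), 2))
```
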
  4. In recent years, the use of convolutional neural networks (CNNs) for raw resting-state electroencephalography (EEG) analysis has grown increasingly common. However, relative to earlier machine learning and deep learning methods with manually extracted features, CNNs for raw EEG analysis present unique problems for explainability. As such, a growing set of methods has been developed that provide insight into the spectral features learned by CNNs. However, spectral power is not the only important form of information within EEG, and the capacity to understand the roles of specific multispectral waveforms identified by CNNs could be very helpful. In this study, we present a novel model visualization-based approach that adapts the traditional CNN architecture to increase interpretability and combines that inherent interpretability with a systematic evaluation of the model via a series of novel explainability methods. Our approach evaluates the importance of spectrally distinct first-layer clusters of filters before examining the contributions of identified waveforms and spectra to cluster importance. We evaluate our approach within the context of automated sleep stage classification and find that, for the most part, our explainability results are highly consistent with clinical guidelines. Our approach is the first to systematically evaluate both waveform and spectral feature importance in CNNs trained on resting-state EEG data.

     
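To make the first-layer analysis concrete, the sketch below takes the FFT of each temporal kernel in an untrained one-dimensional convolutional layer and groups kernels by the band containing their spectral peak. The toy layer, the 100 Hz sampling rate, and the peak-frequency grouping are assumptions that stand in for the paper's clustering of learned filters.

```python
# Toy illustration of inspecting first-layer temporal filters: take the FFT
# of each (here untrained) convolution kernel and group kernels by the EEG
# band containing their spectral peak.
import numpy as np
import torch.nn as nn

fs = 100                                                     # assumed sampling rate (Hz)
first_layer = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=fs)  # 1 s kernels

kernels = first_layer.weight.detach().numpy()[:, 0, :]       # (n_filters, kernel_len)
spectra = np.abs(np.fft.rfft(kernels, axis=-1))
freqs = np.fft.rfftfreq(kernels.shape[-1], d=1.0 / fs)
peak_freq = freqs[spectra.argmax(axis=-1)]                    # dominant frequency per filter

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 50)}
for name, (lo, hi) in bands.items():
    members = np.where((peak_freq >= lo) & (peak_freq < hi))[0]
    print(f"{name}: filters {members.tolist()}")
```
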
  5. Predicting valence and arousal values from EEG signals has been a long-standing research topic within the field of affective computing, or emotional AI. Although numerous valid techniques to predict valence and arousal values from EEG signals have been established and verified, the EEG data collection process itself is relatively undocumented. This creates an artificial learning curve for new researchers seeking to incorporate EEG into their research workflows. In this article, a study is presented that illustrates the importance of a strict EEG data collection process for EEG affective computing studies. The work was evaluated by first validating the effectiveness of a machine learning prediction model on the DREAMER dataset and then showing that the same model was ineffective on cursorily obtained EEG data.

     
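For context on the kind of prediction model being validated, the sketch below fits a support vector regressor to synthetic band-power features to predict valence and arousal; it is a generic baseline, not the article's model, and the feature layout and targets are placeholders for properly collected data.

```python
# Baseline sketch (not the article's model): support vector regression on
# band-power-style features to predict valence and arousal ratings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 14 * 5))        # assumed: 14 channels x 5 band powers
y = rng.uniform(1, 5, size=(200, 2))          # columns: valence, arousal (1-5 scale)

model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(C=1.0)))
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print("mean cross-validated R^2 over valence/arousal:", round(float(r2), 2))
```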