

Title: Neural silences can be localized rapidly using noninvasive scalp EEG
A rapid and cost-effective noninvasive tool to detect and characterize neural silences would be of substantial benefit in diagnosing and treating many disorders. We propose an algorithm, SilenceMap, for uncovering the absence of electrophysiological signals, or neural silences, using noninvasive scalp electroencephalography (EEG) signals. By accounting for the contributions of different sources to the power of the recorded signals, and by using a hemispheric baseline approach and a convex spectral clustering framework, SilenceMap permits rapid detection and localization of regions of silence in the brain using a relatively small amount of EEG data. SilenceMap substantially outperformed existing source localization algorithms in estimating the center-of-mass of the silence for three pediatric cortical resection patients, using fewer than 3 minutes of EEG recordings (13, 2, and 11 mm vs. 25, 62, and 53 mm), as well as for 100 different simulated regions of silence based on a real human head model (12 ± 0.7 mm vs. 54 ± 2.2 mm). SilenceMap paves the way towards accessible early diagnosis and continuous monitoring of altered physiological properties of human cortical function.
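The published SilenceMap pipeline is not reproduced here, but the core hemispheric-baseline intuition can be sketched on toy data: estimate each cortical source's contribution to scalp power, compare homologous left/right sources, and flag those whose contribution falls far below their cross-hemisphere baseline. Everything below (grid size, the random leadfield, the beamformer-style power proxy, the 90th-percentile threshold) is an assumption for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: n_sensors scalp channels, n_sources cortical sources split into
# left/right hemispheric pairs (source i and i + n_pairs are homologous).
n_sensors, n_pairs = 32, 200
n_sources = 2 * n_pairs
leadfield = rng.normal(size=(n_sensors, n_sources))   # hypothetical gain matrix

# Ground truth: a contiguous patch of silent (zero-activity) sources in the left hemisphere.
silent = np.zeros(n_sources, dtype=bool)
silent[40:60] = True

# Simulate source activity and scalp EEG (white source time courses + sensor noise).
n_samples = 5000
sources = rng.normal(size=(n_sources, n_samples))
sources[silent] = 0.0
eeg = leadfield @ sources + 0.05 * rng.normal(size=(n_sensors, n_samples))

# Hemispheric-baseline contrast: estimate each source's contribution to scalp power with a
# beamformer-style proxy, then compare each source with its homologous twin.
cov = eeg @ eeg.T / n_samples
cov_inv = np.linalg.pinv(cov)
contrib = np.array([1.0 / (leadfield[:, i] @ cov_inv @ leadfield[:, i])
                    for i in range(n_sources)])       # crude per-source power estimate

left, right = contrib[:n_pairs], contrib[n_pairs:]
asymmetry = (right - left) / (right + left)           # > 0 means the left source is quieter

# Flag sources whose power falls far below their cross-hemisphere baseline.
flagged = np.where(asymmetry > np.quantile(asymmetry, 0.9))[0]
hit_rate = np.mean(np.isin(np.where(silent[:n_pairs])[0], flagged))
print(f"silent sources recovered by the hemispheric contrast: {hit_rate:.0%}")
```

In this toy run the flagged set overlaps heavily with the simulated silent patch; the published method additionally models source contributions to recorded power in detail and applies the convex spectral clustering step that enforces a contiguous region of silence.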
Award ID(s):
1763561
NSF-PAR ID:
10252508
Journal Name:
Communications Biology
ISSN:
2188-5028
Sponsoring Org:
National Science Foundation
More Like this
  1. Electrophysiological imaging of the human brain is increasingly used to study functional connectivity among cortical regions and to uncover causal links between a stimulus and the corresponding neural dynamics. Animal models that exhibit simplified representations of such networks open a doorway for these investigations and are rapidly gaining popularity. In this study, we investigate and compare resting-state network activity and auditory stimulus-related activity in a C57BL/6 mouse model using minimally invasive technology together with computational spectral analysis. The somatosensory, motor, and visual cortices are observed to be highly active and significantly correlated (p < 0.05). Moreover, given the spatial limitation imposed by the small size of the mouse head, we also describe a low-cost and effective fabrication process for a mouse EEG Polyimide-Based Microelectrode (PBM) array. The easy-to-implement fabrication process involves transferring the pattern onto the copper layer of a Kapton film, followed by gold electroplating and application of insulating paint. Acoustic stimulation is delivered through tube extensions to avoid electrical coupling with the EEG signals. Unlike invasive multi-electrode arrays that are local to a single cortical region, the methods established in this study can be used to examine functional connectivity, neural dynamics, and cortical responses at a global level.
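As a rough illustration of the correlation analysis described above, the sketch below computes pairwise Pearson correlations between simulated regional traces and tests them at p < 0.05. The sampling rate, signal model, and region names are assumptions; the study's actual recordings come from the PBM array.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
fs, duration = 1000, 60                      # assumed sampling rate (Hz) and length (s)
n = fs * duration

# Toy resting-state signals for three recording sites; the somatosensory and motor traces
# share a common slow component to mimic functional coupling.
common = rng.normal(size=n)
regions = {
    "somatosensory": common + 0.8 * rng.normal(size=n),
    "motor":         common + 0.8 * rng.normal(size=n),
    "visual":        rng.normal(size=n),
}

# Pairwise correlation with significance testing (p < 0.05), as in the abstract.
names = list(regions)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = pearsonr(regions[names[i]], regions[names[j]])
        tag = "significant" if p < 0.05 else "n.s."
        print(f"{names[i]:>14} - {names[j]:<14} r = {r:+.2f} ({tag})")
```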
  2. During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate a new, alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening.
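A minimal sketch of the expectation-quantification idea behind the abstract above: a first-order Markov model over pitches stands in for the melodic-structure model, per-note surprisal is computed, and a least-squares fit recovers how strongly a simulated EEG response tracks those expectations. The melody, gain, and noise level are invented for illustration; the study used a dedicated melodic model and decoding of real EEG.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy melody: a sequence of MIDI pitches; a first-order Markov model stands in for the
# melodic-structure model used to quantify the listener's expectations.
melody = rng.choice([60, 62, 64, 65, 67], size=300, p=[0.35, 0.25, 0.2, 0.12, 0.08])

# Estimate transition probabilities from the first half, then compute per-note surprisal
# (-log p of each note given the previous one) on the second half.
pitches = np.unique(melody)
idx = {p: k for k, p in enumerate(pitches)}
counts = np.ones((len(pitches), len(pitches)))            # add-one smoothing
for prev, cur in zip(melody[:150], melody[1:151]):
    counts[idx[prev], idx[cur]] += 1
trans = counts / counts.sum(axis=1, keepdims=True)

test = melody[150:]
surprisal = np.array([-np.log(trans[idx[prev], idx[cur]])
                      for prev, cur in zip(test[:-1], test[1:])])

# Simulate single-channel EEG note responses whose amplitude scales with surprisal, then
# recover the link with a least-squares fit (a stand-in for the regression-style analysis).
response = 2.0 * surprisal + rng.normal(scale=0.5, size=surprisal.size)
slope, intercept = np.polyfit(surprisal, response, deg=1)
print(f"fitted gain of the prediction signal: {slope:.2f} (simulated ground truth: 2.0)")
```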
  3. Abstract

    Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier integrating EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify target vs. distractor objects to which the subject orients.

     
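The sketch below is a simplified two-level stand-in for the HDCA-plus-classifier pipeline in the abstract above: a first level learns one spatial discriminant per post-fixation time window, and a second level combines the window scores with pupil and dwell-time features before scoring held-out events by AUC. All dimensions, effect sizes, and the logistic-regression discriminants are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Toy gaze events: 64-channel EEG averaged in 5 post-fixation windows, plus pupil size and
# dwell time per event. Targets (y = 1) get a small additive effect in late windows,
# pupil dilation, and longer dwell times.
n_events, n_ch, n_win = 600, 64, 5
y = rng.integers(0, 2, n_events)
eeg = rng.normal(size=(n_events, n_ch, n_win))
eeg[y == 1, :8, 3:] += 0.4
pupil = rng.normal(size=n_events) + 0.6 * y
dwell = rng.normal(size=n_events) + 0.9 * y

tr, te = train_test_split(np.arange(n_events), test_size=0.3, random_state=0, stratify=y)

# Level 1 (HDCA-style): learn one spatial discriminant per time window, producing a scalar
# "interest score" per window for each event.
win_scores = np.zeros((n_events, n_win))
for w in range(n_win):
    clf = LogisticRegression(max_iter=1000).fit(eeg[tr, :, w], y[tr])
    win_scores[:, w] = clf.decision_function(eeg[:, :, w])

# Level 2: combine the window scores with pupil and dwell-time features.
hybrid = np.column_stack([win_scores, pupil, dwell])
meta = LogisticRegression(max_iter=1000).fit(hybrid[tr], y[tr])
auc = roc_auc_score(y[te], meta.decision_function(hybrid[te]))
print(f"hybrid target-vs-distractor AUC on held-out events: {auc:.2f}")
```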
  4. Objectively differentiating patient mental states based on electrical activity, as opposed to overt behavior, is a fundamental neuroscience problem with medical applications, such as identifying patients in a locked-in state vs. a coma. Electroencephalography (EEG), which detects millisecond-level changes in brain activity across a range of frequencies, allows for assessment of external stimulus processing by the brain in a non-invasive manner. We applied machine learning methods to 26-channel EEG data from 24 fluent Deaf signers watching videos of sign language sentences (comprehension condition) and the same videos reversed in time (non-comprehension condition) to objectively separate vision-based high-level cognition states. While the spectrotemporal parameters of the stimuli were identical in the comprehension vs. non-comprehension conditions, the neural responses of participants varied based on their ability to linguistically decode the visual data. We aimed to determine which subset of parameters (specific scalp regions or frequency ranges) would be necessary and sufficient for high classification accuracy of comprehension state. Optical flow, characterizing the distribution of velocities of objects in an image, was calculated for each pixel of the stimulus videos using the MATLAB Vision toolbox. Coherence between the optical flow in the stimulus and the EEG neural response (per video, per participant) was then computed using canonical component analysis with the NoiseTools toolbox. Peak correlations were extracted for each frequency, electrode, participant, and video. A set of standard machine learning algorithms was applied to the entire dataset (26 channels, frequencies from 0.2 Hz to 12.4 Hz, binned in 1 Hz increments), with consistent 100% out-of-sample accuracy for frequencies in the 0.2–1 Hz range for all regions, and above 80% accuracy for frequencies < 4 Hz. Sparse Optimal Scoring (SOS) was then applied to the EEG data to reduce the dimensionality of the features and improve model interpretability. SOS with an elastic-net penalty resulted in an out-of-sample classification accuracy of 98.89%. The sparsity pattern in the model indicated that frequencies between 0.2 and 4 Hz were primarily used in the classification, suggesting that the underlying data may be group sparse. Further, SOS with a group lasso penalty was applied to regional subsets of electrodes (anterior, posterior, left, right). All trials achieved greater than 97% out-of-sample classification accuracy. The sparsity patterns from the trials using 1 Hz bins over individual regions consistently indicated that frequencies between 0.2 and 1 Hz were primarily used in the classification, with the anterior and left regions performing best at 98.89% and 99.17% classification accuracy, respectively. While the sparsity pattern may not be the unique optimal model for a given trial, the high classification accuracy indicates that these models have accurately identified common neural responses to visual linguistic stimuli. Cortical tracking of spectro-temporal change in the visual signal of sign language appears to rely on lower frequencies proportional to the N400/P600 time-domain evoked response potentials, indicating that visual language comprehension is grounded in predictive processing mechanisms.
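As a hedged stand-in for the Sparse Optimal Scoring analysis described above, the sketch below classifies simulated channel-by-frequency coherence features with an elastic-net-penalized logistic regression and inspects which frequency bins retain nonzero weight. The feature dimensions, effect size, and penalty settings are assumptions chosen only to mirror the reported low-frequency sparsity pattern, not the study's SOS implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Toy feature matrix: 26 channels x 13 frequency bins (0.2-12.4 Hz in ~1 Hz steps) of
# stimulus-response correlations per trial. Comprehension trials (y = 1) carry extra
# coherence only in the lowest bin, mimicking the reported 0.2-1 Hz effect.
n_trials, n_ch, n_freq = 240, 26, 13
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_ch, n_freq))
X[y == 1, :, 0] += 0.8                      # low-frequency comprehension signature
X = X.reshape(n_trials, -1)

# Elastic-net-penalized classifier as a stand-in for SOS: it should keep mostly the
# low-frequency features and shrink the rest to zero.
clf = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.7,
                         C=0.1, max_iter=5000)
acc = cross_val_score(clf, X, y, cv=5).mean()

clf.fit(X, y)
weights = np.abs(clf.coef_).reshape(n_ch, n_freq)
kept_freqs = np.where(weights.sum(axis=0) > 1e-6)[0]
print(f"cross-validated accuracy: {acc:.2f}")
print(f"frequency bins with nonzero weight: {kept_freqs}")
```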
  5. Introduction: Current brain-computer interfaces (BCIs) primarily rely on visual feedback. However, visual feedback may not be sufficient for applications such as movement restoration, where somatosensory feedback plays a crucial role. For electrocorticography (ECoG)-based BCIs, somatosensory feedback can be elicited by cortical surface electro-stimulation [1]. However, simultaneous cortical stimulation and recording is challenging due to stimulation artifacts. Depending on the orientation of the stimulating electrodes, their distance to the recording site, and the stimulation intensity, these artifacts may overwhelm the neural signals of interest and saturate the recording bioamplifiers, making it impossible to recover the underlying information [2]. To understand how these factors affect artifact propagation, we performed a preliminary characterization of ECoG signals during cortical stimulation. Materials/Methods/Results: ECoG electrodes were implanted in a 39-year-old epilepsy patient as shown in Fig. 1. Pairs of adjacent electrodes were stimulated as part of language cortical mapping. For each stimulating pair, a charge-balanced biphasic square pulse train of current at 50 Hz was delivered for five seconds at 2, 4, 6, 8, and 10 mA. ECoG signals were recorded at 512 Hz. The signals were then high-pass filtered (≥1.5 Hz, zero phase), and the 5-second stimulation epochs were segmented. Within each epoch, artifact-induced peaks were detected for each electrode, except the stimulating pair, where signals were clipped due to amplifier saturation. These peaks were phase-locked across electrodes and were 20 ms apart, thus matching the pulse train frequency. The response was characterized by calculating the median peak within the 5-second epochs. Fig. 1 shows a representative response of the right temporal grid (RTG), with the stimulation channel at RTG electrodes 14 and 15. It also shows a hypothetical amplifier saturation contour of an implantable, bi-directional, ECoG-based BCI prototype [2], assuming a supply voltage of 2.2 V and a gain of 66 dB. Finally, we quantify the worst-case scenario by calculating the largest distance between the saturation contour and the midpoint of each stimulating channel. Discussion: Our results indicate that artifact propagation follows a dipole potential distribution, with the extent of the saturation region (the interior of the white contour) proportional to the stimulation amplitude. In general, the artifacts propagated farthest when a 10 mA current was applied, with the saturation regions extending from 17 to 32 mm away from the midpoint of the dipole. Consistent with the electric dipole model, this maximum spread happened along the direction of the dipole moment. An exception occurred at stimulation channel RTG11-16, for which an additional saturation contour emerged away from the dipole contour (not shown), extending the saturation region to 41 mm. Also, the worst-case scenario was observed at a 6 mA stimulation amplitude. This departure could be a sign of nonlinear, switch-like behavior, wherein additional conduction pathways become engaged in response to sufficiently high stimulation. Significance: While ECoG stimulation is routinely performed in the clinical setting, quantitative studies of the resulting signals are lacking. Our preliminary study demonstrates that stimulation artifacts largely obey dipole distributions, suggesting that the dipole model could be used to predict artifact propagation.
Further studies are necessary to ascertain whether these results hold across other subjects and combinations of stimulation/recording grids. Once completed, these studies will reveal practical design constraints for future implantable bi-directional ECoG-based BCIs. These include parameters such as the distances between and relative orientations of the stimulating and recording electrodes, the choice of the stimulating electrodes, the optimal placement of the reference electrode, and the maximum stimulation amplitude. These findings would also have important implications for the design of custom, low-power bioamplifiers for implantable bi-directional ECoG-based BCIs. References: [1] Hiremath, S. V., et al. "Human perception of electrical stimulation on the surface of somatosensory cortex." PLoS ONE 12.5 (2017): e0176020. [2] Rouse, A. G., et al. "A chronic generalized bi-directional brain-machine interface." Journal of Neural Engineering 8.3 (2011): 036018.
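A toy sketch of the dipole-model reasoning in the abstract above: simulated (not measured) median artifact peaks on a hypothetical 8 x 8 grid are fitted with a dipole potential law, and the fit is checked against the input saturation threshold implied by the stated 2.2 V supply and 66 dB gain. The conductive-medium scale factor, grid geometry, and noise level are assumptions; only the supply voltage and gain come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 8x8 ECoG grid with 10 mm pitch; the stimulating pair sits at the grid
# center, giving a dipole oriented along +x with its midpoint at the origin.
pitch = 0.01
coords = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2) * pitch
coords -= coords.mean(axis=0)

# Simulated median artifact peaks following a dipole potential law |V| ~ k*|cos(theta)|/r^2,
# with the scale k assumed (not taken from the study) so amplitudes land in the mV range.
r = np.linalg.norm(coords, axis=1) + 1e-4
cos_t = coords[:, 0] / r
k_true = 5e-7                                      # V*m^2, assumed scale
measured = k_true * np.abs(cos_t) / r**2 * (1 + 0.1 * rng.normal(size=len(coords)))

# Fit the single free scale of the dipole model by least squares (shape fixed by geometry).
basis = np.abs(cos_t) / r**2
k_fit = (basis @ measured) / (basis @ basis)

# Saturation threshold of the hypothetical bi-directional front end (2.2 V supply, 66 dB gain).
v_sat = (2.2 / 2) / 10 ** (66 / 20)
saturated = k_fit * np.abs(cos_t) / r**2 >= v_sat
worst = r[saturated].max() if saturated.any() else 0.0
print(f"fitted dipole scale: {k_fit:.2e} V*m^2 (simulated truth {k_true:.0e})")
print(f"input saturation threshold: {v_sat * 1e3:.2f} mV")
print(f"saturation region reaches {worst * 1000:.0f} mm from the dipole midpoint")
```

With the assumed scale factor, the worst-case saturation distance printed here lands in the tens of millimeters, the same order as the 17-32 mm range reported in the abstract, but that agreement is by construction and should not be read as a validation of the model.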