Introduction: Back pain is one of the most common causes of pain in the United States. Spinal cord stimulation (SCS) is an intervention for patients with chronic back pain (CBP). However, SCS decreases pain in only 58% of patients and relies on self-reported pain scores as outcome measures. A trial stimulator is temporarily implanted for seven days to help determine whether a permanent SCS is warranted. Patients who report a >50% reduction in pain with the trial stimulator are eligible for permanent implantation. However, self-reported measures reveal little about how mechanisms in the brain are altered. Other measurements of pain intensity, onset, medication, disability, depression, and anxiety have been used with machine learning to predict outcomes with accuracies <70%. We aim to predict long-term SCS responders at six months using baseline resting EEG and machine learning. Materials and Methods: We obtained 10 minutes of resting electroencephalography (EEG) and pain questionnaires from nine participants with CBP at two time points: 1) pre-trial baseline; 2) six months after SCS permanent implant surgery. Subjects were designated as high or moderate responders based on the amount of pain relief provided by the long-term (post six months) SCS, and pain scored on a scale of …
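A minimal sketch of the kind of pipeline this abstract describes: log band-power features from resting EEG feeding a classifier, with leave-one-out cross-validation, which keeps each of the nine participants in the test fold exactly once. The sampling rate, channel count, frequency bands, labels, and choice of classifier are all illustrative assumptions, not the study's actual method.

```python
# Hedged sketch: band-power features from resting EEG + leave-one-out
# classification of SCS responders. All shapes and labels are placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(eeg):
    """eeg: (n_channels, n_samples) resting recording -> band-power vector."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)   # PSD per channel
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))      # mean power per band/channel
    return np.log(np.concatenate(feats))             # log-power is closer to Gaussian

# Placeholder data: nine 10-minute recordings, assumed 19 channels (10-20 montage)
rng = np.random.default_rng(0)
recordings = [rng.standard_normal((19, FS * 600)) for _ in range(9)]
X = np.stack([band_power_features(r) for r in recordings])
y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1])            # placeholder responder labels

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()  # LOO suits n = 9
print(f"leave-one-out accuracy: {acc:.2f}")
```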
Salience of low-frequency entrainment to visual signal for classification points to predictive processing in sign language. In Proceedings of the 30th Annual Computational Neuroscience Meeting: CNS*2021
Objectively differentiating patient mental states based on electrical activity, as opposed to overt behavior, is a fundamental neuroscience problem with medical applications, such as identifying patients in locked-in state vs. coma. Electroencephalography (EEG), which detects millisecond-level changes in brain activity across a range of frequencies, allows for assessment of external stimulus processing by the brain in a non-invasive manner. We applied machine learning methods to 26-channel EEG data of 24 fluent Deaf signers watching videos of sign language sentences (comprehension condition), and the same videos reversed in time (non-comprehension condition), to objectively separate vision-based high-level cognitive states. While spectrotemporal parameters of the stimuli were identical in comprehension vs. non-comprehension conditions, the neural responses of participants varied based on their ability to linguistically decode visual data. We aimed to determine which subset of parameters (specific scalp regions or frequency ranges) would be necessary and sufficient for high classification accuracy of comprehension state.
Optical flow, characterizing the distribution of velocities of objects in an image, was calculated for each pixel of the stimulus videos using the MATLAB Vision toolbox. Coherence between optical flow in the stimulus and the EEG neural response (per video, per participant) was then computed using canonical component analysis with the NoiseTools toolbox. Peak correlations …
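A minimal sketch of the stimulus-response coherence step, using scikit-learn's CCA as a stand-in for the MATLAB NoiseTools pipeline: a per-frame optical-flow summary (here, mean flow magnitude) is time-lagged and canonically correlated with the multichannel EEG. The lag range, flow summary, and array shapes are assumptions for illustration.

```python
# Hedged sketch: canonical correlation between a stimulus optical-flow
# feature and multichannel EEG. Placeholder data throughout.
import numpy as np
from sklearn.cross_decomposition import CCA

def lagged(x, n_lags):
    """Stack time-lagged copies of a 1-D regressor -> (T - n_lags, n_lags)."""
    T = len(x)
    return np.stack([np.roll(x, k) for k in range(n_lags)], axis=1)[n_lags:T]

T, n_ch, n_lags = 5000, 26, 25            # placeholder sizes (26-channel EEG)
flow = np.abs(np.random.randn(T))         # mean optical-flow magnitude per frame,
                                          # assumed resampled to the EEG rate
eeg = np.random.randn(T, n_ch)            # time-aligned neural response

X = lagged(flow, n_lags)                  # stimulus feature + lags
Y = eeg[n_lags:]                          # aligned EEG
cca = CCA(n_components=1).fit(X, Y)
u, v = cca.transform(X, Y)
r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]   # first canonical (peak) correlation
print(f"first canonical correlation: {r:.2f}")
```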
- Award ID(s): 2012554
- Publication Date:
- NSF-PAR ID: 10341227
- Journal Name: Journal of Computational Neuroscience
- Volume: 49
- Issue: S1
- Page Range or eLocation-ID: 3 to 208
- ISSN: 0929-5313
- Sponsoring Org: National Science Foundation
More Like this
Li-Jessen, Nicole Yee-Key (Ed.) The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant in the assessment of neuromuscular disorders. As an initial step to developing a digital assessment in neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be utilized to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the Earable raw EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could be used to distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity level classification. A total of N = 10 healthy volunteers participated in the study. Each study participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye …
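A minimal sketch of the feature-extraction idea described above: simple time-domain descriptors of EMG/EOG/EEG epochs feeding an activity classifier whose feature importances indicate which features separate activities. The channel layout, epoching, features, and classifier are illustrative assumptions, not Earable's actual processing.

```python
# Hedged sketch: per-channel time-domain features + random-forest
# classification of mock-PerfO activities. Placeholder data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 250  # assumed sampling rate (Hz)

def epoch_features(epoch):
    """epoch: (n_channels, n_samples) -> per-channel RMS, peak-to-peak,
    and zero-crossing rate, concatenated into one feature vector."""
    rms = np.sqrt((epoch ** 2).mean(axis=1))
    ptp = epoch.max(axis=1) - epoch.min(axis=1)
    zcr = (np.diff(np.signbit(epoch).astype(np.int8), axis=1) != 0).mean(axis=1)
    return np.concatenate([rms, ptp, zcr])

# Placeholder epochs: 160 = 10 participants x 16 activities, 6 channels, 5 s
rng = np.random.default_rng(0)
epochs = rng.standard_normal((160, 6, FS * 5))
labels = np.repeat(np.arange(16), 10)            # activity label per epoch

X = np.stack([epoch_features(e) for e in epochs])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
print(clf.feature_importances_[:6])              # which features matter most
```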
Obeid, Iyad Selesnick (Ed.) Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than 1 false alarm (FA) per 24 hours [3]. Some commercial tools have recently claimed to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must be causal and have minimal latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG …
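A minimal sketch of the causality constraint this abstract discusses: a streaming front end that only ever sees past samples and emits one decision per completed feature frame, so added latency is bounded by the frame length. The frame size and the energy-threshold "detector" are placeholders, not the paper's LSTM/LFCC system.

```python
# Hedged sketch: causal, frame-by-frame online processing of one EEG
# channel. The decision rule is a stand-in for a trained model's score.
from collections import deque
import numpy as np

FS = 250                 # assumed sampling rate (Hz)
FRAME = FS // 10         # 100 ms frames -> ~100 ms added latency

class OnlineDetector:
    def __init__(self, threshold=3.0):
        self.buf = deque()
        self.threshold = threshold

    def push(self, sample):
        """Feed one sample; return a decision when a frame completes."""
        self.buf.append(sample)
        if len(self.buf) < FRAME:
            return None                      # still filling the frame
        frame = np.fromiter(self.buf, float)
        self.buf.clear()
        energy = float((frame ** 2).mean())  # causal frame-level feature
        return energy > self.threshold       # stand-in for the model's score

det = OnlineDetector()
stream = np.random.randn(FS * 2)             # placeholder 2-s EEG stream
decisions = [d for s in stream if (d := det.push(s)) is not None]
print(f"decisions emitted: {len(decisions)}")
```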
Introduction: Current brain-computer interfaces (BCIs) primarily rely on visual feedback. However, visual feedback may not be sufficient for applications such as movement restoration, where somatosensory feedback plays a crucial role. For electrocorticography (ECoG)-based BCIs, somatosensory feedback can be elicited by cortical surface electro-stimulation [1]. However, simultaneous cortical stimulation and recording is challenging due to stimulation artifacts. Depending on the orientation of the stimulating electrodes, their distance to the recording site, and the stimulation intensity, these artifacts may overwhelm the neural signals of interest and saturate the recording bioamplifiers, making it impossible to recover the underlying information [2]. To understand how these factors affect artifact propagation, we performed a preliminary characterization of ECoG signals during cortical stimulation. Materials/Methods/Results: ECoG electrodes were implanted in a 39-year-old epilepsy patient as shown in Fig. 1. Pairs of adjacent electrodes were stimulated as a part of language cortical mapping. For each stimulating pair, a charge-balanced biphasic square pulse train of current at 50 Hz was delivered for five seconds at 2, 4, 6, 8 and 10 mA. ECoG signals were recorded at 512 Hz. The signals were then high-pass filtered (≥1.5 Hz, zero phase), and the 5-second stimulation epochs were segmented. Within each epoch, artifact-induced peaks were detected …
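A minimal sketch of the preprocessing chain described above: zero-phase high-pass filtering at 1.5 Hz, segmentation of 5-second stimulation epochs, and peak detection within each epoch. The stimulation onset times, thresholds, and data are illustrative placeholders; only the filter cutoff, epoch length, sampling rate, and 50 Hz pulse rate come from the abstract.

```python
# Hedged sketch: zero-phase high-pass filter, 5-s epoching, and
# artifact-peak detection for one ECoG channel. Placeholder data.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 512  # ECoG sampling rate from the abstract (Hz)
b, a = butter(4, 1.5 / (FS / 2), btype="highpass")   # >=1.5 Hz high-pass

def stim_epochs(ecog, onsets_s, dur_s=5.0):
    """Zero-phase filter one channel, then cut 5-s stimulation epochs."""
    filt = filtfilt(b, a, ecog)                      # forward-backward = zero phase
    n = int(dur_s * FS)
    return [filt[int(t * FS): int(t * FS) + n] for t in onsets_s]

ecog = np.random.randn(FS * 60)                      # placeholder 60-s recording
onsets = [5.0, 20.0, 35.0]                           # placeholder stim onsets (s)

for i, ep in enumerate(stim_epochs(ecog, onsets)):
    # 50 Hz pulse train -> expect artifact peaks roughly every FS/50 samples
    peaks, _ = find_peaks(np.abs(ep), distance=FS // 50 - 2,
                          height=3 * np.median(np.abs(ep)))
    print(f"epoch {i}: {len(peaks)} artifact-like peaks")
```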
Deaf spaces are unique indoor environments designed to optimize visual communication and Deaf cultural expression. However, much of the technological research geared towards the deaf involves the use of video or wearables for American Sign Language (ASL) translation, with little consideration for Deaf perspectives on the privacy and usability of the technology. In contrast to video, RF sensors offer an avenue for ambient ASL recognition while also preserving privacy for Deaf signers. Methods: This paper investigates the RF transmit waveform parameters required for effective measurement of ASL signs and their effect on word-level classification accuracy attained with transfer learning and convolutional autoencoders (CAE). A multi-frequency fusion network is proposed to exploit data from all sensors in an RF sensor network and improve the recognition accuracy of fluent ASL signing. Results: For fluent signers, CAEs yield a 20-sign classification accuracy of 76% at 77 GHz and 73% at 24 GHz, while at X-band (10 GHz) accuracy drops to 67%. For hearing imitation signers, signs are more separable, resulting in a 96% accuracy with CAEs. Further, fluent ASL recognition accuracy is significantly increased with use of the multi-frequency fusion network, which boosts the 20-sign fluent ASL recognition accuracy to 95%, surpassing conventional feature-level …
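A minimal sketch of the kind of convolutional autoencoder the abstract describes for RF micro-Doppler signatures: an encoder compresses the spectrogram, a decoder provides the reconstruction objective that trains the representation, and the bottleneck features feed a 20-sign classifier. Layer sizes and the input resolution are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: a small convolutional autoencoder with a classification
# head for 20-sign ASL recognition from RF spectrograms. Placeholder sizes.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, n_classes=20):
        super().__init__()
        self.encoder = nn.Sequential(              # (B, 1, 64, 64) -> (B, 32, 16, 16)
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # mirror, for reconstruction loss
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # sign classifier

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)                    # trains the representation
        logits = self.head(z.flatten(1))           # 20-sign prediction
        return recon, logits

x = torch.randn(4, 1, 64, 64)                      # placeholder spectrogram batch
recon, logits = CAE()(x)
print(recon.shape, logits.shape)                   # (4, 1, 64, 64), (4, 20)
```

In this framing, a multi-frequency fusion network would combine the bottleneck features from one such encoder per sensor band (e.g. 77 GHz, 24 GHz, X-band) before the classification head.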