Title: RNN Classification of Spectral EEG/EMG Data Associated with Facial Movements for Drone Control
This research develops a drone control system that relates EEG and EMG signals from the forehead to different facial movements using recurrent neural networks (RNNs) such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks. Because current drone control methods are largely limited to handheld devices, operators are actively engaged while flying and cannot perform any passive control. Passive control of drones would prove advantageous in various applications, as operators could focus on additional tasks. The advantages of the chosen methods and of some alternative system designs are discussed. For this research, EEG signals were acquired at three frontal cortex locations (fp1, fpz, fp2) using electrodes from an OpenBCI headband and examined for patterns in their Fast Fourier Transform (FFT) frequency-amplitude distributions. Five different facial expressions were repeated while recording EEG signals in the 0 to 60 Hz range, with two reference electrodes placed on the earlobes. EMG noise received during the EEG measurements was not filtered out but was observed to be minimal. A dataset was first created for the performed actions, characterized with a mean absolute error (MAE) statistical deviation analysis, and then classified with both an LSTM and a GRU network by relating FFT amplitudes to the actions. On average, the LSTM network achieved a classification accuracy of 78.6% and the GRU network a classification accuracy of 81.8%.
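
The pipeline described above (windowed FFT amplitudes from three forehead electrodes fed to an LSTM or GRU classifier over five expression classes) can be sketched as follows. This is an illustrative sketch only, not the authors' code; the 250 Hz sampling rate, frame length, hop size, hidden width, and single-layer architecture are assumptions.

import numpy as np
import torch
import torch.nn as nn

FS = 250            # assumed OpenBCI sampling rate (Hz)
N_CHANNELS = 3      # fp1, fpz, fp2
N_CLASSES = 5       # five facial expressions
MAX_HZ = 60         # the abstract reports 0 to 60 Hz content

def fft_amplitude_frames(eeg, frame_len=FS, hop=FS // 2):
    """Split a (samples, channels) EEG window into frames and return
    per-frame FFT amplitudes restricted to 0-60 Hz."""
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / FS)
    keep = freqs <= MAX_HZ
    frames = []
    for start in range(0, eeg.shape[0] - frame_len + 1, hop):
        seg = eeg[start:start + frame_len]               # (frame_len, channels)
        amp = np.abs(np.fft.rfft(seg, axis=0))[keep]     # (n_bins, channels)
        frames.append(amp.reshape(-1))                   # flatten bins x channels
    return np.stack(frames)                              # (n_frames, n_bins * channels)

class RecurrentEEGClassifier(nn.Module):
    def __init__(self, n_features, hidden=64, cell="lstm", n_classes=N_CLASSES):
        super().__init__()
        rnn = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_frames, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])    # class logits from the last time step

# Usage with synthetic data: one 2-second, 3-channel window.
eeg = np.random.randn(2 * FS, N_CHANNELS)
feats = torch.tensor(fft_amplitude_frames(eeg), dtype=torch.float32).unsqueeze(0)
model = RecurrentEEGClassifier(n_features=feats.shape[-1], cell="gru")
logits = model(feats)                      # (1, 5) scores over the five expressions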
Award ID(s):
1950207
PAR ID:
10414642
Author(s) / Creator(s):
Editor(s):
IEEE
Date Published:
Journal Name:
Proceedings from IEEE SouthEastcon
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. IEEE (Ed.)
    This work describes a system that uses electromyography (EMG) signals obtained from muscle sensors and an Artificial Neural Network (ANN) for signal classification and pattern recognition, used to control a small unmanned aerial vehicle through specific arm movements. The main objective of this endeavor is the development of an intelligent interface that allows the user to control the flight of a drone beyond direct manual control. The biosensors used in this work were MyoWare Muscle sensors, which contain two EMG electrodes and were used to collect signals from the posterior (extensor) and anterior (flexor) forearm muscles and the bicep. The raw signals from each sensor were collected using an Arduino Uno. Data processing algorithms were developed to classify the signals generated by the arm's muscles when performing specific movements, namely flexing, resting, arm-up, and arm motion from right to left. With these arm motions, roll control of the drone was achieved. MATLAB software was used to condition the signals and prepare them for the classification stage. To generate the input vector for the ANN and perform the classification, the root mean square and the standard deviation of the signals from each electrode were computed. The neuromuscular data were used to train an ANN with a single hidden layer of 10 neurons to categorize the four targets. The classification results show that an accuracy of 97.5% was obtained for a single user. Afterwards, the classification results were used to generate the appropriate control signals sent from the computer to the drone over a Wi-Fi network connection. These procedures were tested successfully, with the drone responding in real time to the commanded inputs.
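
For illustration, the feature-plus-shallow-ANN stage of the entry above (root mean square and standard deviation per EMG channel, a single hidden layer of 10 neurons, four gesture classes) could look like the sketch below. The original work used MATLAB; this is a hypothetical Python equivalent, and the window sizes and synthetic data are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

GESTURES = ["flexing", "resting", "arm_up", "right_to_left"]

def emg_features(window):
    """window: (samples, 3 channels) raw EMG -> RMS and standard deviation per channel."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    std = np.std(window, axis=0)
    return np.concatenate([rms, std])            # 6-element feature vector

# Synthetic stand-in data: 200 windows of 1000 samples x 3 channels.
rng = np.random.default_rng(0)
X = np.stack([emg_features(rng.normal(size=(1000, 3))) for _ in range(200)])
y = rng.integers(0, len(GESTURES), size=200)

# Single hidden layer of 10 neurons, matching the ANN described above.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
gesture = GESTURES[clf.predict(X[:1])[0]]        # map the class index back to a gesture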
  2. Li-Jessen, Nicole Yee-Key (Ed.)
    The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant to the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment for neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be used to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the Earable raw EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could be used to distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity level classification. A total of N = 10 healthy volunteers participated in the study. Each study participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set. Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and model performance was correspondingly evaluated and compared directly to feature classification performance. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. The classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as the monitoring of intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
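
As a rough illustration of the summary-feature route described in the entry above, the sketch below computes a few per-channel statistics from multi-channel windows and scores a held-out split with a macro F1. The specific features, the random-forest classifier, and the channel count are assumptions; the study's 161 features and chosen models are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def summary_features(window):
    """window: (samples, channels) of EEG/EMG/EOG -> simple per-channel statistics."""
    return np.concatenate([
        window.mean(axis=0),                            # mean level per channel
        window.std(axis=0),                             # variability per channel
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # mean absolute first difference
    ])

rng = np.random.default_rng(1)
N_CHANNELS = 6                                  # assumed combined EEG+EMG+EOG channel count
X = np.stack([summary_features(rng.normal(size=(500, N_CHANNELS)))
              for _ in range(320)])             # synthetic stand-in windows
y = rng.integers(0, 16, size=320)               # 16 mock-PerfO activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))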
  3. In directed energy deposition (DED), accurately controlling and predicting melt pool characteristics is essential for ensuring desired material qualities and geometric accuracies. This paper introduces a robust surrogate model based on recurrent neural network (RNN) architectures—Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU). Leveraging a time series dataset from multi-physics simulations and a three-factor, three-level experimental design, the model accurately predicts melt pool peak temperatures, lengths, widths, and depths under varying conditions. RNN algorithms, particularly Bi-LSTM, demonstrate high predictive accuracy, with an R-square of 0.983 for melt pool peak temperatures. For melt pool geometry, the GRU-based model excels, achieving R-square values above 0.88 and reducing computation time by at least 29%, showcasing its accuracy and efficiency. The RNN-based surrogate model built in this research enhances understanding of melt pool dynamics and supports precise DED system setups. 
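
A minimal sketch of the kind of RNN surrogate described in the entry above, assuming three process-parameter inputs (for example laser power, scan speed, and powder feed rate, all hypothetical here) and four melt pool outputs (peak temperature, length, width, depth); it is not the paper's implementation.

import torch
import torch.nn as nn

class MeltPoolSurrogate(nn.Module):
    """GRU surrogate: process-parameter time series -> melt pool quantities."""
    def __init__(self, n_inputs=3, hidden=32, n_outputs=4):
        super().__init__()
        self.gru = nn.GRU(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                        # x: (batch, time, n_inputs)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])          # peak temperature, length, width, depth

model = MeltPoolSurrogate()
x = torch.randn(8, 50, 3)                        # 8 simulated runs, 50 time steps each
y_hat = model(x)                                 # (8, 4) predicted melt pool quantities
loss = nn.MSELoss()(y_hat, torch.randn(8, 4))    # would be fit to multi-physics labels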
    Recently, a multi-agent based network automation architecture, named multi-agent based network automation of the network management system (MANA-NMS), has been proposed. The architectural framework introduced atomized network functions (ANFs). ANFs should be autonomous, atomic, and intelligent agents, implemented as independent decision elements that use machine/deep learning (ML/DL) as an internal cognitive and reasoning part. Using these atomic and intelligent agents as building blocks, a MANA-NMS can be composed from the appropriate functions. As a continuation toward implementing the MANA-NMS architecture, this paper presents a network traffic prediction agent (NTPA) and a network traffic classification agent (NTCA) for a network traffic management system. First, an NTPA is designed and implemented using DL algorithms, i.e., long short-term memory (LSTM), gated recurrent unit (GRU), multilayer perceptron (MLP), and convolutional neural network (CNN) algorithms, as the reasoning and cognitive part of the agent. Similarly, an NTCA is designed using decision tree (DT), K-nearest neighbors (K-NN), support vector machine (SVM), and naive Bayes (NB) classifiers as the cognitive component in the agent design. We then measure the NTPA's prediction accuracy, training latency, prediction latency, and computational resource consumption. The results indicate that the LSTM-based NTPA outperforms the GRU-, MLP-, and CNN-based NTPAs in terms of prediction accuracy and prediction latency. We also evaluate the classification accuracy, training latency, classification latency, and computational resource consumption of the NTCA with these ML models. The performance evaluation shows that the DT-based NTCA performs best.
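
The two agent types in the entry above can be sketched as follows; the window length, flow features, and class count are assumptions rather than the paper's configuration.

import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

class TrafficPredictor(nn.Module):
    """NTPA cognitive core: predict the next traffic volume from a past window."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, window, 1) past volumes
        h, _ = self.lstm(x)
        return self.out(h[:, -1, :])            # next-step traffic volume

ntpa = TrafficPredictor()
next_volume = ntpa(torch.randn(4, 24, 1))       # predictions from a 24-step window

# NTCA: classify flows into (assumed) application classes from flow statistics.
rng = np.random.default_rng(2)
flows = rng.normal(size=(500, 8))               # e.g., packet/byte counts, durations
labels = rng.integers(0, 4, size=500)           # four hypothetical traffic classes
ntca = DecisionTreeClassifier(max_depth=8).fit(flows, labels)
predicted_classes = ntca.predict(flows[:5])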
    Traditional software reliability growth models (SRGM) characterize defect discovery with the Non-Homogeneous Poisson Process (NHPP) as a function of testing time or effort. More recently, covariate NHPP SRGM models have substantially improved tracking and prediction of the defect discovery process by explicitly incorporating discrete multivariate time series on the amount of each underlying testing activity performed in successive intervals. Both classes of NHPP models, with and without covariates, are parametric in nature, imposing assumptions on the defect discovery process, and, while neural networks have been applied to SRGM models without covariates, no such studies exist in the context of covariate SRGM models. Therefore, this paper assesses the effectiveness of neural networks in predicting the software defect discovery process when covariates are incorporated. Three types of neural networks are considered, including (i) recurrent neural networks (RNNs), (ii) long short-term memory (LSTM), and (iii) gated recurrent unit (GRU), which are then compared with covariate models to validate tracking and predictive accuracy. Our results suggest that GRU achieved better overall goodness-of-fit, such as approximately 3.22 and 1.10 times smaller predictive mean square error, and 5.33 and 1.22 times smaller predictive ratio risk in the DS1G and DS2G data sets, respectively, compared to covariate models when of the data is used for training. Moreover, to provide an objective comparison, three different proportions of training data splits were employed to illustrate the advancements between the top-performing covariate NHPP model and the neural networks, in which GRU showed better performance over most of the scenarios. Thus, a neural network model with gated recurrent units may be a suitable alternative for tracking and predicting the number of defects based on covariates associated with the software testing process.
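
As a hedged sketch of the neural-network alternative discussed in the entry above, a GRU can map per-interval testing-activity covariates to the defects discovered in each interval. The number of covariates, the number of intervals, and the plain squared-error fit are assumptions, not the paper's setup.

import torch
import torch.nn as nn

class DefectGRU(nn.Module):
    """Map per-interval testing-activity covariates to defects found per interval."""
    def __init__(self, n_covariates=3, hidden=16):
        super().__init__()
        self.gru = nn.GRU(n_covariates, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, intervals, covariates)
        h, _ = self.gru(x)
        return self.out(h).squeeze(-1)           # predicted defect count per interval

model = DefectGRU()
covariates = torch.rand(1, 30, 3)                # 30 testing intervals, 3 activities (synthetic)
observed = torch.rand(1, 30) * 10                # observed defect counts (synthetic)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                             # simple squared-error fit
    optimizer.zero_grad()
    loss = nn.MSELoss()(model(covariates), observed)
    loss.backward()
    optimizer.step()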