

Title: Biosensors Based Controller for Small Unmanned Aerial Vehicle Navigation
This work describes a system that uses electromyography (EMG) signals obtained from muscle sensors and an Artificial Neural Network (ANN) for signal classification and pattern recognition to control a small unmanned aerial vehicle with specific arm movements. The main objective of this endeavor is the development of an intelligent interface that allows the user to control the flight of a drone beyond direct manual control. The biosensors used in this work were MyoWare muscle sensors, which contain two EMG electrodes and were used to collect signals from the posterior (extensor) and anterior (flexor) forearm muscles and the bicep. Collection of the raw signals from each sensor was performed with an Arduino Uno. Data processing algorithms were developed to classify the signals generated by the arm's muscles when performing specific movements, namely: flexing, resting, arm-up, and arm motion from right to left. With these arm motions, roll control of the drone was achieved. MATLAB software was utilized to condition the signals and prepare them for the classification stage. To generate the input vector for the ANN and perform the classification, the root mean square and the standard deviation were computed for the signals from each electrode. An ANN with a single hidden layer of 10 neurons was trained on the neuromuscular data to categorize the four targets. The classification achieved an accuracy of 97.5% for a single user. Afterwards, the classification results were used to generate the appropriate control signals from the computer to the drone over a Wi-Fi network connection. These procedures were successfully tested, and the drone responded in real time to the commanded inputs.
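The per-electrode feature extraction described above (root mean square and standard deviation of each windowed EMG signal) can be sketched as follows. This is a minimal Python illustration, not the authors' MATLAB pipeline; the electrode count and window length are assumptions for the example:

```python
import numpy as np

def emg_features(window):
    """Compute the two per-electrode features fed to the ANN:
    root mean square and standard deviation of a raw EMG window."""
    rms = np.sqrt(np.mean(np.square(window)))
    std = np.std(window)
    return rms, std

# Hypothetical setup: 3 electrodes, 200-sample windows of raw EMG
rng = np.random.default_rng(0)
windows = rng.normal(size=(3, 200))  # one window per electrode
# Concatenate (RMS, std) pairs into the 6-element ANN input vector
features = np.concatenate([emg_features(w) for w in windows])
```

In the paper this vector would then be passed to the single-hidden-layer ANN to select one of the four movement classes.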
Award ID(s):
1950207
PAR ID:
10327232
Editor(s):
IEEE
Date Published:
Journal Name:
SoutheastCon 2022
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Clinical translation of “intelligent” lower-limb assistive technologies relies on robust control interfaces capable of accurately detecting user intent. To date, mechanical sensors and surface electromyography (EMG) have been the primary sensing modalities used to classify ambulation. Ultrasound (US) imaging can be used to detect user intent by characterizing structural changes of muscle. Our study evaluates wearable US imaging as a new sensing modality for continuous classification of five discrete ambulation modes (level, incline, decline, stair ascent, and stair descent) and benchmarks performance relative to EMG sensing. Ten able-bodied subjects were equipped with a wearable US scanner and eight unilateral EMG sensors. Time-intensity features were recorded from US images of three thigh muscles. Features from sliding windows of EMG signals were analyzed in two configurations: one including 5 EMG sensors on muscles around the thigh, and another with 3 additional sensors placed on the shank. Linear discriminant analysis was implemented to continuously classify these phase-dependent features of each sensing modality as one of five ambulation modes. US-based sensing statistically improved mean classification accuracy to 99.8% (99.5-100% CI) compared to 8-EMG sensors (85.8%; 84.0-87.6% CI) and 5-EMG sensors (75.3%; 74.5-76.1% CI). Further, separability analyses show the importance of superficial and deep US information for stair classification relative to other modes. These results are the first to demonstrate the ability of US-based sensing to classify discrete ambulation modes, highlighting the potential for improved assistive device control using less widespread, less superficial, and higher-resolution sensing of skeletal muscle.
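The linear discriminant analysis (LDA) classification stage from this abstract can be illustrated with a minimal sketch. The synthetic feature dimensions, class separation, and sample counts below are placeholders, not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical phase-dependent feature vectors for 5 ambulation modes
rng = np.random.default_rng(1)
modes = ["level", "incline", "decline", "stair_ascent", "stair_descent"]
X_parts, y = [], []
for i, mode in enumerate(modes):
    # 100 windows per mode; class means are well separated by construction
    X_parts.append(rng.normal(loc=i, scale=0.3, size=(100, 40)))
    y += [mode] * 100
X = np.vstack(X_parts)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
acc = lda.score(X, y)  # accuracy on this easily separable synthetic data
```

In the actual study, classification would run continuously on sliding-window features from either the US or EMG modality, with accuracy evaluated per mode.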
  2. IEEE (Ed.)
    This research involves developing a drone control system that functions by relating EEG and EMG from the forehead to different facial movements using recurrent neural networks (RNNs) such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks. As current drone control methods are largely limited to handheld devices, operators are actively engaged while flying and cannot perform any passive control. Passive control of drones would prove advantageous in various applications, as drone operators could focus on additional tasks. The advantages of the chosen methods and those of some alternative system designs are discussed. For this research, EEG signals were acquired at three frontal cortex locations (fp1, fpz, fp2) using electrodes from an OpenBCI headband and observed for patterns in Fast Fourier Transform (FFT) frequency-amplitude distributions. Five different facial expressions were repeated while recording EEG signals of 0-60 Hz frequencies with two reference electrodes placed on both earlobes. EMG noise received during EEG measurements was not filtered away but was observed to be minimal. A dataset was first created for the actions performed, then characterized by mean average error (MAE) and a statistical error deviation analysis, and finally classified with both an LSTM and a GRU neural network by relating FFT amplitudes to the actions. On average, the LSTM network had a classification accuracy of 78.6%, and the GRU network had a classification accuracy of 81.8%.
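The FFT frequency-amplitude features described in this abstract can be sketched as below. The sampling rate and window length are assumptions for illustration, not values taken from the study:

```python
import numpy as np

def fft_features(eeg_window, fs=250, fmax=60):
    """Return frequency-amplitude pairs from DC up to fmax Hz
    for one EEG window (fs is an assumed sampling rate)."""
    n = len(eeg_window)
    amps = np.abs(np.fft.rfft(eeg_window)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = freqs <= fmax
    return freqs[keep], amps[keep]

# Hypothetical 1-second window from one electrode containing a 10 Hz tone
fs = 250
t = np.arange(fs) / fs
window = np.sin(2 * np.pi * 10 * t)
freqs, amps = fft_features(window, fs)
peak = freqs[np.argmax(amps[1:]) + 1]  # dominant frequency, skipping DC
```

Sequences of such frequency-amplitude vectors are the kind of input an LSTM or GRU classifier would relate to the facial-expression labels.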
  3. Isometric force generation and kinematic reaching in the upper extremity have been found to be represented by a limited number of muscle synergies, even across task-specific variations. However, the extent of the generalizability of muscle synergies between these two motor tasks within the arm workspace remains unknown. In this study, we recorded electromyographic (EMG) signals from 13 different arm, shoulder, and back muscles of ten healthy individuals while they performed isometric and kinematic center-out target matches to one of 12 equidistant directional targets in the horizontal plane and at each of four starting arm positions. Non-negative matrix factorization was applied to the EMG data to identify the muscle synergies. Five and six muscle synergies were found to represent isometric force generation and point-to-point reaching, respectively. We also found that the number and composition of muscle synergies were conserved across the arm workspace for each motor task. Similar tuning directions of muscle synergy activation profiles were observed at different starting arm locations. Between the isometric and kinematic motor tasks, we found that two to four out of five muscle synergies were common in composition and activation profiles across the starting arm locations. The greater number of muscle synergies involved in achieving a target match in the reaching task compared to the isometric task may reflect the complexity of neuromotor control in arm reaching movements. Overall, our results may provide further insight into the neuromotor compartmentalization of shared muscle synergies between two different arm motor tasks and can be utilized to assess motor disabilities in individuals with upper limb motor impairments.
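The synergy-extraction step in this abstract — non-negative matrix factorization of a muscles-by-time EMG matrix — can be sketched as follows. The matrix sizes and synthetic data are assumptions; only the 13-muscle count and 5-synergy rank come from the abstract:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical EMG envelope matrix: 13 muscles x 500 time samples,
# synthesized here from 5 ground-truth synergies so the rank is known
rng = np.random.default_rng(2)
W_true = rng.random((13, 5))   # synergy compositions (muscle weightings)
H_true = rng.random((5, 500))  # time-varying activation profiles
emg = W_true @ H_true

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg)   # recovered synergy compositions
H = model.components_          # recovered activation profiles
recon_error = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
```

In practice the number of synergies is chosen by sweeping `n_components` and picking the smallest rank whose reconstruction explains most of the EMG variance.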
  4. Li-Jessen, Nicole Yee-Key (Ed.)
    The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant to the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment in neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be utilized to objectively measure facial muscle and eye movements representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the raw Earable EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity classification. A total of N = 10 healthy volunteers participated in the study. Each study participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set.
Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and its performance was evaluated and compared directly to the summary-feature classification performance. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed the CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. Classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as the monitoring of intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
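The summary-feature classification pipeline in this abstract (feature vectors → supervised classifier → held-out F1 evaluation) can be sketched as below. The classifier choice, synthetic features, and class separation are assumptions for illustration; only the counts of 16 activities and 161 features come from the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical dataset: 16 mock-PerfO activities, 161 summary features
# per repetition (feature values here are synthetic, not Earable data)
rng = np.random.default_rng(3)
n_classes, reps = 16, 80
X = np.vstack([rng.normal(loc=3 * c, size=(reps, 161))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), reps)

# Held-out evaluation, as in the study's train/test protocol
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te), average="macro")  # held-out macro F1
```

Per-class F1 scores computed the same way (with `average=None`) would support statements like "talking, chewing, and swallowing were differentiated with F1 > 0.9."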
  5. Accurate anatomical matching for patient-specific electromyographic (EMG) mapping is crucial yet technically challenging in various medical disciplines. The fixed electrode construction of multielectrode arrays (MEAs) makes it nearly impossible to match an individual's unique muscle anatomy. This mismatch between the MEAs and target muscles leads to missed muscle activity, highly redundant data, complicated electrode placement optimization, and inaccuracies in classification algorithms. Here, we present customizable and reconfigurable drawn-on-skin (DoS) MEAs as the first demonstration of high-density EMG mapping from in situ-fabricated electrodes with tunable configurations adapted to subject-specific muscle anatomy. The DoS MEAs show uniform electrical properties and can map EMG activity with high fidelity under skin deformation-induced motion, which stems from the unique and robust skin-electrode interface. They can be used to localize innervation zones (IZs), detect motor unit propagation, and capture EMG signals with consistent quality during large muscle movements. Reconfiguring the electrode arrangement of DoS MEAs to match and extend the coverage of the forearm flexors enables localization of muscle activity and prevents missed information such as IZs. In addition, DoS MEAs customized to the specific anatomy of subjects produce highly informative data, leading to more accurate finger gesture detection and prosthetic control than conventional technology.