Title: Interpretable Deep Learning Models for Single Trial Prediction of Balance Loss
Wearable robotic devices are being designed to assist the elderly population and other patients with locomotion disabilities. However, wearable robotics increases the risk of falling. Neuroimaging studies have provided evidence for the involvement of frontocentral and parietal cortices in postural control, which opens up the possibility of using decoders for early detection of balance loss from electroencephalography (EEG). This study investigates the presence of commonly identified components of the perturbation-evoked response (PEP) when a person is in an exoskeleton. We also evaluated the feasibility of using single-trial EEG to predict loss of balance with a convolutional neural network (CNN). Overall, the model achieved a mean 5-fold cross-validation test accuracy of 75.2% across six subjects, with 50% as the chance level. We employed a gradient-weighted class activation mapping (Grad-CAM) visualization technique to interpret the decisions of the CNN and demonstrated that the network learns from PEP components present in these single trials. The high localization ability of Grad-CAM demonstrated here opens up possibilities for deploying CNNs for ERP/PEP analysis while emphasizing model interpretability.
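As a rough illustration of the interpretability technique named in the abstract, the sketch below computes a Grad-CAM saliency map over the time axis of a single EEG trial. It assumes a generic PyTorch CNN whose last convolutional layer is passed in as `conv_layer`; the architecture, layer names, and input shape are placeholders, not the authors' actual model.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, x, target_class):
    """Grad-CAM saliency for a CNN classifying single-trial EEG epochs.

    x: (1, eeg_channels, samples) tensor; conv_layer: the convolutional
    layer whose activations are attributed. Illustrative only, not the
    paper's exact architecture.
    """
    activations, gradients = {}, {}

    def fwd_hook(_, __, out):
        activations["a"] = out.detach()

    def bwd_hook(_, grad_in, grad_out):
        gradients["g"] = grad_out[0].detach()

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)

    logits = model(x)                                 # (1, n_classes)
    model.zero_grad()
    logits[0, target_class].backward()                # gradients w.r.t. class score
    h1.remove()
    h2.remove()

    a, g = activations["a"], gradients["g"]           # both (1, filters, T')
    weights = g.mean(dim=-1, keepdim=True)            # global-average-pool gradients
    cam = F.relu((weights * a).sum(dim=1))            # (1, T'), positive evidence only
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-1],
                        mode="linear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()       # normalized saliency over time
```

Upsampling the map back to the input length makes the saliency directly comparable to PEP component latencies in the trial.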
Award ID(s):
1650536
PAR ID:
10212419
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Page Range / eLocation ID:
268 to 273
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Step length is a critical gait parameter that allows a quantitative assessment of gait asymmetry. Gait asymmetry can lead to many potential health threats such as joint degeneration, difficult balance control, and gait inefficiency. Therefore, accurate step length estimation is essential to understand gait asymmetry and provide appropriate clinical interventions or gait training programs. The conventional method for step length measurement relies on foot-mounted inertial measurement units (IMUs). However, this may not be suitable for real-world applications due to sensor signal drift and the potential obtrusiveness of distal sensors. To overcome this challenge, we propose a deep convolutional neural network-based step length estimator that uses only proximal wearable sensors (hip goniometer, trunk IMU, and thigh IMU) and generalizes to various walking speeds. To evaluate this approach, we utilized treadmill data collected from sixteen able-bodied subjects at different walking speeds and tested our optimized model on overground walking data. Our CNN model estimated step length with an average mean absolute error of 2.89 ± 0.89 cm across all subjects and walking speeds. Since wearable sensors and CNN models are easily deployable in real time, our findings can support personalized, real-time step length monitoring in wearable assistive devices and gait training programs.
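A minimal sketch of the kind of model this abstract describes, assuming a 1-D CNN regressor in PyTorch; the channel count, window length, and layer sizes are illustrative guesses, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class StepLengthCNN(nn.Module):
    """Illustrative 1-D CNN regressor: proximal sensor windows -> step length (cm).

    Input: (batch, n_channels, n_samples), e.g. one hip goniometer angle
    plus 6-axis trunk and thigh IMU channels (1 + 6 + 6 = 13) over one
    gait cycle. These sizes are assumptions, not the paper's configuration.
    """
    def __init__(self, n_channels=13, n_samples=200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # pool out the time axis
        )
        self.head = nn.Linear(64, 1)       # single scalar: step length

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

model = StepLengthCNN()
pred = model(torch.randn(8, 13, 200))      # -> (8, 1) predicted step lengths
```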
  2. Multi-modal bio-sensing has recently been used as an effective research tool in affective computing, autism, clinical disorders, and virtual reality, among other areas. However, none of the existing bio-sensing systems supports multi-modality in a wearable manner outside well-controlled laboratory environments with research-grade measurements. This work attempts to bridge this gap by developing a wearable multi-modal bio-sensing system capable of collecting, synchronizing, recording, and transmitting data from multiple bio-sensors (PPG, EEG, eye-gaze headset, body motion capture, GSR, etc.) while also providing task-modulation features, including visual-stimulus tagging. This study describes the development and integration of the various components of our system. We evaluate the developed sensors by comparing their measurements to those obtained by standard research-grade bio-sensors. We first evaluate the different sensor modalities of our headset, namely the earlobe-based PPG module with motion-noise canceling, against ECG for heart-beat calculation. We also compare the steady-state visually evoked potentials (SSVEP) measured by our shielded dry EEG sensors with the potentials obtained by commercially available dry EEG sensors, and we investigate the effect of head movements on the accuracy and precision of our wearable eye-gaze system. Furthermore, we carry out two practical tasks to demonstrate how multiple sensor modalities can be used to explore previously unanswerable questions in bio-sensing. Specifically, we show which strategy works best for playing the Where is Waldo? visual-search game, changes in EEG corresponding to true versus false target fixations in this game, and prediction of loss/draw/win states through bio-sensing modalities in a Rock-Paper-Scissors game, while learning their limitations.
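One small piece of such a system, sketched under simplifying assumptions: resampling independently clocked sensor streams onto a common time base. Real synchronization also has to estimate and correct per-device clock offset and drift, which this stand-in ignores.

```python
import numpy as np

def align_streams(streams, rate_hz=100.0):
    """Resample independently clocked bio-sensor streams onto one time base.

    streams: dict name -> (timestamps_s, values), with device timestamps
    already mapped to a shared clock (e.g. at packet receipt). A simplified
    stand-in for the synchronization a multi-modal recorder needs.
    """
    t0 = max(t[0] for t, _ in streams.values())     # latest common start
    t1 = min(t[-1] for t, _ in streams.values())    # earliest common end
    grid = np.arange(t0, t1, 1.0 / rate_hz)         # shared sampling grid
    return grid, {name: np.interp(grid, t, v) for name, (t, v) in streams.items()}
```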
  3. Convolutional Neural Networks are compute-intensive learning models that have demonstrated ability and effectiveness in solving complex learning problems. However, developing a high-performance FPGA accelerator for CNNs often demands high programming skill, hardware verification, precise distribution localization, and long development cycles. Moreover, CNN depth grows through the reuse and replication of multiple layers. This paper proposes a programming flow for CNNs on FPGA that generates high-performance accelerators by assembling pre-implemented CNN components like a puzzle, based on the graph topology. Using pre-implemented components allows us to use the minimum resources necessary, predict performance, and gain productivity, since there is no need to synthesize any HDL code. Furthermore, components can be reused across a range of applications. Through prototyping, we demonstrated the viability and relevance of our approach: experiments show a productivity improvement of up to 69% compared to a traditional FPGA implementation while achieving over 1.75× higher Fmax with lower resource and power consumption.
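A toy sketch of the assembly idea, assuming a hypothetical library of pre-implemented components with known resource cost and Fmax; the names, numbers, and cost model are invented for illustration and stand in for actual pre-placed FPGA macros.

```python
from graphlib import TopologicalSorter

# Hypothetical library of pre-implemented CNN components, each with
# known implementation results (all figures are illustrative).
LIBRARY = {
    "conv3x3": {"luts": 12000, "fmax_mhz": 320},
    "maxpool": {"luts": 1500,  "fmax_mhz": 400},
    "fc":      {"luts": 8000,  "fmax_mhz": 300},
}

def assemble(graph, layer_kinds, budget_luts):
    """Stitch pre-implemented components following the CNN graph topology.

    graph: {layer: set(of predecessor layers)}; layer_kinds: layer -> kind.
    Because each component's implementation results are known in advance,
    resource use and the design's Fmax can be predicted without synthesis.
    """
    order = list(TopologicalSorter(graph).static_order())
    used = sum(LIBRARY[layer_kinds[n]]["luts"] for n in order)
    if used > budget_luts:
        raise ValueError(f"needs {used} LUTs, budget is {budget_luts}")
    fmax = min(LIBRARY[layer_kinds[n]]["fmax_mhz"] for n in order)
    return order, used, fmax   # assembly order + predicted cost and Fmax
```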
  4. In this work, we present a novel non-visual human activity recognition (HAR) system that achieves state-of-the-art performance on realistic sensitive, crowded environment (SCE) tasks via a single wearable sensor. We leverage surface electromyography and inertial data from a low-profile wearable sensor to attain performant robot perception while remaining unobtrusive and user-friendly. By capturing both convolutional and temporal features with a hybrid CNN-LSTM classifier, our system is able to robustly and effectively classify complex, full-body human activities with only this single sensor. We perform a rigorous analysis of our method on two datasets representative of SCE tasks and compare its performance with several prominent HAR algorithms. Results show that our system substantially outperforms rival algorithms in identifying complex human tasks from minimal sensing hardware, achieving F1-scores up to 84% over 31 strenuous activity classes. To our knowledge, we are the first to robustly identify complex full-body tasks using a single, unobtrusive sensor feasible for real-world use in SCEs. Using our approach, robots will be able to more reliably understand human activity, enabling them to safely navigate sensitive, crowded spaces.
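A minimal sketch of a hybrid CNN-LSTM classifier of the kind this abstract describes, in PyTorch. Channel counts, window sizes, and hidden dimensions are assumptions rather than the authors' configuration; only the 31-class output is taken from the abstract.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative hybrid classifier: conv features per window, LSTM over time.

    Input: (batch, time_steps, channels, samples) windows of sEMG + IMU
    data from a single wearable sensor. Sizes are placeholders.
    """
    def __init__(self, n_channels=14, n_classes=31, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # one 64-d vector per window
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        b, t, c, s = x.shape
        feats = self.conv(x.reshape(b * t, c, s)).squeeze(-1).reshape(b, t, -1)
        out, _ = self.lstm(feats)               # temporal context across windows
        return self.head(out[:, -1])            # classify from the final step

model = CNNLSTM()
logits = model(torch.randn(4, 10, 14, 100))     # -> (4, 31) class scores
```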
  5. Traumatic Brain Injury (TBI) is a common cause of death and disability. However, existing tools for TBI diagnosis are either subjective or require extensive clinical setup and expertise. The increasing affordability and shrinking size of relatively high-performance computing systems, combined with promising results from TBI-related machine learning research, make it possible to create compact and portable systems for early detection of TBI. This work describes a Raspberry Pi based portable, real-time data acquisition and automated processing system that uses machine learning to efficiently identify TBI and automatically score sleep stages from a single-channel Electroencephalogram (EEG) signal. We discuss the design, implementation, and verification of the system, which can digitize the EEG signal using an Analog-to-Digital Converter (ADC) and perform real-time signal classification to detect the presence of mild TBI (mTBI). We utilize Convolutional Neural Network (CNN) and XGBoost based predictive models to evaluate the performance and demonstrate the versatility of the system to operate with multiple types of predictive models. We achieve a peak classification accuracy of more than 90% with a classification time of less than 1 s across 16–64 s epochs for TBI vs. control conditions. This work can enable the development of systems suitable for field use in early TBI detection and TBI research without requiring specialized medical equipment. Further, it opens avenues to implement connected, real-time TBI-related health and wellness monitoring systems.
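A simplified sketch of the acquisition-and-classification loop such a system runs, assuming a placeholder ADC driver callable and a trained single-channel EEG classifier. The sampling rate is an assumption; only the 16 s epoch length (the shortest reported) is taken from the abstract.

```python
import collections
import numpy as np
import torch

FS = 256                      # assumed EEG sampling rate (Hz)
EPOCH_S = 16                  # shortest epoch length reported in the abstract

def stream_epochs(read_adc_sample, model):
    """Fill a rolling buffer from the ADC and classify each completed epoch.

    read_adc_sample: callable returning one single-channel EEG sample
    (stands in for an SPI/I2C ADC driver); model: a trained classifier,
    e.g. the CNN variant. Both are placeholders for the actual
    acquisition chain and predictive model.
    """
    buf = collections.deque(maxlen=FS * EPOCH_S)
    while True:
        buf.append(read_adc_sample())
        if len(buf) == buf.maxlen:
            x = torch.tensor(np.asarray(buf, dtype=np.float32))[None, None, :]
            with torch.no_grad():
                prob_tbi = torch.sigmoid(model(x)).item()
            yield prob_tbi            # P(mTBI) for this epoch
            buf.clear()               # start accumulating the next epoch
```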