Title: A Wearable Multi-modal Bio-sensing System Towards Real-world Applications
Multi-modal bio-sensing has recently been used as an effective research tool in affective computing, autism, clinical disorders, and virtual reality, among other areas. However, none of the existing bio-sensing systems supports multi-modality in a wearable manner outside well-controlled laboratory environments with research-grade measurements. This work attempts to bridge this gap by developing a wearable multi-modal bio-sensing system capable of collecting, synchronizing, recording, and transmitting data from multiple bio-sensors (PPG, EEG, eye-gaze headset, body motion capture, GSR, etc.) while also providing task-modulation features, including visual-stimulus tagging. This study describes the development and integration of the various components of our system. We evaluate the developed sensors by comparing their measurements to those obtained by standard research-grade bio-sensors. We first evaluate the different sensor modalities of our headset, comparing the earlobe-based PPG module with motion-noise cancellation against ECG for heart-rate calculation. We also compare the steady-state visually evoked potentials (SSVEP) measured by our shielded dry EEG sensors with the potentials obtained by commercially available dry EEG sensors, and we investigate the effect of head movements on the accuracy and precision of our wearable eye-gaze system. Furthermore, we carry out two practical tasks to demonstrate how multiple sensor modalities can be used to explore previously unanswerable questions in bio-sensing. Specifically, using bio-sensing we show which strategy works best for playing the Where is Waldo? visual-search game, how the EEG differs between true and false target fixations in this game, and how the loss/draw/win states in a Rock-Paper-Scissors game can be predicted through bio-sensing modalities, while also learning their limitations.
Award ID(s):
1719130
PAR ID:
10107953
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Biomedical Engineering
Volume:
66
Issue:
4
ISSN:
0018-9294
Page Range / eLocation ID:
1137 - 1147
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Smart manufacturing, which integrates a multi-sensing system with physical manufacturing processes, has been widely adopted in industry to support online, real-time decision making and improve manufacturing quality. A multi-sensing system for each specific manufacturing process can efficiently collect in situ process variables from different sensor modalities to reflect process variations in real time. In practice, however, the budget rarely allows equipping each manufacturing process with many sensors, and it is also important to interpret the relationship between the sensing modalities and the quality variables based on the model. It is therefore necessary to model the quality-process relationship by selecting, from the multi-modal sensing system, the sensor modalities most relevant to the specific quality measurement. In this research, we adopted the concept of best-subset variable selection and proposed a new model called Multi-mOdal beSt Subset modeling (MOSS). MOSS can effectively select the important sensor modalities and improve accuracy in quality-process modeling via functional norms that characterize the overall effects of individual modalities. The significance of sensor modalities can be used to determine the sensor-placement strategy in smart manufacturing, and the selected modalities can better interpret the quality-process model by identifying the most correlated root causes of quality variations. The merits of the proposed model are illustrated by both simulations and a real case study in an additive manufacturing (i.e., fused deposition modeling) process.
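The abstract does not give MOSS's algorithm, but the underlying idea of best-subset modality selection can be sketched as an exhaustive search: score each subset of modality columns by its least-squares fit to the quality variable and keep the best. The functional-norm machinery of the actual method is omitted; everything below, including the tiny normal-equation solver, is an illustrative assumption.

```python
from itertools import combinations

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_sse(X, y):
    """Least-squares fit of y on the columns in X; return the residual SSE."""
    k, n = len(X), len(y)
    A = [[sum(X[i][t] * X[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]                       # Gram matrix X^T X
    b = [sum(X[i][t] * y[t] for t in range(n)) for i in range(k)]  # X^T y
    w = solve(A, b)
    resid = [y[t] - sum(w[i] * X[i][t] for i in range(k)) for t in range(n)]
    return sum(r * r for r in resid)

def best_subset(modalities, y, k):
    """Exhaustively pick the k modality columns with the lowest-SSE fit to y."""
    return set(min(combinations(modalities, k),
                   key=lambda s: fit_sse([modalities[n] for n in s], y)))
```

With a quality signal generated from two of three candidate modalities, the search recovers exactly those two.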
  2. Photoplethysmography (PPG) is a simple and inexpensive optical technique widely used in the healthcare domain to extract valuable health-related information, e.g., heart-rate variability, blood pressure, and respiration rate. PPG signals can easily be collected continuously and remotely using portable wearable devices. However, these measuring devices are vulnerable to motion artifacts caused by daily-life activities. The most common ways to eliminate motion artifacts use extra accelerometer sensors, which suffer from two limitations: i) high power consumption and ii) the need to integrate an accelerometer into the wearable device (which certain wearables do not otherwise require). This paper proposes a low-power, non-accelerometer-based method for removing PPG motion artifacts that outperforms the accuracy of existing methods. We use a Cycle Generative Adversarial Network (CycleGAN) to reconstruct clean PPG signals from noisy PPG signals. Our novel machine-learning-based technique achieves a 9.5-fold improvement in motion-artifact removal over the state of the art without using extra sensors such as an accelerometer, which leads to a 45% improvement in energy efficiency.
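CycleGAN training couples two generators, G (noisy → clean) and F (clean → noisy), with a cycle-consistency term that asks each round trip to reproduce its input. A minimal sketch of that term, with G and F left as arbitrary callables (the paper's network architecture is not reproduced here):

```python
def cycle_consistency_loss(G, F, noisy_batch, clean_batch):
    """L1 cycle-consistency term from CycleGAN-style training.

    F(G(x)) should recover each noisy signal x, and G(F(y)) should recover
    each clean signal y. G and F are any callables mapping one sample (a list
    of floats) to another of the same length.
    """
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    fwd = sum(l1(F(G(x)), x) for x in noisy_batch) / len(noisy_batch)
    bwd = sum(l1(G(F(y)), y) for y in clean_batch) / len(clean_batch)
    return fwd + bwd
```

In full CycleGAN training this term is added to the two adversarial losses; it is what pins the denoised output to the input waveform rather than to an arbitrary clean-looking signal.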
  3. In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g., motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and the motion granularity (e.g., gross or fine) of the activities impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that these sensor modalities are complementary, and that robots may benefit from systems that utilize multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to effectively choose the sensor modalities most suitable for their applications.
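Whatever the modality, HAR pipelines like the three compared here typically start by slicing the sensor stream into (possibly overlapping) windows and computing simple statistics per window to feed the classifier. A generic sketch of that preprocessing step (window length, step size, and feature choice are illustrative, not taken from the paper):

```python
def window_features(signal, win, step):
    """Slide a window over a 1-D sensor stream and emit per-window features.

    Returns one (mean, variance, peak-to-peak) tuple per window — three
    statistics commonly used as inputs to HAR classifiers.
    """
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        m = sum(w) / win
        var = sum((x - m) ** 2 for x in w) / win
        feats.append((m, var, max(w) - min(w)))
    return feats
```

A real pipeline would compute these per axis and per sensor channel, then concatenate the tuples into one feature vector per window.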
  4. Baba, Justin S; Coté, Gerard L (Ed.)
    In this research, we examine the potential of measuring physiological variables, including heart rate (HR) and respiration rate (RR), on the upper arm using a wireless multimodal sensing system consisting of an accelerometer, a gyroscope, three-wavelength photoplethysmography (PPG), single-sided electrocardiography (SS-ECG), and bioimpedance (BioZ). The study collected HR data while the subject was at rest and while typing, and RR data while the subject was at rest. The data from the three wavelengths of PPG and from BioZ were compared to SS-ECG as the standard, and the accelerometer and gyroscope signals were used to exclude data with excessive motion noise. The results showed that when the subject remained sedentary, the mean absolute error (MAE) of the HR calculation for all three wavelengths of the PPG modality was less than two bpm, while that of BioZ was 3.5 bpm, compared with the SS-ECG HR. The MAE during typing increased for both modalities, remaining under three bpm for all three PPG wavelengths but rising to 7.5 bpm for BioZ. Regarding RR, both modalities yielded rates within one breath per minute of the SS-ECG modality for the single breathing rate tested. Overall, all modalities on this upper-arm wearable worked well while the subject was sedentary; still, SS-ECG and PPG showed less variability in the HR signal in the presence of micro-motions such as typing.
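The MAE figures quoted above compare each modality's per-window heart-rate estimates to the SS-ECG reference after discarding windows flagged as motion-corrupted by the accelerometer and gyroscope. A minimal sketch of that comparison (representing discarded windows as None is an assumption for illustration):

```python
def mean_absolute_error(estimates, reference):
    """MAE in bpm between one modality's HR estimates and the reference HR.

    Windows flagged as motion-corrupted (None in `estimates`) are skipped,
    mirroring the exclusion of high-motion data before scoring.
    """
    pairs = [(e, r) for e, r in zip(estimates, reference) if e is not None]
    return sum(abs(e - r) for e, r in pairs) / len(pairs)
```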
  5. Optimization of pain assessment and treatment is an active area of research in healthcare. The purpose of this research is to create an objective pain-intensity estimation system based on multimodal sensing signals through experimental studies. Twenty-eight healthy subjects were recruited at Northeastern University. Nine physiological modalities were utilized, namely facial expressions (FE), electroencephalography (EEG), eye movement (EM), skin conductance (SC), blood volume pulse (BVP), electromyography (EMG), respiration rate (RR), skin temperature (ST), and blood pressure (BP). Statistical analysis and machine-learning algorithms were deployed to analyze the physiological data. FE, EEG, SC, BVP, and BP proved able to detect different pain states in healthy subjects, and combinations of modalities proved promising for detecting different levels of painful states. A decision-level multi-modal fusion also proved efficient and accurate in classifying painful states.
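Decision-level fusion of the kind evaluated here can be as simple as a majority vote over the labels emitted by the per-modality classifiers. A minimal sketch (the tie-breaking rule, first-seen label wins, is an illustrative assumption):

```python
from collections import Counter

def fuse_decisions(per_modality_labels):
    """Decision-level fusion by majority vote.

    Each modality's classifier contributes one pain-state label; the fused
    decision is the most frequent label, with ties broken in favour of the
    label that appears first in the input order.
    """
    counts = Counter(per_modality_labels)
    return max(per_modality_labels, key=lambda lab: counts[lab])
```

More elaborate decision-level schemes weight each modality's vote by its validation accuracy or fuse posterior probabilities instead of hard labels.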