Title: DeepVS: A Deep Learning Approach For RF-based Vital Signs Sensing
Vital signs (e.g., heart rate and respiratory rate) are key indicators of health status. Efforts have been made to extract vital signs using radio frequency (RF) techniques (e.g., Wi-Fi, FMCW, UWB), which offer a non-contact solution for continuous and ubiquitous monitoring without users' cooperative efforts. While RF-based vital signs monitoring is user-friendly, its robustness faces two challenges. On the one hand, the RF signal is modulated in a nonlinear manner by the periodic chest wall displacement due to heartbeat and breathing. It is inherently hard to identify the fundamental heart and respiratory rates (HR and RR) in the presence of their higher-order harmonics and the intermodulation between HR and RR, especially when their frequency bands overlap. On the other hand, inadvertent body movements may disturb and distort the RF signal, overwhelming the vital signals and thus inhibiting estimation of the physiological movements (i.e., heartbeat and breathing). In this paper, we propose DeepVS, a deep learning approach that addresses both challenges, non-linearity and inadvertent movements, in a unified manner for robust RF-based vital signs sensing. DeepVS combines a 1D CNN and attention models to exploit local features and temporal correlations. Moreover, it leverages a two-stream scheme to integrate features from both the time and frequency domains. Additionally, DeepVS unifies the estimation of HR and RR with a multi-head structure, which adds only limited extra overhead (<1%) to the existing model, compared to doubling the overhead by using two separate models for HR and RR. Our experiments demonstrate that DeepVS achieves 80th-percentile HR/RR errors of 7.4/4.9 beats/breaths per minute (bpm) on a challenging dataset, compared to 11.8/7.3 bpm for a non-learning solution. In addition, an ablation study quantifies the effectiveness of DeepVS's design components.
Authors:
; ; ; ;
Award ID(s):
2119299
Publication Date:
NSF-PAR ID:
10356921
Journal Name:
13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics (BCB '22)
Sponsoring Org:
National Science Foundation
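The architecture described in the abstract lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch rendering of the two-stream design: a 1D CNN front end per stream (the time-domain signal and its frequency-domain magnitude), self-attention over the fused feature sequence, and two lightweight regression heads sharing one trunk. All layer sizes and the fusion strategy are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a DeepVS-style two-stream model: 1D CNN per
# stream, attention over the fused features, multi-head HR/RR outputs.
import torch
import torch.nn as nn


class ConvStream(nn.Module):
    """1D CNN that turns a raw 1-channel signal into a feature sequence."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )

    def forward(self, x):                    # x: (batch, 1, samples)
        return self.net(x).transpose(1, 2)   # (batch, steps, dim)


class TwoStreamVS(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.time_stream = ConvStream(dim)
        self.freq_stream = ConvStream(dim)
        self.attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        # Multi-head outputs: two tiny heads on one shared trunk, so the
        # extra cost over a single-task model is marginal.
        self.hr_head = nn.Linear(dim, 1)
        self.rr_head = nn.Linear(dim, 1)

    def forward(self, x_time, x_freq):
        feats = torch.cat(
            [self.time_stream(x_time), self.freq_stream(x_freq)], dim=1)
        feats = self.attn(feats).mean(dim=1)  # pool over the sequence
        return self.hr_head(feats), self.rr_head(feats)


# Toy usage: a 10 s window sampled at 100 Hz, plus its FFT magnitude.
sig = torch.randn(8, 1, 1000)
spec = torch.fft.rfft(sig, dim=-1).abs()
hr, rr = TwoStreamVS()(sig, spec)
print(hr.shape, rr.shape)  # torch.Size([8, 1]) twice
```

Note how the multi-head scheme matches the abstract's overhead claim: the two `nn.Linear(dim, 1)` heads add only a handful of parameters on top of the shared trunk, versus duplicating the entire network for a second task.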
More Like this
  1. Non-contact vital signs monitoring (NCVSM) with radio frequency (RF) is attracting increasing attention due to its non-invasive nature. Recent advances in COTS radar technologies have accelerated the development of RF-based solutions. While researchers have implemented and demonstrated the feasibility of NCVSM with diverse radar hardware, most efforts have focused on devising algorithms to extract vital signs, with limited understanding of how radar configurations affect performance. This gap hinders the design of software-defined radar (SDR) optimally customized for NCVSM. In this work, we first hypothesize the effects of FMCW radar configurations using signal-to-interference-plus-noise ratio (SINR) based signal modeling, and then conduct extensive experiments with a COTS FMCW radar, TinyRad, to understand how various parameters impact NCVSM performance relative to a medical device. We find that a larger bandwidth or higher transmitting power generally improves vital sign estimation accuracy; however, coherent processing of consecutive chirps (time diversity) or of multiple receiving antennas (space diversity) does not improve performance. Observations on the baseband (BB) signal show that coherent processing yields a higher amplitude but similar phase patterns, whose periodic changes are the key to extracting vital signs (a minimal phase-processing sketch appears after this list).
  2. Breathing biomarkers, such as breathing rate, fractional inspiratory time, and inhalation-exhalation ratio, are vital for monitoring the user's health and well-being. Accurate estimation of such biomarkers requires breathing phase detection, i.e., inhalation and exhalation. However, traditional breathing phase monitoring relies on uncomfortable equipment, e.g., chestbands. Smartphone acoustic sensors have shown promising results for passive breathing monitoring during sleep or guided breathing. However, detecting breathing phases using acoustic data can be challenging for various reasons. One of the major obstacles is the complexity of annotating breathing sounds due to inaudible parts in regular breathing and background noises. This paper assesses the potential of using smartphone acoustic sensors for passive, unguided breathing phase monitoring in a natural environment. We address the annotation challenge by developing a novel variant of the teacher-student training method that transfers knowledge from an inertial sensor to an acoustic sensor, eliminating the need for manual breathing sound annotation by fusing signal processing with deep learning techniques (a distillation sketch in this spirit appears after this list). We train and evaluate our model on breathing data collected from 131 subjects, including healthy individuals and respiratory patients. Experimental results show that our model can detect breathing phases with 77.33% accuracy using acoustic sensors. We further present an example use-case of breathing phase detection by first estimating the biomarkers from the estimated breathing phases and then using these biomarkers for pulmonary patient detection. Using the detected breathing phases, we can estimate fractional inspiratory time with 92.08% accuracy, the inhalation-exhalation ratio with 86.76% accuracy, and the breathing rate with 91.74% accuracy. Moreover, we can distinguish respiratory patients from healthy individuals with up to 76% accuracy. This paper is the first to show the feasibility of detecting regular breathing phases toward passive monitoring of respiratory health and well-being using acoustic data captured by a smartphone.
  3. Continuous monitoring of perinatal women in a descriptive case study allowed us to examine the period during which COVID-19 infection led to physiological changes in two low-income pregnant women. An important component of this study was the use of a wearable sensor device, the Oura ring, to monitor and record vital physiological parameters during sleep. Two women in their second and third trimesters, respectively, were selected based on a positive COVID-19 diagnosis. Both women were confirmed positive by polymerase chain reaction testing, during which time we collected these physiological data. In both cases, we observed 3–6 days of peak physiological changes in resting heart rate (HR), heart rate variability (HRV), and respiratory rate (RR), as well as in sleep, surrounding the onset of COVID-19 symptoms. The pregnant woman in her third trimester showed a significant increase in resting HR (p = 0.006) and RR (p = 0.048), and a significant decrease in HRV (p = 0.027) and deep sleep duration (p = 0.029). She reported experiencing moderate COVID-19 symptoms and did not require hospitalization. At 38 weeks of gestation, she had a normal delivery and gave birth to a healthy infant. The participant in her second trimester showed similar physiological changes during the 3-day peak period. Importantly, these changes appeared to return to pre-peak levels. Common symptoms reported by both cases included loss of smell and nasal congestion, with one losing her sense of taste. Results suggest the potential of using changes in cardiorespiratory responses and sleep for real-time monitoring of health and well-being during pregnancy.
  4. This demonstration presents a working prototype of VitalHub, a practical solution for longitudinal in-home vital signs monitoring. To balance the trade-off between the effort required of an individual (and thus compliance) and the robustness of vital signs monitoring, we introduce a passive monitoring solution that requires no on-body device or cooperative effort from the user. By fusing the inputs from a pair of co-located UWB and depth sensors, VitalHub achieves robust, passive, context-aware, and privacy-preserving sensing. We use a COTS UWB sensor to detect chest wall displacement due to respiration and heartbeat for vital signs extraction. We use the depth information from a Microsoft Kinect to detect and locate users in the field of view and to recognize their activities for further analysis. We have tested the prototype extensively in engineering and medical lab environments. We will demonstrate the features and performance of VitalHub using real-world data in comparison with an FDA-approved medical device.
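As referenced in item 1, the periodic phase changes of the target-bin baseband signal carry the vital signs. Below is a minimal, self-contained illustration (not from that paper) of the principle: simulated chest motion phase-modulates the BB samples, and the breathing and heart rates appear as spectral peaks in the unwrapped phase. The sampling rate, wavelength, and motion amplitudes are assumptions chosen to be plausible for a 24 GHz radar such as TinyRad.

```python
# Illustrative only: chest displacement d(t) shifts the target-bin phase
# by 4*pi*d(t)/lambda, so RR and HR show up as peaks in the phase spectrum.
import numpy as np

fs = 20.0                              # slow-time (chirp) rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)           # one minute of chirps
wavelength = 0.0125                    # 24 GHz carrier, in metres

# Simulated chest motion: 0.25 Hz breathing plus a small 1.2 Hz heartbeat.
d = 4e-3 * np.sin(2 * np.pi * 0.25 * t) + 2e-4 * np.sin(2 * np.pi * 1.2 * t)
bb = np.exp(1j * 4 * np.pi * d / wavelength)   # target-bin BB samples

phase = np.unwrap(np.angle(bb))
spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(len(phase), 1 / fs)

rr_band = (freqs > 0.1) & (freqs < 0.5)        # plausible RR range
hr_band = (freqs > 0.8) & (freqs < 2.0)        # plausible HR range
print("RR ~", 60 * freqs[rr_band][spectrum[rr_band].argmax()], "bpm")
print("HR ~", 60 * freqs[hr_band][spectrum[hr_band].argmax()], "bpm")
```

This also makes the paper's amplitude-versus-phase observation concrete: scaling `bb` by a constant (as coherent summation roughly does) changes its magnitude but leaves `np.angle(bb)`, and hence the recovered rates, unchanged.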
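And as referenced in item 2, the annotation burden can be sidestepped by distilling across sensing modalities. The sketch below is a hedged, generic rendering of that teacher-student idea, not the paper's actual models: a teacher that detects inhale/exhale from time-aligned inertial features provides soft labels for a student that sees only acoustic features. The network shapes, feature dimensions, and data are placeholders.

```python
# Generic cross-modal teacher-student training loop (placeholder models).
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
teacher = nn.Sequential(nn.Linear(16, 2))   # assumed pretrained on inertial data
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    # Placeholder batch: paired windows of acoustic features (128-d) and
    # inertial features (16-d) captured at the same time instants.
    acoustic = torch.randn(32, 128)
    inertial = torch.randn(32, 16)

    with torch.no_grad():
        soft = F.softmax(teacher(inertial), dim=-1)  # teacher's phase belief

    # The student mimics the teacher's inhale/exhale distribution
    # (cross-entropy against soft targets), so no human labels are needed.
    loss = F.cross_entropy(student(acoustic), soft)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key property is that only the teacher ever touches the inertial channel; at deployment time the student runs on acoustic input alone, which is what enables smartphone-only monitoring.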