Continuous monitoring of respiration provides invaluable insights into health status (e.g., the progression of or recovery from diseases). Recent advancements in radio frequency (RF) technologies show promise for continuous respiration monitoring by virtue of their non-invasive nature, and are preferred over wearable solutions that require frequent charging and continuous wearing. However, RF signals are susceptible to large body movements, which are inevitable in real life, challenging the robustness of respiration monitoring. While many existing methods have been proposed to achieve robust RF-based respiration monitoring, their reliance on supervised data limits their broad applicability. In this context, we propose RF-Q, an unsupervised/self-supervised model for signal quality assessment and quality-aware estimation toward robust RF-based respiration monitoring. RF-Q uses the reconstruction error of an autoencoder (AE) neural network to quantify the quality of respiratory information in RF signals without the need for data labeling. By combining the quantified signal quality and the reconstructed signal in a weighted fusion, we achieve improved robustness of RF respiration monitoring. We demonstrate that, instead of applying sophisticated models devised with domain expertise using a considerable amount of labeled data, simply quantifying the signal quality in an unsupervised manner can significantly boost the average end-to-end (e2e) respiratory rate estimation accuracy of a baseline by an improvement ratio of 2.75, higher than the gain of 1.94 achieved by a supervised baseline method that excludes distorted data.
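As a minimal sketch of the idea above (the exponential quality mapping and function names are illustrative assumptions, not the paper's exact formulation), the AE reconstruction error can be mapped to a per-segment quality score that weights the fusion of rate estimates:

```python
import numpy as np

def quality_score(segment, reconstruct):
    """Quality from AE reconstruction error: clean respiratory
    segments reconstruct well, motion-distorted ones do not.
    `reconstruct` stands in for a trained autoencoder's forward pass."""
    residual = np.mean((segment - reconstruct(segment)) ** 2)
    return np.exp(-residual)  # map error to (0, 1]; higher means cleaner

def weighted_rate(rates, qualities):
    """Quality-weighted fusion of per-segment respiratory rate estimates."""
    w = np.asarray(qualities, dtype=float)
    return float(np.sum(w * np.asarray(rates, dtype=float)) / np.sum(w))
```

In this sketch, heavily distorted segments contribute little to the fused rate instead of being discarded outright, which is one way to realize quality-aware estimation.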
                    
                            
                            Poster: Quantifying Signal Quality Using Autoencoder for Robust RF-based Respiration Monitoring
                        
                    
    
While radio frequency (RF) based respiration monitoring for at-home health screening is receiving increasing attention, robustness remains an open challenge. In recent work, deep learning (DL) methods have been demonstrated effective in dealing with non-linear issues from multi-path interference to motion disturbance, thus improving the accuracy of RF-based respiration monitoring. However, such DL methods usually require large amounts of training data with intensive manual labeling efforts, which are frequently not openly available. We propose RF-Q for robust RF-based respiration monitoring, using self-supervised learning with an autoencoder (AE) neural network to quantify the quality of the respiratory signal based on the residual between the original and reconstructed signals. We demonstrate that, by simply quantifying the signal quality with the AE for weighted estimation, we can boost the end-to-end (e2e) respiration monitoring accuracy by an improvement ratio of 2.75 compared to a baseline.
        
    
                            - Award ID(s):
- 1951880
- PAR ID:
- 10439763
- Date Published:
- Journal Name:
- ACM/IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- 
Radar-based solutions support practical and longitudinal respiration monitoring owing to their non-invasive nature. Nighttime respiration monitoring at home provides rich, high-quality data, mostly free of motion disturbances because the user is quasi-stationary during sleep, and covers 6-8 hours per day rather than tens of minutes, which is promising for longitudinal studies. However, most existing work was conducted in laboratory environments for short periods; thus the environment, user motions, and postures can differ significantly from those in real homes. To understand how to obtain quality overnight respiration data in real homes, we conduct a thorough experimental study with 6 participants of various sleep postures over 9 nights in 4 real-home testbeds, each configured with 3-4 sensors around the bed. We first compare the performance among four typical sensor placements around the bed to understand which is the optimal location for high-quality data. Then we explore methods to track range bins with high-quality signals as occasional user motions change the distance and thus the signal quality, and examine different aspects of amplitude and phase data to further improve the signal quality, using metrics of the periodicity-to-noise ratio (PNR) and end-to-end (e2e) accuracy. The experiments demonstrate that sensor placement is a vital factor, and the bedside is an optimal choice considering both accuracy and ease of deployment (2.65 bpm error at the 80th percentile), consistent among four typical sleep postures. We also observe that a proper range bin selection method can improve the PNR by 2 dB at the 75th percentile, and the e2e accuracy by 0.9 bpm at the 80th percentile. Both amplitude and phase data have comparable e2e accuracy, while phase is more sensitive to motions and thus suitable for nighttime movement detection. Based on these discoveries, we develop a few simple practical guidelines useful for the community to achieve high-quality, longitudinal home-based overnight respiration monitoring.
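The PNR metric used above can be sketched as the ratio of the peak spectral power inside the respiratory band to the mean power outside it, in dB; the band limits and exact definition here are illustrative assumptions, not necessarily the paper's formulation:

```python
import numpy as np

def pnr_db(signal, fs, band=(0.1, 0.5)):
    """Periodicity-to-noise ratio (dB): peak power in the assumed
    respiratory band (0.1-0.5 Hz, i.e. 6-30 breaths/min) relative to
    the mean power elsewhere in the spectrum."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = spec[in_band].max()
    noise = spec[~in_band][1:].mean()  # skip the DC bin
    return 10.0 * np.log10(peak / noise)
```

A range-bin selection step could then simply keep the bin whose signal maximizes this score, e.g. `best = max(bins, key=lambda b: pnr_db(b, fs))`.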
- 
Teeth scans are essential for many applications in orthodontics, where the teeth structures are virtualized to facilitate the design and fabrication of the prosthetic piece. Nevertheless, due to limitations caused by factors such as viewing angles, occlusions, and sensor resolution, the 3D scanned point clouds (PCs) can be noisy or incomplete. Hence, there is a critical need to enhance the quality of the teeth PCs to ensure suitable dental treatment. Toward this end, we propose a systematic framework including a two-step data augmentation (DA) technique to augment the limited teeth PCs and a hybrid deep learning (DL) method to complete the incomplete PCs. For the two-step DA, we first mirror and combine the PCs based on the bilateral symmetry of the human teeth and then augment the PCs based on an iterative generative adversarial network (GAN). Two filters are designed to avoid outlier and duplicated PCs during the DA. For the hybrid DL, we first use a deep autoencoder (AE) to represent the PCs. Then, we propose a hybrid approach that selects the best completion of the teeth PCs from the AE and a reinforcement learning (RL) agent-controlled GAN. An ablation study is performed to analyze each component's contribution. We compared our method with other benchmark methods including point cloud network (PCN), cascaded refinement network (CRN), and variational relational point completion network (VRC-Net), and demonstrated that the proposed framework is suitable for completing teeth PCs with good accuracy over different scenarios.
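The mirroring step of the two-step DA can be sketched as follows; the choice of symmetry axis and the centering step are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def mirror_augment(pc, axis=0):
    """First DA step: exploit the bilateral symmetry of the dental arch
    by mirroring the point cloud across the mid-sagittal plane (modeled
    here as the plane where the chosen coordinate is zero) and merging
    the original and mirrored copies."""
    centered = pc - pc.mean(axis=0)       # put the symmetry plane through the origin
    mirrored = centered.copy()
    mirrored[:, axis] = -mirrored[:, axis]
    return np.vstack([centered, mirrored])
```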
- 
This paper presents Hit2Flux, a machine learning framework for boiling heat flux prediction using acoustic emission (AE) hits generated through threshold-based transient sampling. Unlike continuously sampled data, AE hits are recorded when the signal exceeds a predefined threshold and are thus discontinuous in nature. Meanwhile, each hit represents a waveform at a high sampling frequency (1 MHz). In order to capture the features of both the high-frequency waveforms and the temporal distribution of hits, Hit2Flux involves i) feature extraction by transforming AE hits into the frequency domain and organizing these spectra into sequences using a rolling window to form "sequences-of-sequences," and ii) heat flux prediction using a long short-term memory (LSTM) network with sequences of sequences. The model is trained on AE hits recorded during pool boiling experiments using an AE sensor attached to the boiling chamber. Continuously sampled acoustic data from a hydrophone were also collected as a reference data set for this study. Results demonstrate that the proposed AE-based method achieves performance comparable to hydrophones, validating its potential for heat flux monitoring. Additionally, it is shown that the inclusion of multiple acoustic emission hits as model inputs leads to higher performance. The Hit2Flux model is also compared to methods pairing various signal preparation techniques with state-of-the-art models. This comparison further highlights the superior accuracy of the proposed approach. The developed Hit2Flux algorithm can be applied to other transient sampling events, such as structural health monitoring and detection of electromagnetic pulses, among others.
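The "sequences-of-sequences" feature extraction described above can be sketched as follows; the window and step sizes are illustrative, not the paper's settings:

```python
import numpy as np

def hits_to_sequences(hits, win=4, step=1):
    """Turn a list of AE hit waveforms into 'sequences of sequences':
    each hit becomes a magnitude spectrum, and a rolling window groups
    `win` consecutive spectra into one LSTM input sample."""
    spectra = [np.abs(np.fft.rfft(h)) for h in hits]   # per-hit frequency features
    return [np.stack(spectra[i:i + win])               # sample of shape (win, n_freq)
            for i in range(0, len(spectra) - win + 1, step)]
```

Each returned sample is a short sequence of spectra, so a downstream LSTM sees both the within-hit frequency content and the across-hit temporal pattern.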
- 
Emerging Virtual Reality (VR) displays with embedded eye trackers are currently becoming commodity hardware (e.g., HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system to identify users utilizing minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models: convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that the ML and DL models could identify users with over 98% accuracy with only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
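The kNN variant of the identification pipeline can be sketched with a minimal majority-vote classifier over gaze-feature vectors; the feature choice and k are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def knn_identify(train_feats, train_users, query, k=3):
    """Minimal kNN user identification over gaze-feature vectors
    (e.g., fixation duration, saccade amplitude): find the k training
    samples nearest to the query and return the majority user label."""
    d = np.linalg.norm(np.asarray(train_feats) - query, axis=1)  # Euclidean distances
    nearest = np.asarray(train_users)[np.argsort(d)[:k]]
    users, counts = np.unique(nearest, return_counts=True)
    return users[np.argmax(counts)]                              # majority vote
```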
 An official website of the United States government