
This content will become publicly available on July 31, 2023

Title: PhyMask: Robust Sensing of Brain Activity and Physiological Signals During Sleep with an All-textile Eye Mask
Clinical-grade wearable sleep monitoring is a challenging problem, since it requires concurrently monitoring brain activity, eye movement, muscle activity, cardio-respiratory features, and gross body movements. This in turn requires multiple sensors worn at different body locations, as well as uncomfortable adhesives and discrete electronic components placed on the head. As a result, existing wearables compromise either comfort or accuracy in tracking sleep variables. We propose PhyMask, an all-textile sleep monitoring solution that is practical and comfortable for continuous use and that acquires all signals of interest for sleep monitoring using only comfortable textile sensors placed on the head. We show that PhyMask can accurately measure all the signals required for precise sleep-stage tracking and can robustly extract advanced sleep markers such as spindles and K-complexes in real-world settings. We validate PhyMask against polysomnography (PSG) and show that it significantly outperforms two commercially available sleep-tracking wearables, Fitbit and the Oura Ring.
Award ID(s):
1839999 1763524 1815347
Journal Name:
ACM Transactions on Computing for Healthcare
Page Range or eLocation-ID:
1 to 35
Sponsoring Org:
National Science Foundation
More Like this
  1. Currently, many critical care indices are repetitively assessed and recorded by overburdened nurses, e.g. physical function or facial pain expressions of nonverbal patients. In addition, much essential information about patients and their environment is not captured at all, or is captured in a non-granular manner, e.g. sleep disturbance factors such as bright light, loud background noise, or excessive visitations. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring of critically ill patients and their environment in the Intensive Care Unit (ICU). As an exemplar prevalent condition, we also characterized delirious and non-delirious patients and their environments. We used wearable sensors, light and sound sensors, and a high-resolution camera to collect data on patients and their environment, and we analyzed the collected data using deep learning and statistical analysis. Our system performed face detection, face recognition, facial action unit detection, head pose detection, facial expression recognition, posture recognition, actigraphy analysis, sound pressure and light level detection, and visitation frequency detection. We were able to detect patients' faces (mean average precision (mAP) = 0.94), recognize patients' faces (mAP = 0.80), and recognize their postures (F1 = 0.94). We also found that all facial expressions, 11 activity features, visitation frequency during the day, visitation frequency during the night, light levels, and sound pressure levels during the night were significantly different between delirious and non-delirious patients (p-value < 0.05). In summary, we showed that granular and autonomous monitoring of critically ill patients and their environment is feasible and can be used for characterizing critical care conditions and related environmental factors.
  2. Objective: Accurate implementation of real-time non-invasive Brain-Machine / Brain-Computer Interfaces (BMI/BCI) requires handling physiological and non-physiological artifacts associated with the measurement modalities. For example, scalp electroencephalographic (EEG) measurements are often considered prone to excessive motion artifacts and other types of artifacts that contaminate the EEG recordings. Although the magnitude of such artifacts heavily depends on the task and the setup, complete minimization or isolation of such artifacts is generally not possible. Approach: We present an adaptive de-noising framework with robustness properties, using a Volterra-based non-linear mapping to characterize and handle the motion artifact contamination in EEG measurements. We asked healthy able-bodied subjects to walk on a treadmill at gait speeds of 1 to 4 mph, while we tracked the motion of select EEG electrodes with an infrared video-based motion tracking system. We also placed Inertial Measurement Unit (IMU) sensors on the forehead and feet of the subjects to assess overall head movement and to segment the gait. Main Results: We discuss in detail the characteristics of the motion artifacts and propose a real-time compatible solution to filter them. We report the effective handling of both the fundamental frequency of contamination (synchronized to the walking speed) and its harmonics. Event-Related Spectral Perturbation (ERSP) analysis for walking shows that the gait dependency of artifact contamination is also eliminated at all target frequencies. Significance: The real-time compatibility and generalizability of our adaptive filtering framework allows for the effective use of non-invasive BMI/BCI systems and greatly expands the implementation types and application domains to other problems where signal denoising is desirable.
Combined with our previous efforts to filter ocular artifacts, the presented technique allows for a comprehensive adaptive filtering framework to increase the EEG signal-to-noise ratio (SNR). We believe the implementation will benefit all non-invasive neural measurement modalities, including studies discussing neural correlates of movement and other internal states, not necessarily of BMI focus.
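The abstract above describes adaptive noise cancellation driven by a Volterra-style non-linear mapping of a motion reference. A minimal sketch of that idea, assuming a single EEG channel, a single motion-reference signal, and a standard LMS weight update (the function names, memory depth, and step size `mu` are illustrative choices, not taken from the paper):

```python
import numpy as np

def volterra_features(ref, memory=3):
    """Linear taps plus second-order (quadratic) cross-terms of a short
    tapped-delay line over the motion-reference signal."""
    n = len(ref)
    taps = np.stack(
        [np.concatenate([np.zeros(d), ref[:n - d]]) for d in range(memory)], axis=1)
    quad = np.stack(
        [taps[:, i] * taps[:, j] for i in range(memory) for j in range(i, memory)],
        axis=1)
    return np.hstack([taps, quad])

def lms_cancel(eeg, ref, memory=3, mu=0.01):
    """Adaptive noise cancellation: predict the motion artifact in `eeg`
    from the non-linear reference features, subtract it, and update the
    weights with the LMS rule. Returns the cleaned signal."""
    feats = volterra_features(ref, memory)
    w = np.zeros(feats.shape[1])
    clean = np.empty_like(eeg, dtype=float)
    for t in range(len(eeg)):
        artifact_est = w @ feats[t]
        e = eeg[t] - artifact_est   # error = cleaned sample
        w += mu * e * feats[t]      # LMS weight update
        clean[t] = e
    return clean
```

Because the reference expansion includes quadratic terms, the canceller can remove harmonic distortion (e.g. the doubled-frequency component of a squared motion signal) that a purely linear filter would miss.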
  3. Advances in embedded systems have enabled the integration of many lightweight sensory devices into our daily lives. In particular, this trend has given rise to the continuous expansion of wearable sensors in a broad range of applications, from health and fitness monitoring to social networking and military surveillance. Wearables leverage machine learning techniques to profile the behavioral routines of their end-users through activity recognition algorithms. Current research assumes that such machine learning algorithms are trained offline. In reality, however, wearables demand continuous reconfiguration of their computational algorithms due to their highly dynamic operation. Developing a personalized and adaptive machine learning model requires real-time reconfiguration of the model. Due to the stringent computation and memory constraints of these embedded sensors, the training/re-training of the computational algorithms needs to be memory- and computation-efficient. In this paper, we propose a framework, based on the notion of online learning, for real-time and on-device machine learning training. We propose to transform the activity recognition problem from a multi-class classification problem into a hierarchical model of binary decisions using cascading online binary classifiers. Our results, based on Pegasos online learning, demonstrate that the proposed approach achieves 97% accuracy in detecting activities of varying intensities using limited memory, while the power usage of the system is reduced by more than 40%.
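The cascading-binary-classifier idea can be sketched as follows. The per-classifier update follows the standard published Pegasos rule (shrink-then-step with learning rate 1/(λt)); the four-class tree layout and all identifiers are hypothetical stand-ins, since the abstract does not specify the cascade structure:

```python
import numpy as np

class PegasosBinary:
    """Online linear SVM trained with the Pegasos update rule."""
    def __init__(self, dim, lam=0.01):
        self.w = np.zeros(dim)
        self.lam = lam
        self.t = 0

    def partial_fit(self, x, y):            # y in {-1, +1}
        self.t += 1
        eta = 1.0 / (self.lam * self.t)     # Pegasos step size
        self.w *= (1 - eta * self.lam)      # regularization shrink
        if y * (self.w @ x) < 1:            # hinge-loss margin violated
            self.w += eta * y * x

    def predict(self, x):
        return 1 if self.w @ x >= 0 else -1

class CascadeActivityClassifier:
    """Multi-class recognition as a cascade of binary decisions:
    a root classifier splits classes {0, 1} from {2, 3} (e.g. low- vs
    high-intensity activities), and two leaf classifiers resolve each side.
    The 4-class layout is an illustrative assumption."""
    def __init__(self, dim):
        self.root = PegasosBinary(dim)
        self.left = PegasosBinary(dim)      # class 0 vs class 1
        self.right = PegasosBinary(dim)     # class 2 vs class 3

    def partial_fit(self, x, label):
        self.root.partial_fit(x, -1 if label < 2 else 1)
        if label < 2:
            self.left.partial_fit(x, -1 if label == 0 else 1)
        else:
            self.right.partial_fit(x, -1 if label == 2 else 1)

    def predict(self, x):
        if self.root.predict(x) < 0:
            return 0 if self.left.predict(x) < 0 else 1
        return 2 if self.right.predict(x) < 0 else 3
```

Each node stores only one weight vector and updates in O(dim) time per sample, which is what makes the cascade attractive under the memory and power budgets the abstract describes.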
  4. Automatic lying posture tracking is an important factor in human health monitoring. The increasing popularity of wrist-based trackers provides the means for unobtrusive, affordable, and long-term monitoring with minimal privacy concerns for end-users, with promising results in detecting the type of physical activity, counting steps, and assessing sleep quality. However, there is limited research on the development of accurate and efficient lying posture tracking models using wrist-based sensors. Our experiments demonstrate a major drop in the accuracy of lying posture tracking with a wrist-based accelerometer due to unpredictable noise from arbitrary wrist movements and rotations during sleep. In this paper, we develop a deep transfer learning method that improves the performance of lying posture tracking on noisy wrist-sensor data by transferring knowledge from an initial setting that contains both clean and noisy data. The proposed solution learns an optimal mapping from the noisy data to the clean data in the initial setting using LSTM sequence regression, and reconstructs clean synthesized data in another setting where no clean sensor data are available. This increases the lying posture tracking F1-score by 24.9% for left-wrist and by 18.1% for right-wrist sensors compared to the case without mapping.
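As a rough illustration of the noisy-to-clean mapping idea, here is a heavily simplified stand-in that replaces the paper's LSTM sequence regression with a linear tapped-delay model fit by ridge regression: the mapping is learned where paired clean/noisy data exist, then applied where only noisy data are available. All names, the window length, and the linear model itself are assumptions, not the paper's method:

```python
import numpy as np

def windowize(sig, w=5):
    """Causal sliding windows of length w (tapped delay line) over a 1-D signal."""
    pad = np.concatenate([np.zeros(w - 1), sig])
    return np.stack([pad[i:i + w] for i in range(len(sig))])

def fit_denoiser(noisy, clean, w=5, ridge=1e-3):
    """Learn a noisy->clean sequence mapping in the source setting, where
    paired clean and noisy recordings exist (ridge-regularized least squares
    as a stand-in for LSTM sequence regression)."""
    X = windowize(noisy, w)
    return np.linalg.solve(X.T @ X + ridge * np.eye(w), X.T @ clean)

def apply_denoiser(noisy, coef, w=5):
    """Reconstruct clean synthesized data in a target setting where only
    noisy sensor data are available."""
    return windowize(noisy, w) @ coef
```

The transfer structure, learn the mapping with supervision once, then reuse it where supervision is absent, is the part this sketch shares with the abstract; a faithful implementation would model longer temporal context with a recurrent network.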
  5. Breath monitoring is important for tracking illnesses such as sleep apnea in people of all ages. One cause of concern for parents is sudden infant death syndrome (SIDS), where an infant suddenly passes away during sleep, usually due to complications in breathing. There is a variety of work and products on the market for monitoring breathing, especially for children and infants. Many of these are wearables that require attaching an accessory to the child, which can be uncomfortable. Other solutions use a camera, which can be privacy-intrusive and functions poorly at night, when lighting is poor. In this work, we introduce BuMA, an audio-based, non-intrusive, and contactless breathing monitoring system. BuMA uses a microphone array, beamforming, and audio filtering to enhance the sounds of breathing by filtering out several common noises in or near home environments, such as construction, speech, and music, that can make detection difficult. We show that BuMA improves breathing detection accuracy by up to 12%, within 30 cm of a person, over existing audio filtering algorithms or platforms that do not leverage filtering.
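One common way a microphone array enhances sound from a chosen direction is delay-and-sum beamforming: time-align each channel toward the source and average, so on-axis sound adds coherently while independent off-axis noise partially cancels. A sketch under simple assumptions (plane-wave propagation, integer-sample delays); the function and parameter names are illustrative, and BuMA's actual pipeline also includes audio filtering not shown here:

```python
import numpy as np

def delay_and_sum(mics, fs, mic_positions, source_dir, c=343.0):
    """Delay-and-sum beamforming (illustrative sketch).

    mics:          (n_mics, n_samples) array of synchronized recordings.
    mic_positions: (n_mics, 2) microphone coordinates in meters.
    source_dir:    unit vector pointing from the array toward the source.
    Microphones closer to the source receive the wavefront earlier, so
    their channels are delayed the most before averaging.
    """
    delays = mic_positions @ source_dir / c      # relative arrival lead, seconds
    delays -= delays.min()                       # make all delays non-negative
    shifts = np.round(delays * fs).astype(int)   # integer-sample approximation
    n = mics.shape[1]
    out = np.zeros(n)
    for ch, s in zip(mics, shifts):
        out[s:] += ch[:n - s] if s else ch       # delay channel by s samples
    return out / len(mics)
```

With `n` microphones carrying the same aligned signal plus independent noise, averaging leaves the signal intact while reducing noise power by roughly a factor of `n`, which is the gain that makes faint breathing sounds easier to detect.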