

Title: mTeeth: Identifying Brushing Teeth Surfaces Using Wrist-Worn Inertial Sensors
Ensuring that all the teeth surfaces are adequately covered during daily brushing can reduce the risk of several oral diseases. In this paper, we propose the mTeeth model to detect teeth surfaces being brushed with a manual toothbrush in the natural free-living environment using wrist-worn inertial sensors. To unambiguously label sensor data corresponding to different surfaces and capture all transitions that last only milliseconds, we present a lightweight method to detect the micro-event of brushing strokes that cleanly demarcates transitions among brushing surfaces. Using features extracted from brushing strokes, we propose a Bayesian Ensemble method that leverages the natural hierarchy among teeth surfaces and patterns of transition among them. For training and testing, we enrich a publicly available wrist-worn inertial sensor dataset collected from the natural environment with time-synchronized, precise labels of brushing surface timings and moments of transition. We annotate 10,230 instances of brushing on different surfaces from 114 episodes and evaluate the impact of wide between-person and within-person, between-episode variability on the machine learning model's performance for brushing surface detection.
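The stroke-level segmentation step described in the abstract can be pictured with a short, hedged sketch: candidate brushing strokes are detected as peaks in a band-passed wrist accelerometer signal, so stroke boundaries can demarcate transitions between surfaces. The function names, sampling rate, filter band, and thresholds below are illustrative assumptions, not the mTeeth implementation.

```python
# Hypothetical sketch of brushing-stroke segmentation from a wrist accelerometer.
# Sampling rate, filter band, and thresholds are illustrative assumptions only.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50.0  # assumed sampling rate in Hz

def bandpass(x, low=1.0, high=5.0, fs=FS, order=3):
    """Keep the 1-5 Hz band where manual brushing strokes typically fall."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def detect_strokes(accel, fs=FS):
    """Return sample indices of candidate brushing strokes.

    accel: (N, 3) accelerometer samples from the brushing-hand wrist.
    Peaks in the band-passed dominant axis stand in for individual strokes,
    so their boundaries can demarcate transitions between surfaces.
    """
    axis = int(np.argmax(accel.var(axis=0)))          # axis with the most motion
    filt = bandpass(accel[:, axis] - accel[:, axis].mean(), fs=fs)
    peaks, _ = find_peaks(filt, distance=int(0.15 * fs), prominence=0.1)
    return peaks

# Example on synthetic data: a 4 Hz oscillation standing in for strokes.
t = np.arange(0, 10, 1 / FS)
synthetic = np.column_stack([2.0 * np.sin(2 * np.pi * 4 * t),
                             np.zeros_like(t),
                             9.8 * np.ones_like(t)])
print(len(detect_strokes(synthetic)), "candidate strokes detected")
```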
Award ID(s):
1640813 1823221
NSF-PAR ID:
10290605
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
5
Issue:
2
ISSN:
2474-9567
Page Range / eLocation ID:
1 to 25
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper presents ssLOTR (self-supervised learning on the rings), a system that shows the feasibility of designing self-supervised learning-based techniques for 3D finger motion tracking using a custom-designed wearable inertial measurement unit (IMU) sensor with minimal overhead of labeled training data. Ubiquitous finger motion tracking enables a number of applications in augmented and virtual reality, sign language recognition, rehabilitation healthcare, sports analytics, etc. However, unlike vision, there are no large-scale training datasets for developing robust machine learning (ML) models on wearable devices. ssLOTR designs ML models based on data augmentation and self-supervised learning to first extract efficient representations from raw IMU data without the need for any training labels. The extracted representations are further trained with small-scale labeled training data. In comparison to fully supervised learning, we show that only 15% of labeled training data is sufficient with self-supervised learning to achieve similar accuracy. Our sensor device is designed using a two-layer printed circuit board (PCB) to minimize the footprint and uses a combination of polylactic acid (PLA) and thermoplastic polyurethane (TPU) as housing materials for sturdiness and flexibility. It incorporates a system-on-chip (SoC) microcontroller with integrated WiFi/Bluetooth Low Energy (BLE) modules for real-time wireless communication, portability, and ubiquity. In contrast to gloves, our device is worn like rings on fingers and therefore does not impede dexterous finger motion. Extensive evaluation with 12 users shows a 3D joint angle tracking accuracy of 9.07° (joint position accuracy of 6.55 mm) with robustness to natural variation in sensor positions, wrist motion, etc., with low overhead in latency and power consumption on embedded platforms.
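As a rough illustration of the self-supervised pretraining idea described above, the hedged sketch below applies two random augmentations (jitter and scaling) to the same unlabeled IMU window and trains a small 1-D CNN encoder with an NT-Xent contrastive loss; the small labeled set would then fine-tune on top of this encoder. The augmentations, architecture, and hyperparameters are generic assumptions, not the ssLOTR design.

```python
# Generic sketch of contrastive self-supervised pretraining on unlabeled IMU windows.
# Augmentations, encoder, and loss are common assumptions, not the ssLOTR design.
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(x):
    """Two common IMU augmentations: random jitter and random per-window scaling."""
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1)
    return (x + noise) * scale

class Encoder(nn.Module):
    """Tiny 1-D CNN mapping an IMU window (channels x time) to a unit-norm embedding."""
    def __init__(self, in_ch=6, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent loss: two augmented views of the same window attract each other."""
    b = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                       # (2B, dim)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# One pretraining step on a batch of unlabeled windows (batch, 6 IMU channels, 100 samples).
encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
windows = torch.randn(16, 6, 100)
loss = nt_xent(encoder(augment(windows)), encoder(augment(windows)))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```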
  2. Automatic action identification from video and kinematic data is an important machine learning problem with applications ranging from robotics to smart health. Most existing works focus on identifying coarse actions such as running, climbing, or cutting vegetables, which have relatively long durations and a complex series of motions. This is an important limitation for applications that require identification of more elemental motions at high temporal resolution. For example, in the rehabilitation of arm impairment after stroke, quantifying the training dose (number of repetitions) requires differentiating motions with sub-second durations. Our goal is to bridge this gap. To this end, we introduce a large-scale, multimodal dataset, StrokeRehab, as a new action-recognition benchmark that includes elemental short-duration actions labeled at a high temporal resolution. StrokeRehab consists of high-quality inertial measurement unit sensor and video data of 51 stroke-impaired patients and 20 healthy subjects performing activities of daily living like feeding, brushing teeth, etc. Because it contains data from both healthy and impaired individuals, StrokeRehab can be used to study the influence of distribution shift in action-recognition tasks. When evaluated on StrokeRehab, current state-of-the-art models for action segmentation produce noisy predictions, which reduces their accuracy in identifying the corresponding sequence of actions. To address this, we propose a novel approach for high-resolution action identification, inspired by speech-recognition techniques, which is based on a sequence-to-sequence model that directly predicts the sequence of actions. This approach outperforms current state-of-the-art methods on StrokeRehab, as well as on the standard benchmark datasets 50Salads, Breakfast, and JIGSAWS.
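The sequence-to-sequence idea, predicting the action sequence directly rather than a per-frame label stream, can be sketched with a minimal GRU encoder-decoder. Everything below (feature dimensions, vocabulary size, teacher forcing) is an illustrative assumption, not the paper's architecture.

```python
# Minimal sketch of sequence-to-sequence action identification: a GRU encoder reads the
# sensor stream and a GRU decoder emits the action sequence directly.
# Architecture and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class Seq2SeqActions(nn.Module):
    def __init__(self, in_dim=12, hidden=128, n_actions=20):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(n_actions + 1, hidden)   # +1 for a start token
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_actions)

    def forward(self, sensors, target_actions):
        """sensors: (B, T, in_dim); target_actions: (B, L) action ids for teacher forcing."""
        _, h = self.encoder(sensors)                        # summarize the sensor stream
        start = torch.full_like(target_actions[:, :1], self.embed.num_embeddings - 1)
        dec_in = self.embed(torch.cat([start, target_actions[:, :-1]], dim=1))
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out)                            # (B, L, n_actions) logits

model = Seq2SeqActions()
sensors = torch.randn(4, 300, 12)          # e.g. 300 frames of kinematic/IMU features
actions = torch.randint(0, 20, (4, 7))     # a short sequence of 7 elemental actions
logits = model(sensors, actions)
loss = nn.functional.cross_entropy(logits.reshape(-1, 20), actions.reshape(-1))
print(logits.shape, float(loss))
```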
  3.
    Study Objectives: The usage of wrist-worn wearables to detect sleep–wake states remains a formidable challenge, particularly among individuals with disordered sleep. We developed a novel and unbiased data-driven method for the detection of sleep–wake and compared its performance with the well-established Oakley algorithm (OA) relative to polysomnography (PSG) in elderly men with disordered sleep. Methods: Overnight in-lab PSG from 102 participants was compared with accelerometry and photoplethysmography simultaneously collected with a wearable device (Empatica E4). A binary segmentation algorithm was used to detect change points in these signals. A model that estimates sleep or wake states given the changes in these signals was established (change point decoder, CPD). The CPD's performance was compared with the performance of the OA in relation to PSG. Results: On the testing set, OA provided sleep accuracy of 0.85, wake accuracy of 0.54, AUC of 0.67, and Kappa of 0.39. Comparable values for CPD were 0.70, 0.74, 0.78, and 0.40. The CPD method had sleep onset latency error of −22.9 min, sleep efficiency error of 2.09%, and underestimated the number of sleep–wake transitions with an error of 64.4. The OA method's performance was 28.6 min, −0.03%, and −17.2, respectively. Conclusions: The CPD aggregates information from both cardiac and motion signals for state determination as well as the cross-dimensional influences from these domains. Therefore, CPD classification achieved balanced performance and higher AUC, despite underestimating sleep–wake transitions. The CPD could be used as an alternate framework to investigate sleep–wake dynamics within the conventional time frame of 30-s epochs.
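A minimal sketch of the change-point step, using the open-source ruptures library as a stand-in for the binary segmentation described above: change points are detected jointly over motion and cardiac channels, and a toy rule then labels each segment sleep or wake. The synthetic data, thresholds, and segment rule are assumptions; the actual CPD estimates states from the detected changes.

```python
# Hedged sketch of binary-segmentation change-point detection on motion + heart-rate
# channels, followed by a toy sleep/wake rule. Not the CPD model itself.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Synthetic night of 1-minute epochs: wake / sleep / wake blocks.
activity = np.concatenate([rng.gamma(5, 2, 60), rng.gamma(1, 0.5, 360), rng.gamma(5, 2, 60)])
hr = np.concatenate([rng.normal(70, 3, 60), rng.normal(55, 2, 360), rng.normal(72, 3, 60)])
signal = np.column_stack([activity, hr])

# Binary segmentation over both channels jointly.
algo = rpt.Binseg(model="l2").fit(signal)
breakpoints = algo.predict(n_bkps=2)          # indices that end each segment

# Toy state rule: low activity and low heart rate within a segment -> sleep.
start = 0
for end in breakpoints:
    seg = signal[start:end]
    state = "sleep" if seg[:, 0].mean() < 3 and seg[:, 1].mean() < 65 else "wake"
    print(f"epochs {start:3d}-{end:3d}: {state}")
    start = end
```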
  4. Hand hygiene is crucial in preventing the spread of infections and diseases. Lack of hand hygiene is one of the major reasons for healthcare-associated infections (HAIs) in hospitals. Adherence to hand hygiene by workers in the food business is very important for preventing food-borne illness. In addition to healthcare settings and food businesses, hand washing is also vital for personal well-being. Despite the importance of hand hygiene, people often do not wash their hands when necessary. Automatic detection of hand washing activity can facilitate just-in-time alerts when a person forgets to wash their hands. Monitoring hand washing practices is also essential in ensuring accountability and providing personalized feedback, particularly in hospitals and food businesses. Inertial sensors available in smart wrist devices can capture hand movements, so it is feasible to detect hand washing using these devices. However, it is challenging to detect hand washing using wrist wearable sensors since hand movements are associated with a wide range of activities. In this paper, we present HAWAD, a robust solution for hand washing detection using wrist wearable inertial sensors. We leverage the distribution of the penultimate-layer output of a neural network to detect hand washing from among a wide range of activities. Our method reduces false positives by 77% and improves the F1-score by 30% compared to the baseline method.
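One common way to leverage the distribution of penultimate-layer outputs is to fit a Gaussian to the features of confirmed hand-washing windows and reject windows whose Mahalanobis distance is too large; the hedged sketch below shows that pattern. The network, feature dimensions, and threshold are assumptions, and HAWAD's exact formulation may differ.

```python
# Hedged sketch: model the penultimate-layer feature distribution of hand-washing
# windows and threshold the Mahalanobis distance at test time. Illustrative only.
import numpy as np
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, in_dim=30, hidden=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 2)          # handwash vs. other (trained elsewhere)

    def features(self, x):
        return self.backbone(x)                   # penultimate-layer output

    def forward(self, x):
        return self.head(self.features(x))

net = Net()
# Assume `train_x` holds window features of confirmed hand-washing segments.
train_x = torch.randn(200, 30)
with torch.no_grad():
    feats = net.features(train_x).numpy()
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
cov_inv = np.linalg.inv(cov)

def mahalanobis(f):
    d = f - mu
    return float(np.sqrt(d @ cov_inv @ d))

# At test time, a large distance means the window does not look like hand washing.
test_window = torch.randn(1, 30)
with torch.no_grad():
    dist = mahalanobis(net.features(test_window).numpy()[0])
THRESHOLD = 10.0                                  # would be tuned on validation data
print("hand washing" if dist < THRESHOLD else "other activity", f"(distance={dist:.2f})")
```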
  5. Pleasant brush therapies may benefit those with autism, trauma, and anxiety. While studies monitor brushing velocity, hand delivery of brush strokes introduces variability. Detailed measurements of human-delivered brushing physics may help understand such variability and its subsequent impact on receivers' perceived pleasantness. Herein, we instrument a brush with multi-axis force and displacement sensors to measure brushing physics as 12 participants pleasantly stroke a receiver's forearm. Algorithmic procedures identify skin contact and define four stages: arrival, stroke, departure, and airtime between strokes. Torque magnitude, rather than force, is evaluated as a metric to minimize inertial noise, as it registers brush bend and orientation. Overall, the results of the naturally delivered brushing experiments indicate forces around 0.4 N and velocities of 3-10 cm/s, in alignment with prior work. However, we observe significant variance between brushers across velocity, force, torque, and brushstroke length. Upon further analysis, torque and force measures are correlated, yet torque provides distinct information from velocity. In the preliminary case study evaluating the receiver's response to individual differences between brushers, higher pleasantness is tied to lower mean torque and lower instantaneous variance over the stroke duration. Torque magnitude appears to complement velocity's influence on perceived pleasantness.
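The torque-magnitude metric can be illustrated with a short sketch: given an assumed lever arm from the force sensor to the bristles, torque is the cross product of the lever arm with the measured force vector, and its magnitude is summarized per stroke (the mean and instantaneous variance referenced above). The sensor geometry and numbers are illustrative, not the study's rig.

```python
# Hedged sketch of a torque-magnitude metric from brush-mounted force sensing.
# Lever arm and example data are assumptions, not the instrumented brush's geometry.
import numpy as np

LEVER_ARM = np.array([0.0, 0.0, 0.12])   # assumed 12 cm from sensor to bristles (m)

def torque_magnitude(forces):
    """forces: (N, 3) force samples in newtons -> (N,) torque magnitudes in N*m."""
    torques = np.cross(LEVER_ARM, forces)
    return np.linalg.norm(torques, axis=1)

def stroke_summary(forces, stroke_bounds):
    """Mean and variance of |tau| within each (start, end) stroke interval."""
    stats = []
    for start, end in stroke_bounds:
        mag = torque_magnitude(forces[start:end])
        stats.append((mag.mean(), mag.var()))
    return stats

# Example: 2 s of 100 Hz force data, ~0.4 N normal force with small fluctuations.
rng = np.random.default_rng(1)
forces = np.column_stack([rng.normal(0.05, 0.02, 200),
                          rng.normal(0.4, 0.05, 200),
                          rng.normal(0.02, 0.01, 200)])
for mean, var in stroke_summary(forces, [(0, 100), (100, 200)]):
    print(f"stroke: mean |tau| = {mean:.4f} N*m, variance = {var:.6f}")
```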