This paper proposes MilliPose, a system that facilitates full human body silhouette imaging and 3D pose estimation from millimeter-wave (mmWave) devices. Unlike existing vision-based motion capture systems, MilliPose is not privacy-invasive and is capable of working under obstructions, poor visibility, and low-light conditions. MilliPose leverages machine-learning models based on conditional Generative Adversarial Networks and Recurrent Neural Networks to solve the challenges of poor resolution, specularity, and variable reflectivity in existing mmWave imaging systems. Our preliminary results show the efficacy of MilliPose in accurately predicting body joint locations under natural human movement.
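As a rough illustration of the temporal consistency a recurrent model provides over per-frame joint predictions, the sketch below applies exponential smoothing to a sequence of 3D joint estimates. This is a stand-in for intuition only, not MilliPose's actual RNN; the smoothing factor `alpha` is an illustrative choice.

```python
import numpy as np

def smooth_joints(frames, alpha=0.3):
    """Exponentially smooth per-frame 3-D joint predictions over time -- a
    toy stand-in for the temporal consistency a recurrent model learns
    (alpha is an illustrative smoothing factor, not a system parameter)."""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    out[0] = frames[0]
    for i in range(1, len(frames)):
        # Blend the new prediction with the smoothed history.
        out[i] = alpha * frames[i] + (1 - alpha) * out[i - 1]
    return out
```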
MiShape: Accurate Human Silhouettes and Body Joints from Commodity Millimeter-Wave Devices
We propose MiShape, a millimeter-wave (mmWave) wireless signal based imaging system that generates high-resolution human silhouettes and predicts 3D locations of body joints. The system can capture human motion in real-time under low-light and low-visibility conditions. Unlike existing vision-based motion capture systems, MiShape is not privacy-invasive and can generalize to a wide range of motion tracking applications at home. To overcome the challenges of low resolution, specularity, and aliasing in images from Commercial-Off-The-Shelf (COTS) mmWave systems, MiShape designs deep learning models based on conditional Generative Adversarial Networks and incorporates the rules of human biomechanics. We have customized MiShape for gait monitoring, but the model adapts to other tracking applications with limited fine-tuning samples. We experimentally evaluate MiShape with real data collected from a COTS mmWave system for 10 volunteers of diverse ages, genders, heights, and somatotypes performing different poses. Our experimental results demonstrate that MiShape delivers high-resolution silhouettes and accurate body poses on par with an existing vision-based system, and unlocks the potential of mmWave systems, such as 5G home wireless routers, for privacy-noninvasive healthcare applications.
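One way "rules of human biomechanics" can constrain a pose prediction is to enforce fixed bone lengths on the predicted skeleton. The sketch below is a minimal illustration of that idea, not MiShape's actual model: the toy four-joint chain and its bone lengths are assumed values, and each child joint is projected onto a sphere of the canonical bone length around its parent.

```python
import numpy as np

# Hypothetical skeleton: parent index per joint (-1 = root) and the
# canonical parent-to-child bone length in metres. Illustrative values only.
PARENTS = [-1, 0, 1, 2]           # toy chain: root -> shoulder -> elbow -> wrist
BONE_LEN = [0.0, 0.25, 0.30, 0.28]

def enforce_bone_lengths(joints):
    """Project each predicted child joint onto a sphere of fixed bone length
    around its (already corrected) parent, walking the chain from the root."""
    joints = np.asarray(joints, dtype=float).copy()
    for j, p in enumerate(PARENTS):
        if p < 0:
            continue  # root joint stays where the model put it
        v = joints[j] - joints[p]
        norm = np.linalg.norm(v)
        if norm > 1e-9:
            # Keep the predicted direction, snap the length to the skeleton.
            joints[j] = joints[p] + v * (BONE_LEN[j] / norm)
    return joints
```

Joints are processed in index order so every parent is corrected before its children.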
- PAR ID: 10358468
- Date Published:
- Journal Name: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Volume: 6
- Issue: 3
- ISSN: 2474-9567
- Page Range / eLocation ID: 1 to 31
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In this work, we propose MiSleep, a deep learning augmented millimeter-wave (mmWave) wireless system that monitors human sleep posture by predicting the 3D locations of a person's body joints during sleep. Unlike existing vision- or wearable-based sleep monitoring systems, MiSleep is not privacy-invasive and does not require users to wear anything on their body. MiSleep leverages knowledge of human anatomical features and deep learning models to solve challenges in existing mmWave devices with low-resolution and aliased imaging, and specularity in signals. MiSleep builds its model by learning the relationship between mmWave reflected signals and body postures from thousands of existing samples. Since sleep in practice also involves sudden toss-turns, which could introduce errors in posture prediction, MiSleep designs a state machine based on the reflected signals to classify the sleeping states into rest or toss-turn, and predicts the posture only during rest states. We evaluate MiSleep with real data collected from Commercial-Off-The-Shelf mmWave devices for 8 volunteers of diverse ages, genders, and heights performing different sleep postures. We observe that MiSleep identifies toss-turn events' start times and durations within 1.25 s and 1.7 s of the ground truth, respectively, and predicts the 3D locations of body joints with a median error of only 1.3 cm. It performs even under blankets, with accuracy on par with an existing vision-based system, unlocking the potential of mmWave systems for privacy-noninvasive at-home healthcare applications.
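The rest/toss-turn gating described above can be sketched as a simple two-state classifier over the reflected-signal power: windows with high variance are labeled toss-turn and excluded from posture prediction. The window size and variance threshold below are illustrative placeholders, not MiSleep's trained parameters, and `predictor` is a hypothetical per-window posture model.

```python
import numpy as np

def classify_states(power, win=10, var_thresh=0.5):
    """Label each non-overlapping window of reflected-signal power as
    'rest' (low variance) or 'toss-turn' (high variance)."""
    power = np.asarray(power, dtype=float)
    labels = []
    for start in range(0, len(power) - win + 1, win):
        window = power[start:start + win]
        labels.append("toss-turn" if window.var() > var_thresh else "rest")
    return labels

def predict_postures(power, predictor, win=10, var_thresh=0.5):
    """Run the (hypothetical) posture predictor only on rest windows;
    toss-turn windows yield no prediction."""
    out = []
    for i, label in enumerate(classify_states(power, win, var_thresh)):
        out.append(predictor(i) if label == "rest" else None)
    return out
```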
-
mmWave signals form a critical component of 5G and next-generation wireless networks, which are also being increasingly considered for sensing the environment around us to enable ubiquitous IoT applications. In this context, this paper leverages the properties of mmWave signals for tracking 3D finger motion for interactive IoT applications. While conventional vision-based solutions break down under poor lighting and occlusions, and also suffer from privacy concerns, mmWave signals work under typical occlusions and non-line-of-sight conditions, while being privacy-preserving. In contrast to prior works on mmWave sensing that focus on predefined gesture classification, this work performs continuous 3D finger motion tracking. Towards this end, we first observe via simulations and experiments that the small size of fingers coupled with specular reflections does not yield stable mmWave reflections. However, we make an interesting observation that focusing on the forearm instead of the fingers can provide stable reflections for 3D finger motion tracking. Muscles that activate the fingers extend through the forearm, whose motion manifests as vibrations on the forearm. By analyzing the variation in phases of reflected mmWave signals from the forearm, this paper designs mm4Arm, a system that tracks 3D finger motion. Nontrivial challenges arise due to the high-dimensional search space, complex vibration patterns, diversity across users, hardware noise, etc. mm4Arm exploits anatomical constraints on finger motion and fuses them with machine learning architectures based on encoder-decoders and ResNets to enable accurate tracking. A systematic performance evaluation with 10 users demonstrates a median error of 5.73° (location error of 4.07 mm) with robustness to multipath and natural variation in hand position/orientation. The accuracy is also consistent under non-line-of-sight conditions and clothing that might occlude the forearm. mm4Arm runs on smartphones with a latency of 19 ms and low energy overhead.
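The core physics behind "analyzing the variation in phases of reflected mmWave signals" is that a reflector moving by Δd changes the round-trip phase by Δφ = 4πΔd/λ, so sub-wavelength vibrations are recoverable from the unwrapped phase of the I/Q samples. The sketch below illustrates this on a synthetic vibrating reflector; the 77 GHz wavelength is an illustrative assumption, not mm4Arm's published configuration.

```python
import numpy as np

WAVELENGTH = 0.00389  # metres, ~77 GHz mmWave carrier (illustrative choice)

def displacement_from_phase(iq_samples, wavelength=WAVELENGTH):
    """Recover sub-wavelength displacement from the unwrapped phase of
    complex (I/Q) reflections: delta_d = delta_phi * lambda / (4 * pi)."""
    phase = np.unwrap(np.angle(np.asarray(iq_samples)))
    return (phase - phase[0]) * wavelength / (4.0 * np.pi)

# Synthesize a reflector vibrating at 5 Hz with 1 mm amplitude and recover it.
t = np.linspace(0.0, 1.0, 500)
true_d = 0.001 * np.sin(2 * np.pi * 5 * t)            # ground-truth motion
iq = np.exp(1j * 4 * np.pi * true_d / WAVELENGTH)      # ideal reflected signal
est_d = displacement_from_phase(iq)
```

Note that the raw phase wraps at ±π (a 1 mm swing already exceeds one wrap at this wavelength), which is why `np.unwrap` is needed before scaling.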
-
Kinematic motion analysis is widely used in healthcare, sports medicine, robotics, biomechanics, sports science, etc. Motion capture systems are essential for motion analysis. There are three types of motion capture systems: marker-based capture, vision-based capture, and volumetric capture. Marker-based motion capture systems can achieve fairly accurate results, but attaching markers to a body is inconvenient and time-consuming. Vision-based, marker-less motion capture systems are more desirable because of their non-intrusiveness and flexibility. Volumetric capture is a newer and more advanced marker-less motion capture approach that can reconstruct realistic, full-body, animated 3D character models. But volumetric capture has rarely been used for motion analysis because volumetric motion data presents new challenges. We propose a new method for conducting kinematic motion analysis using volumetric capture data. This method consists of a three-stage pipeline. First, the motion is captured by a volumetric capture system. Second, the volumetric capture data is processed using the Iterative Closest Points (ICP) algorithm to generate virtual markers that track the motion. Third, the motion tracking data is imported into the biomechanical analysis tool OpenSim for kinematic motion analysis. Our motion analysis method enables users to apply numerical motion analysis to the skeleton model in OpenSim while also studying the full-body, animated 3D model from different angles. It has the potential to provide more detailed and in-depth motion analysis for areas such as healthcare, sports science, and biomechanics.
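The ICP step mentioned above alternates between two operations: match each source point to its nearest destination point, then fit the rigid transform (rotation and translation) that best aligns the matched pairs, in closed form via SVD. A minimal point-to-point sketch, not the paper's implementation, follows; point clouds and iteration count are illustrative.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the closed-form SVD step inside each ICP iteration)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    destination point, re-fit the rigid transform, repeat."""
    cur = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(1)])
        cur = cur @ R.T + t
    return cur
```

Production pipelines add correspondence rejection and convergence checks; this sketch keeps only the two alternating steps.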
-
Rehabilitation is a crucial process for patients suffering from motor disorders. The current practice is performing rehabilitation exercises under clinical expert supervision. New approaches are needed to allow patients to perform prescribed exercises at their homes and alleviate commuting requirements, expert shortages, and healthcare costs. Human joint estimation is a substantial component of these programs since it offers valuable visualization and feedback based on body movements. Camera-based systems have been popular for capturing joint motion. However, they have high cost, raise serious privacy concerns, and require strict lighting and placement settings. We propose a millimeter-wave (mmWave)-based assistive rehabilitation system (MARS) for motor disorders to address these challenges. MARS provides a low-cost solution with competitive object localization and detection accuracy. It first maps the 5D time-series point cloud from mmWave to a lower dimension. Then, it uses a convolutional neural network (CNN) to estimate the accurate locations of human joints. MARS can reconstruct 19 human joints and their skeleton from the point cloud generated by mmWave radar. We evaluate MARS using ten specific rehabilitation movements performed by four human subjects involving all body parts and obtain a mean absolute error of 5.87 cm averaged over all joint positions. To the best of our knowledge, this is the first rehabilitation movements dataset using mmWave point clouds. MARS is evaluated on the Nvidia Jetson Xavier-NX board. Model inference takes only 64 s and consumes 442 J energy. These results demonstrate the practicality of MARS on low-power edge devices.
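Mapping a 5D time-series point cloud "to a lower dimension" for a CNN commonly means rasterizing the sparse points onto a fixed-size grid. The sketch below is one illustrative way to do that, not MARS's actual preprocessing: it accumulates reflection intensity on an x-z grid, and the grid size and spatial extent are assumed values.

```python
import numpy as np

def points_to_grid(points, bins=8, extent=2.0):
    """Project a 5-D mmWave point cloud (x, y, z, doppler, intensity) onto a
    fixed-size x-z intensity grid -- a lower-dimensional, dense input a CNN
    can consume. Grid resolution and extent are illustrative choices."""
    grid = np.zeros((bins, bins), dtype=float)
    for x, y, z, doppler, intensity in points:
        # Map x and z from [-extent, extent] metres into grid indices.
        ix = int((x + extent) / (2 * extent) * bins)
        iz = int((z + extent) / (2 * extent) * bins)
        if 0 <= ix < bins and 0 <= iz < bins:
            grid[iz, ix] += intensity     # accumulate reflection strength
    return grid
```

Stacking one such grid per frame yields a dense tensor suitable for standard convolutional layers; points outside the extent are simply dropped.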