Robotic lower-limb exoskeletons can augment human mobility, but current systems require extensive, context-specific considerations, limiting their real-world viability. Here, we present a unified exoskeleton control framework that autonomously adapts assistance on the basis of instantaneous user joint moment estimates from a temporal convolutional network (TCN). When deployed on our hip exoskeleton, the TCN achieved an average root mean square error of 0.142 newton-meters per kilogram across 35 ambulatory conditions without any user-specific calibration. Further, the unified controller significantly reduced user metabolic cost and lower-limb positive work during level-ground and incline walking compared with walking without wearing the exoskeleton. This advancement bridges the gap between in-lab exoskeleton technology and real-world human ambulation, making exoskeleton control technology viable for a broad community.
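As a rough illustration of the approach described above, the sketch below pairs a minimal causal temporal convolutional network (in PyTorch) with the body-mass-normalized RMSE metric quoted in the abstract. All architectural details here (input channels, layer widths, receptive field) are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad so no future samples leak in
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        return torch.relu(self.conv(F.pad(x, (self.pad, 0))))  # causal (left-only) padding

class JointMomentTCN(nn.Module):
    """Maps a window of wearable-sensor channels to instantaneous joint-moment
    estimates, one per time step. Sizes are illustrative, not the paper's."""
    def __init__(self, n_sensors=12):
        super().__init__()
        self.body = nn.Sequential(
            CausalConvBlock(n_sensors, 32, dilation=1),
            CausalConvBlock(32, 32, dilation=2),
            CausalConvBlock(32, 32, dilation=4),
        )
        self.head = nn.Conv1d(32, 1, kernel_size=1)

    def forward(self, x):               # x: (batch, n_sensors, time)
        return self.head(self.body(x))  # (batch, 1, time)

def rmse_nm_per_kg(pred_nm, true_nm, body_mass_kg):
    """Body-mass-normalized RMSE, the metric quoted above (N*m/kg)."""
    err = (pred_nm - true_nm) / body_mass_kg
    return torch.sqrt((err ** 2).mean())
```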
PoseSonic: 3D Upper Body Pose Estimation Through Egocentric Acoustic Sensing on Smartglasses
In this paper, we introduce PoseSonic, an intelligent acoustic sensing solution for smartglasses that estimates upper body poses. Our system requires only two pairs of microphones and speakers on the hinges of the eyeglasses to emit FMCW-encoded inaudible acoustic signals and receive the reflected signals for body pose estimation. Using a customized deep learning model, PoseSonic estimates the 3D positions of nine body joints, including the shoulders, elbows, wrists, hips, and nose. We adopt a cross-modal supervision strategy to train our model, using synchronized RGB video frames as ground truth. We conducted in-lab and semi-in-the-wild user studies with 22 participants to evaluate PoseSonic; our user-independent model achieved a mean per-joint position error of 6.17 cm in the lab setting and 14.12 cm in the semi-in-the-wild setting when predicting the nine body joint positions in 3D. Further studies show that performance was not significantly affected by different surroundings, device remounting, or real-world environmental noise. Finally, we discuss the opportunities, challenges, and limitations of deploying PoseSonic in real-world applications.
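As a sketch of the sensing primitive PoseSonic builds on, the snippet below generates a linear FMCW chirp in a near-inaudible band and recovers an echo profile by cross-correlating the received frame with the transmitted one. The band, sample rate, and chirp duration are assumptions for illustration; the paper's actual signal parameters may differ.

```python
import numpy as np

FS = 48_000               # sample rate in Hz (assumed)
F0, F1 = 17_500, 22_000   # near-inaudible sweep band in Hz (assumed)
T = 0.01                  # chirp duration: 10 ms per frame (assumed)

t = np.arange(int(FS * T)) / FS
# Linear FMCW chirp: instantaneous frequency sweeps from F0 to F1 over T.
tx = np.cos(2 * np.pi * (F0 * t + (F1 - F0) / (2 * T) * t ** 2))

def echo_profile(rx, tx):
    """Cross-correlate a received frame with the transmitted chirp.
    Peaks correspond to reflecting surfaces; peak delay * c / 2 gives range."""
    corr = np.correlate(rx, tx, mode="full")
    return np.abs(corr[len(tx) - 1:])  # keep non-negative lags only

# Toy example: a single reflector delayed by 2 ms (~34 cm away at c = 343 m/s).
delay = int(0.002 * FS)
rx = np.concatenate([np.zeros(delay), 0.3 * tx])[: len(tx)]
profile = echo_profile(rx, tx)
print("strongest echo at %.3f m" % (np.argmax(profile) / FS * 343 / 2))
```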
- Award ID(s): 2239569
- PAR ID: 10492772
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Volume: 7
- Issue: 3
- ISSN: 2474-9567
- Page Range / eLocation ID: 1 to 28
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We introduce multimodal neural acoustic fields for synthesizing spatial sound, enabling the creation of immersive auditory experiences from novel viewpoints and in completely unseen environments, both virtual and real. Extending the concept of neural radiance fields to acoustics, we develop a neural-network-based model that maps an environment's geometric and visual features to its audio characteristics. Specifically, we introduce a novel hybrid transformer-convolutional neural network to accomplish two core tasks: capturing the reverberation characteristics of a scene from audio-visual data, and generating spatial sound in an unseen environment from signals recorded at sparse positions and orientations within the original scene. By learning to represent spatial acoustics in a given environment, our approach enables the creation of realistic, immersive auditory experiences, thereby enhancing the sense of presence in augmented and virtual reality applications. We validate the proposed approach on both synthetic and real-world visual-acoustic data and demonstrate that our method produces nonlinear acoustic effects such as reverberation and improves spatial audio quality compared with existing methods. Furthermore, we conduct subjective user studies demonstrating that the proposed framework significantly improves audio perception in immersive mixed reality applications.
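For intuition about the rendering step such a model supports: once a room impulse response (RIR) is predicted for a listener pose, reverberant audio can be obtained by convolving dry sound with that RIR. The sketch below uses a synthetic, exponentially decaying RIR purely for illustration; it is not the paper's model.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 16_000
dry = np.random.randn(fs)                           # 1 s of "dry" source audio
t = np.arange(int(0.5 * fs)) / fs
rir = np.random.randn(t.size) * np.exp(-t / 0.15)   # decaying tail, RT-like
rir[0] = 1.0                                        # direct-path impulse

# Reverberation is (to first order) linear filtering: wet = dry * rir.
wet = fftconvolve(dry, rir)[: dry.size]
```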
- Sleep is a vital physiological state that significantly impacts overall health. Continuous monitoring of sleep posture, heart rate, respiratory rate, and body movement is crucial for diagnosing and managing sleep disorders. Current monitoring solutions often disrupt natural sleep due to discomfort, or raise privacy and instrumentation concerns. We introduce PillowSense, a fabric-based sleep monitoring system seamlessly integrated into a pillowcase. PillowSense uses a dual-layer fabric design: the top layer comprises conductive fabrics for sensing the electrocardiogram (ECG) and surface electromyogram (sEMG), while the bottom layer features pressure-sensitive fabrics to monitor sleep location and movement. The system processes the ECG and sEMG signals sequentially to infer multiple sleep variables and incorporates an adversarial neural network to enhance posture-classification accuracy. We fabricate prototypes using off-the-shelf hardware and conduct both lab-based and in-the-wild longitudinal user studies to evaluate the system's effectiveness. Across 151 nights and 912.2 hours of real-world sleep data, the system achieves an F1 score of 88% for classifying seven sleep postures, and clinically acceptable accuracy in vital-sign monitoring. PillowSense's comfort, washability, and robustness in multi-user scenarios underscore its potential for unobtrusive, large-scale sleep monitoring.
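As a hedged sketch of one processing step mentioned above, the snippet below estimates heart rate from an ECG channel by detecting R-peaks. The synthetic waveform and thresholds are illustrative assumptions; the abstract does not describe PillowSense's ECG pipeline at this level of detail.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                                 # ECG sample rate in Hz (assumed)
t = np.arange(30 * fs) / fs              # 30 s of synthetic ECG at ~60 bpm
ecg = np.sin(2 * np.pi * 1.0 * t) ** 63  # sharp periodic "R-peak-like" pulses

# Detect R-peaks: amplitude threshold plus a refractory gap of 0.4 s.
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
rr_s = np.diff(peaks) / fs               # R-R intervals in seconds
print("heart rate: %.1f bpm" % (60 / rr_s.mean()))
```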
- Recognizing the affective state of children with autism spectrum disorder (ASD) in real-world settings is challenging due to varying head poses, illumination levels, occlusion, and a lack of datasets annotated with emotions in in-the-wild scenarios. Understanding the emotional state of children with ASD is crucial for providing personalized interventions and support. Existing methods often rely on controlled lab environments, limiting their applicability to real-world scenarios, so a framework that enables the recognition of affective states in children with ASD in uncontrolled settings is needed. This paper presents such a framework, using heart rate (HR) information in an in-the-wild setting. Specifically, an algorithm is developed that classifies a participant's emotion as positive, negative, or neutral by analyzing the heart rate signal acquired from a smartwatch. The heart rate data are obtained in real time using a smartwatch application while the child learns to code a robot and interacts with an avatar; the avatar assists the child in developing communication skills and programming the robot. We also present a semi-automated annotation technique for the heart rate data based on facial expression recognition. The HR signal is analyzed to extract features that capture the emotional state of the child, and the performance of a raw-HR-signal-based emotion classification algorithm is compared with a classification approach based on features extracted from HR signals using the discrete wavelet transform (DWT). The experimental results demonstrate that the proposed method achieves performance comparable to state-of-the-art HR-based emotion recognition techniques, despite being conducted in an uncontrolled setting rather than a controlled lab environment. By enabling emotion recognition in uncontrolled settings, this framework has the potential to improve the monitoring and understanding of the emotional well-being of children with ASD in their daily lives.
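A minimal sketch of the DWT feature step mentioned above: decompose an HR trace into wavelet sub-bands and use per-band energies as classifier features. The wavelet choice ('db4') and decomposition level are assumptions, not values from the paper.

```python
import numpy as np
import pywt

def dwt_band_energies(hr_signal, wavelet="db4", level=4):
    """Energy per wavelet sub-band, a compact feature vector for a classifier."""
    coeffs = pywt.wavedec(hr_signal, wavelet, level=level)  # [cA4, cD4, ..., cD1]
    return np.array([np.sum(c ** 2) for c in coeffs])

hr = 70 + 5 * np.sin(np.arange(256) / 10.0)  # toy heart-rate trace (bpm)
features = dwt_band_energies(hr)             # 5 features, e.g. fed to an SVM
```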
- Stress affects physical and mental health, and wearable devices have been widely used to detect daily stress through physiological signals. However, these signals vary with factors such as individual differences and health conditions, making it difficult for machine learning models to generalize. To address these challenges, we present Human Heterogeneity Invariant Stress Sensing (HHISS), a domain generalization approach designed to find consistent patterns in stress signals by removing person-specific differences. This helps the model perform more accurately across new people, environments, and stress types not seen during training. Its novelty lies in a technique called person-wise sub-network pruning intersection, which focuses on features shared across individuals, alongside leveraging continuous labels during training to prevent overfitting. The present study focuses on people with opioid use disorder (OUD), a group whose stress responses can change dramatically depending on the presence of opioids in their system, including daily timed medication for OUD (MOUD). Since stress often triggers cravings, a model that adapts well to these changes could support better OUD rehabilitation and recovery. We tested HHISS on seven stress datasets: four that we collected ourselves and three public ones. Four are from lab setups, one is from a controlled real-world driving setting, and two are real-world in-the-wild field datasets with no constraints. The present study is the first known to evaluate how well a stress detection model works across such a wide range of data. Results show that HHISS consistently outperformed state-of-the-art baseline methods, proving it both effective and practical for real-world use. Ablation studies, empirical justifications, and runtime evaluations confirm HHISS's feasibility and scalability for mobile stress sensing in sensitive real-world applications.
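The abstract names a "person-wise sub-network pruning intersection"; one plausible reading is sketched below: compute a magnitude-pruning mask per person and retain only weights that survive for everyone. The pruning criterion and keep-ratio are assumptions, not the authors' implementation.

```python
import numpy as np

def magnitude_mask(weights, keep_ratio=0.3):
    """1 where |w| is in the top keep_ratio fraction for one person's model."""
    k = int(weights.size * keep_ratio)
    thresh = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= thresh).astype(np.float32)

def invariant_subnetwork(per_person_weights, keep_ratio=0.3):
    """Intersect per-person masks: a weight survives only if it matters
    for every individual in the training pool (person-invariant structure)."""
    masks = [magnitude_mask(w, keep_ratio) for w in per_person_weights]
    return np.prod(masks, axis=0)  # elementwise AND across people

# Toy usage: one weight matrix fine-tuned per person.
people = [np.random.randn(64, 32) for _ in range(5)]
mask = invariant_subnetwork(people)
print("shared fraction: %.3f" % mask.mean())
```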