We present Ring-a-Pose, a single untethered ring that tracks continuous 3D hand poses. Located in the center of the hand, the ring emits an inaudible acoustic signal that each hand pose reflects differently. Ring-a-Pose imposes minimal obtrusions on the hand, unlike multi-ring or glove systems. It is not affected by the choice of clothing that may cover wrist-worn systems. In a series of three user studies with a total of 36 participants, we evaluate Ring-a-Pose's performance on pose tracking and micro-finger gesture recognition. Without collecting any training data from a user, Ring-a-Pose tracks continuous hand poses with a joint error of 14.1mm. The joint error decreases to 10.3mm for fine-tuned user-dependent models. Ring-a-Pose recognizes 7-class micro-gestures with a 90.60% and 99.27% accuracy for user-independent and user-dependent models, respectively. Furthermore, the ring exhibits promising performance when worn on any finger. Ring-a-Pose enables the future of smart rings to track and recognize hand poses using relatively low-power acoustic sensing.
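As a point of reference for the joint-error figures above, the sketch below shows one common way such a metric is computed: the mean Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints and frames. This is a minimal illustration under assumed array shapes and a 21-joint hand skeleton, not the paper's evaluation code.

```python
import numpy as np

def mean_joint_error(pred, gt):
    """Mean per-joint Euclidean error, in the same units as the inputs.

    pred, gt: arrays of shape (num_frames, num_joints, 3) holding 3D joint
    positions; the 21-joint hand skeleton below is an illustrative assumption.
    """
    assert pred.shape == gt.shape and pred.shape[-1] == 3
    per_joint = np.linalg.norm(pred - gt, axis=-1)  # (num_frames, num_joints)
    return per_joint.mean()

# Toy usage with random data standing in for tracked and ground-truth poses (mm).
pred = 100 + 5 * np.random.randn(200, 21, 3)
gt = 100 + 5 * np.random.randn(200, 21, 3)
print(f"mean joint error: {mean_joint_error(pred, gt):.1f} mm")
```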
Interactive hand pose estimation using a stretch-sensing soft glove
We propose a stretch-sensing soft glove to interactively capture hand poses with high accuracy and without requiring an external optical setup. We demonstrate how our device can be fabricated and calibrated at low cost, using simple tools available in most fabrication labs. To reconstruct the pose from the capacitive sensors embedded in the glove, we propose a deep network architecture that exploits the spatial layout of the sensor itself. The network is trained only once, using an inexpensive off-the-shelf hand pose reconstruction system to gather the training data. The per-user calibration is then performed on-the-fly using only the glove. The glove's capabilities are demonstrated in a series of ablative experiments, exploring different models and calibration methods. Comparing against commercial data gloves, we achieve a 35% improvement in reconstruction accuracy.
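To make the idea of a network that exploits the sensor layout concrete, here is a minimal sketch that maps a 2D grid of capacitive readings to joint-angle outputs with a small convolutional model. The 4x11 grid, channel sizes, and 15-joint output are illustrative assumptions, not the architecture or sensor count used in the paper.

```python
import torch
import torch.nn as nn

class SensorGridToPose(nn.Module):
    """Toy model mapping a 2D grid of capacitive readings to joint angles.

    The 4x11 grid and the 15x3 joint-angle output are illustrative
    assumptions; the actual sensor layout and pose parameterization differ.
    """
    def __init__(self, grid_h=4, grid_w=11, num_joints=15):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * grid_h * grid_w, num_joints * 3)

    def forward(self, x):            # x: (batch, 1, grid_h, grid_w)
        feats = self.conv(x).flatten(1)
        return self.head(feats)      # (batch, num_joints * 3) joint angles

model = SensorGridToPose()
readings = torch.rand(8, 1, 4, 11)   # batch of normalized capacitance values
pose = model(readings)
print(pose.shape)                    # torch.Size([8, 45])
```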
- PAR ID: 10602565
- Publisher / Repository: Association for Computing Machinery (ACM)
- Date Published:
- Journal Name: ACM Transactions on Graphics
- Volume: 38
- Issue: 4
- ISSN: 0730-0301
- Format(s): Medium: X; Size: p. 1-15
- Sponsoring Org: National Science Foundation
More Like this
-
This paper presents the theoretical foundation, practical implementation, and empirical evaluation of a glove for interaction with 3-D virtual environments. At the dawn of the “Spatial Computing Era”, where users continuously interact with 3-D Virtual and Augmented Reality environments, the need for a practical and intuitive interaction system that can efficiently engage 3-D elements is becoming pressing. Over the last few decades, there have been attempts to provide such an interaction mechanism using a glove. However, glove systems are currently not in widespread use due to their high cost and, we propose, due to their inability to sustain high levels of performance under certain situations. Performance deterioration has been observed due to the distortion of the local magnetic field caused by ordinary ferromagnetic objects present near the glove’s operating space. There are several areas where reliable hand-tracking gloves could provide a next generation of improved solutions, such as American Sign Language training and automatic translation to text, as well as training and evaluation for activities that require high motor skills in the hands (e.g., playing some musical instruments, training of surgeons, etc.). While the use of a hand-tracking glove toward these goals seems intuitive, some of the currently available glove systems may not meet the accuracy and reliability levels required for those use cases. This paper describes our concept of an interaction glove instrumented with miniature magnetic, angular rate, and gravity (MARG) sensors and aided by a single camera. The camera used is an off-the-shelf red, green, and blue–depth (RGB-D) camera. We describe a proof-of-concept implementation of the system using our custom “GMVDK” orientation estimation algorithm. This paper also describes the glove’s empirical evaluation with human-subject performance tests. The results show that the prototype glove, using the GMVDK algorithm, is able to operate without performance losses, even in magnetically distorted environments.
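As a rough illustration of MARG-style orientation estimation, the sketch below fuses gyroscope integration with an accelerometer tilt estimate using a basic complementary filter. It is not the GMVDK algorithm described above (which also uses the magnetometer and camera aiding); the sample period, axis conventions, and blending gain are assumptions.

```python
import numpy as np

def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
    """Fuse gyro rates with accelerometer tilt into roll/pitch estimates.

    gyro:  (N, 3) angular rates in rad/s; accel: (N, 3) accelerations in g.
    Returns an (N, 2) array of [roll, pitch] in radians. Magnetometer and
    camera aiding (as in the glove above) are omitted for brevity.
    """
    roll, pitch = 0.0, 0.0
    out = np.zeros((len(gyro), 2))
    for i, (g, a) in enumerate(zip(gyro, accel)):
        # Tilt from the accelerometer alone (valid when linear acceleration is small).
        acc_roll = np.arctan2(a[1], a[2])
        acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Blend integrated gyro (smooth but drifting) with accel tilt (noisy, drift-free).
        roll = alpha * (roll + g[0] * dt) + (1 - alpha) * acc_roll
        pitch = alpha * (pitch + g[1] * dt) + (1 - alpha) * acc_pitch
        out[i] = (roll, pitch)
    return out

# Toy usage: a stationary hand with gravity along +z should stay near zero tilt.
gyro = np.zeros((100, 3))
accel = np.tile([0.0, 0.0, 1.0], (100, 1))
print(complementary_filter(gyro, accel)[-1])
```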
-
Haptic devices are in general more adept at mimicking the bulk properties of materials than they are at mimicking the surface properties. Herein, a haptic glove is described which is capable of producing sensations reminiscent of three types of near‐surface properties: hardness, temperature, and roughness. To accomplish this mixed mode of stimulation, three types of haptic actuators are combined: vibrotactile motors, thermoelectric devices, and electrotactile electrodes made from a stretchable conductive polymer synthesized in the laboratory. This polymer consists of a stretchable polyanion which serves as a scaffold for the polymerization of poly(3,4‐ethylenedioxythiophene). The scaffold is synthesized using controlled radical polymerization to afford material of low dispersity, relatively high conductivity, and low impedance relative to metals. The glove is equipped with flex sensors to make it possible to control a robotic hand and a hand in virtual reality (VR). In psychophysical experiments, human participants are able to discern combinations of electrotactile, vibrotactile, and thermal stimulation in VR. Participants trained to associate these sensations with roughness, hardness, and temperature have an overall accuracy of 98%, whereas untrained participants have an accuracy of 85%. Sensations can similarly be conveyed using a robotic hand equipped with sensors for pressure and temperature.
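A minimal sketch of the mixed-mode stimulation idea: each virtual contact exposes hardness, temperature, and roughness, and each property is routed to a different actuator channel (vibrotactile amplitude, thermoelectric setpoint, electrotactile pulse rate). The value ranges and mapping functions are illustrative assumptions, not the device's actual drive scheme.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    hardness: float       # 0 (soft) .. 1 (hard)
    temperature_c: float  # virtual surface temperature in deg C
    roughness: float      # 0 (smooth) .. 1 (rough)

def drive_actuators(c: Contact):
    """Map one virtual contact onto the three haptic channels (illustrative only)."""
    vibro_amplitude = 0.2 + 0.8 * c.hardness                     # harder -> stronger vibration
    peltier_setpoint_c = max(10.0, min(40.0, c.temperature_c))   # clamp thermal range for safety
    electro_pulse_hz = 20 + 180 * c.roughness                    # rougher -> denser pulse train
    return vibro_amplitude, peltier_setpoint_c, electro_pulse_hz

print(drive_actuators(Contact(hardness=0.7, temperature_c=35.0, roughness=0.3)))
```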
-
Motor impairments resulting from neurological disorders, such as strokes or spinal cord injuries, often impair hand and finger mobility, restricting a person’s ability to grasp and perform fine motor tasks. Brain plasticity refers to the inherent capability of the central nervous system to functionally and structurally reorganize itself in response to stimulation, which underpins rehabilitation from brain injuries or strokes. Linking voluntary cortical activity with corresponding motor execution has been identified as effective in promoting adaptive plasticity. This study introduces NeuroFlex, a motion-intent-controlled soft robotic glove for hand rehabilitation. NeuroFlex utilizes a transformer-based deep learning (DL) architecture to decode motion intent from motor imagery (MI) EEG data and translate it into control inputs for the assistive glove. The glove’s soft, lightweight, and flexible design enables users to perform rehabilitation exercises involving fist formation and grasping movements, aligning with natural hand functions for fine motor practices. The results show that the accuracy of decoding the intent of fingers making a fist from MI EEG can reach up to 85.3%, with an average AUC of 0.88. NeuroFlex demonstrates the feasibility of detecting and assisting the patient’s attempted movements using pure thinking through a non-intrusive brain–computer interface (BCI). This EEG-based soft glove aims to enhance the effectiveness and user experience of rehabilitation protocols, providing the possibility of extending therapeutic opportunities outside clinical settings.
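To ground the decoding step, here is a minimal transformer-encoder classifier over windowed multi-channel EEG, in the spirit of (but not identical to) the architecture described above; the channel count, window length, patch size, and two-class output are assumptions.

```python
import torch
import torch.nn as nn

class EEGTransformerClassifier(nn.Module):
    """Toy MI-EEG classifier: temporal patches -> transformer encoder -> logits.

    22 channels, 1000-sample windows, and 2 classes (e.g. fist vs. rest) are
    illustrative assumptions, not NeuroFlex's actual configuration.
    """
    def __init__(self, channels=22, window=1000, patch=50, d_model=64, classes=2):
        super().__init__()
        self.n_patches = window // patch
        self.embed = nn.Linear(channels * patch, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, classes)

    def forward(self, x):                      # x: (batch, channels, window)
        b, c, t = x.shape
        # Slice the window into temporal patches and flatten channels per patch.
        x = x.reshape(b, c, self.n_patches, -1).permute(0, 2, 1, 3).reshape(b, self.n_patches, -1)
        tokens = self.embed(x)                 # (batch, n_patches, d_model)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))  # average-pool tokens, then classify

logits = EEGTransformerClassifier()(torch.randn(4, 22, 1000))
print(logits.shape)  # torch.Size([4, 2])
```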
-
This work proposes a novel pose estimation model for object categories that can be effectively transferred to previously unseen environments. Deep convolutional network (CNN) models for pose estimation are typically trained and evaluated on datasets specifically curated for object detection, pose estimation, or 3D reconstruction, which requires large amounts of training data. In this work, we propose a model for pose estimation that can be trained with a small amount of data and is built on top of generic mid-level representations [33] (e.g. surface normal estimation and re-shading). These representations are trained on a large dataset without requiring pose and object annotations. The predictions are then refined with a small CNN that exploits object masks and silhouette retrieval. The presented approach achieves superior performance on the Pix3D dataset [26] and shows nearly 35% improvement over existing models when only 25% of the training data is available. We show that the approach is favorable when it comes to generalization and transfer to novel environments. Towards this end, we introduce a new pose estimation benchmark for commonly encountered furniture categories on the challenging Active Vision Dataset [1] and evaluate models trained on the Pix3D dataset.
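A rough sketch of the refinement stage described above: precomputed mid-level maps (e.g. surface normals) are concatenated with an object mask and passed through a small CNN that regresses pose parameters. The input resolution, channel counts, and three-angle pose parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PoseRefiner(nn.Module):
    """Small CNN regressing object pose from mid-level maps plus an object mask.

    Assumes 3-channel surface-normal maps and a 1-channel binary mask at
    128x128, and outputs 3 pose angles; all of these are illustrative choices.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 3),  # e.g. azimuth, elevation, in-plane rotation
        )

    def forward(self, normals, mask):
        return self.net(torch.cat([normals, mask], dim=1))

pose = PoseRefiner()(torch.randn(2, 3, 128, 128), torch.rand(2, 1, 128, 128))
print(pose.shape)  # torch.Size([2, 3])
```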