Best Paper Award. When estimating the distance or size of an object in the real world, we often use our own body as a metric; this strategy is called body-based scaling. However, object size estimation in a virtual environment presented via a head-mounted display differs from the physical world due to technical limitations such as a narrow field of view and the low fidelity of the virtual body compared to one's real body. In this paper, we focus on increasing the fidelity of a participant's body representation in virtual environments with a personalized hand built from individual characteristics and a visually faithful augmented virtuality approach. To investigate the impact of the personalized hand, we compared it against a generic virtual hand and measured effects on virtual body ownership, spatial presence, and object size estimation. Specifically, we asked participants to perform a perceptual matching task based on scaling a virtual box on a table in front of them. Our results show that the personalized hand not only increased virtual body ownership and spatial presence but also helped participants estimate the size of a virtual object in the proximity of their hand more accurately.
This content will become publicly available on May 1, 2026
Does Hand Size Matter? The Effect of Avatar Hand Size on Non-verbal Communication in Virtual Reality
- Award ID(s): 1652210
- PAR ID: 10611914
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Transactions on Visualization and Computer Graphics
- Volume: 31
- Issue: 5
- ISSN: 1077-2626
- Page Range / eLocation ID: 3161 to 3171
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
First-person-view videos of hands interacting with tools are widely used in the computer vision industry. However, creating a dataset with pixel-wise hand segmentation is challenging, since in most videos the fingertips are occluded by the hand dorsum and the grasped tools. Current methods often rely on manual hand segmentation to create annotations, which is inefficient and costly. To address this challenge, we develop a method that uses the thermal signature of hands for efficient pixel-wise hand segmentation and use it to create a multi-modal activity video dataset. Our method is not affected by fingertip and joint occlusions and does not require hand pose ground truth. We show that it is 24 times faster than the traditional polygon labeling method while maintaining high quality. Using this segmentation method, we build a multi-modal hand activity video dataset with 790 sequences and 401,765 frames of "hands using tools" videos captured by thermal and RGB-D cameras, together with hand segmentation data. We analyze multiple models for hand segmentation performance and benchmark four segmentation networks. We show that our multi-modal dataset, fusing Long-Wave InfraRed (LWIR) and RGB-D frames, achieves 5% better hand IoU than using RGB frames alone.
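The hand-IoU numbers reported above use the standard intersection-over-union metric on binary segmentation masks. A minimal sketch of that metric (the function name and the tiny example masks are illustrative, not taken from the paper's code):

```python
import numpy as np

def hand_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between a predicted and a ground-truth binary hand mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Two empty masks agree perfectly by convention.
    return float(inter) / union if union > 0 else 1.0

# Toy 2x3 masks: 2 pixels intersect, 4 pixels in the union -> IoU = 0.5
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(hand_iou(pred, gt))  # 0.5
```

In benchmarking, this score would be averaged over all frames of the test split for each network.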
We consider the problem of in-hand dexterous manipulation with a focus on unknown or uncertain hand–object parameters, such as the hand configuration, the object pose within the hand, and the contact positions. In particular, we formulate a generic framework for hand–object configuration estimation, using underactuated hands as an example. Owing to the passive reconfigurability and the lack of encoders in the hand's joints, it is challenging to estimate, plan, and actively control underactuated manipulation. By modeling the grasp constraints, we present a particle-filter-based framework to estimate the hand configuration. Specifically, given an arbitrary grasp, we start by sampling a set of hand configuration hypotheses and then randomly manipulate the object within the hand. Observing the object's movements as evidence through an external camera, which need not be calibrated with the hand frame, our estimator computes the likelihood of each hypothesis to iteratively estimate the hand configuration. Once converged, the estimator is used to track the hand configuration in real time for subsequent manipulations. We then develop an algorithm to precisely plan and control the underactuated manipulation so as to move the grasped object to desired poses. In contrast to most other dexterous manipulation approaches, our framework does not require any tactile sensing or joint encoders and can operate directly on novel objects without requiring an object model a priori. We implemented our framework on both the Yale Model O hand and the Yale T42 hand. The results show that the estimation is accurate for different objects and that the framework can be easily adapted across different underactuated hand models. Finally, we evaluated our planning and control algorithm on handwriting tasks and demonstrated the effectiveness of the proposed framework.
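The estimation loop described above follows the standard particle-filter recipe: sample configuration hypotheses, weight each hypothesis by the likelihood of the object motion observed by the camera, and resample when the weights degenerate. A minimal one-dimensional sketch under toy assumptions (the observation model, the scalar "configuration" parameter, and the noise levels are invented for illustration and are not the paper's actual grasp model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown scalar "hand configuration" (e.g., a joint offset).
true_config = 0.7

def observe(config, action, noise=0.05):
    # Toy observation model: object displacement seen by the external camera
    # depends linearly on the unknown configuration and the commanded action.
    return config * action + rng.normal(0.0, noise)

def likelihood(obs, config, action, noise=0.05):
    # Gaussian likelihood of the observed displacement under a hypothesis.
    return np.exp(-0.5 * ((obs - config * action) / noise) ** 2)

# 1. Sample a set of hand-configuration hypotheses (particles).
particles = rng.uniform(0.0, 1.0, size=500)
weights = np.full(particles.size, 1.0 / particles.size)

# 2. Randomly manipulate the object and weight hypotheses by the evidence.
for _ in range(30):
    action = rng.uniform(-1.0, 1.0)
    obs = observe(true_config, action)
    weights *= likelihood(obs, particles, action)
    weights /= weights.sum()
    # 3. Resample (with small jitter) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.01, particles.size)
        weights = np.full(particles.size, 1.0 / particles.size)

estimate = np.average(particles, weights=weights)
print(f"estimated config ~ {estimate:.2f}")
```

Once the posterior concentrates, the same weighted set can be propagated forward to track the configuration in real time, which is the role the converged estimator plays in the planning stage.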
