

Title: FMKit: An In-Air-Handwriting Analysis Library and Data Repository
Hand gestures and in-air-handwriting provide ways for users to input information in Augmented Reality (AR) and Virtual Reality (VR) applications where a physical keyboard or touch screen is unavailable. However, understanding the movement of hands and fingers is challenging and requires large amounts of data and data-driven models. In this paper, we propose an open research infrastructure named FMKit for in-air-handwriting analysis, which contains a set of Python libraries and a data repository collected from over 180 users with two different types of motion capture sensors. We also present three research tasks enabled by FMKit, including in-air-handwriting-based user authentication, user identification, and word recognition, along with preliminary baseline performance.
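As a concrete illustration of the matching step behind tasks such as in-air-handwriting authentication, the sketch below compares two fingertip-trajectory signals with dynamic time warping (DTW), a common baseline for motion signals of unequal length. The function names, the signal format (one 3-D position per frame), and the decision threshold are illustrative assumptions for this sketch, not the actual FMKit API.

```python
# Minimal DTW-based matching sketch for in-air-handwriting signals.
# Hypothetical names and data format; not the FMKit library interface.
import math

def dtw_distance(a, b):
    """DTW distance between two sequences of 3-D fingertip positions."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])       # Euclidean frame distance
            cost[i][j] = d + min(cost[i - 1][j],    # skip a template frame
                                 cost[i][j - 1],    # skip an attempt frame
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

def authenticate(template, attempt, threshold=1.0):
    """Accept the attempt if its DTW distance to the enrolled template
    falls below a threshold (tuned per user in practice)."""
    return dtw_distance(template, attempt) < threshold

# Toy example: a genuine attempt is a slightly time-warped copy of the
# template; a forgery follows a different trajectory.
template = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
genuine  = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
forgery  = [(0.0, 5.0, 0.0), (1.0, 5.0, 0.0), (2.0, 6.0, 0.0)]
print(authenticate(template, genuine))   # True (distance 0: warped copy)
print(authenticate(template, forgery))   # False (large distance)
```

The same distance could serve user identification (nearest enrolled template) and word recognition (nearest word template), which is why a single matching primitive is a natural starting point for all three tasks.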
Award ID(s):
1925709
NSF-PAR ID:
10201287
Author(s) / Creator(s):
Date Published:
Journal Name:
CVPR Workshop on Computer Vision for Augmented and Virtual Reality
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  2. This article provides a systematic review of research related to Human–Computer Interaction (HCI) techniques supporting training and learning in various domains, including medicine, healthcare, and engineering. The focus is on HCI techniques involving Extended Reality (XR) technology, which encompasses Virtual Reality, Augmented Reality, and Mixed Reality. HCI-based research is assuming more importance with the rapid adoption of XR tools and techniques in various training and learning contexts, including education. There are many challenges in the adoption of HCI approaches, which creates a need for a comprehensive and systematic review of such HCI methods across domains. This article addresses that need by providing a systematic literature review of a cross-section of HCI approaches involving XR proposed so far. The PRISMA-guided search strategy identified 1156 articles for abstract review. Irrelevant abstracts were discarded; the full body of each remaining article was then reviewed, and articles not linked to the scope of our specific issue were also eliminated. Following the application of inclusion/exclusion criteria, 69 publications were chosen for review. The article is divided into the following sections: Introduction; Research methodology; Literature review; Threats to validity; Future research; and Conclusion. Detailed classifications (pertaining to HCI criteria and concepts such as affordance, training, and learning techniques) have also been included, based on an analysis of the research techniques adopted by various investigators. The article concludes with a discussion of the key challenges for this HCI area along with future research directions. A review of the research outcomes from these publications underscores the potential for greater success when such HCI-based approaches are adopted during 3D-based training interactions. Such a higher degree of success may be due to the emphasis on the design of user-friendly (and user-centric) training environments, interactions, and processes that positively impact the cognitive abilities of users and their respective learning/training experiences. Through answers to three exploratory study questions, we found evidence validating XR-HCI as an ascending approach that brings a new paradigm by enhancing skills and safety while reducing costs and learning time. We believe the findings of this study will aid academics in developing new research avenues that help XR-HCI applications mature and become more widely adopted.
  4. By allowing people to manipulate digital content placed in the real world, Augmented Reality (AR) provides immersive and enriched experiences in a variety of domains. Despite its increasing popularity, providing a seamless AR experience under bandwidth fluctuations is still a challenge, since delivering these experiences at photorealistic quality with minimal latency requires high bandwidth. Streaming approaches have been proposed to solve this problem, but they require accurate prediction of the user's Field of View so that only the regions of the scene most likely to be watched are streamed. To address this prediction problem, we study in this paper the watching behavior of users exploring different types of AR scenes via mobile devices. To this end, we introduce the ACE Dataset, the first dataset collecting movement data of 50 users exploring 5 different AR scenes. We also propose a four-feature taxonomy for AR scene design, which allows categorizing different types of AR scenes in a methodical way and supports further research in this domain. Motivated by the ACE Dataset analysis results, we develop a novel user visual attention prediction algorithm that jointly utilizes information about users' historical movements and the positions of digital objects in the AR scene. The evaluation on the ACE Dataset shows that the proposed approach outperforms baseline approaches under prediction horizons of variable lengths, and can therefore benefit the AR ecosystem in terms of bandwidth reduction and improved quality of user experience.
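As a minimal illustration of the kind of movement-based prediction the abstract above describes, the sketch below extrapolates a user's future viewing position from recent device positions under a constant-velocity assumption. This is a generic baseline of the sort such work compares against, not the paper's algorithm; the function name and data format are hypothetical.

```python
# Constant-velocity baseline for predicting a user's future viewing position
# from recent device motion. Illustrative only; not the ACE paper's method.
def predict_position(history, horizon):
    """history: list of (x, y) device positions sampled at uniform intervals.
    Returns the predicted position `horizon` steps ahead, assuming the
    velocity observed over the last two samples stays constant."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0            # per-step velocity estimate
    return (x1 + vx * horizon, y1 + vy * horizon)

# A user moving steadily to the upper right:
history = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict_position(history, horizon=3))  # (5.0, 2.5)
```

A learned predictor like the one the abstract describes would additionally condition on digital object positions in the scene; this baseline uses movement history alone, which is what makes it a natural point of comparison.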
  5. We present a design-based exploration of the potential to reinterpret glyph-based visualization of scalar fields on 3D surfaces, a traditional scientific visualization technique, as a data physicalization technique. Even with the best virtual reality displays, users often struggle to correctly interpret spatial relationships in 3D datasets; thus, we are motivated to understand the extent to which traditional scientific visualization methods can translate to physical media where users may simultaneously leverage their visual systems and tactile senses to, in theory, better understand and connect with the data of interest. This pictorial traces the process of our design for a specific user study experiment: (1) inspiration, (2) exploring the data physicalization design space, (3) prototyping with 3D printing, (4) applying the techniques to different synthetic datasets. We call our most recent and compelling visual/tactile design boxcars on potatoes, and the next step in the research is to run a user-based evaluation to elucidate how this design compares to several of the others pictured here. 