Title: A Comparative Study of Hand-Gesture Recognition Devices for Games
Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices to be used in games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft's Kinect, and Intel's RealSense. The comparison provides game designers with an understanding of the main factors to consider when selecting these devices and how to design games that use them. We developed a simple hand-gesture-based game to evaluate the performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings were supported by players' accounts of their experiences using these gesture devices. Based on these findings, we discuss how such devices can be used by game designers and offer a set of design cautions that give insight into the design of gesture-based games.
Award ID(s): 1651532, 1619273
NSF-PAR ID: 10174265
Publisher / Repository: Springer Link
Journal Name: Lecture Notes in Computer Science
Volume: 12182
ISSN: 1611-3349
Page Range / eLocation ID: 57-76
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices for games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft's Kinect, and Intel's RealSense. We developed a simple hand-gesture-based game to evaluate the performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings suggest that not all gesture recognition devices are suitable for games, and that designers need to make informed decisions when selecting gesture recognition devices and designing gesture-based games to ensure the usability, accuracy, and comfort of such games.
  2. Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware that chemists use in real laboratories is tethered to a laptop with a digitized lesson. As interactive educational content is increasingly placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socioeconomic status (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized controlled trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, for both titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, mainly centering on the key hand-turning gesture. Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower-prior-knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low-prior-knowledge students, it may be advantageous to begin learning content on a more ubiquitous interface (e.g., keyboard) before moving to more novel, multi-modal MR devices/interfaces.
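Statistically, an Aptitude by Treatment Interaction of the kind reported above is an interaction term between prior knowledge and condition in a regression on posttest scores. Below is a minimal sketch of how such a test is commonly run, using synthetic data and hypothetical column names; it illustrates the general technique, not the study's actual dataset or analysis code.

```python
# Illustrative ATI test on synthetic data; names and effect sizes are
# hypothetical, not taken from the study above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 136  # same sample size as the study, for flavor
prior = rng.normal(0.0, 1.0, n)        # standardized prior science knowledge
condition = rng.integers(0, 2, n)      # 0 = keyboard control, 1 = haptic burette
# Simulate the reported pattern: the haptic condition helps high-prior students more.
posttest = (0.5 * prior + 0.2 * condition
            + 0.4 * prior * condition + rng.normal(0.0, 1.0, n))

df = pd.DataFrame({"prior": prior, "condition": condition, "posttest": posttest})

# An ATI shows up as a significant prior:condition interaction coefficient.
model = smf.ols("posttest ~ prior * C(condition)", data=df).fit()
print(model.summary().tables[1])
```

A significant positive interaction coefficient corresponds to the pattern the authors describe: the benefit of the haptic device grows with prior knowledge.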
  3. Raynal, Ann M.; Ranney, Kenneth I. (Eds.)
    Most research in technologies for the Deaf community has focused on translation using either video or wearable devices. Sensor-augmented gloves have been reported to yield higher gesture recognition rates than camera-based systems; however, they cannot capture information expressed through head and body movement. Gloves are also intrusive and inhibit users in their pursuit of normal daily life, while cameras can raise concerns over privacy and are ineffective in the dark. In contrast, RF sensors are non-contact and non-invasive, and they do not reveal private information even if hacked. Although RF sensors are unable to measure facial expressions or hand shapes, which would be required for complete translation, this paper aims to exploit near real-time ASL recognition using RF sensors for the design of smart Deaf spaces. In this way, we hope to enable the Deaf community to benefit from advances in technologies that could generate tangible improvements in their quality of life. More specifically, this paper investigates near real-time implementation of machine learning and deep learning architectures for the purpose of sequential ASL signing recognition. We utilize a 60 GHz RF sensor which transmits a frequency-modulated continuous wave (FMCW) waveform. RF sensors can acquire a unique source of information that is inaccessible to optical or wearable devices: namely, a visual representation of the kinematic patterns of motion via the micro-Doppler signature. Micro-Doppler refers to frequency modulations that appear about the central Doppler shift, which are caused by rotational or vibrational motions that deviate from the principal translational motion. In prior work, we showed that fractal complexity computed from RF data could be used to discriminate signing from daily activities, and that RF data could reveal linguistic properties, such as coarticulation. We have also shown that machine learning can discriminate with 99% accuracy the signing of native Deaf ASL users from copysigning (or imitation signing) by hearing individuals. Imitation signing data is therefore not effective for directly training deep models, but adversarial learning can be used to transform imitation signing to resemble native signing, or, alternatively, physics-aware generative models can be used to synthesize ASL micro-Doppler signatures for training deep neural networks. With such approaches, we have achieved over 90% recognition accuracy on 20 ASL signs. In natural environments, however, near real-time implementations of classification algorithms are required, as well as an ability to process data streams in a continuous and sequential fashion. In this work, we extend our prior work towards this aim and compare the efficacy of various approaches for embedding deep neural networks (DNNs) on platforms such as a Raspberry Pi or Jetson board. We examine methods for optimizing the size and computational complexity of DNNs for embedded micro-Doppler analysis, methods for network compression, and their resulting sequential ASL recognition performance.
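To make the micro-Doppler idea concrete, the sketch below simulates the baseband return from a point scatterer that translates at constant velocity while also vibrating, then plots its spectrogram: the vibration appears as sidebands oscillating about the central Doppler shift. All parameters (vibration rate, velocities, sampling) are assumed for illustration and do not reflect the 60 GHz sensor's actual configuration.

```python
# Hedged sketch of a micro-Doppler signature; parameters are illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

c = 3e8              # speed of light (m/s)
fc = 60e9            # carrier frequency, as for a 60 GHz sensor
fs = 2000            # slow-time sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)

# Range: constant translation (0.5 m/s) plus a small 8 Hz vibration.
r = 1.0 + 0.5 * t + 0.002 * np.sin(2 * np.pi * 8 * t)

# Complex baseband return: phase proportional to round-trip range.
x = np.exp(-1j * 4 * np.pi * fc * r / c)

# The translation gives a central Doppler shift of magnitude ~200 Hz here;
# the vibration modulates the phase and produces oscillating sidebands.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192,
                         return_onesided=False)
S_db = 10 * np.log10(np.abs(Sxx) + 1e-12)
plt.pcolormesh(tt, np.fft.fftshift(f), np.fft.fftshift(S_db, axes=0),
               shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Doppler frequency (Hz)")
plt.title("Simulated micro-Doppler signature")
plt.show()
```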
  4. The goal of this research is to provide much-needed empirical data on how the fidelity of popular hand-gesture-tracked pointing metaphors, versus commodity controller-based input, affects the efficiency and speed-accuracy tradeoff in users' spatial selection during personal-space interactions in VR. We conducted two experiments in which participants selected spherical targets arranged in a circle in personal space, i.e., in the near-field within their maximum arm's reach, in VR. Both experiments required participants to select the targets with either a VR controller or with their dominant hand's index finger, which was tracked with one of two popular contemporary tracking methods. In the first experiment, the targets were arranged in a flat circle in accordance with the ISO 9241-9 Fitts' law standard, and the simulation selected random combinations of 3 target amplitudes and 3 target widths. Targets were centered around the users' eye level, and the arrangement was placed at either the 60%, 75%, or 90% depth plane of the users' maximum arm's reach. In experiment 2, the targets varied in depth randomly from one depth plane to another within the same configuration of 13 targets within a trial set, resembling a button selection task in hierarchical menus at differing depth planes in the near-field. The study was conducted using the HTC Vive head-mounted display under one of three conditions: a VR controller (HTC Vive), low-fidelity virtual pointing (Leap Motion), or high-fidelity virtual pointing (tracked VR glove). Our results revealed that low-fidelity pointing performed worse than both high-fidelity pointing and the VR controller. Overall, target selection performance was worse in depth planes closer to the maximum arm's reach than at middle and nearer distances.
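For readers unfamiliar with the ISO 9241-9 analysis referenced above, selection difficulty and throughput follow from Fitts' law. Below is a small sketch with made-up amplitude and width values; the experiment's actual 3 x 3 configuration is not given here.

```python
# Fitts' law quantities used in ISO 9241-9 style target selection studies.
# The amplitude/width values below are illustrative, not the experiment's.
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Shannon formulation: ID = log2(A / W + 1), in bits."""
    return math.log2(amplitude / width + 1)

def throughput(amplitude: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits/s for one amplitude-width condition."""
    return index_of_difficulty(amplitude, width) / movement_time_s

# Example: a 3 x 3 grid of amplitudes and widths, as in the described design.
for A in (0.20, 0.35, 0.50):         # target amplitudes (m), assumed
    for W in (0.02, 0.04, 0.08):     # target widths (m), assumed
        print(f"A={A:.2f} m, W={W:.2f} m -> ID={index_of_difficulty(A, W):.2f} bits")
```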
  5. This Work-In-Progress falls within the research category of study and focuses on the experiences and perceptions of first- and second-year engineering students when using an online engineering game designed to enhance understanding of statics concepts. Technology and online games are increasingly being used in engineering education to help students gain competencies in technical domains in the engineering field. Less is known about the way these online games are designed and incorporated into the classroom environment, and how these factors can ignite inequitable perspectives and experiences among engineering students. Also, little if any work has combined the TAM model with the intersectionality of race and gender in engineering education, though several studies have been modified to account for gender or race. This study expands upon the Technology Acceptance Model (TAM) by exploring the perspectives of intersectional groups (defined as women of color who are engineering students). A Mixed Method Sequential Exploratory Research Design approach that extends the TAM model was used. Students were asked to play the engineering educational game, complete an open-ended questionnaire, and then participate in a focus group. Early findings suggest that while many students were open to learning to use the game and recommended including online engineering educational games as learning tools in classrooms, only a few indicated that they would use this tool to prepare for exams or technical job interviews. The main themes identified in this study included unintended perpetuation of inequality through bias in favor of students who enjoyed competition-based learning and assessment of knowledge, and bias toward students with prior experience playing online games. Competition-based assessment of presumed learning of course content heightened student anxiety and feelings of intimidation, and led some students to "game the game" rather than learn the material in an effort to achieve grade goals. Other students associated use of the game and the weighted classroom grading with intense stress that led them to prematurely stop using the engineering tool. Initial findings indicate that both game design and how the technology is incorporated into the grading and testing of learning outcomes influence student perceptions of the technology's usefulness and, ultimately, their acceptance of the online game as a "learning tool." Results also point to the need to explore how the crediting and assessment of students' performance and learning gains in these types of games could yield inequitable experiences in such courses.