A Comparative Study of Hand-Gesture Recognition Devices for Games
Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices to be used in games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft’s Kinect, and Intel’s RealSense. The comparison provides game designers with an understanding of the main factors to consider when selecting these devices and how to design games that use them. We developed a simple hand-gesture-based game to evaluate performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to using RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings were supported by players’ accounts of their experiences using these gesture devices. Based on these findings, we discuss how such devices can be used by game designers and provide them with a set of design cautions that provide insights into the design of gesture-based games.
- Publication Date:
- NSF-PAR ID:
- 10174265
- Journal Name:
- Lecture notes in computer science
- Volume:
- 12182
- Page Range or eLocation-ID:
- 57 - 76
- ISSN:
- 1611-3349
- Sponsoring Org:
- National Science Foundation
More Like this
-
Raynal, Ann M.; Ranney, Kenneth I. (Eds.) Most research in technologies for the Deaf community has focused on translation using either video or wearable devices. Sensor-augmented gloves have been reported to yield higher gesture recognition rates than camera-based systems; however, they cannot capture information expressed through head and body movement. Gloves are also intrusive and inhibit users in their pursuit of normal daily life, while cameras can raise concerns over privacy and are ineffective in the dark. In contrast, RF sensors are non-contact, non-invasive and do not reveal private information even if hacked. Although RF sensors are unable to measure facial expressions or hand shapes, which would be required for complete translation, this paper aims to exploit near real-time ASL recognition using RF sensors for the design of smart Deaf spaces. In this way, we hope to enable the Deaf community to benefit from advances in technologies that could generate tangible improvements in their quality of life. More specifically, this paper investigates near real-time implementation of machine learning and deep learning architectures for the purpose of sequential ASL signing recognition. We utilize a 60 GHz RF sensor which transmits a frequency-modulated continuous wave (FMCW) waveform. RF sensors can acquire a unique source of information that is …
-
The goal of this research is to provide much-needed empirical data on how the fidelity of popular hand-gesture-tracked pointing metaphors versus commodity controller-based input affects the efficiency and speed-accuracy tradeoff in users' spatial selection in personal-space interactions in VR. We conduct two experiments in which participants select spherical targets arranged in a circle in personal space, or near-field within their maximum arm's reach, in VR. Both experiments required participants to select the targets with either a VR controller or with their dominant hand's index finger, which was tracked with one of two popular contemporary tracking methods. In the first experiment, the targets are arranged in a flat circle in accordance with the ISO 9241-9 Fitts' law standard, and the simulation selected random combinations of 3 target amplitudes and 3 target widths. Targets were placed centered around the users' eye level, and the arrangement was placed at either the 60%, 75%, or 90% depth plane of the users' maximum arm's reach. In experiment 2, the targets varied in depth randomly from one depth plane to another within the same configuration of 13 targets within a trial set, which resembled a button selection task in hierarchical menus …
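The ISO 9241-9 protocol mentioned above scores selection tasks with Fitts' law, where each amplitude/width combination yields an index of difficulty. A minimal sketch of that computation (Shannon formulation; the amplitude and width values below are illustrative, not the ones used in the study):

```python
import math

def fitts_id(amplitude: float, width: float) -> float:
    """Fitts' index of difficulty in bits (Shannon formulation),
    as used in ISO 9241-9 pointing evaluations:
    ID = log2(A / W + 1)."""
    return math.log2(amplitude / width + 1)

# Hypothetical amplitude/width combinations (meters):
for a, w in [(0.30, 0.05), (0.45, 0.05), (0.60, 0.10)]:
    print(f"A={a} m, W={w} m -> ID={fitts_id(a, w):.2f} bits")
```

Crossing 3 amplitudes with 3 widths, as the experiment does, produces 9 such difficulty levels; throughput is then typically computed as effective ID divided by movement time.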
-
This Work-In-Progress falls within the research category of study and focuses on the experiences and perceptions of first- and second-year engineering students when using an online engineering game that was designed to enhance understanding of statics concepts. Technology and online games are increasingly being used in engineering education to help students gain competencies in technical domains in the engineering field. Less is known about the way that these online games are designed and incorporated into the classroom environment and how these factors can ignite inequitable perspectives and experiences among engineering students. Also, little if any work that combines the TAM model and intersectionality of race and gender in engineering education has been done, though several studies have been modified to account for gender or race. This study expands upon the Technology Acceptance Model (TAM) by exploring perspectives of intersectional groups (defined as women of color who are engineering students). A Mixed Method Sequential Exploratory Research Design approach was used that extends the TAM model. Students were asked to play the engineering educational game, complete an open-ended questionnaire and then to participate in a focus group. Early findings suggest that while many students were open to learning to use the …
-
Many people are learning programming on their own using various online resources such as educational games. Unfortunately, little is known about how to keep online educational game learners motivated throughout their game play, especially if they become disengaged or frustrated with their task. Keeping online learners engaged is essential for learning programming, as it may have lasting effects on their views and self-efficacy towards computer science. To address this issue, we created a coarse-grained frustration detector that provided users with customized, adaptive feedback to help (re)engage them with the game content. We ran a controlled experiment with 400 participants over the course of 1.5 months, with half of the players playing the original game, and the other half playing the game with the frustration detection and adaptive feedback. We found that the users who received the adaptive feedback when frustrated completed more levels than their counterparts who did not receive this customized feedback. Based on these findings, we believe that adaptive feedback is essential in keeping educational game learners engaged, and propose future work for researchers and designers of online educational games to better support their users.