Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices for games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft’s Kinect, and Intel’s RealSense. The comparison provides game designers with an understanding of the main factors to consider when selecting these devices and how to design games that use them. We developed a simple hand-gesture-based game to evaluate performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to using RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings were supported by players’ accounts of their experiences using these gesture devices. Based on these findings, we discuss how game designers can use such devices and offer a set of design cautions that give insight into the design of gesture-based games.
The Usability of the Microsoft HoloLens for an Augmented Reality Game to Teach Elementary School Children
Our objective in this research is to compare the usability of three distinct head-gaze-based selection methods in an Augmented Reality (AR) hidden object game for children: voice recognition, gesture, and physical button (clicker). Prior work on AR applications in STEM education has focused on how AR compares with non-AR methods rather than on how children respond to different interaction modalities. We investigated the differences between voice-, gesture-, and clicker-based interaction methods using two metrics: input errors produced and elapsed time to complete the tutorial and game. We found significant differences in input errors between the voice and gesture conditions, and in elapsed tutorial time between the voice and clicker conditions. We hope to apply the results of our study to improve the interface for AR educational games aimed at children, which could pave the way for greater adoption of AR games in schools.
- Publication Date:
- NSF-PAR ID: 10122178
- Journal Name: 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2019)
- Page Range or eLocation-ID: 1 to 4
- Sponsoring Org: National Science Foundation
More Like this
Children’s early numerical knowledge establishes a foundation for later development of mathematics achievement, and playing linear number board games is effective in improving basic numerical abilities. Besides the visuo-spatial cues provided by traditional number board games, learning companion robots can integrate multi-sensory information and offer social cues that can support children’s learning experiences. We explored how young children experience sensory feedback (audio and visual) and social expressions from a robot when playing a linear number board game, “RoboMath.” We present the interaction design of the game and our investigation of children’s (n = 19, aged 4) and parents’ experiences under three conditions: (1) visual-only, (2) audio-visual, and (3) audio-visual-social robot interaction. We report our qualitative analysis, including the themes observed from interviews with families on their perceptions of the game and the interaction with the robot, their child’s experiences, and their design recommendations.
-
Effective storytelling relies on engagement and interaction. This work develops an automated software platform for telling stories to children and investigates the impact of two design choices on children’s engagement and willingness to interact with the system: story distribution and the use of complex gesture. A storyteller condition compares stories told in a third person, narrator voice with those distributed between a narrator and first-person story characters. Basic gestures are used in all our storytellings, but, in a second factor, some are augmented with gestures that indicate conversational turn changes, references to other characters and prompt children to ask questions. An analysis of eye gaze indicates that children attend more to the story when a distributed storytelling model is used. Gesture prompts appear to encourage children to ask questions, something that children did, but at a relatively low rate. Interestingly, the children most frequently asked “why” questions. Gaze switching happened more quickly when the story characters began to speak than for narrator turns. These results have implications for future agent-based storytelling system research.
-
Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices for games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft's Kinect, and Intel's RealSense. We developed a simple hand-gesture-based game to evaluate performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to using RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings suggest that not all gesture recognition devices are suitable for games and that designers need to make better decisions when selecting gesture recognition devices and designing gesture-based games to ensure the usability, accuracy, and comfort of such games.