Title: Designing embodied interactions for informal learning: two open research challenges
Interactive installations controlled with gestures and body movements are widely used in museums due to their tremendous educational potential. The design of such systems, however, remains problematic. In this paper, we reflect on two open research challenges that we observed while crafting a Kinect-based prototype installation for data exploration at a science museum: (1) making users aware that the system is interactive; and (2) increasing the discoverability of hand gestures and body movements.
Award ID(s):
1848898
PAR ID:
10135092
Author(s) / Creator(s):
Date Published:
Journal Name:
PerDis '19: Proceedings of the 8th ACM International Symposium on Pervasive Displays
Page Range / eLocation ID:
1 to 2
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Museums have embraced embodied interaction: its novelty generates buzz and excitement among their patrons, and it has enormous educational potential. Human-Data Interaction (HDI) is a class of embodied interactions that enables people to explore large sets of data using interactive visualizations controlled with gestures and body movements. In museums, however, HDI installations have no utility if visitors do not engage with them. In this paper, we present a quasi-experimental study that investigates how different ways of representing the user ("mode type") next to a data visualization alter the way in which people engage with an HDI system. We consider four mode types: avatar, skeleton, camera overlay, and control. Our findings indicate that the mode type impacts the number of visitors who interact with the installation, the gestures that people perform, and the amount of time that visitors spend observing the data on display and interacting with the system.
  2. Embodied interaction is particularly useful in museums because it allows designers to leverage findings from embodied cognition to support the learning of STEM concepts and thinking skills. In this paper, we focus on Human-Data Interaction (HDI), a class of embodied interactions that investigates the design of interactive data visualizations that users control with gestures and body movements. We describe an HDI system that we iteratively designed, implemented, and observed at a science museum, and that allows visitors to explore large sets of data on two 3D globe maps. We present and discuss design strategies and optimizations that we implemented to mitigate two sets of design challenges: (1) dealing with display, interaction, and affordance blindness; and (2) supporting multiple functionalities and collaboration.
  3. This research investigates fatigue's impact on arm gestures within augmented reality environments. Through analysis of the gathered data, our goal is to develop a comprehensive understanding of the constraints and unique characteristics affecting the performance of arm gestures when individuals are fatigued. Our findings show that prolonged engagement in full-arm movement gestures under fatigue reduced muscle strength in upper-body segments. This decline led to a notable reduction in the accuracy of gesture detection in the AR environment, dropping from an initial 97.7% to 75.9%. We also found that changes in torso movements can have a ripple effect on the upper-arm and forearm regions. This knowledge will enable us to improve the precision and accuracy of our gesture detection algorithms, even in fatigue-related situations.
  4. Humans rarely speak without producing co-speech gestures of the hands, head, and other parts of the body. Co-speech gestures are also highly restricted in how they are timed with speech, typically synchronizing with prosodically prominent syllables. What functional principles underlie this relationship? Here, we examine how the production of co-speech manual gestures influences spatiotemporal patterns of the oral articulators during speech production. We provide novel evidence that words uttered with accompanying co-speech gestures are produced with more extreme tongue and jaw displacement, and that the presence of a co-speech gesture contributes to greater temporal stability of oral articulatory movements. This effect, which we term coupling enhancement, differs from stress-based hyperarticulation in that differences in articulatory magnitude are not vowel-specific in their patterning. Speech and gesture synergies therefore constitute an independent variable to consider when modeling the effects of prosodic prominence on articulatory patterns. Our results are consistent with work in language acquisition and speech-motor control suggesting that synchronizing speech to gesture can entrain acoustic prominence.
  5. Findings from embodied cognition suggest that our whole body (not just our eyes) plays an important role in how we make sense of data when we interact with data visualizations. In this paper, we present the results of a study that explores how different designs of the "interaction" (with a data visualization) alter the way in which people report and discuss correlation and causation in data. We conducted a lab study with two experimental conditions: Full Body (participants interacted with a 65" display showing geo-referenced data using gestures and body movements) and Gamepad (participants used a joypad to control the system). Participants tended to agree less with statements that portray correlation and causation in data after using the Gamepad system. Additionally, discourse analysis based on Conceptual Metaphor Theory revealed that users made fewer remarks based on FORCE schemata in Gamepad than in Full Body.