Title: The Impacts of Referent Display on Gesture and Speech Elicitation
Elicitation studies have become a popular method of participatory design. While traditionally used to examine unimodal gesture interactions, elicitation has begun to be applied to other novel interaction modalities. Unfortunately, no prior work has examined the impact of referent display on elicited interaction proposals. To address that gap, this work provides a detailed comparison between two elicitation studies that were similar in design apart from the way participants were prompted for interaction proposals (i.e., the referents). Based on this comparison, the impact of referent display on speech and gesture interaction proposals is discussed for each modality. The interaction proposals between these elicitation studies were not identical. Gesture proposals were the least impacted by referent display, showing high proposal similarity between the two works. Speech proposals were highly biased by text referents, with proposals directly mirroring text-based referents an average of 69.36% of the time. In short, the way that referents are presented during elicitation studies can impact the resulting interaction proposals; however, the level of impact depends on the modality of input elicited.
Award ID(s):
1948254
PAR ID:
10357471
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Visualization and Computer Graphics
ISSN:
1077-2626
Page Range / eLocation ID:
1 to 11
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Gesture elicitation studies are a popular means of gaining valuable insights into how users interact with novel input devices. One of the problems elicitation faces is legacy bias, in which elicited interactions are biased by prior technology use. In response, methodologies have been introduced to reduce legacy bias. This is the first study that formally examines the production method of reducing legacy bias (i.e., repeated proposals for a single referent). This is done through a between-subjects study that had 27 participants per group (control and production), with 17 referents placed in a virtual environment using a head-mounted display. This study found that over a range of referents, legacy bias was not significantly reduced over production trials. Instead, production reduced participant consensus on proposals. However, in the set of referents that elicited the most legacy-biased proposals, production was an effective means of reducing legacy bias, with an overall reduction of 11.93% in the chance of eliciting a legacy-biased proposal.
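The consensus discussed above is commonly quantified in elicitation work with an agreement rate such as the Vatavu–Wobbrock AR(r). As an illustrative aside (not this study's analysis code), a minimal sketch of that computation follows; the function name and sample proposals are hypothetical.

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu-Wobbrock agreement rate AR(r) for a single referent.

    proposals: list of proposal labels, one per participant.
    AR(r) = |P|/(|P|-1) * sum((|Pi|/|P|)**2) - 1/(|P|-1),
    where the Pi are the groups of identical proposals in P.
    """
    n = len(proposals)
    if n < 2:
        return 1.0 if n == 1 else 0.0
    squares = sum((count / n) ** 2 for count in Counter(proposals).values())
    return n / (n - 1) * squares - 1 / (n - 1)

# Hypothetical proposals from four participants for one referent:
print(agreement_rate(["tap", "tap", "swipe", "tap"]))  # ~0.5
```

Production trials that spread participants across more distinct proposals shrink the group sizes |Pi|, which is exactly how reduced consensus shows up in this measure.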
  2.
    This research establishes a better understanding of the syntax choices in speech interactions and of how speech, gesture, and multimodal gesture-and-speech interactions are produced by users in unconstrained object manipulation environments using augmented reality. The work presents a multimodal elicitation study conducted with 24 participants. The canonical referents for translation, rotation, and scale were used along with some abstract referents (create, destroy, and select). In this study, time windows for gesture and speech multimodal interactions are developed using the start and stop times of gestures and speech as well as the stroke times for gestures. While gestures commonly precede speech by 81 ms, the stroke of the gesture commonly falls within 10 ms of the start of speech, indicating that the information content of a gesture and its co-occurring speech are well aligned. Lastly, the trends across the most common proposals for each modality are examined, showing that disagreement between proposals is often caused by variation in hand posture or syntax. This allows us to present aliasing recommendations to increase the percentage of users' natural interactions captured by future multimodal interactive systems.
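The time-window construction described above pairs gesture timestamps with speech onsets. A minimal sketch of that kind of offset computation is below; the field names and sample trials are assumptions for illustration, not the study's actual pipeline.

```python
from statistics import mean

def timing_offsets(trials):
    """Mean offsets (ms) between gesture events and co-occurring speech.

    Each trial is a dict with gesture_start, gesture_stroke, and
    speech_start timestamps in milliseconds (field names assumed).
    Returns (mean speech onset relative to gesture start,
             mean speech onset relative to gesture stroke).
    """
    onset = [t["speech_start"] - t["gesture_start"] for t in trials]
    stroke = [t["speech_start"] - t["gesture_stroke"] for t in trials]
    return mean(onset), mean(stroke)

# Hypothetical trials: gestures begin well before speech,
# but the gesture stroke lands close to speech onset.
trials = [
    {"gesture_start": 0, "gesture_stroke": 80, "speech_start": 85},
    {"gesture_start": 10, "gesture_stroke": 95, "speech_start": 88},
]
onset_ms, stroke_ms = timing_offsets(trials)
print(onset_ms, stroke_ms)
```

A large gesture-start offset alongside a near-zero stroke offset is the pattern the abstract reports: gestures lead speech overall, while the information-bearing stroke is tightly aligned with speech onset.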
  3. The Open University of Japan, Chiba (Ed.)
    More than two hundred papers on elicitation studies have been published in the last ten years. These works are mainly focused on generating user-defined gesture sets and discovering natural-feeling multimodal interaction techniques with virtual objects. Few papers have discussed binning the elicited interaction proposals after data collection. Binning is a process of grouping the entire set of user-generated interaction proposals based on similarity criteria. The binned set of proposals is then analyzed to produce a consensus set, which results in the user-defined interaction set. This paper presents a formula to use when deciding how to bin interaction proposals, thus helping to establish a more consistent binning procedure. This work can provide human-computer interaction (HCI) researchers with the guidance they need for interaction elicitation data processing, which is largely missing from current elicitation study literature. Using this approach will improve the efficiency and effectiveness of the binning process, increase the reliability of user-defined interaction sets, and most importantly, improve the replicability of elicitation studies.
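The abstract does not reproduce the paper's binning formula, but the grouping step it describes can be sketched generically: each proposal joins the first bin whose representative it matches under a chosen similarity predicate, otherwise it starts a new bin. The predicate and sample data below are hypothetical stand-ins for the study's criteria.

```python
def bin_proposals(proposals, are_similar):
    """Group interaction proposals into bins by a similarity predicate.

    A proposal joins the first existing bin whose first member it is
    similar to; otherwise it starts a new bin. The predicate encodes
    the chosen similarity criteria (e.g., matching hand posture).
    """
    bins = []
    for p in proposals:
        for b in bins:
            if are_similar(b[0], p):
                b.append(p)
                break
        else:
            bins.append([p])
    return bins

# Hypothetical criterion: proposals match if the hand posture matches.
same_posture = lambda a, b: a["posture"] == b["posture"]
proposals = [
    {"posture": "pinch", "motion": "pull"},
    {"posture": "pinch", "motion": "twist"},
    {"posture": "point", "motion": "tap"},
]
print([len(b) for b in bin_proposals(proposals, same_posture)])  # [2, 1]
```

The choice of predicate is exactly where binning procedures diverge between studies, which is the inconsistency the paper's formula aims to reduce.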
  4. Situated human-human communication typically involves a combination of both natural language and gesture, especially deictic gestures intended to draw the listener’s attention to target referents. To engage in natural communication, robots must thus be similarly enabled not only to generate natural language, but to generate the appropriate gestures to accompany that language. In this work, we examine the gestures humans use to accompany spatial language, specifically the way that these gestures continuously degrade in specificity and then discretely transition into non-deictic gestural forms along with decreasing confidence in referent location. We then outline a research plan in which we propose to use data collected through our study of this transition to design more human-like gestures for language-capable robots. 
  5. Augmented Reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting that AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help to motivate the further deployment of mixed reality robotic systems and provide nuanced insight into the role of mixed-reality technologies in HRI contexts.