
Search for: All records

Creators/Authors contains: "Tran, Nhan"


  1. Mixed Reality visualizations provide a powerful new approach for enabling gestural capabilities on non-humanoid robots. This paper explores two different categories of mixed-reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). Specifically, we present the results of a within-subjects Mixed Reality HRI experiment (N=23) exploring the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show a clear trade-off between performance and social perception, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. (A simplified sketch of how these two overlay types might be anchored appears after this list.)
    Free, publicly-accessible full text available March 8, 2022
  2. We present the first experiment analyzing the effectiveness of robot-generated mixed reality gestures using real robotic and mixed reality hardware. Our findings demonstrate how these gestures increase user effectiveness by decreasing user response time during visual search tasks, and show that robots can safely pair longer, more natural referring expressions with mixed reality gestures without worrying about cognitively overloading their interlocutors.
    Free, publicly-accessible full text available March 8, 2022
  3. Mixed reality visualizations provide a powerful new approach for enabling gestural capabilities for non-humanoid robots. This paper explores two different categories of mixed-reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arrow positioned over the robot (an ego-sensitive allocentric gesture). We explore the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. We conducted a 24-participant within-subjects experiment in which a HoloLens-wearing participant interacted with a robot that used these two types of gestures to refer to objects at two different distances. Our results demonstrate a clear trade-off between performance and social perception: non-ego-sensitive allocentric gestures led to quicker reaction time and higher accuracy, but ego-sensitive gestures led to higher perceived social presence, anthropomorphism, and likability. These results present a challenging design decision to creators of mixed reality robotic systems.
  4. In the field of Human-Robot Interaction, researchers often use techniques such as the Wizard-of-Oz paradigm in order to better study narrow scientific questions while carefully controlling robots’ capabilities unrelated to those questions, especially when those other capabilities are not yet easy to automate. However, those techniques often impose limitations on the type of collaborative tasks that can be used and on the perceived realism of those tasks and the task context. In this paper, we discuss how Augmented Reality can be used to address these concerns while increasing researchers’ level of experimental control, and we discuss both advantages and disadvantages of this approach. (A minimal sketch of an AR-mediated Wizard-of-Oz control loop appears after this list.)
  5. This paper explores the tradeoffs between different types of mixed reality robotic communication under different levels of user workload. We present the results of a within-subjects experiment in which we systematically and jointly vary robot communication style alongside the level and type of cognitive load, and measure subsequent impacts on accuracy, reaction time, and perceived workload and effectiveness. Our preliminary results suggest that although humans may not notice differences, the type of load a user is under and the communication style used by the robot they interact with do in fact interact to determine task effectiveness. (A toy interaction-analysis sketch appears after this list.)
  6. In previous work, researchers have repeatedly demonstrated that robots' use of deictic gestures enables effective and natural human-robot interaction. However, new technologies such as augmented reality head-mounted displays make mixed-reality environments possible, and in such environments, physical gestures become but one category among many different types of mixed reality deictic gestures. In this paper, we present the first experimental exploration of the effectiveness of mixed reality deictic gestures beyond physical gestures. Specifically, we investigate human perception of videos simulating the display of allocentric gestures, in which robots circle their targets in users' fields of view. Our results suggest that this is an effective communication strategy, both in terms of objective accuracy and subjective perception, especially when paired with complex natural language references. (A toy projection sketch of the circling overlay appears after this list.)
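
The first and third records above contrast a non-ego-sensitive allocentric gesture (an arrow anchored over the referent) with an ego-sensitive one (a virtual arm or arrow anchored on the robot itself). The sketch below is not taken from either paper; it is a minimal Python illustration, with invented names, offsets, and a simplified scene-graph model, of how the two overlay types might be anchored in a shared world frame.

```python
# Hypothetical sketch: anchoring the two allocentric gesture types.
# All names (Pose, anchor_* functions, offsets) are assumptions, not the
# authors' implementation.
import numpy as np
from dataclasses import dataclass

@dataclass
class Pose:
    position: np.ndarray   # xyz in the shared world frame (metres)
    forward: np.ndarray    # unit vector the overlay should face along

def anchor_arrow_over_target(target_pos: np.ndarray, hover: float = 0.15) -> Pose:
    """Non-ego-sensitive allocentric gesture: a virtual arrow floats just
    above the referent and points straight down at it, independent of the
    robot's own body."""
    return Pose(position=target_pos + np.array([0.0, hover, 0.0]),
                forward=np.array([0.0, -1.0, 0.0]))

def anchor_arm_on_robot(robot_pos: np.ndarray, target_pos: np.ndarray,
                        shoulder_offset=np.array([0.0, 0.25, 0.0])) -> Pose:
    """Ego-sensitive allocentric gesture: a virtual arm is rendered on the
    (armless) robot's body and oriented so it points from the robot toward
    the referent."""
    shoulder = robot_pos + shoulder_offset
    direction = target_pos - shoulder
    return Pose(position=shoulder, forward=direction / np.linalg.norm(direction))

if __name__ == "__main__":
    robot = np.array([0.0, 0.0, 0.0])
    mug = np.array([1.2, 0.0, 0.8])      # toy referent position
    print(anchor_arrow_over_target(mug))
    print(anchor_arm_on_robot(robot, mug))
```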
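
For the fourth record, the argued benefit is that an experimenter can trigger AR annotations by hand rather than automating perception or intent recognition. The snippet below is a hedged sketch of such a Wizard-of-Oz control loop; the message schema, port, and command format are assumptions, not anything described in the paper.

```python
# Hypothetical Wizard-of-Oz loop: the experimenter ("wizard") keys in
# overlay commands that are forwarded to an AR headset renderer.
import json
import socket

HEADSET_ADDR = ("127.0.0.1", 9000)   # assumed endpoint for the AR renderer

def send_annotation(kind: str, target_id: str) -> None:
    """Send one wizard-triggered overlay command, e.g. highlight an object."""
    msg = json.dumps({"type": kind, "target": target_id}).encode("utf-8")
    with socket.create_connection(HEADSET_ADDR, timeout=1.0) as sock:
        sock.sendall(msg)

if __name__ == "__main__":
    # The wizard watches the task unfold and issues overlays on demand.
    while True:
        cmd = input("annotation> ")          # e.g. "highlight cube_3"
        if cmd in ("quit", "exit"):
            break
        kind, _, target = cmd.partition(" ")
        send_annotation(kind, target)
```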
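
The fifth record hinges on an interaction effect between communication style and cognitive load. One conventional way to test such an effect in a within-subjects design is a two-way repeated-measures ANOVA; the sketch below uses statsmodels for that, with invented factor levels and toy accuracy values standing in for the study's measurements.

```python
# Toy repeated-measures interaction test; factor names and numbers are
# illustrative only, not the study's data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(1, 13):
    for style in ("arrow_overlay", "virtual_arm"):     # assumed styles
        for load in ("visual", "auditory"):             # assumed load types
            # Simulated accuracy with a built-in style-by-load interaction.
            acc = (0.90
                   - 0.04 * (style == "virtual_arm")
                   - 0.08 * (load == "auditory")
                   + 0.06 * (style == "virtual_arm" and load == "auditory")
                   + rng.normal(0, 0.03))
            rows.append({"subject": subj, "style": style, "load": load, "accuracy": acc})
df = pd.DataFrame(rows)

# Main effects of style and load, plus the style-by-load interaction.
print(AnovaRM(df, depvar="accuracy", subject="subject", within=["style", "load"]).fit())
```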
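
The sixth record describes allocentric gestures that circle a target in the user's field of view. The sketch below shows one simple way such an overlay could be computed: project the referent into the view with a pinhole camera model and scale the circle by depth. The intrinsics, poses, and function names are illustrative assumptions, not the paper's rendering pipeline.

```python
# Hypothetical "circle the target" overlay via pinhole projection.
import numpy as np

def project_point(p_world: np.ndarray, cam_pose: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project a 3D world point into pixel coordinates.
    cam_pose is a 4x4 world-from-camera transform; K is the 3x3 intrinsics."""
    cam_from_world = np.linalg.inv(cam_pose)
    p_cam = cam_from_world @ np.append(p_world, 1.0)
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]

def circle_for_target(p_world, cam_pose, K, radius_m=0.10):
    """Return (centre_px, radius_px) for a circle drawn around the target.
    The pixel radius shrinks with depth so distant objects get smaller circles."""
    centre = project_point(p_world, cam_pose, K)
    depth = (np.linalg.inv(cam_pose) @ np.append(p_world, 1.0))[2]
    radius_px = K[0, 0] * radius_m / depth   # focal length * size / depth
    return centre, radius_px

if __name__ == "__main__":
    K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])  # toy intrinsics
    cam_pose = np.eye(4)                 # camera at origin, looking down +z
    target = np.array([0.3, 0.0, 2.0])   # referent 2 m in front of the user
    print(circle_for_target(target, cam_pose, K))
```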