                            Observer-Aware Legibility for Social Navigation
                        
                    
    
We designed an observer-aware method for creating navigation paths that simultaneously indicate a robot's goal while attempting to remain in view for a particular observer. Prior art in legible motion does not account for observers' limited fields of view, which can lead to wasted communication efforts that go unobserved by the intended audience. Our observer-aware legibility algorithm directly models the locations and perspectives of observers and places legible movements where they can be easily seen. To explore the effectiveness of this technique, we performed a 300-person online user study. Users viewed first-person videos of restaurant scenes with robot waiters moving along paths optimized for different observer perspectives, along with a baseline path that did not take any observer's field of view into account. Participants were asked to report their estimate of how likely it was that the robot was heading to their table versus the other goal table as it moved along each path. We found that for observers with incomplete views of the restaurant, observer-aware legibility is effective at increasing the period of time for which observers correctly infer the goal of the robot. Non-targeted observers perform worse on paths created for other observers than on paths created for themselves, which is the natural drawback of personalizing legible motion to a particular observer. We also find that an observer's relationship to the environment (e.g., what is in their field of view) has more influence on their inferences than the observer's position relative to the targeted observer, and we discuss how this implies that knowledge of the environment is required to plan effectively for multiple observers at once.
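The core observer model involved can be illustrated with a minimal sketch: a test of whether a point lies within an observer's angular field of view. This is a hypothetical simplification (the function name, signature, and single-point test are illustrative only); the paper's algorithm optimizes entire paths and also models occlusion, neither of which is captured here.

```python
import math

def in_field_of_view(observer_xy, observer_heading, point_xy, fov_deg=120.0):
    """Return True if point_xy falls within the observer's angular field of view.

    observer_heading is in radians; fov_deg is the full cone angle.
    Hypothetical simplification: ignores occlusion and viewing distance.
    """
    dx = point_xy[0] - observer_xy[0]
    dy = point_xy[1] - observer_xy[1]
    bearing = math.atan2(dy, dx)
    # Smallest signed angular difference between heading and bearing to the point.
    diff = math.atan2(math.sin(bearing - observer_heading),
                      math.cos(bearing - observer_heading))
    return abs(diff) <= math.radians(fov_deg) / 2.0
```

A path planner could penalize waypoints for which this test fails for the targeted observer, concentrating legible motion where it can actually be seen.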
        
    
- Award ID(s): 1943072
- PAR ID: 10413458
- Date Published:
- Journal Name: IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
- Page Range / eLocation ID: 1115 to 1122
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Unmanned Aerial Vehicle (UAV) flight paths have been shown to communicate meaning to human observers, similar to human gestural communication. This paper presents the results of a UAV gesture perception study designed to assess how observer viewpoint perspective may impact how humans perceive the shape of UAV gestural motion. Robot gesture designers have demonstrated that robots can indeed communicate meaning through gesture; however, many of these results are limited to an idealized range of viewer perspectives and do not consider how the perception of a robot gesture may suffer from obfuscation or self-occlusion from some viewpoints. This paper presents the results of three online user studies that examine participants' ability to accurately perceive the intended shape of two-dimensional UAV gestures from varying viewer perspectives. We used a logistic regression model to characterize participant gesture classification accuracy, demonstrating that viewer perspective does impact how participants perceive the shape of UAV gestures. Our results yielded a viewpoint angle threshold beyond which participants were able to assess the intended shape of a gesture's motion with 90% accuracy. We also introduce a perceptibility score to capture user confidence, time to decision, and accuracy in labeling, and to understand how differences in flight paths impact perception across viewpoints. These findings will enable UAV gesture systems that, with a high degree of confidence, ensure gesture motions can be accurately perceived by human observers.
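A 90%-accuracy viewpoint threshold of this kind can be recovered from a fitted logistic model by inverting the link function. The sketch below shows the inversion; the coefficients `b0` and `b1` are hypothetical placeholders, not values reported by the study.

```python
import math

def sigmoid(z):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def accuracy_threshold(b0, b1, target=0.90):
    """Viewpoint angle at which predicted classification accuracy reaches `target`.

    Inverts p = sigmoid(b0 + b1 * angle), i.e. solves
    logit(target) = b0 + b1 * angle for the angle.
    """
    logit = math.log(target / (1.0 - target))
    return (logit - b0) / b1

# Hypothetical fitted coefficients: accuracy rises with viewpoint angle.
b0, b1 = -1.5, 0.08
theta = accuracy_threshold(b0, b1)
# Sanity check: predicted accuracy at the threshold equals the target.
assert abs(sigmoid(b0 + b1 * theta) - 0.90) < 1e-9
```

With real data, `b0` and `b1` would come from a fitted logistic regression of per-trial correctness on viewpoint angle.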
- This work developed an iteratively refined understanding of participants' natural perceptions of and responses to unmanned aerial vehicle (UAV) flight paths, or gestures. This includes both what they believe the UAV is trying to communicate to them and how they expect to respond through physical action. Previous work in this area has focused on eliciting gestures from participants to communicate specific states, or on leveraging gestures observed in the world, rather than on understanding what participants believe is being communicated and how they would respond. This work investigates gestures previously created or categorized by participants to understand the perceived content of their communication or the expected response, through categories created from participant free responses and confirmed through forced-choice testing. The human-robot interaction community can leverage this work to better understand how people perceive UAV flight paths, inform future designs for non-anthropomorphic robot communication, and apply lessons learned to elicit informative labels from people who may or may not be operating the vehicle. We found that the Negative Attitudes towards Robots Scale (NARS) can be a good indicator of how we can expect a person to react to a robot. Recommendations are also provided: use motion approaching or retreating from a person to encourage following, motion perpendicular to their field of view for blocking, and either no motion or large altitude changes to encourage viewing.
- This paper reports on developing an integrated framework for safety-aware informative motion planning suitable for legged robots. The information-gathering planner takes a dense stochastic map of the environment into account, while safety constraints are enforced via Control Barrier Functions (CBFs). The planner is based on the Incrementally-exploring Information Gathering (IIG) algorithm and allows closed-loop kinodynamic node expansion using a Model Predictive Control (MPC) formalism. Robotic exploration and information-gathering problems are inherently path-dependent: the information collected along a path depends on the state and observation history. As such, motion planning based solely on a modular cost does not lead to suitable plans for exploration. We propose SAFE-IIG, an integrated informative motion planning algorithm that takes into account: 1) a robot's perceptual field of view, via a submodular information function computed over a stochastic map of the environment; 2) a robot's dynamics and safety constraints, via discrete-time CBFs and MPC for closed-loop multi-horizon node expansions; and 3) an automatic stopping criterion, via an information-theoretic planning horizon. Simulation results show that SAFE-IIG can plan a safe and dynamically feasible path while exploring a dense map.
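The safety side of such a planner can be sketched with the standard discrete-time CBF condition, which requires the barrier value h to decay no faster than a rate gamma. This is the generic textbook form, not necessarily SAFE-IIG's exact formulation, and the obstacle barrier below is a hypothetical example.

```python
def dcbf_satisfied(h_curr, h_next, gamma=0.5):
    """Discrete-time control barrier function condition:

        h(x_{k+1}) - h(x_k) >= -gamma * h(x_k),   0 < gamma <= 1

    Candidate node expansions whose predicted next state violates this
    can be pruned during planning. Generic form, not SAFE-IIG's exact one.
    """
    return h_next - h_curr >= -gamma * h_curr

def h_obstacle(x, y, ox=2.0, oy=0.0, margin=1.0):
    """Hypothetical barrier: squared distance to an obstacle at (ox, oy)
    minus a squared safety margin; h >= 0 means the state is safe."""
    return (x - ox) ** 2 + (y - oy) ** 2 - margin ** 2
```

In an MPC-style expansion, each predicted state along the horizon would be checked with `dcbf_satisfied` before the node is added to the tree.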
- When observing others' behavior, people use Theory of Mind to infer unobservable beliefs, desires, and intentions. And when showing what activity one is doing, people modify their behavior to facilitate more accurate interpretation and learning by an observer. Here, we present a novel model of how demonstrators act and how observers interpret demonstrations, corresponding to different levels of recursive social reasoning (i.e., a cognitive hierarchy) grounded in Theory of Mind. Our model can explain how demonstrators show others how to perform a task and makes predictions about how sophisticated observers can reason about communicative intentions. Additionally, we report an experiment that tests (1) how well an observer can learn from demonstrations produced with the intent to communicate, and (2) how an observer's interpretation of demonstrations influences their judgments.
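The recursive reasoning described above can be sketched as a small cognitive hierarchy over a toy task. All goal names, demonstration names, and probabilities below are hypothetical, chosen only to show the level-0/1/2 structure: a literal observer, a communicative demonstrator who anticipates that observer, and a sophisticated observer who inverts the demonstrator.

```python
def normalize(d):
    """Rescale a dict of non-negative weights into a probability distribution."""
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# Hypothetical toy task: two goals, two candidate demonstrations.
# P(demo | goal) for an agent acting without communicative intent.
literal_likelihood = {
    "A": {"d1": 0.5, "d2": 0.5},
    "B": {"d1": 0.9, "d2": 0.1},
}
goals = list(literal_likelihood)
demos = ("d1", "d2")

def literal_observer(demo):
    """Level-0 observer: Bayes over goals with a uniform prior."""
    return normalize({g: literal_likelihood[g][demo] for g in goals})

def communicative_demonstrator(goal):
    """Level-1 demonstrator: picks demos in proportion to how strongly
    the literal observer would infer the intended goal from them."""
    return normalize({d: literal_observer(d)[goal] for d in demos})

def sophisticated_observer(demo):
    """Level-2 observer: inverts the communicative demonstrator."""
    return normalize({g: communicative_demonstrator(g)[demo] for g in goals})
```

In this toy setup, the communicative demonstrator for goal "A" prefers "d2", the demonstration that is least ambiguous to a literal observer; deeper levels of the hierarchy would repeat the same inversion pattern.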