

Title: Aerial Flight Paths for Communication
This article presents an understanding of naive users’ perception of the communicative nature of unmanned aerial vehicle (UAV) motions, refined through an iterative series of studies. This includes both what people believe the UAV is trying to communicate and how they expect to respond, through physical action or emotional response. Previous work in this area prioritized eliciting gestures from participants to the vehicle, or augmenting the vehicle with additional communication modalities, and often attempted to communicate without clear definitions of the states to be conveyed. To elicit more concrete states and better understand the perception of specific motions, this work includes multiple iterations of state creation, flight path refinement, and label assignment. The lessons learned will apply broadly to those interested in defining flight paths, and to the human-robot interaction community as a whole, as they provide a base for anyone seeking to communicate using non-anthropomorphic robots. We found that the Negative Attitudes towards Robots Scale (NARS) can be an indicator of how a person is likely to react to a UAV, of the emotional content they are likely to perceive in a conveyed message, and of the personality characteristics they are likely to project onto the UAV. We also see that people commonly map motions from other non-verbal communication situations onto UAVs. Flight-specific recommendations are to use a dynamic retreating motion from a person to encourage following, a motion perpendicular to their field of view for blocking, a simple descending motion for landing, and either no motion or large altitude changes to encourage watching. Overall, this research explores communication from the UAV to the bystander through its motion, to see how people respond physically and emotionally.
Award ID(s):
1638099 1750750 1757908 1925368
NSF-PAR ID:
10315039
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Robotics and AI
Volume:
8
ISSN:
2296-9144
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    This work has developed an iteratively refined understanding of participants’ natural perceptions of and responses to unmanned aerial vehicle (UAV) flight paths, or gestures. This includes both what they believe the UAV is trying to communicate to them and how they expect to respond through physical action. Previous work in this area has focused on eliciting gestures from participants to communicate specific states, or on leveraging gestures observed in the world, rather than on understanding what the participants believe is being communicated and how they would respond. This work investigates previous gestures either created or categorized by participants to understand the perceived content of their communication or expected response, through categories created by participant free responses and confirmed through forced-choice testing. The human-robot interaction community can leverage this work to better understand how people perceive UAV flight paths, inform future designs for non-anthropomorphic robot communications, and apply lessons learned to elicit informative labels from people who may or may not be operating the vehicle. We found that the Negative Attitudes towards Robots Scale (NARS) can be a good indicator of how we can expect a person to react to a robot. Recommendations are also provided to use a motion approaching or retreating from a person to encourage following, a motion perpendicular to their field of view for blocking, and either no motion or large altitude changes to encourage viewing.
  2. Unmanned Aerial Vehicle (UAV) flight paths have been shown to communicate meaning to human observers, similar to human gestural communication. This paper presents the results of a UAV gesture perception study designed to assess how observer viewpoint perspective may impact how humans perceive the shape of UAV gestural motion. Robot gesture designers have demonstrated that robots can indeed communicate meaning through gesture; however, many of these results are limited to an idealized range of viewer perspectives and do not consider how the perception of a robot gesture may suffer from obfuscation or self-occlusion from some viewpoints. This paper presents the results of three online user studies that examine participants' ability to accurately perceive the intended shape of two-dimensional UAV gestures from varying viewer perspectives. We used a logistic regression model to characterize participant gesture classification accuracy, demonstrating that viewer perspective does impact how participants perceive the shape of UAV gestures. Our results yielded a viewpoint angle threshold beyond which participants were able to assess the intended shape of a gesture's motion with 90% accuracy. We also introduce a perceptibility score to capture user confidence, time to decision, and accuracy in labeling, and to understand how differences in flight paths impact perception across viewpoints. These findings will enable UAV gesture systems that, with a high degree of confidence, ensure gesture motions can be accurately perceived by human observers.
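    The viewpoint-angle analysis described above lends itself to a small illustration. The sketch below is not the study's actual model, and the coefficients are hypothetical placeholders; it assumes a fitted logistic regression in which the log-odds of a correct gesture classification decline linearly with viewpoint angle, and shows how such a model can be inverted to recover the angle at which predicted accuracy crosses 90%.

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical fitted coefficients (not values from the paper):
    # log-odds of a correct classification = b0 + b1 * viewpoint_angle_deg
    b0, b1 = 4.0, -0.05

    def predicted_accuracy(angle_deg):
        """Modelled probability of correctly classifying the gesture shape."""
        return sigmoid(b0 + b1 * angle_deg)

    def angle_threshold(target_accuracy):
        """Invert the logistic model: the angle at which accuracy equals the target."""
        logit = math.log(target_accuracy / (1.0 - target_accuracy))
        return (logit - b0) / b1

    print(round(predicted_accuracy(0.0), 3))  # near-frontal viewpoint -> 0.982
    print(round(angle_threshold(0.90), 1))    # modelled accuracy hits 90% -> 36.1
    ```

    Any monotone link between angle and accuracy would support the same inversion; logistic regression is convenient because the threshold has the closed form shown in `angle_threshold`.
    
    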
  4. The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures, and that can react to students’ detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents’ facial micro-expressions affect students’ perception of the agents’ emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as “the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation” (Queiroz et al., 2014, p. 2). The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether there are differences in recognition based on the agent’s visual style (e.g., stylized versus realistic).
The objectives of the second study were to investigate whether people can recognize the animated agents’ micro-expressions when integrated with macro-expressions; the extent to which the presence of micro- plus macro-expressions affects the perceived expressivity and naturalness of the animated agents; the extent to which exaggerating the micro-expressions (e.g., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and whether there are differences based on the agent’s design characteristics. In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happy, sad, fearful, surprised). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent’s emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). Four animations for each agent featured the character performing macro-expressions only, four featured the character performing macro- plus micro-expressions without exaggeration, and four featured the agent performing macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and to rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.

References
Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal affective pedagogical agents for different types of learners. In D. Russo, T. Ahram, W. Karwowski, G. Di Bucchianico, & R. Taiar (Eds.), Intelligent Human Systems Integration 2021 (IHSI 2021), Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33
Ekman, P., & Friesen, W. V. (1969, February). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575
Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating macroexpressions and microexpressions in computer graphics animated faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/

     
  5. Consent-based searches are by far the most common form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches. In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate group of 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request. Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers—100 out of 103 people—promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically. Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant. These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests.
This is problematic because it indicates that a key justification for suspicionless consent searches—that they are voluntary—relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states. The results also suggest that a popular reform proposal—requiring police to advise citizens of their right to refuse consent—may have little effect. This corroborates previous observational studies that find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights. 