

Title: Vulnerable robots positively shape human conversational dynamics in a human–robot team
Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot–human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect of modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
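The "more equal conversation distribution" finding can be illustrated with a simple evenness measure over speaking times. This is a minimal sketch using the Gini coefficient, a generic inequality measure; the paper's actual metric and the example numbers below are assumptions for illustration, not taken from the study:

```python
# Toy illustration of measuring how evenly conversation is distributed
# across the three human teammates. The Gini coefficient is a generic
# inequality measure; the paper's actual metric may differ.

def gini(times: list[float]) -> float:
    """0.0 = perfectly equal speaking time; larger values = more unequal."""
    n = len(times)
    mu = sum(times) / n
    pairwise = sum(abs(a - b) for a in times for b in times)
    return pairwise / (2 * n * n * mu)

# Example (invented speaking times, in seconds):
balanced = gini([95.0, 102.0, 98.0])   # near 0: evenly shared floor
skewed = gini([240.0, 30.0, 25.0])     # much larger: one speaker dominates
```

A lower score for the vulnerable-robot groups would correspond to the more equal turn-taking the abstract describes.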
Award ID(s):
1813651
NSF-PAR ID:
10170647
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
117
Issue:
12
ISSN:
0027-8424
Page Range / eLocation ID:
6370 to 6375
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Deployed social robots increasingly rely on wakeword-based interaction, where interactions are human-initiated by a wakeword like “Hey Jibo”. While wakewords help to increase speech recognition accuracy and ensure privacy, there is concern that wakeword-driven interaction could encourage impolite behavior, because wakeword-driven speech is typically phrased as commands. To address these concerns, companies have sought to use wakeword design to encourage interactant politeness, through wakewords like “⟨Name⟩, please”. But while this solution is intended to encourage people to use more “polite words”, researchers have found that these wakeword designs actually decrease interactant politeness in text-based communication, and that other wakeword designs could better encourage politeness by priming users to use Indirect Speech Acts. Yet no previous research has directly compared these wakeword designs in in-person, voice-based human-robot interaction experiments, and previous in-person HRI studies could not effectively study carryover of wakeword-driven politeness and impoliteness into human-human interactions. In this work, we conceptually reproduced these previous studies (n=69) to assess how the wakewords “Hey ⟨Name⟩”, “Excuse me ⟨Name⟩”, and “⟨Name⟩, please” impact robot-directed and human-directed politeness. Our results demonstrate how different types of linguistic priming interact, in nuanced ways, to induce different types of robot-directed and human-directed politeness.
  2. A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communications in the context of value alignment, in which collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems.
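Inferring user values from in situ feedback can be pictured as Bayesian updating over candidate goals. The sketch below is a toy illustration of that idea; the candidate goals, likelihood model, and feedback protocol here are invented assumptions, not the paper's actual cooperative communication model:

```python
# Toy Bayesian sketch of inferring which task goal a user values most
# from approve/disapprove feedback on the robot's chosen goal.
# GOALS, the likelihood parameter, and the protocol are illustrative only.
GOALS = ["speed", "safety", "coverage"]

def update(prior: dict[str, float], chosen: str, approved: bool,
           p_approve_if_valued: float = 0.9) -> dict[str, float]:
    """One Bayes step: P(goal | feedback) is proportional to
    P(feedback | goal) * P(goal), then renormalized."""
    post = {}
    for goal, p in prior.items():
        likelihood = p_approve_if_valued if goal == chosen else 1 - p_approve_if_valued
        if not approved:
            likelihood = 1 - likelihood
        post[goal] = likelihood * p
    z = sum(post.values())
    return {goal: p / z for goal, p in post.items()}

# Starting from a uniform prior, one approval of "safety" concentrates
# belief on the safety goal.
prior = {g: 1 / 3 for g in GOALS}
posterior = update(prior, "safety", approved=True)
```

Repeated feedback rounds would sharpen the posterior further, which is the sense in which the robot and user converge on an aligned understanding of goal importance.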
  3. The robotics community continually strives to create robots that are deployable in real-world environments. Often, robots are expected to interact with human groups. To achieve this goal, we introduce a new method, the Robot-Centric Group Estimation Model (RoboGEM), which enables robots to detect groups of people. Much of the work reported in the literature focuses on dyadic interactions, leaving a gap in our understanding of how to build robots that can effectively team with larger groups of people. Moreover, many current methods rely on exocentric vision, where cameras and sensors are placed externally in the environment, rather than onboard the robot. Consequently, these methods are impractical for robots in unstructured, human-centric environments, which are novel and unpredictable. Furthermore, the majority of work on group perception is supervised, which can inhibit performance in real-world settings. RoboGEM addresses these gaps by being able to predict social groups solely from an egocentric perspective using color and depth (RGB-D) data. To achieve group predictions, RoboGEM leverages joint motion and proximity estimations. We evaluated RoboGEM against a challenging, egocentric, real-world dataset where both pedestrians and the robot are in motion simultaneously, and show that RoboGEM outperformed two state-of-the-art supervised methods in detection accuracy by up to 30%, with a lower miss rate. Our work will be helpful to the robotics community, and serve as a milestone toward building unsupervised systems that will enable robots to work with human groups in real-world environments.
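The core idea of grouping by joint motion and proximity can be sketched as clustering pedestrian tracks that are both close together and moving with similar velocity. The names, thresholds, and union-find clustering below are illustrative assumptions, not RoboGEM's actual algorithm:

```python
# Hypothetical sketch of grouping pedestrians by proximity and motion
# similarity, in the spirit of joint motion and proximity estimation.
# Thresholds and data structures are illustrative, not from the paper.
from dataclasses import dataclass
import math

@dataclass
class Track:
    pid: int
    x: float    # position (m), robot-egocentric frame
    y: float
    vx: float   # velocity (m/s)
    vy: float

def affinity(a: Track, b: Track,
             max_dist: float = 1.5, max_dv: float = 0.5) -> bool:
    """Two pedestrians are tentatively grouped if they are close together
    and moving with similar velocity."""
    dist = math.hypot(a.x - b.x, a.y - b.y)
    dv = math.hypot(a.vx - b.vx, a.vy - b.vy)
    return dist <= max_dist and dv <= max_dv

def cluster(tracks: list[Track]) -> list[set[int]]:
    """Transitively merge pairwise-affine tracks into groups (union-find)."""
    parent = {t.pid: t.pid for t in tracks}
    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, a in enumerate(tracks):
        for b in tracks[i + 1:]:
            if affinity(a, b):
                parent[find(a.pid)] = find(b.pid)
    groups: dict[int, set[int]] = {}
    for t in tracks:
        groups.setdefault(find(t.pid), set()).add(t.pid)
    return list(groups.values())
```

Because the rule uses only egocentric positions and velocities, no external cameras or labeled training data are required, which mirrors the egocentric, unsupervised setting the abstract emphasizes.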
  4. Robotic social intelligence is increasingly important. However, measures of human social intelligence omit basic skills, and robot-specific scales do not focus on social intelligence. We combined human robot interaction concepts of beliefs, desires, and intentions with psychology concepts of behaviors, cognitions, and emotions to create 20 Perceived Social Intelligence (PSI) Scales to comprehensively measure perceptions of robots with a wide range of embodiments and behaviors. Participants rated humanoid and non-humanoid robots interacting with people in five videos. Each scale had one factor and high internal consistency, indicating each measures a coherent construct. Scales capturing perceived social information processing skills (appearing to recognize, adapt to, and predict behaviors, cognitions, and emotions) and scales capturing perceived skills for identifying people (appearing to identify humans, individuals, and groups) correlated strongly with social competence and constituted the Mind and Behavior factors. Social presentation scales (appearing friendly, caring, helpful, trustworthy, and not rude, conceited, or hostile) relate more to Social Response to Robots Scales and Godspeed Indices, form a separate factor, and predict positive feelings about robots and wanting social interaction with them. For a comprehensive measure, researchers can use all 20 PSI scales for free. Alternatively, they can select the most relevant scales for their projects.
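The "high internal consistency" of each scale is conventionally reported as Cronbach's alpha over the scale's items. Here is a minimal sketch of that computation; the example ratings are invented, and nothing here is taken from the PSI dataset:

```python
# Illustrative computation of Cronbach's alpha, the standard index of the
# internal consistency that scale validation studies report.
# The rating matrix below is invented example data, not from the study.
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i][p] = rating on item i by participant p.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]      # per-participant totals
    item_var = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three hypothetical items rated by four participants on a 1-5 scale:
ratings = [[4.0, 5.0, 2.0, 3.0],
           [4.0, 4.0, 2.0, 3.0],
           [5.0, 5.0, 1.0, 3.0]]
alpha = cronbach_alpha(ratings)  # higher alpha = more coherent construct
```

Values near 1 indicate the items measure one coherent construct, which is the property the abstract claims for each PSI scale.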
  5. Robots are entering various domains of human societies, potentially unfolding more opportunities for people to perceive robots as social agents. We expect that having robots in proximity would create unique social learning situations where humans spontaneously observe and imitate robots’ behaviors. At times, humans’ imitation of robot behaviors may result in a spread of unsafe or unethical behaviors among humans. For responsible robot design, therefore, we argue that it is essential to understand the physical and psychological triggers of social learning in robot design. Grounded in the existing literature on social learning and the uncanny valley theories, we discuss the human-likeness of robot appearance and affective responses associated with robot appearance as likely factors that either facilitate or deter social learning. We propose practical considerations for social learning and robot design.