

Title: “Excuse Me, Robot”: Impact of Polite Robot Wakewords on Human-Robot Politeness
While the ultimate goal of natural-language-based Human-Robot Interaction (HRI) may be free-form, mixed-initiative dialogue, social robots deployed in the near future will likely primarily engage in wakeword-driven interaction, in which users’ commands are prefaced by a wakeword such as “Hey, Robot.” This style of interaction helps to allay user privacy concerns, as the robot’s full speech recognition module need not be employed until the target wakeword is used. Unfortunately, a number of concerns surround this style of interaction in the popular media, with consumers fearing that it trains users (in particular, children) to be rude towards technology and, by extension, rude towards other humans. In this paper, we present a study that demonstrates how an alternate style of wakeword, i.e., “Excuse me, Robot,” may allay this concern by priming users to phrase commands as Indirect Speech Acts.
Award ID(s):
1909847 1849348
NSF-PAR ID:
10185832
Journal Name:
International Conference on Social Robotics
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deployed social robots increasingly rely on wakeword-based interaction, where interactions are human-initiated by a wakeword like “Hey Jibo”. While wakewords help to increase speech recognition accuracy and ensure privacy, there is concern that wakeword-driven interaction could encourage impolite behavior because wakeword-driven speech is typically phrased as commands. To address these concerns, companies have sought to use wakeword design to encourage interactant politeness, through wakewords like “⟨Name⟩, please”. But while this solution is intended to encourage people to use more “polite words”, researchers have found that these wakeword designs actually decrease interactant politeness in text-based communication, and that other wakeword designs could better encourage politeness by priming users to use Indirect Speech Acts. Yet no previous research has directly compared these wakeword designs in in-person, voice-based human-robot interaction experiments, and previous in-person HRI studies could not effectively study carryover of wakeword-driven politeness and impoliteness into human-human interactions. In this work, we conceptually reproduced these previous studies (n=69) to assess how the wakewords “Hey ⟨Name⟩”, “Excuse me ⟨Name⟩”, and “⟨Name⟩, please” impact robot-directed and human-directed politeness. Our results demonstrate how different types of linguistic priming interact in nuanced ways to induce different types of robot-directed and human-directed politeness.
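To make the wakeword-gating idea in the abstracts above concrete, here is a minimal, self-contained sketch (all function and variable names are illustrative assumptions, not taken from either paper): only utterances that begin with a recognized wakeword are passed on to command handling, while everything else is ignored, and a rough heuristic flags whether the resulting command is phrased as an Indirect Speech Act.

```python
# Hypothetical sketch of wakeword-gated interaction; names are illustrative.
WAKEWORDS = ("hey robot", "excuse me, robot", "robot, please")

def extract_command(utterance: str):
    """Return (wakeword, command) if the utterance starts with a wakeword, else None."""
    lowered = utterance.lower().strip()
    for wakeword in WAKEWORDS:
        if lowered.startswith(wakeword):
            return wakeword, lowered[len(wakeword):].strip(" ,")
    return None  # no wakeword: the rest of the utterance is never processed

def is_indirect_request(command: str) -> bool:
    """Very rough proxy for an Indirect Speech Act, e.g. 'could you bring me water?'."""
    return command.startswith(("could you", "would you", "can you"))

if __name__ == "__main__":
    for utterance in [
        "Excuse me, Robot, could you bring me some water?",
        "Hey Robot, bring me water.",
        "I am just talking to a friend.",  # ignored: no wakeword spoken
    ]:
        parsed = extract_command(utterance)
        if parsed:
            wakeword, command = parsed
            print(f"{wakeword!r} -> {command!r} (indirect: {is_indirect_request(command)})")
```

In a deployed robot, `extract_command` would be replaced by a lightweight, always-on keyword spotter plus a full speech recognizer that is only invoked after the wakeword fires, which is the privacy rationale described above.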
  2. Machine learning models that sense human speech, body placement, and other key features are commonplace in human-robot interaction. However, deploying such models is not itself without risk. Research in machine learning security examines how such models can be exploited and the risks associated with these exploits. Unfortunately, the threat models produced by machine learning security research do not incorporate the rich sociotechnical underpinnings of the defenses they propose; as a result, efforts to improve the security of machine learning models may actually widen performance differences across demographic groups, yielding systems whose risk mitigations work better for one group than another. In this work, we outline why current approaches to machine learning security present diversity, equity, and inclusion (DEI) concerns for the human-robot interaction community and where there are open areas for collaboration.
  3. Despite the phenomenal advances in the computational power and functionality of electronic systems, human-machine interaction has largely been limited to simple control panels, keyboards, mice, and displays. Consequently, these systems either rely critically on close human guidance or operate almost independently from the user. An exemplar technology integrated tightly into our lives is the smartphone. However, the term “smart” is a misnomer, since the phone fundamentally has no intelligence with which to understand its user. Users still have to type, touch, or speak (to some extent) to express their intentions in a form accessible to the phone. Hence, intelligent decision making is still almost entirely a human task. A life-changing experience can be achieved by transforming machines from passive tools into agents capable of understanding human physiology and what their user wants [1]. This can advance human capabilities in unimagined ways by building a symbiotic relationship to solve real-world problems cooperatively. One of the high-impact application areas of this approach is assistive Internet of Things (IoT) technologies for physically challenged individuals. The Annual World Report on Disability reveals that 15% of the world population lives with disability, while 110 to 190 million of these people have difficulty in functioning [1]. Quality of life for this population can improve significantly if we can provide accessibility to smart devices, which provide sensory inputs and assist with everyday tasks. This work demonstrates that smart IoT devices open up the possibility of alleviating the burden on the user by equipping everyday objects, such as a wheelchair, with decision-making capabilities. Moving part of the intelligent decision making to smart IoT objects requires a robust mechanism for human-machine communication (HMC). To address this challenge, we present examples of multimodal HMC mechanisms, where the modalities are electroencephalogram (EEG), speech commands, and motion sensing. We also introduce an IoT co-simulation framework developed using a network simulator (OMNeT++) and the Virtual Robot Experimentation Platform (V-REP) robot simulator. We show how this framework is used to evaluate the effectiveness of different HMC strategies, using automated indoor navigation as a driver application.
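As a rough illustration of the multimodal HMC idea in the abstract above, the sketch below fuses per-modality command classifications (EEG, speech, and motion sensing) with a simple confidence-weighted vote. The class and function names are assumptions made for illustration; they are not the authors' implementation or the OMNeT++/V-REP co-simulation itself.

```python
# Hypothetical confidence-weighted fusion of multimodal navigation commands.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ModalityReading:
    modality: str      # "eeg", "speech", or "motion"
    command: str       # e.g. "forward", "left", "right", "stop"
    confidence: float  # classifier confidence in [0, 1]

def fuse(readings):
    """Pick the command with the highest summed confidence across modalities."""
    scores = defaultdict(float)
    for r in readings:
        scores[r.command] += r.confidence
    return max(scores, key=scores.get)

if __name__ == "__main__":
    readings = [
        ModalityReading("eeg", "forward", 0.55),
        ModalityReading("speech", "forward", 0.80),
        ModalityReading("motion", "stop", 0.60),
    ]
    print(fuse(readings))  # -> "forward"
```

A real wheelchair controller would likely add per-modality reliability weights and a rejection threshold before issuing a maneuver, but the fusion logic would follow the same shape.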
  4. This work challenges the common assumption in physical human-robot interaction (pHRI) that the movement intention of a human user can be simply modeled with dynamic equations relating forces to movements, regardless of the user. Studies in physical human-human interaction (pHHI) suggest that interaction forces carry sophisticated information that reveals motor skills and roles in the partnership and even promotes adaptation and motor learning. In this view, simple force-displacement equations often used in pHRI studies may not be sufficient. To test this, this work measured and analyzed the interaction forces (F) between two humans as the leader guided the blindfolded follower on a randomly chosen path. The actual trajectory of the follower was transformed to the velocity commands (V) that would allow a hypothetical robot follower to track the same trajectory. Then, possible analytical relationships between F and V were obtained using neural network training. Results suggest that while F helps predict V, the relationship is not straightforward, that seemingly irrelevant components of F may be important, that force-velocity relationships are unique to each human follower, and that human neural control of movement may affect the prediction of the movement intent. It is suggested that user-specific, stereotype-free controllers may more accurately decode human intent in pHRI. 
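The force-to-velocity mapping described above can be pictured with a small regression sketch: a multilayer perceptron trained to predict velocity commands V from interaction forces F. The data below is synthetic and the model choice is an assumption for illustration; the study itself used measured human-human guidance data and reports that the learned mapping is specific to each follower.

```python
# Hypothetical sketch: regress velocity commands V from interaction forces F.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3-axis interaction force -> 2D planar velocity command.
F = rng.normal(size=(500, 3))
V = np.column_stack([
    0.4 * F[:, 0] + 0.1 * F[:, 2] ** 2,       # deliberately nonlinear, so a
    0.3 * F[:, 1] - 0.2 * F[:, 0] * F[:, 1],  # simple linear fit is insufficient
])
V += 0.05 * rng.normal(size=V.shape)          # sensor/motor noise

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(F[:400], V[:400])
print("held-out R^2:", model.score(F[400:], V[400:]))
```

Fitting one such model per follower, rather than a single pooled model, mirrors the paper's suggestion that user-specific, stereotype-free controllers may decode human intent more accurately.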
  5. Unmanned aerial vehicles (UAVs), equipped with a variety of sensors, are being used to provide actionable information that augments first responders’ situational awareness in disaster areas for urban search and rescue (SaR) operations. However, existing aerial robots are unable to sense the occluded spaces in collapsed structures and the voids buried in disaster rubble that may contain victims. In this study, we developed a framework, AiRobSim, to simulate an aerial robot that acquires both aboveground and underground information for post-disaster SaR. The integration of the UAV, ground-penetrating radar (GPR), and other sensors, such as a global navigation satellite system (GNSS), an inertial measurement unit (IMU), and cameras, enables the aerial robot to provide a holistic view of complex urban disaster areas. The robot-collected data can help locate critical spaces under the rubble to save trapped victims. The simulation framework can serve as a virtual training platform for novice users to control and operate the robot before actual deployment. Data streams provided by the platform, which include maneuver commands, robot states, and environmental information, have the potential to facilitate the understanding of the decision-making process in urban SaR and the training of future intelligent SaR robots.
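The three data streams the abstract attributes to the platform (maneuver commands, robot states, and environmental information) can be pictured as a per-timestep record like the sketch below; the field and class names are assumptions for illustration, not AiRobSim's actual interface.

```python
# Hypothetical per-step log record for a simulated SaR aerial robot.
from dataclasses import dataclass, asdict
import json

@dataclass
class ManeuverCommand:
    throttle: float
    yaw_rate: float

@dataclass
class RobotState:
    position: tuple   # GNSS-style (x, y, z)
    attitude: tuple   # IMU-style (roll, pitch, yaw)

@dataclass
class EnvironmentSample:
    gpr_return_strength: float  # ground-penetrating radar echo amplitude
    camera_frame_id: int

@dataclass
class StepRecord:
    t: float
    command: ManeuverCommand
    state: RobotState
    environment: EnvironmentSample

record = StepRecord(
    t=12.5,
    command=ManeuverCommand(throttle=0.6, yaw_rate=-0.1),
    state=RobotState(position=(10.2, -3.4, 15.0), attitude=(0.01, 0.02, 1.57)),
    environment=EnvironmentSample(gpr_return_strength=0.73, camera_frame_id=250),
)
print(json.dumps(asdict(record)))
```

Logging records of this shape at each simulation step is one plausible way such a platform could expose the decision-making traces the abstract describes for training novice operators and future SaR robots.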