Robots operating in close proximity to humans rely heavily on human trust to successfully complete their tasks. But what are the real outcomes when this trust is violated? Self-defense law provides a framework for analyzing tangible failure scenarios that can inform the design of robots and their algorithms. Studying self-defense is particularly important for ground robots since they operate within public environments, where they can pose a legitimate threat to the safety of nearby humans. Moreover, even if ground robots can guarantee human safety, the perception of a physical threat is sufficient to justify human self-defense against robots. In this paper, we synthesize works in law, engineering, and social science to present four actionable recommendations for how the robotics community can craft robots to mitigate the likelihood of self-defense situations arising. We establish how current U.S. self-defense law can justify a human protecting themselves against a robot, discuss the current literature on human attitudes toward robots, and analyze methods that have been produced to allow robots to operate close to humans. Finally, we present hypothetical scenarios that underscore how current robot navigation methods can fail to sufficiently consider self-defense concerns and the need for the recommendations to guide improvements in the field.
Analyzing Human-Robot Trust in Police Work Using a Teleoperated Communicative Robot
Recent advances in robotics have accelerated their widespread use in nontraditional domains such as law enforcement. Robots can introduce time and distance into dangerous situations, protecting law enforcement officers (LEOs) from the many hazards they encounter. In this paper, a teleoperated robot prototype was designed and tested to allow LEOs to remotely and transparently communicate and interact with others. The robot featured near face-to-face interactivity and accuracy across multiple verbal and non-verbal modes using screens, microphones, and speakers. In cooperation with multiple law enforcement agencies, results are presented on this dynamic and integrative teleoperated communicative robot platform in terms of attitudes toward robots, trust in robot operation, and trust in human-robot-human interaction and communication.
- Award ID(s):
- 2026658
- NSF-PAR ID:
- 10304129
- Date Published:
- Journal Name:
- 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is a critical aspect of shared-task performance for both the human and the robot. Humans trusting robots has been investigated by numerous researchers; however, robots trusting humans, which is also a significant issue in HRC, is seldom explored in robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks in this study. This framework allows the robot to determine a trust level for its human co-carry partner based on human motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task, allowing trust to change if the human performs false-positive actions; this helps the robot avoid making unpredictable movements and injuring the human. Additionally, the proposed framework enables the robot to generate and perform assisting movements that follow the human's carrying motions and pace when the human is considered trustworthy in the co-carry task. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.
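The abstract above names three cues that feed the robot's trust level (human motions, interaction history, current performance) and a decision the trust level gates (assist vs. hold). As an illustration only, since the abstract does not give the actual equations, here is a minimal sketch; the weighted-sum form, the weights, the threshold, and all function names are assumptions, not the paper's method:

```python
def robot_trust_in_human(motion_consistency, history_score, current_performance,
                         weights=(0.4, 0.3, 0.3)):
    """Combine the three cue types named in the abstract, each scored in [0, 1].

    The linear weighting is an illustrative assumption.
    """
    w_motion, w_history, w_current = weights
    return (w_motion * motion_consistency
            + w_history * history_score
            + w_current * current_performance)

def choose_action(trust, threshold=0.6):
    """Assist (follow the human's carrying motion) only when trust is high;
    otherwise hold position to avoid unpredictable movements."""
    return "assist" if trust >= threshold else "hold"
```

Re-evaluating the trust level at each timestep, rather than fixing it once, is what lets the robot withdraw assistance when the human begins performing false-positive actions.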
-
Background: The increasing prevalence of robots in industrial environments is attributed in part to advancements in collaborative robot technologies, enabling robots to work in close proximity to humans. Simultaneously, the rise of teleoperation, involving remote robot control, poses unique opportunities and challenges for human-robot collaboration (HRC) in diverse and distributed workspaces. Purpose: There is not yet a comprehensive understanding of HRC in teleoperation, specifically focusing on collaborations involving the teleoperator, the robot, and the local or onsite workers in industrial settings, here referred to as teleoperator-robot-human collaboration (tRHC). We aimed to identify opportunities, challenges, and potential applications of tRHC through insights provided from industry stakeholders, thereby supporting effective future industrial implementations. Methods: Thirteen stakeholders in robotics, specializing in different domains (i.e., safety, robot manufacturing, aerospace/automotive manufacturing, and supply chains), completed semi-structured interviews that focused on exploring diverse aspects relevant to tRHC. The interviews were then transcribed, and thematic analysis was applied to group responses into broader categories, which were further compared across stakeholder industries. Results: We identified three main categories and 13 themes from the interviews. These categories include Benefits, Concerns, and Technical Challenges. Interviewees highlighted accessibility, ergonomics, flexibility, safety, time & cost saving, and trust as benefits of tRHC. Concerns raised encompassed safety, standards, trust, and workplace optimization. Technical challenges consisted of critical issues such as communication time delays, the need for high dexterity in robot manipulators, the importance of establishing shared situational awareness among all agents, and the potential of augmented and virtual reality in providing immersive control interfaces.
Conclusions: Despite important challenges, tRHC could offer unique benefits, facilitating seamless collaboration among the teleoperator, teleoperated robot(s), and onsite workers across physical and geographic boundaries. To realize such benefits and address the challenges, we propose several research directions to further explore and develop tRHC capabilities.
-
Recent advances in construction automation have increased the need for cooperation between workers and robots, where workers face both success and failure in human-robot collaborative work, ultimately affecting their trust in robots. This study simulated a worker-robot bricklaying collaborative task to examine the impacts of blame targets (responsibility attributions) on trust and trust transfer in multi-robots-human interaction. The findings showed that workers' responsibility attributions to themselves or to the robots significantly affect their trust in the robot. Further, in a multi-robots-human interaction, observing one robot's failure to complete the task affects trust in the other devices, a phenomenon known as trust transfer.
-
Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, there exist two types of experiences that any human agent has with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
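The abstract's key distinction, that a human's trust in a robot is shaped by both direct experiences (their own interaction outcomes) and indirect experiences (propagated from a teammate's trust in the same robot), can be sketched in a toy update rule. This is an illustration of the two-channel idea only; the exponential-smoothing form, the gains `alpha` and `beta`, and the function name are all assumptions, not the TIP model's actual equations:

```python
def update_trust(own_trust, outcome, peer_trust, alpha=0.3, beta=0.1):
    """Toy two-channel trust update for a multi-human multi-robot team.

    own_trust:  this human's current trust in the robot, in [0, 1]
    outcome:    direct experience from the latest interaction (1.0 = success)
    peer_trust: a teammate's reported trust in the same robot (indirect channel)
    """
    # Direct channel: move trust toward the observed interaction outcome.
    after_direct = own_trust + alpha * (outcome - own_trust)
    # Indirect channel: propagate a fraction of the teammate's trust.
    return after_direct + beta * (peer_trust - after_direct)
```

Under this sketch, a human who never interacts with a robot directly would still drift toward their teammate's opinion of it, which is the propagation behavior the TIP framework is built to capture.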