Abstract: Prior research into the conceptual underpinnings of the public's institutional trust in zoos and aquariums has suggested a range of ethical dimensions that set these types of cultural institutions apart from others in the museum sector. As the recognized holders, caretakers, and nurturers of wild animals, zoos and aquariums are sustained at least in part by the public's perception that these activities are legitimate pursuits and essential to the long-term conservation of the natural world. This paper builds on recent research that identified the ethical dimensions of trust in zoos and aquariums and assessed their distribution among the U.S. public by analyzing survey responses with respect to the importance of trust criteria. We hypothesized that distinct clusters of individuals, as defined by their responses to trust criteria items, would emerge and that these clusters would prioritize different dimensions in their trust of zoos and aquariums. Using k-means clustering, we identified four relevant clusters of individuals on seven dimensions of institutional trust in zoos and aquariums. Based on these clusters, we suggest strategies for addressing what may be necessary for zoos and aquariums to claim authority as agents promoting conservation behaviors in society.
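For readers who want a concrete picture of the segmentation approach, the following Python sketch illustrates k-means clustering of survey respondents on seven trust-criterion scores. The data, rating scale, and variable names are placeholders, not the study's dataset.

```python
# Illustrative sketch only: k-means segmentation of survey respondents
# on seven trust-criterion scores. The data here are random placeholders,
# not the study's dataset; variable names are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(500, 7)).astype(float)  # 1-7 ratings

scaler = StandardScaler()
scaled = scaler.fit_transform(responses)

# Four clusters, as in the study; multiple restarts guard against
# poor local optima.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

# Centroids mapped back to the rating scale indicate which trust
# dimensions each cluster prioritizes.
centroids = scaler.inverse_transform(kmeans.cluster_centers_)
print(centroids.round(2))
```

Inspecting the centroids in this way is one common method for characterizing which dimensions each cluster prioritizes.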
Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior-change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
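A minimal sketch of how a per-subscale manipulation check for such a design might look; the simulated data, column names, and effect sizes below are illustrative assumptions, not the authors' materials or results.

```python
# Hedged sketch of a per-subscale manipulation check for a
# 4 scenario x 4 trust dimension x 2 change direction design.
# Data are simulated; columns and effect sizes are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
rows = []
for subscale in ["reliable", "capable", "ethical", "sincere"]:
    for direction, mean in [("gain", 1.0), ("loss", -1.5)]:
        rows += [{"subscale": subscale, "direction": direction,
                  "trust_change": rng.normal(mean, 1.0)} for _ in range(25)]
df = pd.DataFrame(rows)

# Did gain vs. loss scenarios move ratings on each subscale?
for subscale in ["reliable", "capable", "ethical", "sincere"]:
    sub = df[df["subscale"] == subscale]
    gain = sub.loc[sub["direction"] == "gain", "trust_change"]
    loss = sub.loc[sub["direction"] == "loss", "trust_change"]
    t, p = stats.ttest_ind(gain, loss, equal_var=False)  # Welch's t-test
    print(f"{subscale}: t = {t:.2f}, p = {p:.4g}")
```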
- Award ID(s): 1717701
- PAR ID: 10170792
- Date Published:
- Journal Name: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- Page Range / eLocation ID: 618 to 619
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is a critical aspect of any shared-task performance, for both the human and the robot. Human trust in robots has been investigated by numerous researchers; however, robot trust in humans, which is also a significant issue in HRC, is seldom explored in the field of robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks in this study. This framework allows the robot to determine a trust level for its human co-carry partner. The calculation of this trust level is based on human motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task, allowing it to change if the human performs false-positive actions, which can help the robot avoid making unpredictable movements and causing injury to the human. Additionally, the proposed framework enables the robot to generate and perform assisting movements that follow human carrying motions and paces when the human is considered trustworthy in the co-carry task. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.
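As a rough illustration of the kind of dynamic trust estimate the framework describes (not the authors' published formulation), the sketch below blends current motion and performance evidence with past interactions and gates assistive motion on a trust threshold; all weights and the threshold are assumed.

```python
# Illustrative sketch, not the paper's equations: a dynamic trust level
# for a co-carry robot. Weights and threshold are assumptions.
def update_trust(trust, motion_consistency, task_performance,
                 history_weight=0.7, assist_threshold=0.6):
    """Return (new_trust, assist) after one observation cycle."""
    # Current evidence: how consistent and effective the human's
    # carrying motion is right now, each scored in [0, 1].
    evidence = 0.5 * motion_consistency + 0.5 * task_performance
    # Exponential smoothing keeps past interactions in the estimate,
    # so isolated false-positive actions lower trust only gradually.
    new_trust = history_weight * trust + (1 - history_weight) * evidence
    return new_trust, new_trust >= assist_threshold

trust = 0.5
for motion, perf in [(0.9, 0.8), (0.2, 0.3), (0.85, 0.9)]:
    trust, assist = update_trust(trust, motion, perf)
    print(f"trust={trust:.2f}, generate assisting motion: {assist}")
```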
Using the context of human-supervised object collection tasks, we explore policies for a robot to seek assistance from a human supervisor and avoid loss of human trust in the robot. We consider a human-robot interaction scenario in which a mobile manipulator chooses to collect objects either autonomously or through human assistance; the human supervisor monitors the robot's operation, assists when asked, or intervenes if the human perceives that the robot may not accomplish its goal. We design an optimal assistance-seeking policy for the robot using a Partially Observable Markov Decision Process (POMDP) setting in which human trust is a hidden state and the objective is to maximize collaborative performance. We conduct two sets of human-robot interaction experiments. The data from the first set of experiments are used to estimate the POMDP parameters, from which we compute an optimal assistance-seeking policy deployed in the second experiment. For most participants, the estimated POMDP reveals that humans are more likely to intervene when their trust is low and the robot is performing a high-complexity task, and that the robot asking for assistance in high-complexity tasks can increase human trust in the robot. Our experimental results show that the proposed trust-aware policy yields superior performance compared with an optimal trust-agnostic policy.
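The core idea, trust as a hidden state inferred from observable supervisor behavior, can be sketched with a toy two-state Bayes filter; the transition and observation probabilities below are illustrative assumptions, not the POMDP parameters estimated in the study.

```python
# Toy sketch of trust as a hidden state (0 = low, 1 = high), with belief
# updated from observed supervisor interventions. All probabilities here
# are illustrative assumptions.
import numpy as np

# P(trust' | trust, action); actions: 0 = act autonomously, 1 = ask for help
T = np.array([[[0.8, 0.2],   # autonomous: low trust tends to persist
               [0.3, 0.7]],
              [[0.5, 0.5],   # asking for help can raise trust
               [0.1, 0.9]]])
# P(intervention | trust): supervisors intervene more when trust is low
O = np.array([0.6, 0.1])

def belief_update(b, action, intervened):
    """Bayes filter over the hidden trust state."""
    predicted = b @ T[action]
    likelihood = O if intervened else 1 - O
    posterior = likelihood * predicted
    return posterior / posterior.sum()

b = np.array([0.5, 0.5])
b = belief_update(b, action=0, intervened=True)
print(b.round(3))  # belief shifts toward low trust after an intervention
```

A trust-aware policy would choose the action (act autonomously or seek assistance) that maximizes expected collaborative performance under this belief.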
Recent advances in robotics have accelerated the widespread use of robots in nontraditional domains such as law enforcement. Robots allow law enforcement officers (LEOs) to introduce time and space into dangerous encounters, protecting them from many of the hazards they face. In this paper, a teleoperated robot prototype was designed and tested to allow LEOs to remotely and transparently communicate and interact with others. The robot provided near face-to-face interactivity and accuracy across multiple verbal and non-verbal modes using screens, microphones, and speakers. In cooperation with multiple law enforcement agencies, results are presented on this dynamic and integrative teleoperated communicative robot platform in terms of attitudes toward robots, trust in robot operation, and trust in human-robot-human interaction and communication.
Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, there exist two types of experiences that any human agent has with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
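A minimal sketch of the direct/indirect distinction at the heart of a TIP-style update; the paper gives the actual equations, and the rule, weights, and function names here are illustrative assumptions only.

```python
# Illustrative sketch, not the TIP model's published equations:
# human i's trust in robot j is updated from i's own (direct)
# experience and from trust propagated via a teammate k (indirect).
def tip_style_update(trust_ij, direct_outcome, teammate_trust_kj, trust_ik,
                     alpha=0.2, beta=0.1):
    """Update human i's trust in robot j.

    direct_outcome: i's own observation of j's performance, in [0, 1].
    teammate_trust_kj: teammate k's reported trust in robot j.
    trust_ik: i's trust in teammate k's judgment (gates propagation).
    alpha, beta: assumed learning rates for direct/indirect experience.
    """
    direct = alpha * (direct_outcome - trust_ij)                  # direct
    indirect = beta * trust_ik * (teammate_trust_kj - trust_ij)   # propagated
    return trust_ij + direct + indirect

t = 0.5
t = tip_style_update(t, direct_outcome=0.9, teammate_trust_kj=0.8, trust_ik=0.7)
print(round(t, 3))  # trust rises after positive direct and indirect evidence
```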