Abstract: Prior research into the conceptual underpinnings of the public's institutional trust in zoos and aquariums has suggested a range of ethical dimensions that set these types of cultural institutions apart from others in the museum sector. As the recognized holders, caretakers, and nurturers of wild animals, zoos and aquariums are sustained at least in part by the public's perception that these activities are legitimate pursuits and essential to the long‐term conservation of the natural world. This paper builds on recent research that identified the ethical dimensions of trust in zoos and aquariums and assessed their distribution among the U.S. public by analyzing survey responses with respect to the importance of trust criteria. We hypothesized that distinct clusters of individuals, as defined by their response to trust criteria items, would emerge and that these clusters would prioritize different dimensions in their trust of zoos and aquariums. Using k-means clustering, we identified four relevant clusters of individuals on seven dimensions of institutional trust in zoos and aquariums. Based on these clusters, we suggest strategies for addressing what may be necessary for zoos and aquariums to claim authority as agents promoting conservation behaviors in society.
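The clustering step described in this abstract can be sketched in plain Python. This is a minimal, illustrative k-means implementation, not the authors' analysis code: the squared-Euclidean distance, random initialization, and fixed iteration cap are assumptions; in the study each point would be one respondent's ratings on the seven trust-criteria items, clustered with k = 4.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as its cluster's mean, until assignments stabilize."""
    rng = random.Random(seed)
    centroids = [tuple(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Nearest centroid by squared Euclidean distance.
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster emptied out.
        new = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters
```

For survey data of this kind one would typically standardize the seven item ratings first and run several random initializations, keeping the solution with the lowest within-cluster variance.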
Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
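The two-dimensional conclusion can be made concrete with a small sketch. The ratings below are invented for illustration (the MDMT uses 0-7 subscale ratings); the point is only the aggregation step: collapsing the four subscales into the reliable-capable and ethical-sincere composites the results supported.

```python
from statistics import mean

# Hypothetical per-participant ratings on the four MDMT subscales (0-7 scale).
ratings = {
    "reliable": [5.1, 4.8, 5.4],
    "capable":  [5.0, 4.9, 5.2],
    "ethical":  [3.2, 3.5, 3.0],
    "sincere":  [3.1, 3.4, 3.3],
}

# Collapse the four subscales into the two constituents the results supported.
performance_trust = mean(ratings["reliable"] + ratings["capable"])  # reliable-capable
moral_trust       = mean(ratings["ethical"] + ratings["sincere"])   # ethical-sincere
```

With data like these, gains and losses would then be analyzed as changes on the two composite scores rather than on four separate subscales.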
- Award ID(s): 1717701
- PAR ID: 10170792
- Date Published:
- Journal Name: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- Page Range / eLocation ID: 618 to 619
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is a critical aspect of any shared-task performance for both the human and the robot. How humans come to trust robots has been investigated by numerous researchers; however, how a robot might trust its human partner, also a significant issue in HRC, is seldom explored in the field of robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks in this study. This framework allows the robot to determine a trust level for its human co-carry partner. The calculations of this trust level are based on human motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level between the human and the robot is evaluated dynamically throughout the collaborative task, and this allows the trust to change if the human performs false positive actions, which can help the robot avoid making unpredictable movements and causing injury to the human. Additionally, the proposed framework can enable the robot to generate and perform assisting movements to follow human-carrying motions and paces when the human is considered trustworthy in the co-carry task. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.
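A dynamic trust level of this kind can be sketched as a running update rule. The exponential blend, the false-positive penalty, and the 0.5 assist threshold below are illustrative assumptions, not the paper's actual formulation; they only demonstrate how trust can rise with good performance, fall after false-positive actions, and gate the robot's assisting behavior.

```python
def update_trust(trust, performance, false_positive, alpha=0.3, penalty=0.2):
    """Blend the human's current task performance (0-1) into the running
    trust level; penalize false-positive actions; clamp to [0, 1].
    (Illustrative update rule, not the paper's exact calculation.)"""
    trust = (1 - alpha) * trust + alpha * performance
    if false_positive:
        trust -= penalty
    return min(1.0, max(0.0, trust))

def robot_assists(trust, threshold=0.5):
    """The robot generates assisting carry motions only while its
    human partner is currently considered trustworthy."""
    return trust >= threshold
```

Under this sketch, repeated good performance drives trust toward 1, while a single false-positive action can drop it below the assist threshold, making the robot hold back rather than follow an unreliable motion.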
-
Using the context of human-supervised object collection tasks, we explore policies for a robot to seek assistance from a human supervisor and avoid loss of human trust in the robot. We consider a human-robot interaction scenario in which a mobile manipulator chooses to collect objects either autonomously or through human assistance; while the human supervisor monitors the robot's operation, assists when asked, or intervenes if the human perceives that the robot may not accomplish its goal. We design an optimal assistance-seeking policy for the robot using a Partially Observable Markov Decision Process (POMDP) setting in which human trust is a hidden state and the objective is to maximize collaborative performance. We conduct two sets of human-robot interaction experiments. The data from the first set of experiments is used to estimate POMDP parameters, which are used to compute an optimal assistance-seeking policy that is used in the second experiment. For most participants, the estimated POMDP reveals that humans are more likely to intervene when their trust is low and the robot is performing a high-complexity task; and that the robot asking for assistance in high-complexity tasks can increase human trust in the robot. Our experimental results show that the proposed trust-aware policy yields superior performance compared with an optimal trust-agnostic policy.
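With trust as a hidden state, the robot tracks a belief over it with a Bayes filter. The sketch below uses a binary low/high trust state; the transition and observation probabilities are invented for illustration (the paper estimates such parameters from the first experiment), but they encode the reported pattern: asking for assistance tends to raise trust, and intervention is more likely under low trust.

```python
def belief_update(b_high, action, outcome, T, O):
    """One Bayes-filter step over a binary hidden trust state.
    b_high:      current belief that trust is high.
    T[action]:   P(next trust is high | previous state), keyed by "high"/"low".
    O[outcome]:  P(observed human response | trust state)."""
    # Predict: propagate the belief through the trust dynamics for this action.
    pred_high = b_high * T[action]["high"] + (1 - b_high) * T[action]["low"]
    # Correct: reweight by the likelihood of the observed human response.
    num = pred_high * O[outcome]["high"]
    den = num + (1 - pred_high) * O[outcome]["low"]
    return num / den if den else pred_high

# Hypothetical parameters (assumptions, not the estimated values from the paper).
T = {"ask":  {"high": 0.95, "low": 0.40},   # asking for help tends to raise trust
     "auto": {"high": 0.90, "low": 0.20}}
O = {"intervene": {"high": 0.10, "low": 0.60},  # intervention signals low trust
     "monitor":   {"high": 0.90, "low": 0.40}}
```

An assistance-seeking policy then maps this belief (and the task's complexity) to ask-or-act decisions, which is what the POMDP solution computes.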
-
Previous research into trust dynamics in human-autonomy interaction has demonstrated that individuals exhibit specific patterns of trust when interacting repeatedly with automated systems. Moreover, people with different types of trust dynamics have been shown to differ across seven personal characteristic dimensions: masculinity, positive affect, extraversion, neuroticism, intellect, performance expectancy, and high expectations. In this study, we develop classification models aimed at predicting an individual's trust dynamics type (Bayesian decision-maker, disbeliever, or oscillator) based on these key dimensions. We employed multiple classification algorithms, including the random forest classifier, multinomial logistic regression, Support Vector Machine, XGBoost, and Naive Bayes, and conducted a comparative evaluation of their performance. The results indicate that personal characteristics can effectively predict the type of trust dynamics, achieving an accuracy rate of 73.1% and a weighted average F1 score of 0.64. This study underscores the predictive power of personal traits in the context of human-autonomy interaction.
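The prediction task has a simple shape: a seven-dimensional trait vector in, one of three labels out. In practice one would use scikit-learn's random forest or the other classifiers named above; the stand-in below is a minimal nearest-centroid classifier over hypothetical data, shown only to make the input/output structure concrete. The functions are dimension-agnostic, so they work for the seven traits listed.

```python
from statistics import mean

# The seven personal-characteristic dimensions from the study.
TRAITS = ["masculinity", "positive_affect", "extraversion", "neuroticism",
          "intellect", "performance_expectancy", "high_expectations"]

def fit_centroids(X, y):
    """Mean trait vector per trust-dynamics type: a lightweight
    nearest-centroid stand-in for the classifiers compared in the study."""
    cents = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [mean(col) for col in zip(*rows)]
    return cents

def predict(cents, x):
    """Label of the closest centroid (squared Euclidean distance)."""
    return min(cents, key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, cents[lab])))
```

Swapping in `sklearn.ensemble.RandomForestClassifier` (or any of the other listed models) keeps the same fit/predict structure while adding the model capacity behind the reported 73.1% accuracy.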
-
Using a dual-task paradigm, we explore how robot actions, performance, and the introduction of a secondary task influence human trust and engagement. In our study, a human supervisor simultaneously engages in a target-tracking task while supervising a mobile manipulator performing an object collection task. The robot can either autonomously collect the object or ask for human assistance. The human supervisor also has the choice to rely on or interrupt the robot. Using data from initial experiments, we model the dynamics of human trust and engagement using a linear dynamical system (LDS). Furthermore, we develop a human action model to define the probability of human reliance on the robot. Our model suggests that participants are more likely to interrupt the robot when their trust and engagement are low during high-complexity collection tasks. Using Model Predictive Control (MPC), we design an optimal assistance-seeking policy. Evaluation experiments demonstrate the superior performance of the MPC policy over the baseline policy for most participants.
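An LDS model of trust and engagement amounts to one matrix-vector update per interaction step. The matrices below are invented placeholders (the paper fits them from the initial experiments): the state is (trust, engagement) and the input encodes the robot's action and outcome. MPC then simulates this update forward over a short horizon to pick the assistance-seeking action with the best predicted performance.

```python
def lds_step(state, u, A, B):
    """One step of a linear dynamical system x' = A x + B u, where
    state = (trust, engagement) and u encodes the robot action/outcome.
    (Illustrative matrices; the paper estimates A and B from data.)"""
    x = [sum(a * s for a, s in zip(row, state)) for row in A]
    return [xi + sum(b * ui for b, ui in zip(row, u)) for xi, row in zip(x, B)]

# Hypothetical dynamics: trust decays slowly, is nudged up by engagement,
# and both respond to a successful autonomous collection (u = [1.0]).
A = [[0.90, 0.05],
     [0.00, 0.95]]
B = [[0.10],
     [0.02]]
```

Chaining `lds_step` over a candidate action sequence gives the predicted trust/engagement trajectory that the MPC policy optimizes over.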