Many papers claim that specific visualization techniques enhance or calibrate trust in AI systems. Yet a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with “knobs”: turning a knob too far in one direction may result in under-trust; too far in the other, in over-trust; and, turned further still, in a confusing distortion. While these designs, or so-called “knobs,” are not inherently evil, they can be misused or deployed in an adversarial context to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as “trust junk.” From a review of 65 papers, we identify nine commonly made claims about trust calibration, synthesize them into a framework of knobs that can be used for good or “evil,” and distill our findings into observed pitfalls for the responsible design of human-AI systems.
"What If It Is Wrong": Effects of Power Dynamics and Trust Repair Strategy on Trust and Compliance in HRI
- Award ID(s): 2141335
- PAR ID: 10425916
- Date Published:
- Journal Name: the 2023 ACM/IEEE International Conference on Human-Robot Interaction
- Page Range / eLocation ID: 271 to 280
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Our review of common, popular risk analysis frameworks finds that they are very homogeneous in their approach. These frameworks are considered IT security industry “best practices.” One wonders, however, whether they are indeed “best,” as evinced by the almost daily news of large companies suffering major compromises. Embedded in these “best practices” is the notion that “trust” is “good,” i.e., a desirable feature: “trusted computing,” “trusted third party,” etc. We argue the opposite: that vulnerabilities stem from trust relationships. We propose a paradigm for risk analysis centered on identifying and minimizing trust relationships. We argue that by bringing trust relationships to the foreground, we can identify paths to compromise that would otherwise go undetected, yielding a more comprehensive assessment of vulnerability from which one can better prioritize and reduce risk.
- Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 (scenario) × 4 (trust dimension) × 2 (change direction) between-subjects design, we found the behavior-change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.