Our review of common, popular risk analysis frameworks finds that they are highly homogeneous in their approach. These frameworks are considered IT security industry "best practices." One wonders, however, whether they are indeed "best," as evidenced by the almost daily news of large companies suffering major compromises. Embedded in these "best practices" is the notion that "trust" is "good," i.e., a desirable feature: "trusted computing," "trusted third party," and so on. We argue for the opposite: that vulnerabilities stem from trust relationships. We propose a paradigm for risk analysis centered on identifying and minimizing trust relationships. We argue that by bringing trust relationships to the foreground, we can identify paths to compromise that would otherwise go undetected, yielding a more comprehensive assessment of vulnerability from which one can better prioritize and reduce risk.
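To make the proposed paradigm concrete, one way to foreground trust relationships is to model them as a directed graph and enumerate the trust chains through which a compromise can propagate to an asset. The sketch below is purely illustrative, not the paper's method; the names (TrustGraph, add_trust, compromise_paths) and the edge semantics are assumptions introduced for this example.

```python
# Illustrative sketch only: models trust relationships as a directed
# graph and enumerates candidate paths to compromise. All names and
# semantics here are hypothetical, not taken from the paper.
from collections import defaultdict


class TrustGraph:
    """Directed graph where an edge A -> B means "A trusts B", so a
    compromise of B can propagate to A."""

    def __init__(self):
        self.trusts = defaultdict(set)

    def add_trust(self, truster, trustee):
        self.trusts[truster].add(trustee)

    def compromise_paths(self, asset, entry_point):
        """Enumerate acyclic trust chains by which compromising
        `entry_point` can lead to compromising `asset` (simple DFS)."""
        paths, stack = [], [(asset, [asset])]
        while stack:
            node, path = stack.pop()
            if node == entry_point:
                paths.append(path)
                continue
            for trustee in self.trusts[node]:
                if trustee not in path:  # avoid cycles
                    stack.append((trustee, path + [trustee]))
        return paths


g = TrustGraph()
g.add_trust("web-app", "third-party-CDN")    # app loads CDN scripts
g.add_trust("web-app", "auth-service")
g.add_trust("auth-service", "third-party-CDN")
for p in g.compromise_paths("web-app", "third-party-CDN"):
    print(" -> ".join(p))  # each chain is a candidate path to compromise
```

Under these assumptions, the second printed chain (web-app -> auth-service -> third-party-CDN) is exactly the kind of indirect path to compromise that stays hidden when trust relationships are left in the background.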
Templates and Trust-o-meters: Towards a widely deployable indicator of trust in Wikipedia
- Award ID(s): 1928631
- PAR ID: 10358961
- Date Published:
- Journal Name: CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID: 1 to 17
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Many papers make claims about specific visualization techniques that are said to enhance or calibrate trust in AI systems. But a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with "knobs." Turning a knob too far in one direction may result in under-trust; too far in the other, in over-trust; or, turned further still, in a confusing distortion. While these designs, or so-called "knobs," are not inherently evil, they can be misused or used in an adversarial context and thereby manipulated to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as "trust junk." From a review of 65 papers, we identify nine commonly made claims about trust calibration. We synthesize them into a framework of knobs that can be used for good or "evil," and distill our findings into observed pitfalls for the responsible design of human-AI systems.