Title: Trust-Based Security; Or, Trust Considered Harmful
Our review of common, popular risk analysis frameworks finds that they are highly homogeneous in their approach. These frameworks are considered IT security industry "best practices." One wonders, however, whether they are indeed "best," given the almost daily news of large companies suffering major compromises. Embedded in these "best practices" is the notion that "trust" is "good," i.e., a desirable feature: "trusted computing," "trusted third party," and so on. We argue the opposite: vulnerabilities stem from trust relationships. We propose a paradigm for risk analysis centered on identifying and minimizing trust relationships. We argue that by bringing trust relationships to the foreground, we can identify paths to compromise that would otherwise go undetected, yielding a more comprehensive assessment of vulnerability from which one can better prioritize and reduce risk.
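The proposed paradigm lends itself to a simple mechanical reading: record trust relationships as edges of a directed graph, then surface every chain of trust an attacker could traverse to reach an asset. The sketch below is ours, not the paper's; the components and trust edges are hypothetical, and the path enumeration is plain depth-first search.

```python
# Illustrative sketch (ours, not the paper's): trust relationships as a
# directed graph. An edge A -> B means "A trusts B", so compromising B
# can propagate to A. All component names below are hypothetical.
from collections import defaultdict

trusts = defaultdict(set)  # truster -> {trustees}

def add_trust(truster, trustee):
    trusts[truster].add(trustee)

# A hypothetical system under analysis.
add_trust("web_app",        "auth_service")
add_trust("web_app",        "third_party_cdn")
add_trust("auth_service",   "ldap_server")
add_trust("web_app",        "build_pipeline")
add_trust("build_pipeline", "package_repo")

def compromise_paths(entry_point, asset, path=()):
    """Yield every chain of trust by which compromising `entry_point`
    ultimately compromises `asset` (DFS over reversed trust edges)."""
    path = path + (entry_point,)
    if entry_point == asset:
        yield path
        return
    for truster, trustees in trusts.items():
        if entry_point in trustees and truster not in path:
            yield from compromise_paths(truster, asset, path)

# Each printed chain is a trust relationship to question, minimize, or remove.
for chain in compromise_paths("package_repo", "web_app"):
    print(" -> ".join(chain))  # package_repo -> build_pipeline -> web_app
```

Every enumerated chain is a candidate for the minimization step the abstract describes: remove the edge, or shorten the chain, and the path to compromise disappears.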
Award ID(s):
1739025
PAR ID:
10317421
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2020 New Security Paradigms Workshop
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this essay, we argue that the advent of the Fourth Industrial Revolution calls for a reexamination of trust patterns within and across organizations. We identify fundamental changes in terms of (1) what form organizational trust takes, (2) how it is produced, and (3) who needs to be trusted. First, and most broadly, trust is likely to become more impersonal and systemic. Trust between actors is increasingly substituted by trust in a system based on digital technology. Second, in terms of trust production modes, characteristic- and institution-based trust production will gain in importance. Third, despite the move toward system trust, there will nonetheless be a need to trust certain individuals; however, these trustees are no longer the counterparts to the interaction but rather third parties in charge of the technological systems and data. Thus, the focal targets of interpersonal trust are changing. 
  2. Although Internet routing security best practices have recently seen auspicious increases in uptake, Internet Service Providers (ISPs) have limited incentives to deploy them. They are operationally complex and expensive to implement and provide little competitive advantage. The practices with significant uptake protect only against origin hijacks, leaving unresolved the more general threat of path hijacks. We propose a new approach to improved routing security that achieves four design goals: improved incentive alignment to implement best practices; protection against path hijacks; expanded scope of such protection to customers of those engaged in the practices; and reliance on existing capabilities rather than complex new software in every participating router. Our proposal leverages an existing coherent core of interconnected ISPs to create a zone of trust: a topological region that protects not only all networks in the region but also all directly attached customers of those networks. Customers benefit from choosing ISPs committed to the practices, and ISPs thus benefit from committing to the practices. We discuss the concept of a zone of trust as a new, more pragmatic approach to security that improves security in a region of the Internet, as opposed to striving for global deployment. We argue that the aspiration for global deployment is unrealistic, since the global Internet includes malicious actors. We compare our approach to other schemes and discuss how a related proposal, ASPA, could be used to increase the scope of protection our scheme achieves. We hope this proposal inspires discussion of how the industry can make practical, measurable progress against the threat of route hijacks in the short term by leveraging institutionalized cooperation rooted in transparency and accountability. (A minimal sketch of the zone's protected-set rule appears after this list.)
  3. Modern software installation tools often use packages from more than one repository, presenting a unique set of security challenges. Such a configuration increases the risk of repository compromise and introduces attacks like dependency confusion and repository fallback. In this paper, we offer the first exploration of attacks that specifically target multiple-repository update systems, and propose a unique defensive strategy we call articulated trust. Articulated trust is a principle that allows software installation tools to specify trusted developers and repositories for each package. To implement articulated trust, we built Artemis, a framework that introduces several new security techniques, such as per-package prioritization of repositories, multi-role delegations, multiple-repository consensus, and key pinning. These techniques allow for a greater diversity of trust relationships while eliminating the security risk of single points of failure. To evaluate Artemis, we examine attacks on software update systems from the Cloud Native Computing Foundation's Catalog of Supply Chain Compromises, and find that the most secure configuration of Artemis can prevent all of them, compared to 14–59% for the best existing system. We also cite real-world deployments of Artemis that highlight its practicality. These include the JDF/Linux Foundation Uptane Standard that secures over-the-air updates for millions of automobiles, and TUF, which is used by many companies for secure software distribution. (A minimal sketch of a per-package consensus check appears after this list.)
  4. The prevalence of inadequate SARS-CoV-2 (COVID-19) responses may indicate a lack of trust in forecasts and risk communication. However, no work has empirically tested how multiple forecast visualization choices impact trust and task-based performance. The three studies presented in this paper (N = 1299) examine how visualization choices impact trust in COVID-19 mortality forecasts and how they influence performance in a trend-prediction task. These studies focus on line charts populated with real-time COVID-19 data that varied the number and color encoding of the forecasts and the presence of best/worst-case forecasts. The studies reveal that trust in COVID-19 forecast visualizations initially increases with the number of forecasts and then plateaus after 6–9 forecasts. However, participants were most trusting of visualizations that showed less visual information, including a 95% confidence interval, a single forecast, and grayscale-encoded forecasts. Participants maintained high trust in intervals labeled with 50% and 25% and did not scale their trust proportionally to the indicated interval size. Despite the high trust, the 95% CI condition was the most likely to evoke predictions that did not correspond with the actual COVID-19 trend. Qualitative analysis of participants' strategies confirmed that many participants trusted both the simplistic visualizations and those with numerous forecasts. This work provides practical guidance on how COVID-19 forecast visualizations influence trust, including recommendations for identifying the range in which forecasts balance trade-offs between trust and task-based performance.
  5. As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation. 
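To make the zone-of-trust scoping in abstract 2 concrete, here is a minimal sketch under our own assumptions: the zone is a set of participating ASes, and the protected set is those members plus every AS that is a direct customer of a member. The AS numbers and provider-customer relationships are invented for illustration.

```python
# Sketch of the protected-set rule for a "zone of trust" (abstract 2).
# AS numbers and provider-customer relationships below are invented.
zone_members = {64500, 64501, 64502}   # core ISPs committed to the practices

# (provider AS, customer AS) pairs, e.g. from an inferred AS topology.
customer_of = [
    (64500, 65010), (64500, 65011),
    (64501, 65012), (64502, 65013),
    (65099, 65014),                    # provider outside the zone
]

def protected_set(members, relationships):
    """The zone protects its member networks and every AS directly
    attached as a customer of a member."""
    protected = set(members)
    for provider, customer in relationships:
        if provider in members:
            protected.add(customer)
    return protected

print(sorted(protected_set(zone_members, customer_of)))
# AS 65014 is absent: its only provider sits outside the zone of trust.
```

The omission of AS 65014 illustrates the incentive the abstract describes: customers gain protection by choosing providers inside the zone, which in turn rewards ISPs for joining it.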
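Similarly, a rough sketch of the per-package consensus idea behind articulated trust in abstract 3 (our illustration, not Artemis's actual API): the installer accepts a package only when enough of its policy-listed repositories agree on the artifact hash. The package, repository, and hash values are hypothetical, and the real framework additionally supports multi-role delegations and key pinning.

```python
# Sketch of a per-package, multi-repository consensus check in the spirit
# of articulated trust (abstract 3). Package, repository, and hash values
# are hypothetical; this is not Artemis's actual API.
from collections import Counter

POLICY = {
    "examplepkg": {
        "repositories": ["internal-mirror", "pypi"],  # trusted, in priority order
        "consensus": 2,                               # repos that must agree
    },
}

def resolve(package, reported_hashes):
    """Accept a package only if enough trusted repositories report the
    same artifact hash; disagreement may signal dependency confusion
    or a compromised repository."""
    policy = POLICY[package]
    votes = Counter(
        digest for repo, digest in reported_hashes.items()
        if repo in policy["repositories"]             # untrusted repos get no vote
    )
    if not votes:
        raise RuntimeError(f"no trusted repository served {package}")
    digest, count = votes.most_common(1)[0]
    if count < policy["consensus"]:
        raise RuntimeError(f"no {policy['consensus']}-repo consensus for {package}")
    return digest

# A rogue mirror cannot win on its own: the two trusted repos still agree.
print(resolve("examplepkg", {
    "internal-mirror": "sha256:1111",
    "pypi":            "sha256:1111",
    "rogue-mirror":    "sha256:9999",
}))
```

Requiring agreement across independently operated repositories is what removes the single point of failure the abstract emphasizes: compromising any one repository is no longer sufficient to serve a malicious artifact.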