Title: Public Trust and Biotech Innovation: A Theory of Trustworthy Regulation of (Scary!) Technology
Abstract: Regulatory agencies aim to protect the public by moderating the risks associated with innovation, but a good regulatory regime should also promote justified public trust. After introducing the USDA's 2020 SECURE Rule for the regulation of biotech innovation as a case study, this essay develops a theory of justified public trust in regulation. On the theory advanced here, a trustworthy regulatory regime must (1) manage risk fairly and effectively, (2) be “science based” in the relevant sense, and, in addition, be (3) truthful, (4) transparent, and (5) responsive to public input. Evaluated against these norms, the USDA SECURE Rule proves deeply flawed: it fails to manage risk appropriately and likewise fails to satisfy the other normative requirements for justified trust. The argument identifies ways in which the SECURE Rule itself might be improved and, more broadly, provides a normative framework for evaluating trustworthy regulatory policy-making.
Award ID(s): 1739551
PAR ID: 10374085
Author(s) / Creator(s):
Date Published:
Journal Name: Social Philosophy and Policy
Volume: 38
Issue: 2
ISSN: 0265-0525
Page Range / eLocation ID: 29 to 49
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This research examines the contrasting artificial intelligence (AI) governance strategies of the European Union (EU) and China, focusing on the dichotomy between human-centric and state-driven policies. The EU's approach, exemplified by the EU AI Act, emphasizes transparency, fairness, and individual rights protection, enforcing strict regulations for high-risk AI applications to build public trust. Conversely, China's state-driven model prioritizes rapid AI deployment and national security, often at the expense of individual privacy, as seen through its flexible regulatory framework and substantial investment in AI innovation. By applying the United States' National Institute of Standards and Technology (NIST) AI Risk Management Framework's Map, Measure, Manage, and Govern functions, this study explores how both regions balance technological advancement with ethical oversight. The study ultimately suggests that a harmonized approach, integrating elements of both models, could promote responsible global AI development and regulation. 
  2. Abstract: Demands to manage the risks of artificial intelligence (AI) are growing. These demands and the government standards arising from them both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potentially underappreciated in the development of trustworthy AI for environmental sciences is the importance of engaging AI users and other stakeholders, which human–AI teaming perspectives on AI development similarly underscore. Co-development strategies may also help reconcile efforts to develop performance-based trustworthiness standards with dynamic and contextual notions of trust. We illustrate the importance of these themes with applied examples and show how insights from research on trust and the communication of risk and uncertainty can help advance the understanding of trust and trustworthiness of AI in the environmental sciences.
  3. How do cultural biases, trust in government, and perceptions of risk and protective actions influence compliance with regulation of COVID-19? Analyzing Chinese (n = 646) and American public opinion samples (n = 1,325) from spring 2020, we use Grid–Group Cultural Theory and the Protective Action Decision Model to specify, respectively, cultural influences on public risk perceptions and decision-making regarding protective actions. We find that cultural biases mostly affect protective actions indirectly through public perceptions. Regardless of country, hierarchical cultural biases increase protective behaviors via positive perceptions of protective actions. However, other indirect effects of cultural bias via public perceptions vary across both protective actions and countries. Moreover, trust in government only mediates the effect of cultural bias in China and risk perception only mediates the effect of cultural bias in the United States. Our findings suggest that regulators in both countries should craft regulations that are congenial to culturally diverse populations. 
  4. Abstract: Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and technology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance be emphasized as strongly as trustworthy AI.
  5. Abstract: The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) focuses on creating trustworthy AI for a variety of environmental and Earth science phenomena. AI2ES includes leading experts from AI, atmospheric and ocean science, risk communication, and education, who work synergistically to develop and test trustworthy AI methods that transform our understanding and prediction of the environment. Trust is a social phenomenon, and our integration of risk communication research across AI2ES activities provides an empirical foundation for developing user-informed, trustworthy AI. AI2ES also features activities to broaden participation and for workforce development that are fully integrated with AI2ES research on trustworthy AI, environmental science, and risk communication.